Reviving the Mac Mini

My wife bought a Mac Mini toward the end of college that has been sitting in our basement pretty much since we went to Korea in 2009. I've been wanting to do something with it for a while and with Flickr changing its accounts, now seemed like a good time.

I was looking for photo sharing alternatives to Flickr, mostly because I can't afford a pro membership and I'm already over the 1,000 photo limit being imposed in January. I came across Lychee, which is essentially single-user, self-hosted photo management. (Check out their demo site - it's pretty impressive.) My home photo collection could also stand being backed up somewhere more consistently, so my goal is to convert the mini into a self-hosted photo sharing site so I can continue to post out on the web and have a backup at home.

*cracks knuckles*

I set up in the dining room and got started.

I have to say, it was pretty amazing plugging in this computer, which hasn't seen use in nearly a decade, and watching it boot up as if I had used it yesterday.

Macs have long had web hosting built right in: Apache and PHP are included by default and it's easy to install MySQL for databasing. I was hoping to go the easy route and just use the default tools. LOL.

Lychee requires PHP 5.5+. The mini (late 2006 model) has PHP 4.4 and Apache 1.3 installed. No good. I started searching for ways to upgrade both, but the recommended site with ported versions for old copies doesn't exist anymore.

So, I grabbed another Mac for more efficient Googling. There was also beer.

The best option, I think, is to boot into Linux rather than OSX 10.4. So, I started researching Debian distributions that would work on older hardware. My plan was to wipe the entire hard drive and dedicate it to server resources. When I landed on the Debian wiki, they had a page specifically for older, Intel-based Macs. This line caught my eye:

> The oldest Mini (macmini1,1) is typically the most problematic, due to bugs in its firmware. When booting off CD/DVD, if there is more than one El Torito boot record on the disc then the firmware gets confused. It tries to offer a choice of the boot options, but it locks up instead.

That's not good. I have two choices: I can partition the drive to prevent losing the entire machine, or I can go for it and hope that the OS X recovery DVD I have in the basement still works. (I'll probably partition, just to be safe.)

Luckily, two lines later, the Debian wiki notes that specific builds are now available which include only _one_ boot record, which solves the problem. A quick 300MB download of the Mac-specific distribution later and I'm ready to burn the disk image to a USB drive with Etcher.

Next steps are to actually do the Debian install.

Pico: A Tiny Blog

More of my life is run in plain text. I don’t really use word processors any more (other than Google Docs for work) because they’re heavy and not really compatible across various devices. So, I’ve gotten in the habit of writing in a text editor (Atom, at the moment) and syncing across devices with Git or Dropbox, depending on the circumstances.

I’ve also been trying to do more with Python rather than relying on JavaScript. I don’t always have an Internet connection, and you don’t need a connection to be productive with Python.

The third element in this perfect storm was looking at my site access logs. I moved this blog to Jekyll back in December mainly because I was running out of hosting space with WordPress. I don’t really know how to do crazy database stuff like Alan or Tom, nor do I need to. I also saw a ton of failed login attempts on my WordPress site (thank goodness for strong passwords), so I decided to go databaseless with the switch. It’s hard to hack plain HTML.

This is what birthed the idea for Pico.

Pico is a tiny blogging engine written in Python that reads plaintext files.

Jekyll is great for complex site structure, but it requires the site to be regenerated (pagination updated, categories and tags indexed, etc) each time you publish a post. What if you want something smaller?

Pico is built with Flask, a lightweight web framework written in Python. The core is similar to Jekyll: a script reads data somewhere and renders it in templated HTML. The main difference is that Pico does that when the page loads, straight from text files, rather than rendering the site beforehand. The idea is that you can write a post somewhere with minimal markup and frills and have the server do most of the work. Styles are minimal and the source files are kept to a bare minimum. It even has RSS!
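The read-then-render core can be sketched in a few lines of Python. This is an illustration of the idea, not Pico's actual source; the post format (first line is the title, the rest is the body) and the template string are made up:

```python
import html
from pathlib import Path

# A made-up post format: first line is the title, the rest is the body.
TEMPLATE = "<article><h1>{title}</h1><p>{body}</p></article>"

def render_text(raw):
    """Render one plaintext post into HTML."""
    lines = raw.splitlines()
    title, body = lines[0], " ".join(lines[1:]).strip()
    return TEMPLATE.format(title=html.escape(title), body=html.escape(body))

def render_post(path):
    """Read a post off disk and render it at request time."""
    return render_text(Path(path).read_text(encoding="utf-8"))
```

In Pico proper, a Flask route would call something like this on every request, instead of pre-building the whole site the way Jekyll does.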

You can see a demo of the site if you're curious, and you can grab the source and some of the technical information on GitHub.

DSC_0146 flickr photo by bennettscience shared under a Creative Commons (BY-NC-SA) license

Event Manager Web App

Adding to the list of things to blog so I don’t forget about them in the future, I built an event manager web app for prom.


Writing data to and from the spreadsheet through a web app is faster and easier on the sales team than having them edit the spreadsheet directly. Names are pulled by matching ID numbers against a lookup database and populated in the form automatically.

When a ticket is sold, the name is added to a running list on the right side. The app also ensures no student purchases a ticket more than once by comparing the submitted ID number with current sales, returning an error if the person already exists in the sales record.
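The duplicate check is the interesting bit. Here's a sketch of that logic in plain JavaScript; the function names are mine, and in the real Apps Script version `sales` would come from reading the sales sheet with SpreadsheetApp:

```javascript
// Return true if this student ID already appears in the sales records.
// `sales` stands in for the rows read from the sales sheet.
function alreadyPurchased(sales, studentId) {
  return sales.some(function (row) {
    return String(row.id) === String(studentId);
  });
}

// Guard used before writing a new sale to the sheet.
function recordSale(sales, studentId, name) {
  if (alreadyPurchased(sales, studentId)) {
    throw new Error('Student ' + studentId + ' already has a ticket.');
  }
  sales.push({ id: studentId, name: name });
  return sales;
}
```

Comparing as strings avoids a subtle mismatch between typed IDs and numbers stored by the spreadsheet.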


Students are checked into and out of the event by the door staff. A full list is returned to the app that can then be filtered by ID number by the staff. Clicking on a number populates a card showing the student information for verification. They can then be timestamped into and out of the event by the staff.

click for a larger image

This worked well because each of our students has a unique six-digit ID number we can use to look up their information. It could be quickly repurposed to assign an ID number at registration instead, and the manager page would function the same.

Full code for the project is on this GitHub Gist.

Featured image: Conference Queue flickr photo by Bearfaced shared under a Creative Commons (BY-NC-ND) license

Reclaiming Jekyll

In December, I moved off of WordPress to Jekyll. This is easy to do with GitHub Pages, but I wanted to self-host because keeping an SSL certificate was important to me. I followed Tim Owens' sample and had everything up and working well.

I faced two specific challenges, though.

  1. FTP and SSH uploads were blocked in several places where I normally work. That meant I needed to remember which files needed to be uploaded via cPanel every time I wanted to publish a post. I would often forget an image or have a broken link, which meant regenerating the entire site.
  2. Because SSH was blocked, I had to use a cron job to publish the post. I would set one up to run every 5 minutes while I was working to make sure the changes were correct. Then, I would delete the cron job.

The bigger issue was that building on the server duplicated the site files. So, I’d have a folder of all of my posts and assets (images, styles, etc) that would get copied into the live site. Instead of shrinking my server footprint, it was doubled. No good.

My next idea was to use git, which is preinstalled on Reclaim shared hosting (which is awesome), to manage all of my files. But, I ran into the SSH problem again (HTTPS doesn’t work out of the box with git and setting it up is a headache). I also had problems tying the folder to the Reclaim location for some reason. So, that idea was out.

I continued to think about the problem and I finally landed on it: I wanted to keep all of my files on Reclaim when I really only needed the _site directory. I can build it on my computer and then publish only the live components.

This introduced another problem: it's more complicated than just uploading the new post. The _site directory is changed and repaginated with each build, so all of the relative links have the potential to change. How would I limit my upload to the _site directory without needing to build on the server?

It turns out that you can pull single directories from a GitHub repo online. The key is only checking out the directory you want. Instead of using git pull to fetch and merge everything, you break it down into several steps.

  1. Set up an empty repository using git init.
  2. Assign a remote repo via URL using git remote add. Mine is called nodes-site and maps to the GitHub repository for this blog.
  3. Fetch the entire project with git fetch nodes-site. This finds and maps the entire project to git but doesn’t actually add any files yet.
  4. Check out a single folder with git checkout nodes-site/master -- _site. This creates a read-only directory!
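The steps above can be run end to end. Here's the whole sequence against a throwaway local repo standing in for the GitHub remote (the nodes-site name comes from my setup; the sample files and paths are made up for the demo):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Stand-in for the GitHub repo: a project with a built _site folder
git -c init.defaultBranch=master init -q source-repo
(
  cd source-repo
  mkdir _site _posts
  echo '<html>hello</html>' > _site/index.html
  echo 'draft text' > _posts/draft.md
  git add .
  git -c user.email=me@example.com -c user.name=me commit -qm 'build'
)

# The steps from the post: init, add the remote, fetch, check out only _site
git init -q live && cd live
git remote add nodes-site ../source-repo
git fetch -q nodes-site
git checkout -q nodes-site/master -- _site

ls _site   # only the built site came over; _posts stayed behind
```

The key is that `checkout` with a pathspec only writes the named directory into the working tree, so the source files never land on the server.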

I don’t need to write any files on the server…I do all of that on the computer. This step just grabs what’s been published to the GitHub repo and displays it as a live page on my domain.

Here’s my new process:

  1. Write and build the site on the computer. It runs faster, no need for the Internet.
  2. Use git to track all of the changes and push it up to GitHub. All of the files are public already through the blog, so I don’t care that it’s available publicly there, too. In fact, it serves as a nice backup in case I really bork something up.
  3. Write the steps above as a cron job to pull the _site directory in a couple times per day. If nothing changes, no new files are copied over. If there’s been a new post, that’s reflected in Git and the entire directory restructures with the changes.
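For reference, the cron entry itself is a one-liner; the schedule and path here are placeholders for whatever fits your setup:

```shell
# crontab entry: refresh the published _site from GitHub at 6am and 6pm
0 6,18 * * * cd $HOME/public_html && git fetch nodes-site && git checkout nodes-site/master -- _site
```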

My original folder (with everything) came in around 300MB. The new folder, with only the published material, is about 180MB. So, I saved 40% of my disk space by publishing only the live pages.

This StackOverflow post got me moving in the right direction.

Featured image: Allen-Bradley Clock Tower flickr photo by kallao shared under a Creative Commons (BY-NC-ND) license

Dynamic Data Validation with Apps Script

My coworker, Kat, needed to set up meetings with our 13 elementary schools. Appointment slots would have been a lot of clicking back and forth on calendar events plus her dates were variable and subject to change. She set up a spreadsheet with each school on a row. The challenge was to set up data validation rules that would automatically update after a school made a selection.

*cracks knuckles*

The first thing to do was to list out all the possible dates. Updating a spreadsheet is easier than changing a validation object in Apps Script. I can also loop the sheet and make structural adjustments on the fly.

Once that list was done, we wrote a function to set validation on the sheet using the date.

Now that validation is set, we needed to listen for changes to the spreadsheet. The onEdit function receives an event object which holds information about where and how the sheet was edited. This let us limit the function to changes in columns two or three only. If a cell was edited (i.e. the validation dropdown was changed by the user) the function gathers event data and passes it along to the workhorse.

Now we can update the validation range. Loop through the entire range of dates backwards. Once a matching row is found, delete the row. Looping backwards prevents the row indexes from shifting as rows are deleted. We also remove validation from the edited cell at this point so it doesn’t show an error for having a value outside the validation list.
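The delete-backwards step is the part worth pinning down. A plain-JavaScript sketch of the idea (the function name is mine; in the real script, `dates` is the list read from the validation range and the splice is a `sheet.deleteRow()` call):

```javascript
// Remove the chosen date from the list of remaining options.
// Looping backwards means deleting an entry never shifts the
// indexes of entries we haven't inspected yet.
function removeChosenDate(dates, chosen) {
  for (var i = dates.length - 1; i >= 0; i--) {
    if (dates[i] === chosen) {
      dates.splice(i, 1); // sheet.deleteRow(i + rowOffset) in Apps Script
    }
  }
  return dates;
}
```

If you looped forward instead, deleting row i would slide row i+1 into its place and the loop would skip it.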

Because validation is pulled from a range, deleting the row (and the value) removes it as an option in all of the other cells! Once the sheet is finished and everyone has a date, you could even add a short loop to push dates to your Google calendar as all day events.

Here’s the full script if you want to add it to your sheet.

Featured image: Array of Rivets flickr photo by jim.choate59 shared under a Creative Commons (BY-NC-ND) license

Google Apps Script Knowledgebase Suite

This is another entry into the “fun, but probably not feasible” folder.

A colleague and I have been trying to show the importance of getting a knowledgebase set up for our teachers. It’s been a back-burner issue for a long time because:

  • we’ve never had one, so there isn’t really a pain point, and,
  • we’re trying to roll out 3,500 iPads to 9-12th grades and there are some small projects to get done with that process.

There’s a little bit of hyperbole there, but in essence, it would be nice, but not now.

Google released Hangouts Chat for GSuite users. The main reason was to compete with enterprise chat apps like Slack (but you won’t hear Google saying that directly). Hangouts Chat allows you to set up discussions with teams that are threaded based on rooms. They’ve also added an automation API so users can find, or write, bots for their discussions. I got Chat turned on for our domain just to see what it was like. I was able to use Google’s code lab guide to get a simple bot up and running. It pulled open tasks for a user from our project management suite and I was able to use existing code to get the data into the bot. In all, it took me about two hours to have a working prototype.

I decided to build a helpdesk chat bot which could pull answers to common technical problems our teachers face. That ballooned into a web app to pair with the bot because…why not.

Apps Script Knowledgebase

I’ll go through some of the code, but all of it is on GitHub. It’s written in Apps Script because, again, a lot of the code was preexisting, so I didn’t have to start from scratch. Most of it is modified from standalone functions I have running on other sheets. The real challenge in this task was to wrangle the user input into something usable by the script.

The database

First things first - we needed a database. I threw together a spreadsheet with several data fields that hold different types of information. I started with our platforms as a high level filter and then added some tags. The user can search for all platform-related entries or narrow them down with keyword tags.

Screenshot of the database sheet
click for a larger view

There are two additional sheets to the template: one to authenticate users and one to build a list of district platforms. Authenticated users have a button to add new articles into the database from the web app (see below). Editing a really big spreadsheet, especially when typing multi-line, HTML formatted entries, gets really cumbersome, so I wanted a way to do that through the website. Additionally, standardizing the platform options helps the user narrow searches right from the start so they can find information quickly.

The bot

Setting up the bot is covered by Google’s Code Lab, so I’m not going into detail here. The main piece of the bot is that it listens for a mention by a user, finds the information, and returns a card-formatted response.

Screenshot of a card returned to a user's question

Our thinking for this app is that someone wants a quick video on how to do something, so only videos are returned. The main challenge is getting meaningful information from the user mention in order to match database entries.

The regular expression looks for any of the keywords mentioned by the user in a string. This was easier than trying to match array values and provides consistent results. Right now, a user can send simple requests like “Google” or “Canvas” to get general results. They can also include some refining search terms, like “Google gmail” or “Canvas assignment” to narrow searches down. If there are no videos that match the search, they’re pushed over to the web app which has articles that do not include video results. More work could be done to refine how the script processes user input to improve the experience.

The script takes the string and creates a regular expression, which is then used to match a string built from columns one and two of the spreadsheet. If a result is matched, the YouTube URL is returned in an array.
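A sketch of that step in plain JavaScript (the helper names are mine; in the Apps Script version the columns come from getValues()):

```javascript
// Build one alternation regex from whatever words the user typed.
// (A hardened version would escape regex metacharacters first.)
function buildSearchRegex(message) {
  var words = message.trim().split(/\s+/);
  return new RegExp(words.join('|'), 'i');
}

// Match it against a string built from columns one and two of a row.
function rowMatches(message, platform, tags) {
  return buildSearchRegex(message).test(platform + ' ' + tags);
}
```

So "Google gmail" becomes /Google|gmail/i, which matches any row whose platform or tag columns contain either word.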

Once the YouTube videos are grabbed from the sheet, they’re broken into video ID strings and put through the YouTube API to get the video title and thumbnail. The bot builds a card to return to the user with suggested videos. If there aren’t any videos available, the user is given a link to the web app where they can find written articles on their topic.
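Pulling the video ID out of a stored YouTube URL might look like this (a sketch, not my production code; the real script then feeds the IDs to the YouTube API for titles and thumbnails):

```javascript
// Extract the 11-character video ID from common YouTube URL shapes
// (watch?v=... and youtu.be/... links).
function extractVideoId(url) {
  var match = url.match(/(?:v=|youtu\.be\/)([A-Za-z0-9_-]{11})/);
  return match ? match[1] : null;
}
```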

The web app

We also threw together a web app to display information that isn’t in video form. The idea being that these are longer articles that have step-by-step instructions and are meant to be used in conjunction with the chat application. Lookup can be done by either passing query parameters in the URL to the app or using the sidebar form to search for keywords within a given topic.

Web app view of the knowledgebase
click for a larger view

Web apps also allow you to append queries to the published URL out of the box. If a query string is included, the app creates an object that can be passed through your script functions to build a display from a template dynamically. For instance, navigating to the app’s URL with a query string appended takes you to this:

Web app loaded from a URL parameter string
click for a larger view

When a video isn’t returned, we’ll pass back the user string to jump right to relevant answers.
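In Apps Script terms, the query-string handling lives in doGet(e), which receives the appended parameters on e.parameter. A sketch of that piece (doGet and e.parameter are the real API; the parameter names and template file are made up):

```javascript
// Normalize the query string into the object the templates consume.
function getSearchParams(e) {
  var p = (e && e.parameter) || {};
  return { platform: p.platform || '', search: p.search || '' };
}

// In the web app itself, something like:
// function doGet(e) {
//   var t = HtmlService.createTemplateFromFile('index');
//   t.params = getSearchParams(e);
//   return t.evaluate();
// }
```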


Writing logic to manipulate and understand user input that can vary wildly is very difficult. I’m relying on simple prompts (platform + search keys) to help return a video on every search. As the database grows, results will also improve. I’d like to spend more time refining the bot script to better process and use the chat messages. The regular expression approach is pretty brute-force, but it works for most searches.

A bigger problem I discovered is that hosting a web app and a bot in the same script file causes things to break. Google makes it easy to deploy a bot from an Apps Script project. Hosting a web app is also quite easy and uses a similar ID string. What I think happens is that those ID numbers conflict when both are hosted from the same project. The bot’s onMessage listener wouldn’t register an event while a web app was published. One solution is to host the bot in an unbound script that points to the same spreadsheet via its key. Another (probably better, in the long run) solution would be to write a web view that is hosted on a school server and accesses the Google Apps Script API to display database queries. More on that in this long demo.

In the knowledge economy, we need a Netflix of education - TechCrunch flickr photo by DigitalMajority shared under a Creative Commons (BY-NC-SA) license

Moving from WordPress to Jekyll

This is a long post in several parts. Jump to different sections using the links below.


Long story short, I moved from self-hosted WordPress to a static HTML site generated by Jekyll.

WordPress does its job really well. I think there was some statistic [citation needed] that showed nearly 30% of the Internet runs on WordPress in one form or another. That’s a lot of the Internet.

But, because of ubiquity, there is a significant drawback: WordPress sites are prime targets for malicious code and hacking. A plugin on my site shows how many dozens (and sometimes hundreds!) of login attempts there have been. It’s a battle to make sure security plugins are always up to date. That leads to other issues: incompatibility with plugins.

So, this entire blog - 2018 all the way back to 2010 - is a set of static HTML pages generated by Jekyll on my Reclaim Hosting server. No more logins, no more plugins to check and update. Just nice, clean, lightweight HTML.

It took me several weeks to work out the details for the migration. It wasn’t too bad, but I learned some things along the way that I’d like to share here.

Exporting WordPress

Jekyll uses Markdown and YAML data to generate a website. It’s quite clever how they pulled it all together, actually, to mimic a typical dynamic (database-driven) blog like WordPress. There is a plugin which will export your WordPress blog formatted for Jekyll, including all post metadata like tags, permalinks, and image resources. It gives you a .zip file which you can then extract and use to populate your new Jekyll site.

First, it extracts your entire media library. WordPress automatically generates several files for each image you’ve uploaded for different display situations. My media folder was well over 300 MB because I didn’t take the time to clean the library up. I’d suggest cleaning up any unused image files before the export.

Second, any pages you have on your site (not blog posts) get their own folder. Take time to go through each folder and make sure it’ll fit the Jekyll file structure.

Finally, do a regular WordPress XML export so you have an entire backup of your blog. The Jekyll plugin only converts posts and pages. If you have other resources, you’ll want to save them somewhere before deleting or redirecting your site.


The great thing about Jekyll is that it is ready to go with GitHub Pages. If you’re already using GitHub, you can go that route with a single commit and push. But I have a lot of traffic (humblebrag much?) already and I don’t want to set up a redirect. I’m also already using my GitHub Pages site for my web apps. You can map a custom domain to GitHub Pages, but you cannot use HTTPS on that domain, which was a dealbreaker for me.

Each web host is different, so you need to see if yours supports Ruby 2.4 or higher. Lucky for me, Tim Owens from Reclaim Hosting already had a blog post on setting it up with my Reclaim account. I followed his instructions to the letter and got it working on the second try (I borked some theme and file structures on the first, so I deleted everything and started over).

SSL is a big deal. If you don’t know what it is, read The Six Step “Happy Path” to SSL by Troy Hunt (or anything else he writes, honestly).


I don’t get a ton of comments, but with a static HTML site, there isn’t an easy way to include comments. If you’re hosting with Github Pages, Staticman is an awesome and secure way to include comments on your posts. Another option would be to use a third-party tool like Disqus. I didn’t go with Disqus because they’ve had some trouble with clean use of user data in the past.

I decided to create a custom commenting form (based on this post) using Firebase. It’s a real-time database run by Google which can push information anywhere I want it to go. Each post has a query to the database to check for comments. Pushing the comments to the database is handled with a little bit of JavaScript, which I’ve modified from the linked tutorial.
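The gist of that JavaScript, sketched here from the general shape of the Firebase web API rather than copied from my actual file (the database path and field names are made up):

```javascript
// Shape the submitted form data into the comment record we store.
function buildComment(name, message) {
  return { name: name, message: message, timestamp: Date.now() };
}

// Push it under the post's key; `db` is the initialized
// firebase.database() instance and `slug` identifies the post.
function postComment(db, slug, name, message) {
  return db.ref('comments/' + slug).push(buildComment(name, message));
}
```

The page then listens on the same `comments/<slug>` path to render anything that's been pushed.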

Firebase also includes cloud functions that can be written in Node JS. I’ve never written any applications in Node, so this was a learning experience for me. The function watches the comment database and simply notifies me when a change has been made.

It could definitely use some refinement, but it does what I need it to do.


Relying on an Internet connection to write a blog post seems so 2012. With Jekyll, I can write in any text editor and then upload when it’s ready. If I’m on my main machine, I can even serve the page locally to see what the update will look like as if it were live on the web. It’s a small perk, but as I’ve moved to working more and more with text files (rather than things like Google docs) it’s nice to be able to open a blank text file and start writing. I can come back whenever I want and finish up or let it sit on the large pile of started-and-never-finished posts.


In the end, this is a highly technical shift away from something built for the end user into something I have absolute control over. If the blog breaks, it’s my fault and I will have to work to fix it, which is satisfying in its own nerdy way. It’s definitely not the easiest route to start (or continue) blogging, but it’s mine, which is fulfilling.

If you’d like to know more about how to make a switch, feel free to try out that nifty commenting section below or just write me an email: brian [at] ohheybrian [dot] com.

Featured image is Find your way flickr photo by maximilianschiffer shared under a Creative Commons (BY-ND) license

Slide Tweeter Update 4

I thought I was ready for some beta testing of the Slide Tweeter AddOn. Unfortunately, I’ve run into a snag with authenticating some of the code.

When you install an add-on from Google, it runs in something called AuthMode.NONE, which significantly limits access to certain data. This is a good thing because you don’t want add-ons running through your account changing things the minute you install them. Anyway, once it’s installed, you can prompt the user to enable the add-on, which gives it access to all the necessary permissions.

I’m working on moving permissions around so it installs and adds a menu successfully before activating the add-on. It’s turning into more of a trick than I thought it would be.

More updates to come…

Link Google Forms to Bitly Automatically

If you have a Bitly account, you can get a public API token which can be used to create shortcodes. This is really handy in my situation, where I’m creating a ton of feedback spreadsheets (another monster post on that later). Using a small code snippet, you can create a feedback form, throw in some code, and have it display as a short URL and QR code.

If you’re starting from scratch, create a template form and spreadsheet. When you need to make a feedback form, use File > Make a copy on the spreadsheet to copy over the code.

Otherwise, you can make a copy of this blank template to get started (code is already inserted). If you’re going to make your own, be sure you have a form linked. If there is no form on your sheet, you’ll get an error.

The code

The full source code is below. Note that there are two files: the main script file and one called popup.html. If you’re copying/pasting, you need to create an HTML file (File > New > Html file in the script editor) and call it ‘popup’.
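The heart of the script file is one call to Bitly's shorten endpoint with the form's published URL. A sketch of that piece (I'm writing the v3 GET endpoint from memory, so check Bitly's API docs for the current shape, and treat the helper name as mine):

```javascript
// Build the Bitly v3 shorten request URL for a long URL.
function buildShortenRequest(token, longUrl) {
  return 'https://api-ssl.bitly.com/v3/shorten' +
    '?access_token=' + encodeURIComponent(token) +
    '&longUrl=' + encodeURIComponent(longUrl);
}

// In the sheet-bound script, this would be fetched with something like:
// var resp = JSON.parse(UrlFetchApp.fetch(buildShortenRequest(token, url)));
// and the short link pulled out of the response for the popup.
```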

In action

click for larger view

The wheels of the web keep spinning…

Cog in the wheel flickr photo by WickedVT shared under a Creative Commons (BY-NC-ND) license

Slides Tweeter Update 2

A short update to my post about tweeting Google Slides presentations after some work this weekend.

– Working AddOn menu prototyped.

– Check if a user is logged into Twitter on launch. If not, log them in.

– The user can append a hashtag to each slide. This is stored in each presentation until the user changes the input.

– The custom player now launches in a pared-down popup rather than a new tab.

– Scrolling enabled/disabled.

– Each slide is posted on advance (slide 1 posted when you move to slide 2) to buy a little time (still laggy).

click for full-size video

to do:

  1. Improve the connect to Twitter flow.

  2. Work to improve the tweet posting time.

  3. Confirmation popup when a tweet is posted successfully.

  4. Error handling when a tweet fails to post for some reason.