Server Hardening and SSL

Last night, I got [a self-hosted photo sharing site](https://photos.ohheybrian.com) up and running on my Raspberry Pi 3. You can see more about that process [here](https://blog.ohheybrian.com/2018/11/forget-the-mac-mini-bring-on-the-raspberry/).

Putting it on the real, live Internet is scary. Securing a server is no small task, so I took some steps based on [these tips](https://serverfault.com/questions/212269/tips-for-securing-a-lamp-server) to make sure I don't royally get myself into trouble.

(I have a sinking feeling that posting this exposes vulnerabilities even more, but _c'est la vie_.)

To start: a new user password. Easy to do using `sudo raspi-config` and going through the menus. It's big, it's unique, and no, I'm not giving any hints.

As for updating the OS, I have a cron job which runs as root to update and reboot every couple of days. Lychee is [active on GitHub](https://github.com/lycheeorg/lychee) and I've starred it so I'll get updates with releases, etc. I also took some steps to separate the Apache server from the OS.
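For reference, the entry in root's crontab looks something like this (a sketch; the schedule is illustrative, not necessarily what I run):

```bash
# Root crontab entry (sudo crontab -e): update, upgrade, and reboot
# at 4am every second day. Times and frequency are illustrative.
0 4 */2 * * apt-get update && apt-get -y upgrade && /sbin/reboot
```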

Putting a self-hosted server online requires port forwarding, which means opening specific ports to outside traffic. I only opened the public HTTP/HTTPS ports. Several sites say to open SSH ports too, but that's where I get timid. I don't plan on running anything heavy enough to need in-the-moment updates from somewhere remote. (There's also the fact that my school network blocks SSH traffic entirely, so there's even less reason to enable it.)

Once the ports were open, I had to find my external IP address and update my DNS Zone records on [Reclaim Hosting](https://reclaimhosting.com). By default, Comcast assigns dynamic IP addresses so they can adjust network traffic. Most tutorials encourage users to request static IPs for home servers, but others say they've used a dynamic address for years without issue. I'll see if I can work myself up to calling.

Anyways, I logged into my cPanel and added an A record for a new subdomain: [photos.ohheybrian.com](https://photos.ohheybrian.com) that pointed to my public IP address. The router sees that traffic coming in and points it at the Raspberry Pi. I tested on my phone and, hey presto, it worked.
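In zone-file form, that record is just the subdomain pointed at the public address. It looks something like this (203.0.113.10 is a placeholder, not my real IP):

```
photos.ohheybrian.com.    14400    IN    A    203.0.113.10
```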

Opening HTTP/HTTPS ports came next. It's easy to get unencrypted traffic in and out, but, like the rest of my sites, I wanted this one to be SSL by default. I couldn't assign a Let's Encrypt certificate through Reclaim because the site isn't hosted on their servers. [The Internet came through with another good tutorial](https://www.tecmint.com/install-free-lets-encrypt-ssl-certificate-for-apache-on-debian-and-ubuntu/) and I was off.

First, I had to enable the `ssl` module on the Apache server:

```
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
```

Once Apache could accept SSL traffic, it was time to install the Let's Encrypt client, which lives on GitHub:

```
sudo git clone https://github.com/letsencrypt/letsencrypt /usr/local/letsencrypt
```

I then had to install the Apache2 plugin:

```
sudo apt-get install python-certbot-apache
```

From there, the entire process is automated. I moved into the install directory and then ran:

```bash
cd /usr/local/letsencrypt
sudo ./letsencrypt-auto --apache -d photos.ohheybrian.com
```

It works by verifying you own the domain and sending that proof to the Let's Encrypt servers, which generate the certificate. The default lifetime is 90 days, but you can also cron-job the renewal if nothing about the site is changing.
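Something like this in root's crontab would handle it (a sketch; `letsencrypt-auto renew` only re-issues certificates that are close to expiring, so running it weekly is safe):

```bash
# Weekly renewal check (sketch). Only certs near expiry are renewed;
# the log path is illustrative.
0 2 * * 1 cd /usr/local/letsencrypt && ./letsencrypt-auto renew >> /var/log/le-renew.log 2>&1
```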

After the certificate was issued, I went to https://photos.ohheybrian.com and got a 'could not connect' error, which was curious. After more DuckDuckGoing, I realized that HTTPS uses a different port, 443 (duh). So, back to the router to update port forwarding and it was finished.

There are several steps I want to take, like disaggregating accounts (one for Apache, one for MySQL, one for phpMyAdmin) so if one _happens_ to be compromised, the whole thing isn't borked.

---

_Featured image is They Call It Camel Rock flickr photo by carfull...in Wyoming shared under a Creative Commons (BY-NC-ND) license_

Forget the Mac Mini, Bring on the Raspberry

This weekend I decided to try and tackle [turning a Mac Mini into a server to host my own photos](https://blog.ohheybrian.com/2018/11/reviving-the-mac-mini/). Well, that turned into a real mess and I abandoned the idea after I had to disassemble the computer to retrieve a stuck recovery DVD. We went all kinds of places together, but this Mac couldn't go the distance with me this time.

So, I grabbed the semi-used Raspberry Pi 3 that was working as a wireless musicbox on our stereo (kind of) and gave it an overhaul. I removed the [PiMusicbox OS](http://www.pimusicbox.com/) and went back to a fresh Raspbian image. (Actually, I only grabbed the Lite distribution because I won't need to go to the desktop. Ever.)

I wanted a basic LAMP (Linux - Apache - MySQL - PHP) stack to run the website, specifically because the end goal was to have [Lychee](https://github.com/LycheeOrg/Lychee) installed and running on a public space. I relied on two _really good_ tutorials to help me through the process.

The first, written by a guy named Richie, is a [full-blown, step-by-step guide](https://pchelp.ricmedia.com/setup-lamp-server-raspberry-pi-3-complete-diy-guide/) on all the software setup. He even uses WordPress as his thing-to-install-at-the-end, so that's a bonus. It's written for non-technical people and isn't just a wall of command line code to type in. He had explanations (why does he always use the `-y` flag on his install commands?) and screenshots of what to expect. Really superb. If you're looking to try setting up a local server (available only on your wifi network) or have students who want to try, this is the place to start.
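(For what it's worth, the `-y` flag just pre-answers "yes" to apt's confirmation prompts so an install doesn't stall waiting on you.)

```bash
# The -y flag auto-confirms apt's "Do you want to continue?" prompt
sudo apt-get install -y apache2
```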

Once everything was going, I went to the GitHub project and used a quick command to download the package to the Pi:

`wget https://github.com/LycheeOrg/Lychee/archive/master.zip`

and then unzipped the project:

`unzip master.zip`

This put all of the source files into the `/var/www` directory on the Pi, which becomes the public space. For updates, I can just run `git pull` in the directory (assuming it's a git clone rather than the unzipped archive) and I'll get those changes automatically. A cron job could even take care of that, so double bonus.
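Something along these lines would do it (a sketch; the path is illustrative, and it assumes the directory is a git clone):

```bash
# Nightly pull of the latest Lychee code (sketch; path is illustrative)
0 3 * * * cd /var/www/html/Lychee && git pull
```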

I was able to go to my internal IP address and see the setup prompt. I signed into my MySQL admin and I was off.

_CC0 by Brian Bennett_

Photos are organized by album and tags, so you can quickly search for items. I uploaded an old photo of my wife, just to see if it would accept files.

_CC0 by Brian Bennett_

There's another option I need to dig into that says "Upload from Link," but I'm not quite sure what that does yet. In the short term, I can start uploading photos here rather than Flickr.

The second article had some hints about how to get the server visible to the public. Your modem and router take a public IP address from your ISP and convert it into something you can use in the house. So getting the Pi up with an internal IP address is easy, but it's only reachable from home. Making it publicly available requires two things:

- Some serious guts (this was the part I was most scared about)
- Some IP address and DNS work (potentially)

RaspberryPiGuy, who apparently works for Red Hat, has a guide on [taking your server public](https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3). I added a couple more packages to help with security, like fail2ban, which blocks an IP address after too many failed login attempts. I'm also going to split my network one more time so my home computers are sequestered from this little public slice. I found my public IP address on [this site](https://www.iplocation.net/find-ip-address) and then edited my router to forward traffic from the **public** IP (my house) to the **Pi** (the internal network IP).
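fail2ban itself is a one-line install on Raspbian, and its default SSH jail works out of the box. Local tweaks belong in `jail.local` rather than the stock config:

```bash
sudo apt-get install fail2ban
# Optional: copy the stock config to jail.local for local overrides
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
```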

I was able to use my phone on 4G to go directly to the public IP address and see public photos in my library, so mission accomplished for tonight. The next steps are to do some DNS forwarding so you don't have to memorize an IP address to see pictures. Some other considerations are to get a static IP so those DNS records don't get messed up, but I have to work up to that call to Comcast.

---

_Featured image is Looking Through the Lens flickr photo by my friend, Alan Levine, shared into the public domain using Creative Commons Public Domain Dedication (CC0)_

Reviving the Mac Mini

My wife bought a Mac Mini toward the end of college that has been sitting in our basement pretty much since we went to Korea in 2009. I've been wanting to do something with it for a while and with Flickr changing its accounts, now seemed like a good time.

I was looking for photo sharing alternatives to Flickr, mostly because I can't afford a pro membership and I'm already over the 1000 photo limit being imposed in January. I came across [Lychee](https://github.com/LycheeOrg/Lychee/), which is essentially single-user, self-hosted photo management. (Check out their [demo site](https://ld.electerious.com/) - it's pretty impressive). My home photo collection could also stand being backed up somewhere more consistently, so my goal is to convert the mini into a self-hosted photo sharing site so I can continue to post out on the web and have a backup at home.

*cracks knuckles*

I set up in the dining room and got started.

I have to say, it was pretty amazing plugging in this computer, which hasn't seen use in nearly a decade, and watching it boot up as if I had used it yesterday.

Macs have [long had web hosting built right in](http://www.macinstruct.com/node/112). Apache and PHP are included by default and it's easy to install MySQL for the database. I was hoping to go the easy route and just use the default tools. LOL.

Lychee requires PHP 5.5+. The mini (late 2006 model) has PHP 4.4 and Apache 1.3 installed. No good. I started searching for [ways to upgrade both](https://jeromejaglale.com/doc/mac/upgrade_php5_tiger), but the recommended site with ported versions for old copies doesn't exist anymore.

So, I grabbed another Mac for more efficient Googling. There was also beer.

The best option, I think, is to boot into Linux rather than OSX 10.4. So, I started researching Debian distributions that would work on older hardware. My plan was to wipe the entire hard drive and dedicate it to server resources. When I landed on the Debian wiki, they had a page specifically for older, Intel-based Macs. This line caught my eye:

> The oldest Mini (macmini1,1) is typically the most problematic, due to bugs in its firmware. When booting off CD/DVD, if there is more than one El Torito boot record on the disc then the firmware gets confused. It tries to offer a choice of the boot options, but it locks up instead.

That's not good. I have two choices: I can partition the drive to prevent losing the entire machine, or I can go for it and hope that the OSX recovery DVD I have in the basement still works. (I'll probably partition, just to be safe.)

Luckily, two lines later, the Debian wiki notes that specific builds are now available which include only _one_ boot record, which solves the problem. [A quick 300MB download of the Mac-specific distribution](https://cdimage.debian.org/pub/debian-cd/current/amd64/iso-cd/) later and I'm ready to burn the disk image to a USB drive with [Etcher](https://github.com/balena-io/etcher).

Next steps are to actually do the Debian install.

Reading Summaries

I've decided to fire up a new, static website to reflect on books I read this year. In all honesty, much of this is being prompted by my grad school reading, but my reading list is also expanding now that classes are done, and putting longer pieces together in response has helped.

Anyways, I have two up right now:

1. [Deep Learning](https://sum.ohheybrian.com/books/deep-learning.html) by Michael Fullan, Joanne Quinn, and Joanne McEachen
2. [Poor Students, Richer Teaching](https://sum.ohheybrian.com/books/poor-rich.html) by Eric Jensen.

All books will be listed on the homepage, [sum.ohheybrian.com](https://sum.ohheybrian.com).

Killing Pico (and Micro)

Earlier this year, I wrote up a really bare-bones blogging system I called [Pico](https://blog.ohheybrian.com/2018/04/pico-a-tiny-blog/). It's a Python app that reads plaintext files and publishes them to a website.

Well, something in serverland changed and now they're not loading. My experience with server configuration is nil, so I'm going to proclaim those two sites dead. It makes me a little sad, but they were also just proof of concept.

[The code lives on GitHub](https://github.com/bennettscience/pico). If you're someone who likes to poke, give a poke over there.

---
_Featured image is [Cemetery](https://flickr.com/photos/hpeake/37430662781 "Cemetery") flickr photo by [Fenrir Wolfy](https://flickr.com/people/hpeake) shared under a [Creative Commons (BY-NC-ND) license](https://creativecommons.org/licenses/by-nc-nd/2.0/)_

Pico: A Tiny Blog

More of my life is run in plain text. I don’t really use word processors any more (other than Google Docs for work) because they’re heavy and not really compatible across various devices. So, I’ve gotten in the habit of writing in a text editor (Atom, at the moment) and syncing across devices with Git or Dropbox, depending on the circumstances.

I’ve also been trying to do more with Python rather than relying on JavaScript. I don’t always have an Internet connection, and you don’t need a connection to be productive with Python.

The third element in this perfect storm was looking at my site access logs. I moved this blog to Jekyll back in December mainly because I was running out of hosting space with WordPress. I don’t really know how to do crazy database stuff like Alan or Tom, nor do I need to. I also saw a ton of failed login attempts on my WordPress site (thank goodness for strong passwords), so I decided to go databaseless with the switch. It’s hard to hack plain HTML.

This is what birthed the idea for Pico.

Pico is a tiny blogging engine written in Python that reads plaintext files.

Jekyll is great for complex site structure, but it requires the site to be regenerated (pagination updated, categories and tags indexed, etc) each time you publish a post. What if you want something smaller?

Pico is written with Flask, a lightweight web framework for Python. The core is similar to Jekyll: a script reads data somewhere and renders it in templated HTML. The main difference is that Pico does that when the page loads, straight from text files, rather than rendering the site beforehand. The idea is that you can write a post somewhere with minimal markup and frills and have the browser do most of the work. Styles are minimal and the source files are kept to a bare minimum. It even has RSS!
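To make that concrete, here's a minimal sketch of the pattern (not Pico's actual source, which is on GitHub): a Flask route that reads a plaintext file and renders it at request time. The `posts/` layout and file format here are hypothetical.

```python
# Minimal render-on-request sketch, not Pico's actual source.
from pathlib import Path

from flask import Flask, abort, render_template_string

app = Flask(__name__)

# Bare-bones page template just for illustration.
TEMPLATE = """<!doctype html>
<title>{{ title }}</title>
<h1>{{ title }}</h1>
<pre>{{ body }}</pre>"""

@app.route("/post/<name>")
def show_post(name):
    # Hypothetical layout: plaintext posts in a posts/ folder,
    # first line is the title, the rest is the body.
    source = Path("posts") / (name + ".txt")
    if not source.exists():
        abort(404)
    title, _, body = source.read_text().partition("\n")
    return render_template_string(TEMPLATE, title=title, body=body)

if __name__ == "__main__":
    app.run()
```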

You can see a demo of the site if you’re curious; the source and some of the technical information are on GitHub.


DSC_0146 flickr photo by bennettscience shared under a Creative Commons (BY-NC-SA) license

Slides Tweeter Update 2

A short update to my post about tweeting Google Slides presentations after some work this weekend.

- Working AddOn menu prototyped.
- Check if a user is logged into Twitter on launch. If not, log them in.
- The user can append a hashtag to each slide. This is stored in each presentation until the user changes the input.
- The custom player now launches in a pared-down popup rather than a new tab.
- Scrolling enabled/disabled.
- Each slide is posted on advance (slide 1 posted when you move to slide 2) to buy a little time (still laggy).


To do:

1. Improve the connect-to-Twitter flow.
2. Work to improve the tweet posting time.
3. Confirmation popup when a tweet is posted successfully.
4. Error handling when a tweet fails to post for some reason.

Badging Patterns

Some more, mostly unfiltered thoughts on badging programs run at the K12 level. (Initial thoughts for some background from back in August.)

- Articles and websites announcing badge initiatives at K12 peaked in 2014-2015. I haven’t found many articles from the last two years.
- Many (seem to have) started with schools that had a high level of teacher buy-in for PD to begin with. Building the drive for development took place before badges were introduced.
- Most of the programs started as a way to (seemingly) expose teachers to different software and programs they can use.
- Very few of the programs required evidence of implementation alongside reflection on implementation. Most implementation evidence was photos or videos of teachers using the app/program/thing with students.
- No site talks about benefits for completion other than being given a [adjective] digital sticker!

I’m not convinced badging/credentialing is a bust. I’m more convinced that programs that offer long-lasting value for teaching staff are elusive and take careful planning. It’s also apparent that consistent implementation through support and updated offerings is difficult. Having a staff who is able to meet the shifting needs of a district over multiple years is key. It’s also going to be important to have a very clear mechanism for evaluation of change in instruction because that’s the component that benefits students.

_Featured image is by Melinda Martin-Khan on Unsplash_

Parsing a JSON log feed with Python

I have several Google Sheets doing several things on their own through Google Apps Script. I’ve started to make it a habit that each action is logged to a separate, isolated spreadsheet so I can pop in and look for error messages in one place rather than several.

This poses a small problem. I have to actually remember to open that sheet. Usually, something goes wrong, and then I remember to check the logs. I wanted to have something more up to date that I could glance at without too much effort.

You can get Google Sheet data as JSON which is handy in a number of contexts (here and here are two examples from my own work). It’s not as straightforward as tagging .json on the end of the URL (though that would be sweet) but the process isn’t hard. To get the data, this post details how to publish your sheet and find the feed.

Once the dataset was live online and updating regularly, I needed to decide how to get it. I use GeekTool on my desktop, so I decided to use a Python script and the Requests library to gather and process the feed.
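Here's a minimal sketch of that approach (not my exact script). The feed URL is a placeholder for wherever your published sheet's JSON lives, and the `feed -> entry` shape is an assumption about the payload:

```python
# Sketch: fetch a published-sheet JSON feed and print the last five
# entries. FEED_URL is a placeholder, and the feed/entry structure
# below is an assumption about the payload shape.
import requests

FEED_URL = "https://example.com/your-published-sheet.json"  # hypothetical

resp = requests.get(FEED_URL, timeout=10)
resp.raise_for_status()
data = resp.json()

entries = data.get("feed", {}).get("entry", [])
for entry in entries[-5:]:
    print(entry)
```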

I put this into a Geeklet on my desktop and voila!

Give it a try with your own sheet. You can run it in your terminal to get a printout of the last 5 entries of the sheet. The JSON output from Google is really weird, so it helps to put it into a prettifier to make it more readable before parsing keys.

What did I miss? What would you do differently?

Featured image, Logs, flickr photo by CIFOR shared under a Creative Commons (BY-NC-ND) license

Getting Busy with todo.txt

Keeping a todo list has always been a struggle for me. I have a notebook, I have some stuff in Google Keep, I’ve tried Google Tasks, Any.do, Remember the Milk, and Wunderlist. I’ve tried apps with goal setting, apps with reminders, and apps with synced web and phone apps.

My main problem is that it’s just another window to have open at any given point. I run a minimal desktop and dock – I don’t like windows all over the place with a dock that needs two monitors to show everything.

Robert Talbert tweeted about todo.txt the other day and I fell in love. Instantly.

I work in code and the command line a lot. I usually have the Mac Terminal open along with Atom. I hop between projects and use key commands because it’s faster than moving my hands to and from the trackpad. Todo.txt is a command-line script which manages a text file called (wait for it) todo.txt. It also has a done.txt file to hold things that are finished.

I won’t go into the syntax here, but essentially, each item has a priority, a title, and a tag or two. Putting one task per line in the text file creates one new entry on your list.

The command line just makes it faster to work with because it gives built-in commands, much like the Git architecture, to create, edit, or mark tasks as done. For example, using `todo add "(A) filter schools by content areas +curriculum"` creates an item in the text file. Opening the file shows a single line. But, using the command line, it shows a sorted list of items based on priority.

Marking items is easy, too. Put an `x` in front of the task and – hey presto! – it’s marked as finished. Using the command line, `todo do` and the item number marks it as done and moves it to your done.txt file.
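Put together, the basic flow looks something like this (the CLI ships as `todo.sh`; I'm assuming an alias of `todo` like the one used above):

```bash
todo.sh add '(A) filter schools by content areas +curriculum'  # new prioritized task
todo.sh ls                                                     # list, sorted by priority
todo.sh do 1                                                   # mark item 1 done -> done.txt
```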

Keeping the file in Dropbox means it’s tiny and updates instantly. Any computer I use is synced to Dropbox, hence my list is synced. Pull up the Terminal, and I’m ready to go.

Then I found GeekTool.

Getting rid of window clutter even more, GeekTool takes scripts (called “geeklets”) and lays them on your desktop in modules you customize. You can set the refresh rate as well as define groups of geeklets for different setups. For instance, I have a geeklet set to use when I’m on my laptop and one for when I’m hooked to an external monitor.

Part of my GeekTool setup is displaying my task list right on my desktop. Having a color coded list right there, all the time, is really helpful. And I’m not promoting an unhealthy work/life ratio because I can turn the geeklet off whenever I want (namely, on the weekend).

In the end, I’m really happy with this system because I control it. The setup isn’t as easy as downloading an app, but I know where my data is and I know how it’s being used. It’s a little thing, but some of the best things in life are the little things.

All images in the post are mine. There are a lot of great tutorials online for todo.txt as well as GeekTool.