Badging Patterns

Some more, mostly unfiltered thoughts on badging programs run at the K12 level. (See my initial thoughts from back in August for background.)

– Articles and websites announcing K12 badge initiatives peaked in 2014-2015. I haven’t found many articles from the last two years.

– Many (seem to have) started with schools that already had a high level of teacher buy-in for PD. Building the drive for development took place before badges were introduced.

– Most of the programs started as a way to (seemingly) expose teachers to different software and programs they can use.

– Very few of the programs required evidence of implementation alongside reflection on that implementation. Most evidence consisted of photos or videos of the teacher using the app/program/thing with students.

– No site talks about benefits for completion other than being given a [adjective] digital sticker!

I’m not convinced badging/credentialing is a bust. I’m more convinced that programs offering long-lasting value for teaching staff are elusive and take careful planning. It’s also apparent that consistent implementation, through support and updated offerings, is difficult. Having a staff that can meet the shifting needs of a district over multiple years is key. It’s also going to be important to have a very clear mechanism for evaluating change in instruction, because that’s the component that benefits students.

_Featured image by Melinda Martin-Khan on Unsplash_

Parsing a JSON log feed with Python

I have several Google Sheets doing several things on their own through Google Apps Script. I’ve started to make it a habit that each action is logged to a separate, isolated spreadsheet so I can pop in and look for error messages in one place rather than several.

This poses a small problem. I have to actually remember to open that sheet. Usually, something goes wrong, and then I remember to check the logs. I wanted to have something more up to date that I could glance at without too much effort.

You can get Google Sheet data as JSON which is handy in a number of contexts (here and here are two examples from my own work). It’s not as straightforward as tagging .json on the end of the URL (though that would be sweet) but the process isn’t hard. To get the data, this post details how to publish your sheet and find the feed.

Once the dataset was live online and updating regularly, I needed to decide how to get it. I use GeekTool on my desktop, so I decided to use a Python script and the Requests library to gather and process the feed.
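
Something like this sketch does the job (the sheet key is a placeholder, and the gsx$ keys depend on your own column headers):

```python
import requests

# Published-sheet JSON feed; the key is a placeholder for your own sheet's.
FEED = ("https://spreadsheets.google.com/feeds/list/"
        "YOUR_SHEET_KEY/od6/public/values?alt=json")

def show_latest(n=5):
    entries = requests.get(FEED).json()["feed"]["entry"]
    # Each sheet row is an entry; cell values hide under gsx$<header>["$t"]
    for entry in entries[-n:]:
        stamp = entry["gsx$timestamp"]["$t"]    # column names are assumptions
        message = entry["gsx$message"]["$t"]
        print(stamp + " | " + message)

if __name__ == "__main__":
    show_latest()
```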

I put this into a Geeklet on my desktop and voila!

Give it a try with your own sheet. You can run it in your terminal to get a printout of the last 5 entries of the sheet. The JSON output from Google is really weird, so it helps to put it into a prettifier to make it more readable before parsing keys.

What did I miss? What would you do differently?

Featured image, Logs, flickr photo by CIFOR shared under a Creative Commons (BY-NC-ND) license

Getting Busy with todo.txt

Keeping a todo list has always been a struggle for me. I have a notebook, I have some stuff in Google Keep, I’ve tried Google Tasks, Any.do, Remember the Milk, and Wunderlist. I’ve tried apps with goal setting, apps with reminders, and apps with synced web and phone versions.

My main problem is that it’s just another window to have open at any given point. I run a minimal desktop and dock – I don’t like windows all over the place with a dock that needs two monitors to show everything.

Robert Talbert tweeted about todo.txt the other day and I fell in love. Instantly.

I work in code and the command line a lot. I usually have the Mac Terminal open along with Atom. I hop between projects and use key commands because it’s faster than moving my hands to and from the trackpad. Todo.txt is a command-line script which manages a text file called (wait for it) todo.txt. It also has a done.txt file to hold things that are finished.

I won’t go into the syntax here, but essentially, each item has a priority, a title, and a tag or two. Putting one task per line in the text file creates one new entry on your list.

The command line just makes it faster to work with because it gives built-in commands, much like Git, to create, edit, or mark tasks as done. For example, todo add "(A) filter schools by content areas +curriculum" creates an item in the text file. Opening the file shows a single new line, but listing from the command line shows the items sorted by priority.

Marking items is easy, too. Put an “x” in front of the task and – hey presto! – it’s marked as finished. Using the command line, todo do and the item number marks it as done and moves it to your done.txt file.
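
A quick session looks something like this (a sketch; I’m assuming todo.sh is aliased to todo, and the exact output formatting may differ by version):

```
$ todo add "(A) filter schools by content areas +curriculum"
$ todo add "(B) update GeekTool geeklets +desktop"
$ todo ls
1 (A) filter schools by content areas +curriculum
2 (B) update GeekTool geeklets +desktop
--
TODO: 2 of 2 tasks shown
$ todo do 1
```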

The text files are tiny, so keeping them in Dropbox means they update instantly. Any computer I use is synced to Dropbox, hence my list is synced. Pull up the Terminal, and I’m ready to go.

Then I found GeekTool.

GeekTool cuts window clutter even further: it takes scripts (called "geeklets") and lays them on your desktop in modules you customize. You can set the refresh rate as well as define groups of geeklets for different setups. For instance, I have a geeklet set to use when I’m on my laptop and one for when I’m hooked to an external monitor.

Part of my GeekTool setup is displaying my task list right on my desktop. Having a color-coded list right there, all the time, is really helpful. And I’m not promoting an unhealthy work/life ratio because I can turn the geeklet off whenever I want (namely, on the weekend).

In the end, I’m really happy with this system because I control it. The setup isn’t as easy as downloading an app, but I know where my data is and I know how it’s being used. It’s a little thing, but some of the best things in life are the little things.

All images in the post are mine. There are a lot of great tutorials online for todo.txt as well as GeekTool.

Update to Endnote Generator

If you’ve used Endnote Generator in your Docs, thanks.

I posted an update this evening which retains the formatting of your footnotes. So, if you have text formatting (bold, underline, etc.) or links, those are still there when the endnotes are added.

You can grab the Addon from the Docs Addon menu. You can also take a look at the source (or contribute!) and test it out on a single document on GitHub.

Badges? We Might Want to Think About Stinking Badges…

Some initial, mostly unfiltered thoughts on digital badges from a professional development perspective:

  1. Most badging programs are too simple. They focus on rote skill and don’t have a clear pathway for building competency on a holistic level.

  2. Buy-in, like any other initiative, is extremely important. Badging isn’t enticing on its own. And being enticing for the sake of being enticing is a really bad reason to tackle a large project.

  3. In designing a meaningful program, outlining desired outcomes needs to happen before competencies are even discussed. Aligning tasks and work for the participants will only happen if you know what you want them to get out of the program.

  4. Credentialing has to have weight behind it. This comes either from the organization or from the privileges and benefits that come with earning the credential. This can certainly happen at the department level, but it becomes more meaningful if the institution shifts to recognize micro-credentials.

  5. Displaying the credential needs to be simple.

I’m not entirely sure where this is going to go, but there it is.

Some helpful reading:

Developing a Higher Education Badging Initiative

Digital Badges as Curricular Building Blocks

Open Badges specifications

2017 Project: Photo Year

Pictures launch stories. I take a lot of photos and, like most people, they stay on my phone. I used to use Instagram, but I’m not happy with their terms of use or their limitations on photos (ever tried to embed an image? It’s a nightmare). Maybe I’m an idealist and this is a funk, but whatever.

I’m posting to Flickr more and more regularly and I decided to make a small project for 2017. I’m going to tag a photo each day that will push it and the description over to a new blog I’m calling The Photoyear. It’ll syndicate that photo (technical stuff below) and turn it into a blog post. You can subscribe via RSS over there if you’d like. Sometimes, I’ll cross post it here, but that will be a place for pictures and their stories for the next year.

Thanks, as always, for reading.


Setting up Flickr syndication

I started by writing more descriptive…descriptions…with my photos. That led to the idea of running a blog entirely on photos – content and all. Since I’m already doing that on Flickr, it made sense to try and syndicate it back to a blog of some kind. Alan Levine is the king of all things RSS. Some of his posts led to working solutions.

Recently, Alan posted where to find the Flickr RSS feed for public photos. Instead of grabbing everything I post (often more than one photo per day) I wanted to grab just one. I was going to manage that by adding photos to an album, but you can’t do that anymore (not with an RSS URL, anyways). Sad trombone. So, I stick to tags.

I fired up a subdomain – photoyear.ohheybrian.com – and installed WordPress and the FeedWordPress plugin to syndicate posts from any RSS feed. Out of the box, FeedWordPress grabs only the tagged photos from the XML feed:
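
For reference, Flickr’s public photo feed takes a user ID and tag list as parameters, so FeedWordPress watches a URL along these lines (the ID is a placeholder):

```
https://api.flickr.com/services/feeds/photos_public.gne?id=YOUR_USER_ID&tags=photoyear&format=rss2
```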

The immediate problem is the size of the embedded image. The src attribute is https://farm1.staticflickr.com/543/31231759933_ba613deec1_m.jpg; the _m suffix means the small version of the image is embedded, which is tiny. I had to get brave and check out a PHP solution.

In functions.php, I added a new filter (thanks to the suggestion of…you guessed it…Alan) which simply changes the _m.jpg to _b.jpg for a nice, full-width image.
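
As a sketch, using WordPress’s stock the_content hook (rather than whatever the exact FeedWordPress filter was), the swap is a one-liner:

```php
// In functions.php: swap Flickr's small (_m) image for the large (_b)
// version whenever post content is rendered.
add_filter('the_content', function ($content) {
    return str_replace('_m.jpg', '_b.jpg', $content);
});
```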

I’m still not totally happy with my PHP solution because each post comes with "a new photo by bennettscience" appended at the top. I tried using a preg_replace function to find the string and remove it (it’s always the same), but I didn’t have any luck. If you have a suggestion, I’d love some help figuring that snippet out.

Anyways, all said and done, you can check out The Photoyear now and then to see what’s up. I’m looking forward to reading back over everything as we move through 2017.

Introducing: Endnote Generator Add On

A year ago, I posted a script which converts footnotes in a Google Doc into endnotes. I’ve gotten great comments and feedback and have made the standalone script better, which is still available.

The Endnote Generator Add-On is now available in the Chrome Web Store for Docs. Install it once and you’ll be able to create endnotes from the footnotes you’ve already inserted in your doc.

There are a couple of known issues (pictures with a footnote aren’t included yet), but if you run into anything, leave a note here or on GitHub.

Check out the Endnote Generator.

Tracking Missing IDs with a Google Sheet

Like many schools, our students are asked to wear their student IDs when they come into the building each day. During the day, they don’t need to have them on…just in the morning so we can make sure people coming in are part of our student body. If a student doesn’t have their ID, we issue a temporary one and track how many times they come to school without it. This process used to be done by hand each morning. A queue would form and a teacher would write down ID numbers (they all have them memorized) and names. This caused several problems:

  1. If a student gave a fake ID number, office staff wouldn’t know until much later in the day.

  2. The paper copy of the ID list was given to a secretary to transcribe into a spreadsheet.

  3. Transcribing meant looking up the ID in our SIS and then manually entering the number of times the student didn’t have their ID.

  4. When benchmarks were hit (5, 8, 11), disciplinary action was issued and followed up on by an assistant principal.

I spoke with the head secretary and we worked out a custom Google Sheet to do all of this automatically. Now, the duty station is equipped with a Chromebook so the teacher issuing IDs can quickly check the validity of a given ID and let the sheet handle the tally work. This (mega) post outlines the sheet structure and the custom code used to do the work.

The template and source are linked at the bottom of the post.

The Sheet

The Google Sheet is split into four tabs:

  1. Raw: Blank sheet with Timestamp, ID, and Name.

  2. Lookup: database of student ID numbers and the associated name/grade.

  3. Aggregate: ID, Name, Grade, Count, Cons 1, Cons 2, Cons 3.

  4. Daily: Filter, ID, Name, Grade.

Raw Input

This is used by the duty station. The date and student ID numbers are entered as students are given temporary bands. The Name column is populated by an ID lookup using =INDEX(lookup!B:B, MATCH(B2, lookup!A:A)). Both VLOOKUP and INDEX/MATCH are notorious for mistakes with large data sets when the keys aren’t sorted, which is why the lookup tab is sorted by ID number and not alphabetically.

Aggregate

The same student can have multiple dates of entry, so this sheet aggregates by student ID. Using UNIQUE, it pulls each distinct ID into a single column. Then, another INDEX/MATCH populates the name and grade. To count the number of missing-ID instances, a COUNTIF counts the number of times the ID shows up on the Raw sheet.
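
With the tab layouts above, the aggregate formulas look something like this (ranges are assumptions):

```
A2: =UNIQUE(raw!B2:B)                          // one row per distinct student ID
B2: =INDEX(lookup!B:B, MATCH(A2, lookup!A:A))  // name from the Lookup tab
D2: =COUNTIF(raw!B:B, A2)                      // times the ID appears on Raw
```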

Scripting

At this point, much of the work of the sheet moved over to Apps Script. We needed some dynamic results, and using a script to sort through the volume of information was much faster and more reliable than using regular Sheets functions. I’m breaking the code into chunks to better explain the purpose of each. In the actual sheet, all functions live in the same file.

Function 1: Globals and menu item

I use a couple of global variables (not super efficiently, though). I also create a custom menu to run sheet functions. The sketch below takes care of creating those items. I have mine at the top of the script, but they can go anywhere.
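
A minimal sketch of that setup (sheet and menu names are assumptions):

```javascript
// Globals: handles on the spreadsheet and the individual tabs.
var SS = SpreadsheetApp.getActiveSpreadsheet();
var RAW = SS.getSheetByName('Raw');
var AGG = SS.getSheetByName('Aggregate');
var DAILY = SS.getSheetByName('Daily');

// Build a custom menu when the sheet opens.
function onOpen() {
  SpreadsheetApp.getUi()
      .createMenu('ID Tracking')
      .addItem('Add missing-ID dates', 'addDates')
      .addItem('Pull daily list', 'pullDailyList')
      .addToUi();
}
```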

Function 2: Listing dates a student was missing the ID

For reporting, it was helpful to know which dates a student was missing an ID. Rather than adding n columns to fill with dates, a script looks up the student ID number and adds a note to the cell listing the dates.
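
A sketch of that function (column positions follow the tab layouts above, and the date format is an assumption):

```javascript
// Attach the dates each student was missing an ID as a note on the
// Count cell of their Aggregate row.
function addDates() {
  var raw = RAW.getDataRange().getValues();   // [Timestamp, ID, Name]
  var agg = AGG.getDataRange().getValues();   // [ID, Name, Grade, Count, ...]
  for (var i = 1; i < agg.length; i++) {
    var dates = [];
    for (var j = 1; j < raw.length; j++) {
      if (raw[j][1] == agg[i][0]) {           // same student ID
        dates.push(Utilities.formatDate(
            new Date(raw[j][0]), Session.getScriptTimeZone(), 'MM/dd/yyyy'));
      }
    }
    AGG.getRange(i + 1, 4).setNote(dates.join('\n'));  // column D = Count
  }
}
```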

Function 3: Populating a daily list of students

Each day, the secretary checks for students who have hit a benchmark: 5, 8, or 11 missing IDs. Searching through the aggregate list isn’t feasible, so a script does the search and then returns the results dynamically.

This checks two conditions: A) the number of missing IDs is equal to or greater than the target, and B) there is no consequence filled in the appropriate column. If the consequence has already been assigned, it’s in the SIS and doesn’t need to be entered by the secretary.
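
In sketch form, with the benchmark chosen from a prompt (the consequence-column mapping is an assumption):

```javascript
// Build the daily list: count >= target AND no consequence recorded yet.
function pullDailyList() {
  var ui = SpreadsheetApp.getUi();
  var target = Number(ui.prompt('Which benchmark? (5, 8, or 11)').getResponseText());
  var consCol = { 5: 4, 8: 5, 11: 6 };   // Cons 1/2/3 array indexes by target
  var agg = AGG.getDataRange().getValues();
  var results = [];
  for (var i = 1; i < agg.length; i++) {
    if (agg[i][3] >= target && agg[i][consCol[target]] === '') {
      results.push([agg[i][0], agg[i][1], agg[i][2]]);  // ID, Name, Grade
    }
  }
  writeDaily(results, target, consCol[target]);   // see Function 4
}
```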

Function 4: Assigning consequences

When the office staff pull the daily list, they go into our SIS and update the disciplinary action or other notes. They used to have to go back and document that consequence in the aggregate sheet as well. Now, rather than scrolling through the list, the appropriate consequence for the target is filled in when the list is generated. The daily list isn’t cleared until the script is run again with a new target.

If, by chance, there are no students to assign a consequence to, a popup is shown to let the user know that no students meet the criteria.
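
A matching sketch for the write-back (the "Assigned" stamp and column positions are assumptions):

```javascript
// Clear the old daily list, warn if nothing qualifies, then write the new
// list and stamp the consequence back onto the Aggregate tab.
function writeDaily(results, target, col) {
  DAILY.getRange(2, 2, DAILY.getMaxRows() - 1, 3).clearContent();
  if (results.length === 0) {
    SpreadsheetApp.getUi().alert('No students meet the criteria for ' + target + ' missing IDs.');
    return;
  }
  DAILY.getRange(2, 2, results.length, 3).setValues(results);
  var agg = AGG.getDataRange().getValues();
  for (var i = 1; i < agg.length; i++) {
    for (var j = 0; j < results.length; j++) {
      if (agg[i][0] == results[j][0]) {
        AGG.getRange(i + 1, col + 1).setValue('Assigned');  // 1-indexed column
      }
    }
  }
}
```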

The Result

In the end, we’ve removed only two steps from the administrative process, but they were the most time-intensive steps. Rather than looking each student up and then marking up a spreadsheet, the staff member simply pulls the list of students for that day.

Computers are great at repetitive tasks, so let’s use them to do those tasks. There’s definitely some optimization that can be done, especially in the last two functions as they pass those arrays around like a cold. If you make updates, please comment and share back. The code is hosted on GitHub, so you can fork and update as much as you’d like.

You can look at a copy of the template or just make a copy for yourself and start poking away.

The entire source (not broken up) is hosted on GitHub Gists.

Turning an Old Laptop into a Video Kiosk

My father-in-law came to me with an interesting idea. He wanted to create a video kiosk for our church which would play videos on the different mission organizations we’re involved with. The wall – previously – had photos and text about each missionary or organization, but he wanted to revamp.

We initially tried to use PowerPoint and a custom keyboard to jump to different slides, but maintaining and updating that system wasn’t going to be very elegant or user friendly. So, about a year later, I had an “oh, duh” moment and realized we could do it as a static, locally-hosted website. It would be easy to build and even easier to maintain, so that’s what we did.

In this post, I’ll talk about the hardware and software we used to get it finished. There are still some things to hammer out, but the bones of the project are done and tested successfully, so it seems like a good time to document it.

The Hardware

Our initial idea was to use a Raspberry Pi 3 to do everything because of its low price point and small size. Unfortunately, the RPi, in general, doesn’t handle web video too well. I looked into using the kweb minimal web browser, which hooks into the native OMX video player on the Pi, but you can’t customize the look, and playing full-screen video left menus all over the place. In the end, it was turning into a huge job to make it look polished, so we moved away from the Pi.

My brother-in-law had an old HP laptop that had died when he tried to update it to Windows 8 (insert snarky Microsoft joke here). So, he donated it to the cause.

I wiped the hard drive and installed Ubuntu Linux 16.04 LTS. It’s pretty lightweight and gets consistent updates. It’s also really user-friendly in case there is a problem with the laptop, so one of the office assistants can troubleshoot if I can’t make it. I also chose to stick with Linux because I can use SSH to log in via my Chromebook on Sunday mornings and run updates remotely if I need to.

flickr photo shared by bennettscience under a Creative Commons ( BY ) license

You could definitely argue that running a full desktop environment for a simple web kiosk slows the machine and introduces a bunch of variables that could cause things to go wrong, which is 100% accurate. OMG! Ubuntu! has a good article on how to either convert a full machine to a dedicated kiosk or how to build one from scratch, but since I didn’t find the article until we were almost done, I decided not to go back and rework everything.

For user interaction, we grabbed an Intuos Art Small tablet from Wacom for $100. It’s seated in a wall mount to lock it in place and hide the wires. Essentially, it’s a giant touchpad.

flickr photo shared by bennettscience under a Creative Commons ( BY ) license

Finally, we bought a 55″ wall mounted TV. The laptop had an HDMI port, so that took care of high-definition video and audio.

The Software

I built the page with plain HTML and JavaScript. It’s currently being hosted locally on the machine to ensure smooth video with no buffering. I’m planning on testing the bandwidth over ethernet next time I’m at church because we ran into issues over wifi. If I can get a good download rate, I’ll switch the site over to GitHub Pages so I can update it remotely.

Source

The HTML and CSS are pretty standard. I didn’t want a ton of bloat, so I coded everything from scratch. You can take a look at the markup on GitHub. There’s also a live example so you can see how it’s rendered.

First, this is a hallway display. There will probably be times when no one is watching videos, in which case I want to avoid burning an image into the screen of the TV. I added a small jQuery function to bring up a prompt if no one touches the trackpad for 30 seconds. This also turned out to be handy because a lot of people walked up to the TV and tried touching it directly rather than using the trackpad input.
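
The timer itself is a few lines of jQuery (a sketch; #prompt is an assumed overlay element):

```javascript
// Show the "touch the pad to begin" overlay after 30 s of inactivity;
// any movement or click on the trackpad hides it and restarts the clock.
var idleTimer;
function resetIdle() {
  clearTimeout(idleTimer);
  $('#prompt').hide();
  idleTimer = setTimeout(function () {
    $('#prompt').show();
  }, 30000);
}
$(document).on('mousemove click', resetIdle);
$(resetIdle);   // kick things off on page load
```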

To play the videos, I wanted each container to reference a hidden video div. I use jQuery to handle the element selection and JavaScript to pay attention to the play state. When a user clicks the tile, a fullscreen video starts playing. There is no keyboard for them to quit out of the video, so I don’t worry about keypress events. If they jump out of fullscreen using the playback controls, it saves the video location.
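
Here’s a sketch of the playback wiring (the tile class and data attribute are assumptions, and older Chrome builds need the webkit-prefixed fullscreen calls):

```javascript
// Each visible tile points at a hidden <video> via a data-video attribute.
$('.tile').on('click', function () {
  var video = document.getElementById($(this).data('video'));
  var goFull = video.requestFullscreen || video.webkitRequestFullscreen;
  goFull.call(video);
  video.play();
});

// Leaving fullscreen via the playback controls pauses the video;
// currentTime is untouched, so the position is saved for next time.
$(document).on('fullscreenchange webkitfullscreenchange', function () {
  if (!document.fullscreenElement && !document.webkitFullscreenElement) {
    $('video').each(function () { this.pause(); });
  }
});
```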

Ubuntu tweaks

There were also some software tweaks I needed to make on the machine itself.

I wanted a standard user to log in automatically. So, I created a generic user on the system and dropped the source files onto the desktop (more on that in a minute). Theoretically, the user will never get out of Chrome because there’s no keyboard available. When the computer boots, it logs into the generic user right away.

Then, I edited the Startup Applications option. You can launch Chrome from the Terminal and you can specify which command to use in the settings. Using:

chromium-browser --kiosk [URL]

launches Chrome in the full screen kiosk mode and displays the website immediately after login.

The laptop is mounted on the wall behind the TV, and Ubuntu wasn’t recognizing the monitor when the lid was closed. There is a flag in /etc/systemd/logind.conf that handles a dock, but we weren’t using one. So, I set the HandleLidSwitch flag to ignore so the closed lid is ignored (thanks to this answer).
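
The relevant line ends up as:

```
# /etc/systemd/logind.conf
HandleLidSwitch=ignore
```

(Restart the systemd-logind service or reboot for the change to take effect.)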

Finally, because the laptop is mounted behind the TV on tracks with a padlock, it’s a pain to take it out to turn it on and off. I was able to automate the daily shutdown pretty easily by specifying a time using crontab -e (you have to be root to shut down). Startup was harder.

After some research, I found that most computers have something called a Real Time Clock (RTC) synced with UTC. It can be used to set an alarm to wake the computer. You can test this by clearing the alarm with:

echo 0 | sudo tee /sys/class/rtc/rtc0/wakealarm

and then setting it ten minutes out with (piping through sudo tee, because a plain shell redirect wouldn’t run as root):

date '+%s' -d '+ 10 minutes' | sudo tee /sys/class/rtc/rtc0/wakealarm

Now that I knew the computer would turn itself back on, I could create a simple bash script to run with cron that would handle startup and shutdown daily:
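
Mine ended up along these lines (a sketch; the 7:00 AM wake time is an assumption, and the plain redirects are fine here because cron runs the script as root):

```bash
#!/bin/bash
# shutwake: arm the RTC to wake the machine tomorrow morning, then power off.
echo 0 > /sys/class/rtc/rtc0/wakealarm                      # clear any stale alarm
date '+%s' -d 'tomorrow 07:00' > /sys/class/rtc/rtc0/wakealarm
shutdown -h now
```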

I stored the file in /usr/bin and made it executable with chmod +x.

Then, I edited the root crontab (sudo crontab -e) to run the script daily. Note that the cron entry specifies the shutdown time: at 8 PM every day, the computer shuts down, and the shutwake script resets the RTC alarm for the next morning:

0 20 * * * /usr/bin/shutwake

cron can be picky, so if you need more help, this article helped a lot.

The last thing we needed to work out was muting the audio during sermons so someone didn’t crank up a video in the middle of church. The video will still play with captions (accessibility FTW), and muting the audio turned out to be not too bad. I can toggle the PulseAudio driver in Ubuntu with a pair of cron jobs that run on Sundays at 9:00 and 12:00 to turn the audio off and back on:

0 9 * * 0 amixer -q -D pulse sset Master toggle
0 12 * * 0 amixer -q -D pulse sset Master toggle

flickr photo shared by bennettscience under a Creative Commons ( BY ) license

In the end, I’m really happy with how it turned out. Remote management and simple setup led to a really effective display for the wall.

If you want more specifics about software or construction, leave a comment.

Hacking Together an Auto-Tweeting Spreadsheet

A while back, I had looked at automating tweets from a Google spreadsheet to reduce the insane number of clicks it takes to schedule one in TweetDeck or HootSuite (5 clicks? Really?). I hit some roadblocks and let it slide because, in the long run, it wasn’t really important to me. More of a fun experiment.

I jumped back into it a week or so back to try and solve the last little problems. [I was able to create a script](https://docs.google.com/spreadsheets/d/1kbFIfYGm2sGQJ5TistkaxMzqUqQ8HbgUvJUM8MohOb4/edit?usp=sharing) which loops through a spreadsheet, checking the current date and whether or not the tweet has been sent. If those conditions are met (`TODAY` and `NOT SENT`), it will automatically post the tweet.

The sheet, like all the other Twitter sheets I’ve used, is run with [Martin Hawksey’s](http://twitter.com/mhawksey) fantastic TwtrService library. It allows you to authenticate and tweet right from Google Apps Script and saves a _ton_ of time.
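
The heart of my version is a loop like this sketch (the column order of date, tweet text, and sent flag is an assumption, and the exact TwtrService call is from memory, so check Martin's documentation):

```javascript
// A sketch: post anything dated today that hasn't been marked SENT.
function sendScheduledTweets() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var rows = sheet.getDataRange().getValues();
  var today = new Date().setHours(0, 0, 0, 0);
  for (var i = 1; i < rows.length; i++) {
    var rowDate = new Date(rows[i][0]).setHours(0, 0, 0, 0);
    if (rowDate === today && rows[i][2] !== 'SENT') {
      // TwtrService mirrors Twitter's REST API (call shape assumed).
      TwtrService.post('statuses/update', { status: rows[i][1] });
      sheet.getRange(i + 1, 3).setValue('SENT');
      // No break here, which is exactly why every tweet scheduled for
      // today goes out in one pass (the unsolved problem below).
    }
  }
}
```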

I ran into a problem that is [as-yet unsolved](https://stackoverflow.com/questions/37308911/pause-a-loop-to-wait-for-rest-api-response): I can’t get the sheet to stop after posting one tweet. So, if you have multiple tweets on a given day, it will send _all_ of them at once. That’s not good, especially if you’re promoting an event over a period of time. I’ve tried a number of solutions, but I can’t seem to find one that works. I’d love to hear if you’re able to take the source and tweak it to work.

In the meantime, Martin also took a (much more elegant) pass at the task. [His sheet is also available](https://docs.google.com/spreadsheets/d/10U7Rrr7lfbRS2A8QYRUWL8enlJfat75-QuGm7slKXRE/edit#gid=0) and works really well. The goal is the same, but his mechanics and implementation are much more refined and effective.

It’s a good example of multiple ways to skin a cat. I’m a novice coder (I tell people I know enough to break something) and he’s an expert doing all kinds of things. The great thing is, all of this code is open and available. I can make a copy of Martin’s page and dig into his solution. I learned a few tricks about checking for multiple conditions, which is what I was struggling with. I became better at scripting through my failure and his success.