Update to Endnote Generator

If you’ve used Endnote Generator in your Docs, thanks.

I posted an update this evening which will retain the formatting of your footnotes. So, if you have text formatting (bold, underline, etc.) or links, those are still there when the endnotes are added.

You can grab the Add-on from the Docs Add-ons menu. You can also take a look at the source on GitHub (or contribute!) and test it out on a single document.

Badges? We Might Want to Think About Stinking Badges…

Some initial, mostly unfiltered thoughts on digital badges from a professional development perspective:

  1. Most badging programs are too simple. They focus on rote skill and don’t have a clear pathway for building competency on a holistic level.

  2. Buy-in, like any other initiative, is extremely important. Badging isn't enticing on its own. And being enticing for the sake of being enticing is a really bad reason to tackle a large project.

  3. In designing a meaningful program, outlining desired outcomes needs to happen before competencies are even discussed. Aligning tasks and work for the participants will only happen if you know what you want them to get out of the program.

  4. Credentialing has to have weight behind it. This comes either from the organization or the privileges and benefits that come from earning the credential. This can be at the department level certainly, but becomes more meaningful if the institution shifts to recognize micro-credentials.

  5. Displaying the credential needs to be simple.

I’m not entirely sure where this is going to go, but there it is.

Some helpful reading:

Developing a Higher Education Badging Initiative

Digital Badges as Curricular Building Blocks

Open Badges specifications

2017 Project: Photo Year

Pictures launch stories. I take a lot of photos, and like most people's, they stay on my phone. I used to use Instagram, but I'm not happy with their terms of service or the limitations they put on photos (ever tried to embed an image? It's a nightmare). Maybe I'm an idealist and this is a funk, but whatever.

I'm posting to Flickr more and more regularly, and I decided to make a small project of it for 2017. Each day, I'll tag one photo, and that tag pushes the photo and its description over to a new blog I'm calling The Photoyear. It'll syndicate that photo (technical stuff below) and turn it into a blog post. You can subscribe via RSS over there if you'd like. Sometimes I'll cross-post here, but that will be a place for pictures and their stories for the next year.

Thanks, as always, for reading.


Setting up Flickr syndication

I started by writing more descriptive…descriptions…with my photos. That led to the idea of running a blog entirely on photos – content and all. Since I’m already doing that on Flickr, it made sense to try and syndicate it back to a blog of some kind. Alan Levine is the king of all things RSS. Some of his posts led to working solutions.

Recently, Alan posted where to find the Flickr RSS feed for public photos. Instead of grabbing everything I post (often more than one photo per day) I wanted to grab just one. I was going to manage that by adding photos to an album, but you can’t do that anymore (not with an RSS URL, anyways). Sad trombone. So, I stick to tags.

I fired up a subdomain, photoyear.ohheybrian.com, and installed WordPress with the FeedWordPress plugin to syndicate posts from any RSS feed. Running out of the box, it grabs the tagged item straight from the XML feed.

The immediate problem is the size of the embedded image. The src attribute is https://farm1.staticflickr.com/543/31231759933_ba613deec1_m.jpg, meaning the medium-sized image is embedded, which is tiny. I had to get brave and check out a PHP solution.

In functions.php, I added a new filter (thanks to the suggestion of…you guessed it…Alan) which simply changes the _m.jpg to _b.jpg for a nice, full-width image.
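The filter amounts to a one-line string swap on the syndicated content. A minimal sketch (the function name here is mine; the_content is WordPress's standard content hook):

// functions.php -- swap Flickr's medium image suffix for the large one
// in the syndicated post content. A sketch, not the exact gist.
function photoyear_big_images( $content ) {
    return str_replace( '_m.jpg', '_b.jpg', $content );
}
add_filter( 'the_content', 'photoyear_big_images' );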

I’m still not totally happy with my PHP solution because each post is going to come with, “a new photo by bennettscience” appended at the top. I tried using a preg_replace function to find the string and remove it (it’s always the same), but I didn’t have any luck. If you have a suggestion, I’d love some help figuring that snippet out.

Anyways, all said and done, you can check out The Photoyear now and then to see what’s up. I’m looking forward to reading back over everything as we move through 2017.

Introducing: Endnote Generator Add-On

A year ago, I posted a script which converts footnotes in a Google Doc into endnotes. I've gotten great comments and feedback and have improved the standalone script, which is still available.

The Endnote Generator Add-On is now available in the Chrome Web Store for Docs. Install it once and you’ll be able to create endnotes from the footnotes you’ve already inserted in your doc.

There are a couple of known issues (pictures with a footnote aren’t included yet), but if you run into anything, leave a note here or on GitHub.

Check out the Endnote Generator.

Tracking Missing IDs with a Google Sheet

Like many schools, our students are asked to wear their student IDs when they come into the building each day. During the day, they don't need to have them on…just in the morning so we can make sure the people coming in are part of our student body. If a student doesn't have their ID, we issue a temporary one and track how many times they come to school without it. This process used to be done by hand each morning. A queue would form and a teacher would write down ID numbers (the students all have them memorized) and names. This caused several problems:

  1. If a student gave a fake ID number, office staff wouldn’t know until much later in the day.

  2. The paper copy of the ID list was given to a secretary to transcribe into a spreadsheet.

  3. Transcribing meant looking up the ID in our SIS and then manually entering the number of times the student didn’t have their ID.

  4. When benchmarks were hit (5, 8, 11), disciplinary action was issued and followed up on by an assistant principal.

I spoke with the head secretary and we worked out a custom Google Sheet to do all of this automatically. Now the duty station is equipped with a Chromebook, so the teacher issuing IDs can quickly check the validity of a given ID and have all of the tally work done automatically. This (mega) post outlines the sheet structure and the custom code that does the work.

The template and source are linked at the bottom of the post.

The Sheet

The Google Sheet is split into four tabs:

  1. Raw: Blank sheet with Timestamp, ID, and Name.

  2. Lookup: database of student ID numbers and the associated name/grade.

  3. Aggregate: ID, Name, Grade, Count, Cons 1, Cons 2, Cons 3.

  4. Daily: Filter, ID, Name, Grade.

Raw Input

This is used by the duty station. The date and student ID numbers are entered as students are given temporary bands. The Name column is populated by an ID lookup using =INDEX(lookup!B:B, MATCH(B2, lookup!A:A)). Without an exact-match flag, both VLOOKUP and INDEX/MATCH assume the lookup column is sorted and can quietly return the wrong row, which is why the Lookup tab is sorted by ID number and not alphabetically.

Aggregate

The same student can have multiple dates of entry, so this sheet aggregates by student ID. UNIQUE pulls each ID into a single column, another INDEX/MATCH populates the name and grade, and a COUNTIF counts the number of times each ID shows up in the Raw sheet, which gives the number of missing-ID instances.
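In formula terms, with IDs in column A and the count in column D of the Aggregate tab (positions assumed), it looks something like:

A2: =UNIQUE(raw!B2:B)
B2: =INDEX(lookup!B:B, MATCH(A2, lookup!A:A))
D2: =COUNTIF(raw!B:B, A2)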

Scripting

At this point, much of the work of the sheet moved over to Apps Script. We needed some dynamic results, and using a script to sort through the volume of information was much faster and more reliable than using regular Sheets functions. I'm breaking the code into chunks to better explain the purpose of each. In the actual sheet, all of the functions are in the same file.

Function 1: Globals and menu item

I use a couple of global variables (not super efficiently, though). I also create a custom menu to run the sheet functions. This chunk takes care of creating those items. I have mine at the top of the script, but they can go anywhere.
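A sketch of that chunk (the sheet names, menu labels, and function names are placeholders, used consistently in the sketches below):

// Globals: grab the tabs once so every function can use them.
// (Tab names here are assumptions about the layout described above.)
var ss = SpreadsheetApp.getActiveSpreadsheet();
var raw = ss.getSheetByName("Raw");
var agg = ss.getSheetByName("Aggregate");

// Add a custom menu when the sheet opens.
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu("ID Tracking")
    .addItem("List missing-ID dates", "listDates")
    .addItem("Build daily list", "dailyList")
    .addToUi();
}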

Function 2: Listing dates a student was missing the ID

For reporting, it was helpful to know which dates a student was missing an ID. Rather than adding n columns to fill with dates, a script was used to look up the student ID number and then add a note to the cell with the dates.
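A sketch of the lookup (column positions are assumed):

// For each student in Aggregate, collect the dates their ID appears in
// Raw and attach the list as a note on the ID cell.
function listDates() {
  var rawData = raw.getDataRange().getValues();  // [Timestamp, ID, Name]
  var aggData = agg.getDataRange().getValues();  // [ID, Name, Grade, ...]

  for (var i = 1; i < aggData.length; i++) {
    var dates = [];
    for (var j = 1; j < rawData.length; j++) {
      if (rawData[j][1] == aggData[i][0]) {
        dates.push(Utilities.formatDate(new Date(rawData[j][0]),
            Session.getScriptTimeZone(), "MM/dd/yy"));
      }
    }
    // The note keeps the grid clean: no extra columns of dates.
    agg.getRange(i + 1, 1).setNote(dates.join("\n"));
  }
}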

Function 3: Populating a daily list of students

Each day, the secretary checks for students who have hit a benchmark: 5, 8, or 11 missing IDs. Searching through the aggregate list isn’t feasible, so a script does the search and then returns the results dynamically.

This checks two conditions: A) The number of missing IDs is equal to or greater than the target and B) there is no consequence filled in the appropriate column. If the consequence has been assigned, it’s in the SIS and doesn’t need to be entered by the secretary.
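In sketch form (tab names and column positions are assumptions; assignConsequences comes in the next chunk):

// Build the daily list for a chosen benchmark.
function dailyList() {
  var ui = SpreadsheetApp.getUi();
  var target = Number(ui.prompt("Which benchmark? (5, 8, or 11)").getResponseText());
  var consCol = { 5: 4, 8: 5, 11: 6 }[target]; // Cons 1/2/3 columns, assumed
  var daily = ss.getSheetByName("Daily");
  var data = agg.getDataRange().getValues();   // [ID, Name, Grade, Count, Cons 1-3]
  var out = [];
  var matches = [];

  for (var i = 1; i < data.length; i++) {
    // A) the count is at or past the target, and
    // B) no consequence is recorded for that benchmark yet.
    if (data[i][3] >= target && !data[i][consCol]) {
      out.push([target, data[i][0], data[i][1], data[i][2]]);
      matches.push(i + 1); // the student's row in Aggregate
    }
  }

  // Rewrite the daily list from scratch on each run.
  daily.getRange(2, 1, daily.getMaxRows() - 1, 4).clearContent();
  if (out.length) {
    daily.getRange(2, 1, out.length, 4).setValues(out);
  }
  assignConsequences(matches, target, consCol);
}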

Function 4: Assigning consequences

When the office staff pull the daily list, they go into our SIS and update the disciplinary action or other notes. They used to also have to go back and document the assigned consequence in the Aggregate sheet. Now, rather than scrolling through the list, the appropriate consequence for the target is filled in when the list is generated. The daily list isn't cleared until the script is run again with a new target.

If, by chance, there are no students to assign a consequence to, a popup is shown to let the user know that no students meet the criteria.
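A sketch of that last step, continuing from the loop above:

// Stamp the consequence column for each student on the daily list, or
// warn the user if nothing matched the benchmark.
function assignConsequences(matches, target, consCol) {
  if (matches.length === 0) {
    SpreadsheetApp.getUi().alert("No students meet the criteria for the " +
        target + " benchmark.");
    return;
  }
  // Recording the date here keeps the student off the next run's list.
  matches.forEach(function (row) {
    agg.getRange(row, consCol + 1).setValue(new Date());
  });
}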

The Result

In the end, we've only removed two steps from the administrative process, but they were the most time-intensive steps. Rather than looking each student up and then marking the spreadsheet by hand, the staff member simply pulls the list of students for that day.

Computers are great at repetitive tasks, so let’s use them to do those tasks. There’s definitely some optimization that can be done, especially in the last two functions as they pass those arrays around like a cold. If you make updates, please comment and share back. The code is hosted on GitHub, so you can fork and update as much as you’d like.

You can look at a copy of the template or just make a copy for yourself and start poking away.

The entire source (not broken up) is hosted on GitHub Gists.

Turning an Old Laptop into a Video Kiosk

My father-in-law came to me with an interesting idea. He wanted to create a video kiosk for our church that would play videos on the different mission organizations we're involved with. The wall previously had photos and text about each missionary or organization, but he wanted to revamp it.

We initially tried to use PowerPoint and a custom keyboard to jump to different slides, but maintaining and updating that system wasn’t going to be very elegant or user friendly. So, about a year later, I had an “oh, duh” moment and realized we could do it as a static, locally-hosted website. It would be easy to build and even easier to maintain, so that’s what we did.

In this post, I’ll talk about the hardware and software we used to get it finished. There are still some things to hammer out, but the bones of the project are done and tested successfully, so it seems like a good time to document it.

The Hardware

Our initial idea was to use a Raspberry Pi 3 to do everything because of its low price point and small size. Unfortunately, the Pi doesn't handle web video too well in general. I looked into using the kweb minimal web browser, which hooks into the Pi's native OMX video player, but you can't customize the look, and full-screen playback still showed lots of menus. In the end, it was turning into a huge job to make it look polished, so we moved away from the Pi.

My brother-in-law had an old HP laptop that had died when he tried to update it to Windows 8 (insert snarky Microsoft joke here). So, he donated it to the cause.

I wiped the hard drive and installed Ubuntu Linux 16.04 LTS. It's pretty lightweight and gets consistent updates. It's also really user-friendly, so if there's a problem with the laptop, one of the office assistants can troubleshoot when I can't make it in. I also chose to stick with Linux because I can use SSH to log in from my Chromebook on Sunday mornings and run updates remotely if I need to.

flickr photo shared by bennettscience under a Creative Commons ( BY ) license

You could definitely argue that running a full desktop environment for a simple web kiosk slows the machine and introduces a bunch of variables that could cause things to go wrong, which is 100% accurate. OMG! Ubuntu! has a good article on how to either convert a full machine to a dedicated kiosk or how to build one from scratch, but since I didn’t find the article until we were almost done, I decided not to go back and rework everything.

For user interaction, we grabbed an Intuos Art Small tablet from Wacom for $100. It’s seated in a wall mount to lock it in place and hide the wires. Essentially, it’s a giant touchpad.

flickr photo shared by bennettscience under a Creative Commons ( BY ) license

Finally, we bought a 55″ wall mounted TV. The laptop had an HDMI port, so that took care of high-definition video and audio.

The Software

I built the page with plain HTML and JavaScript. It's currently hosted locally on the machine to ensure smooth video with no buffering. I'm planning to test the broadband rates over ethernet the next time I'm at church, because we ran into issues over wifi. If I can get a good download rate, I'll switch the site over to GitHub Pages so I can update it remotely.

Source

The HTML and CSS are pretty standard. I didn't want a ton of bloat, so I coded everything from scratch. You can take a look at the markup on GitHub. There's also a live example so you can see how it's rendered.

First, this is a hallway display. There will probably be times when no one is watching videos, and I want to avoid burning an image into the TV screen. I added a small jQuery function to bring up a prompt if no one touches the trackpad for 30 seconds. This also turned out to be handy, because a lot of people walked up to the TV and tried touching it directly rather than using the trackpad input.
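The whole guard is only a few lines. A sketch, with placeholder element names:

// Any trackpad activity resets a 30-second timer; when the timer runs
// out, the "touch here" prompt appears. #prompt is a placeholder id.
var idleTimer;

function resetIdle() {
  clearTimeout(idleTimer);
  $('#prompt').fadeOut();
  idleTimer = setTimeout(function () {
    $('#prompt').fadeIn();
  }, 30000);
}

$(document).on('mousemove mousedown', resetIdle);
resetIdle();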

To play the videos, I wanted each container to reference a hidden video div. I use jQuery to handle the element selection and JavaScript to pay attention to the play state. When a user clicks a tile, a fullscreen video starts playing. There is no keyboard for them to quit out of the video, so I don't worry about keypress events. If they jump out of fullscreen using the playback controls, the video keeps its position.
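In sketch form (the ids, data attributes, and prefixed fullscreen calls are assumptions about this build):

// Each tile's data-video attribute names its hidden <video> element.
$('.tile').on('click', function () {
  var video = document.getElementById($(this).data('video'));
  if (video.requestFullscreen) {
    video.requestFullscreen();
  } else if (video.webkitRequestFullscreen) {
    video.webkitRequestFullscreen(); // Chrome still prefixed this at the time
  }
  video.play();
});

// Backing out of fullscreen with the playback controls pauses the video.
// currentTime is preserved, so it resumes where it left off next time.
document.addEventListener('webkitfullscreenchange', function () {
  if (!document.webkitFullscreenElement) {
    $('video').each(function () { this.pause(); });
  }
});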

Ubuntu tweaks

There were also some software tweaks I needed to make on the machine itself.

I wanted a standard user to log in automatically. So, I created a generic user on the system and dropped the source files onto the desktop (more on that in a minute). Theoretically, the user will never get out of Chrome because there’s no keyboard available. When the computer boots, it logs into the generic user right away.

Then, I edited the Startup Applications option. You can launch Chrome from the Terminal and you can specify which command to use in the settings. Using:

chromium-browser --kiosk [URL]

launches Chrome in the full screen kiosk mode and displays the website immediately after login.

The laptop is mounted on the wall behind the TV, and Ubuntu wasn't recognizing the monitor when the lid was closed. There's a flag in /etc/systemd/logind.conf that handles docks, but we weren't using one. So, I changed the HandleLidSwitch flag to ignore, which tells the system to ignore the lid being closed (thanks to this answer).

Finally, because the laptop is mounted behind the TV on tracks with a padlock, it’s a pain to take it out to turn it on and off. I was able to automate the daily shutdown pretty easily by specifying a time using crontab -e (you have to be root to shut down). Startup was harder.

After some research, I found that most computers have something called a Real Time Clock (RTC) synced with UTC. It can be used to set an alarm that wakes the computer. You can test this by clearing any existing alarm with:

echo 0 | sudo tee /sys/class/rtc/rtc0/wakealarm

and then setting an alarm ten minutes out with:

date '+%s' -d '+ 10 minutes' | sudo tee /sys/class/rtc/rtc0/wakealarm

Now that I knew the computer would turn itself back on, I could create a simple bash script to run with cron that would handle startup and shutdown daily:
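Something like this works (a sketch: the 7:00 AM wake time is an assumption, and cron runs the script as root, so the plain redirects are fine):

#!/bin/bash
# shutwake -- arm the RTC alarm for tomorrow morning, then power down.

# Clear any old alarm, then set it for 7:00 AM tomorrow (assumed time).
echo 0 > /sys/class/rtc/rtc0/wakealarm
date '+%s' -d 'tomorrow 07:00' > /sys/class/rtc/rtc0/wakealarm

# Power off; the RTC alarm brings the machine back up.
shutdown -h now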

I stored the file in /usr/bin and made it executable with chmod +x.

Then, I edited crontab -e to run the script daily. Note that the cron entry specifies the shutdown time: at 8 PM every day, the computer will shut down, and the shutwake script resets the RTC alarm on the way down:

0 20 * * * /usr/bin/shutwake

cron can be picky, so if you need more help, this article helped a lot.

The last thing we needed to work out was muting the audio during sermons so someone couldn't crank up a video in the middle of church. The video will still play with captions (accessibility FTW), and muting the audio turned out to be not too bad. I can toggle the PulseAudio driver in Ubuntu with a pair of cron jobs that run on Sundays at 9:00 and 12:00 to turn the audio off and back on:

0 9 * * 0 amixer -q -D pulse sset Master toggle
0 12 * * 0 amixer -q -D pulse sset Master toggle

flickr photo shared by bennettscience under a Creative Commons ( BY ) license

In the end, I’m really happy with how it turned out. Remote management and simple setup led to a really effective display for the wall.

If you want more specifics about software or construction, leave a comment.

Hacking Together an Auto-Tweeting Spreadsheet

A while back, I had looked at automating tweets from a Google spreadsheet to reduce the insane number of clicks it takes in TweetDeck and HootSuite (5 clicks? Really?). I hit some roadblocks and let it slide because, in the long run, it wasn't really important to me. More of a fun experiment.

I jumped back into it a week or so ago to try to solve the last little problems. [I was able to create a script](https://docs.google.com/spreadsheets/d/1kbFIfYGm2sGQJ5TistkaxMzqUqQ8HbgUvJUM8MohOb4/edit?usp=sharing) which loops through a spreadsheet, checking the current date and whether or not the tweet has been sent. If both conditions are met (`TODAY` and `NOT SENT`), it automatically posts the tweet.

The sheet, like all the other Twitter sheets I’ve used, is run with [Martin Hawksey’s](http://twitter.com/mhawksey) fantastic TwtrService library. It allows you to authenticate and tweet right from Google Apps Script and saves a _ton_ of time.
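The heart of the sheet is one loop. A sketch of it, assuming a [Date, Tweet, Status] column layout and that TwtrService is already authorized (the post call below is my assumption about its REST-style usage, not gospel):

// Loop the rows; post anything dated today that hasn't been sent.
function sendScheduledTweets() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var rows = sheet.getDataRange().getValues(); // [Date, Tweet, Status]
  var today = new Date().toDateString();

  for (var i = 1; i < rows.length; i++) {
    var scheduled = new Date(rows[i][0]).toDateString();
    if (scheduled === today && rows[i][2] !== "SENT") {
      TwtrService.post("statuses/update", { status: rows[i][1] });
      sheet.getRange(i + 1, 3).setValue("SENT");
      // The as-yet unsolved bug lives here: without a working pause or
      // break, every matching row for today gets posted in one run.
    }
  }
}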

I ran into a problem that is [as-yet unsolved](https://stackoverflow.com/questions/37308911/pause-a-loop-to-wait-for-rest-api-response): I can’t get the sheet to stop after posting one tweet. So, if you have multiple tweets on a given day, it will send _all_ of them at once. That’s not good, especially if you’re promoting an event over a period of time. I’ve tried a number of solutions, but I can’t seem to find one that works. I’d love to hear if you’re able to take the source and tweak it to work.

In the meantime, Martin also took a (much more elegant) pass at the task. [His sheet is also available](https://docs.google.com/spreadsheets/d/10U7Rrr7lfbRS2A8QYRUWL8enlJfat75-QuGm7slKXRE/edit#gid=0) and works really well. The goal is the same, but his mechanics and implementation are much more refined and effective.

It’s a good example of multiple ways to skin a cat. I’m a novice coder (I tell people I know enough to break something) and he’s an expert doing all kinds of things. The great thing is, all of this code is open and available. I can make a copy of Martin’s page and dig into his solution. I learned a few tricks about checking for multiple conditions, which is what I was struggling with. I became better at scripting through my failure and his success.

Graduate School Final Project

I promised Stacy Lovdahl I’d post this the other day and then promptly forgot. Sorry, Stacy.

I took a graduate course on curriculum development and implementation this semester through Ball State. Not many of the assignments were open-ended enough to make for interesting blog fodder, so I didn't post much. For my final project, I chose to redesign my school's course of study around an inquiry model. There are two components: the redesign document and the theory backing it up. They're both available as Google Docs with commenting turned on.

Many, many thanks to Michelle Baldwin and Kelly Tenkely from Anastasis Academy for late-night questions about inquiry mapping, standards reporting, and pretty much anything I couldn’t wrap my head around. Check out the work they’re doing…it’s amazing.

Updating the getText Extension

In October, I wrote about a bookmark application to get the text of a website for analysis in the Lexile analyzer tool. It's a pain in the keister to copy text from a website, open a document, paste the text, save it as a plain text file, and then re-upload it to the Lexile website. The bookmark tool does the hard part for you (the clicks…all the clicks…) so you just download the formatted file and upload it to the Lexile site.

Depending on what you try to analyze, you might get an angry message like this one:

Seriously, who uses ASCII?

Nerdspeak, engage!

The only reason you'd use ASCII is if you A) want to support legacy browsers (Netscape Navigator 1.0, anyone?) or B) need to speed up queries on strings in a database. Modern web browsers are much more efficient now, and most sites use an encoding called UTF-8. I'm guessing the Lexile analyzer needs ASCII so it can give you the score faster than it could with modern encodings.

Anyways, I pushed a fix to the applet tonight. It works by taking the text you highlight and encoding it to ASCII before downloading. So, still no manual conversions. And no angry red messages. And no more worrying about encodings.
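If you're curious, the conversion amounts to something like this (a sketch of the idea, not the applet's exact code):

// Normalize smart quotes and dashes to ASCII equivalents, then drop
// anything still outside the ASCII range before building the download.
function toAscii(text) {
  return text
    .replace(/[\u2018\u2019]/g, "'")   // curly single quotes
    .replace(/[\u201C\u201D]/g, '"')   // curly double quotes
    .replace(/[\u2013\u2014]/g, '-')   // en/em dashes
    .replace(/[^\x00-\x7F]/g, '');     // everything else non-ASCII
}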

You can grab the extension here or take a look at the source for yourself and tinker around.

More Scripts to Make Life Easier

It's been an intense week of teaching, church band practice, and Google Apps Scripting. I'm really focusing this year on using the computer to do what it does well so I can focus on doing my job better. In particular, I'm using it to handle repetitive, marginal jobs as efficiently as possible.

This week, I’ve got two new tools in my belt to help out.

docTranslate

I happen to have a higher number of ESL students this year, some of whom are brand new to the country. Besides feeling more and more awkward about only speaking one language myself, I needed to find a way to help them with the language barrier.

After speaking with our ESL specialist, I got the okay to print Spanish on the back of my English notes pages. (I was concerned about creating crutches, but she assured me that it would be more helpful than harmful in the long run. I need to learn Spanish.) Taking my notes one by one and putting them through Google Translate would have taken way too much time. So, I turned to a script.

doctranslate.gs

// This function converts a document from English to Spanish quickly.
// The post from http://stackoverflow.com/questions/25509159/how-i-can-get-the-textwrap-image-in-google-doc/25509591#25509591
// was helpful for creating the logic to check for images in the document.

function translate() {
  var doc = DocumentApp.getActiveDocument();
  var body = doc.getBody();

  // Add a page break for the translated material.
  body.appendPageBreak();

  // Get the number of elements in the document
  var elements = body.getNumChildren();

  // Use the number to loop through each element in the document.
  // The loop body below is a sketch of the idea: walk each element,
  // translate its text with LanguageApp, and append the Spanish version
  // after the page break. (Only paragraph elements are handled here.)
  for (var i = 0; i < elements; i++) {
    var child = body.getChild(i);
    if (child.getType() === DocumentApp.ElementType.PARAGRAPH) {
      var spanish = LanguageApp.translate(child.asParagraph().getText(), 'en', 'es');
      body.appendParagraph(spanish);
    }
  }
}

This script is still a little incomplete, but it does the trick. You can read through the code to see exactly what each section does. I'll probably turn this into a Docs Add-on in the future, but that's a little fuzzy because I can't imagine when I'll have the time. Some things to pay attention to:

  1. All images and drawings have to be inline for it to work. Google Apps Script can't see other types of images yet. It's faster to make the copy and then reformat it how you'd like.
  2. Formatting isn't always carried over perfectly. Again, it's about the minimum viable product right now. Spot-check the translation for formatting errors if that matters for your doc.
  3. This is document-specific…at the moment. You'll need to re-copy the script into each document you want to use it with.

I had a fluent Spanish reader check the grammar (Google Translate can be notorious for some weird translations at times) and he gave it a thumbs up, so take that how you will.

driveBox

Whiteboarding is a big part of class this year. I want students investigating, collecting information, manipulating it, and building an argument. A lot of times, class ends before they have a chance to get clean work on paper, so I needed a way for them to send in photos of their work at the close of class.

Of course, email is out. I guess that makes me old now.

We're barely scratching the surface of Google Apps for Education at school – teachers are starting training this semester and students haven't had their accounts opened up yet, so sharing back and forth isn't really possible.

Some Googling turned up a great alternative, still using Google Apps: creating a public dropbox with scripts. In 20 minutes of finagling, I had a working dropbox page which allowed students to submit things straight to my Drive with three taps. I've modified mine slightly from the blog post linked above. Unlike the translate function, this one requires two files.

form.html

<!-- A minimal sketch of the form. The field names (classPer, myName,
     myFile) match what server.gs reads from the submitted form object. -->
<form>
  <h3>Student Work Submission</h3>
  <input type="text" name="classPer" placeholder="Class period">
  <input type="text" name="myName" placeholder="Your name">
  <input type="file" name="myFile">
  <input type="button" value="Submit"
    onclick="google.script.run
      .withSuccessHandler(function(msg) {
        document.getElementById('status').innerHTML = msg;
      })
      .uploadFiles(this.parentNode)">
</form>
<div id="status"></div>

server.gs

/*
    This is based on the template shared by Amit Agarwal (@labnol) on
    the blog, Digital Inspiration. The original post with instructions:
    http://www.labnol.org/internet/receive-files-in-google-drive/19697/
*/

// Find the form that is collecting the information to upload.
function doGet(e) {
  return HtmlService.createHtmlOutputFromFile('form.html');
}

function uploadFiles(form) {

  // Check for a folder called "Student Files" in Drive. If it's not there,
  // create one.
  try {

    var dropbox = "Student Files";
    var folder, folders = DriveApp.getFoldersByName(dropbox);

    if (folders.hasNext()) {
      folder = folders.next();
    } else {
      folder = DriveApp.createFolder(dropbox);
    }

    // Once the folder is found, create the new file
    // The file is named by attaching the Period Number to the Student Name.
    var blob = form.myFile;    
    var file = folder.createFile(blob);
    file.setName(form.classPer + form.myName);

    // Display a success message to the user.    
    return "File uploaded successfully. You can now close this window.";

    // If it fails, display the error message.
  } catch (error) {

    return error.toString();
  }

}

Those blocks of code turn into a simple upload page for students.


Pretty easy to figure out what to do.

If you're interested in using either of these scripts, let me know on Twitter (@bennettscience) and I can help you get them set up. Like I already mentioned, I think the translate function would work well as an Add-on, and if I get there, I'll write another post with instructions on how to get it.

More on the scripting I’ve been doing lately…