Data as Actionable Information

Continuing my data theme this week, I’ve realized (consciously, at least) that when quantitative data is shown to students, it’s usually a report of some kind: a report card, a progress report, test scores, etc. It’s all final information – it isn’t actionable.

Yesterday, we played a review game where students submitted answers to a Google form which then spit out a grade based on those submissions. The purpose was to get them thinking about the information and to give them some immediate feedback – as a class – on how they did.


Seeing the gauges, along with their class’s place among all reported classes, helped students visualize where they fell on a preparedness scale in context.

It also helped the class see which specific standards they needed to go back and look over. I didn’t want individual information for this activity because the goal was generalized, actionable information.

Michael Pershan prompted my thinking this morning with his video (and comments) entitled, “Why Our Hints Don’t Help.” It’s extremely insightful (go take a look) and really helped me think through how I talk about quantitative information. It should be a call to action, not just a report.

It also changes the way students see these scores. They aren’t final any more – it’s a spot check on their progress. It’s a reality check for their perception of how well they have learned the information. It also leads to more good questions like, “Where can we find more information on [X]?” It’s a visual prompt, more than anything, which helps set the course for subsequent learning.

Using Confidence Data with Student Responses

I tried something new today with my students – I asked them to rate their confidence in their answers on a 1 (total guess) to 4 (definitely know I know it) scale. I collected the aggregate information for each class and started flagging issues.

First, this only took me about 5 minutes to do. So, after we finished the questions, they started working on their review for our test this week. I pulled in the numbers, ran a formula or two, and had the percent correct for each item as well as the average confidence. Then, we started talking about what they noticed.

Some noticed right away that a few questions were missed more than others. Someone also noticed that questions with a high percent correct tended to have a high confidence rating, and the same pattern held for lower-scoring questions. I then pointed out what I was really interested in:

Discrepancy.

I saw nods and furrowed brows as I pointed out low-scoring questions with high confidence ratings. It doesn’t compute. If so many people got it wrong, why were we so sure it was right?

This highlights areas I need to go back and review again, which is really nice. It also helps students reach a metacognitive place on their own work – it was only 6 questions, so they know what they got right and wrong.
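For what it’s worth, the “formula or two” from earlier is nothing fancy – just a per-question percent correct and an average confidence. Here’s a minimal sketch of that math in Python; the responses below are invented for illustration, not my actual class data:

# Invented sample data: (question number, answered correctly?, confidence 1-4)
responses = [
    (1, True, 4), (1, True, 3), (1, False, 4),
    (2, False, 4), (2, False, 3), (2, True, 2),
]

for q in sorted({num for num, _, _ in responses}):
    rows = [(correct, conf) for num, correct, conf in responses if num == q]
    pct_correct = 100 * sum(correct for correct, _ in rows) / len(rows)
    avg_conf = sum(conf for _, conf in rows) / len(rows)
    print(f"Q{q}: {pct_correct:.0f}% correct, avg confidence {avg_conf:.1f}/4")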


Different classes, different priorities.

And then here’s the aggregate data for all classes:


For now, I’m adding the red flags myself based on an arbitrary percentage and confidence level discrepancy. I’m doing that because I don’t know of any other way to make that comparison. So, here’s the question:

Are there statistical analyses that can be done on two-variable data (correct vs. confidence) which can then highlight areas significantly lower than expected?

I even went so far as to run a correlation test on the data, which showed a definite positive correlation between the score and student confidence.


But again, I don’t know how to set up that discrepancy benchmark statistically. I may be thinking too hard about this, but I’d really like to have more to go on than, “This one is low compared to confidence.” It’s also a very small sample, and I know most correlation tests (t-Test, ANOVA, etc) require 10 samples, usually, so I’m not putting a ton of weight on the stats.

If you have any experience in this kind of analysis, I’d appreciate some pointers in the comments.

Notifying with SMS

I meant to grab a bag of stuff for school this morning on my way out the door. Turns out at 5:45 in the morning, you don’t remember to grab things off the counter (thankfully, I don’t need that stuff until tomorrow). I don’t like to-do list apps (I prefer pen and paper), and my phone is on silent each morning until I get to school.

I hate notifications anyways, which is why I turned everything off other than phone calls and text messages. I still get calendar reminders (out of absolute necessity) but even then, I tend to close those without thinking too much about actually remembering to do something.

Texts, though, are a different story. I pay attention to them. When someone sends a text, it has my attention.

So, rather than rely on a to-do list or plain calendar reminders to do something, I’m routing those through IFTTT to send myself a text.

To do this, log into IFTTT and create a new recipe using Google Calendar as the trigger. In step 2, be sure to choose “Event from search starts” as the trigger. Otherwise, any time you add an event, you’re going to get a text message, which isn’t cool.


Then, pick a keyword – I used “reminder” – to search for. It has to be in the title or the event description to fire correctly. So, for me, my calendar event would be: “Reminder – get groceries.” Then, set the time to when you actually want to do whatever it is you need to do. Again, mine is around dinner tonight because I’ll be near the kitchen and can do it quickly.

Set the SMS channel as the action (make sure your phone number is correct!) and hey presto, you have an SMS reminder system for those little things you tend to forget.

Again, I know that this can be done just using the calendar notifications. But, if you’re like me and hate those, you can use this instead. Also remember this works with any phone, not just smartphones. So, if you revert to a basic phone (like I’ve also considered), you can have the benefits of notifications when and where you need them on any platform.

If You Have It, Use It

I took the time the other day to show my students exactly what I look at when I’m grading tests. I use standards-based grading, so I’m going way deeper than just the number right divided by the total. I don’t even put tests into the gradebook, really. Each one is broken down into the standards it contains, and those are reported individually.

This is really hard for students to understand. So, I showed them the data I look at when scoring their tests.

Individual student report

Each student has a report generated (name obfuscated here). Questions on the test are broken down by standard, and I’m shown a quick, high-level gauge of where they stand on this assessment. This is not the only information I take into account when assessing their skills, but a look through this lens can be helpful. I can also see (further down the page) which questions they get wrong and compare those with the item analysis screen.

Individual class report

I can also take a high-level look at each class. Mean score and deviations are given for the entire assessment, and each standard is broken down again based on the aggregate. This view is especially helpful for remediation and follow-up to the test. Some classes need more attention on one standard; others have different priorities. It’s really helped me make more effective decisions when planning after an exam. This page also has individual student scores broken out by standard down below the gauges.

Item analysis

Finally, the item analysis. This helps bring it all together. Again, I see the high-level information for each question, and it helps me pinpoint which items need the most attention. I can then cross-reference those with each individual class and even down to the individual student to see which standard they struggle with the most.

So yeah, I showed this to my students.

And they were surprised that I put so much thought into it. I also think it helped them see that everything we do in class is two things: related to the standards, and for their benefit.

I don’t give tests because they’re fun (ok, well, the analysis can be fun…) or because they’re super effective. They can help me make informed decisions which, in turn, help them in the long run.

I don’t think they’d ever seen information used in this way in such an honest and straightforward manner. Many of them actually expressed that I was putting myself through too much with these tests, which means I must’ve gotten through something…

Either way, if you’re collecting information, make sure it’s used productively. Also consider taking it up a notch and plainly showing what you do with that information just so they know.

Making Endnotes from Footnotes

**Update 12.22.2016** Since this has become quite a popular script, I’ve converted it into a Google Docs Add On so it can be used across documents. Check out [the blog post](http://blog.ohheybrian.com/2016/12/introducing-endnote-generator-add-on/) or the [Endnote Generator website](http://dev.ohheybrian.com/endnotes) for more information.


**Update 8.27.2016** As Phil noted in the comments, the script didn’t format anything over two digits in length. That has been [corrected in the source](https://github.com/bennettscience/footToEnd). If you need more (100+ endnotes may be too many, though…) leave a comment and I’ll go back and make the changes.


Krissy Venosdale asked on Tuesday if there was a way to make endnotes in Google Docs as easily as footnotes. I thought, “Surely there is.”

Well, there’s not.

So I wrote a script that does it.

It’s hosted on GitHub along with instructions on how to install the script (it’s not an add-on…just a custom script) and run it in your document.

Here’s a peek:

When you add the script to the document (manual process right now – read the instructions) it adds a custom menu from which you run the script.

I know endnotes and footnotes aren’t really used in high school, but if you’re writing a book or something…maybe it’ll be helpful to you.

Either way, it was fun to write and it works pretty quickly. I haven’t tested it with documents more than a few pages long, so if you’re going to try it on something major, I’d suggest making a duplicate copy to run it on first, just in case.

I’m a teacher by day, after all.

Transparency Matters

Some backstory.

A month ago I wrote about Geddit going out of business this summer. I mentioned another response-system app, Kahoot!, which has exploded over the last few months. Specifically, I called out their Terms of Service (TOS) which states that any content a user uploads gives Kahoot! a royalty-free license, yada yada yada. With no qualifying statements, that’s a lot of rights to give up.

What I didn’t expect was a tweet from the CEO of Kahoot! asking if I’d be willing to have a conversation about their TOS.

So we did.

I can’t give specifics about our discussion, but there are some themes that stood out to me:

1. There can be major cultural differences when TOS are written up. Kahoot! is a company based in Norway, a country where privacy laws slant heavily in favor of the consumer or user. Not so true here in the States. Johan and I talked about this a little, and he admitted that when Kahoot! launched, they thought the Norwegian TOS benchmark would be self-evident and that users wouldn’t really worry so much about data loss. I appreciated his honesty in admitting that those benchmarks are arbitrary and that, in countries like the US, some clarification would help.

For the record, that license you agree to is so the community can function. Without granting rights to the company, they’d have to get individual permissions to share any lesson uploaded with any one other person. Totally reasonable, and clarification on that point would be helpful.

2. Kahoot! isn’t interested in personal data because it doesn’t help the service. Johan explained that they started as a formative assessment service – something in the moment and not perpetual. That’s why students don’t sign in. Data is given right back to the teacher as an Excel or csv download. As a company, Johan said they’re focusing on how information is being learned, not what information, which is why storing scores isn’t really important. If a teacher is using it formatively, then the actual score itself should only be informing the next instructional step, not to track progress over time.

It reminded me of a conversation I had with Bob Ambrister, the developer behind Write About. Paraphrasing, he said that nothing about the student helps run the service. All they ask for is a teacher’s code (to link the student with the class) and a name. No email, no birthday, SSN, or twitter login. If it doesn’t help, Write About doesn’t want to collect it.

Johan echoed that sentiment (without prompting), which made me feel much better about using it with students. A pin to get to the quiz and a nickname. Now, we didn’t talk about storing the data on servers, but given that a student can put in whatever nickname they want, it would be pretty hard to link that information back to anything with value. The key is that they only take what matters in order to run an effective formative assessment service.

3. Transparency is more important now than ever before. Clarifying statements and human-readable TOS and Privacy Policies say a lot about the credibility of the company. If you’re willing to clearly and concisely explain what you’re collecting and what you’re doing with it, people tend to trust you more. I also like that Johan was proactive in reaching out to discuss some of the concerns I brought up.

Finally, Johan did hint at some changes the team is working on regarding transparency. I’ll update this series again once that’s pushed out.

Denotification

I went about four months this year without a smartphone. I used an unlocked flip phone that barely made calls and didn’t have reliable text prediction. I was back to thinking in threes, and I can tell you on which number lies any letter in the alphabet without thinking too hard. The only times my phone made noise were when I received calls or texts, and that’s if I remembered to turn on my ringer.

Then, I bought a cheapo smartphone after reading a great article on Medium about minimalistic thinking when it came to buying a smartphone (the author uses the word “shitphone,” which I like, but find a little crass for day-to-day discussion).

I bought my own ~~shitphone~~ economy smartphone and rejoined the world of tweeting, instagramming, emailing, and other various -ings that I’m supposed to do with one of these things. I also rejoined the world of constant notifications. Buzzing, beeping, and LED blinking.

I hated it.

These low-angle shots really make it look more sexy than it is.

I’m enjoying the rate at which I can send a text with a full keyboard, which is what really keeps me from bouncing back to ye olde flip phone. Other than that, I enjoyed being relatively disconnected. I enjoyed reading email when I happened to open it during the day. I enjoyed not knowing someone had liked a photo I put up. Notifications fed my ego and pushed me back into always wanting to know what was up.

So I turned them off.

I find it eerily similar to turning off tracking and stats for this site. I enjoy writing much more. I’m enjoying my life with a smartphone much more now that it isn’t always squawking at me.

All my phone does now is buzz when I get a call or a text. And that’s if I remember to turn my ringer on.

I have a ~~shitphone~~ economy smartphone with the noise-making capacity of a flip phone. And I love it.

The Talk

We had school on Good Friday this year because of snow day accruals, which meant we had a ton of students out. I wanted to give one short assessment to see what needed to happen when we returned from break. Students who were out had the option to complete the assessment online (totally their choice – I made it clear that family trumps chemistry every time) so they didn’t need to worry about taking it when they got back from break.

I wanted some method of keeping the questions secure, so I decided to use a Google Form to keep track of access to the document.

From the class website, I posted a link they could use to get to the test.


From there, they were taken to a Google form with the following statement:

I understand that taking the test from home is at my own convenience. By signing my name below, I affirm that I took this test without the aid of outside resources such as notes, a textbook, or the Google. My performance is accurate and reflects my current learning in chemistry.

They then had to type their name in the box and submit.

Once submitted, they were given a link to the test so they could take it.

I’m not naive. I know that they could just grab the link and pass that around. That’s not the point.

Standards-based grading helps with the conversation piece – their grade is attached to the learning they’ve done during the chapter. You know the information, or you don’t. It makes grade conversations much easier because students recognize that they haven’t proven their learning…yet. That also gives me a much more solid platform for catching misrepresentation of ability. I know what they can do, and the test scores usually match up pretty closely.

Which brings me to the second point: honestly representing what you can and can’t do goes beyond the classroom. Don’t fake resumes. Don’t fake online profiles. Be who you are, and that means being honest with what you know and don’t know. By signing the form and then taking the test, the student is entering an agreement that gives us a starting point if uncomfortable conversations need to happen.

Creating docs with Python

Another extension to what I wrote last night, mainly for my own purposes so I don’t forget what I did or where I put scripts.

bash is great, but it doesn’t play so well with Windows. I tried Cygwin, which installed fine, but on the school computers, filesystems are locked down somewhat and I can’t get access to certain install directories. Rather than fight the system, I translated my bash script from yesterday into Python so it works on any OS.

makedoc.py

print "You are about to make a batch of files."

num = int(input('How many files do you want to make?: '))
name = raw_input('What should the base filename be?: ')

for i in range (1,num+1):
  filename = name + str(i) + '.doc'
  print 'Made ' + filename
  i = open(filename,"w").close()

This works nearly the same, only I didn’t make it executable like I did the other script. Moving to Python also makes it easier to eventually use the Google Drive API to create Docs right in Drive rather than uploading .doc files and waiting for a conversion.
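I haven’t written that piece yet, but a sketch of what the Drive API route might look like, assuming the google-api-python-client library with OAuth credentials already set up (the function name and base name here are placeholders of mine, not part of makedoc.py):

# Sketch: create empty, native Google Docs directly in Drive (v3 API).
# Assumes `creds` is an authorized credentials object (setup not shown).
from googleapiclient.discovery import build

def make_docs(service, base_name, count):
    for i in range(1, count + 1):
        body = {
            'name': base_name + str(i),
            # This MIME type creates a native Google Doc – no conversion step
            'mimeType': 'application/vnd.google-apps.document',
        }
        service.files().create(body=body).execute()

service = build('drive', 'v3', credentials=creds)
make_docs(service, 'Hour ', 6)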

Automate ALL the Docs!

I’ve been programming a lot lately, mainly to make my own life easier. I usually start something when I’m doing a menial task that a computer could do much more efficiently. As a result, I’ve learned just enough to know what should be possible, but not enough to know when something is a good idea and something is a really dumb idea. This one turned out to be a good idea.

Two months ago, I wrote up a way to automate rubric score reports using docappender in Google Drive. It worked great the first month.

Today, I went back to score a second set of writing prompts with the rubric. docappender works great for this, but I have to go into Drive and create six separate documents with the class hour as the name for the form to work correctly. Google has no batch-create option, so I had to make each one individually. It didn’t take too long, but it’s something that really shouldn’t take any time at all.

Major props to Ken Bauer and Alan Liddell for pointing me in the right direction. Here’s what to do…

The Script

Computers are good at making documents. One way to do this on a Mac (or Linux, for that matter) is to create a shell script which can be run from the terminal. Alan gave me this little snippet to get me started:

first-script

for i in $(seq 1 7);
do
touch file$i;
done

That worked great. But, what if I wanted more than seven files? I wanted to give myself a couple variables which would make the script more flexible:

second-script

echo "Number of files: ";

read number;

for i in $(seq 1 $number);
do
touch file$i;
done

And that was all well and good…but what if I wanted a specific prefix? And, could I make it right into a Google Doc? Yes, and yes.

third-script

#!/usr/bin/env bash
echo "You're about to create a series of files."
echo "Enter the number of files you'd like to create and then push [ENTER]: "

read number

echo "Enter a generic filename and then press [ENTER]: "

read file

for i in $(seq 1 $number);
do
touch $file$i.doc;
done

echo "Finished."

Make it better

This was great. I could make files with fewer than three dozen keystrokes and clicks. But, like any good programmer, I wanted it done with even fewer – more automation was needed.

I wanted to make the script executable – I wanted it to run just by typing the name of the program into the terminal. So, I ran chmod +x makedoc (makedoc is what I called the program…real original, eh?) and gave it permission to run. I also added #!/usr/bin/env bash to the first line of the script to make sure it ran with the right interpreter.

Then, I copied the script to my computer’s /usr/local/bin directory and voila! All I have to do is go to the folder where I want to make the files, open up the Terminal, type makedoc, and make my sequence of files.

Get ready to have your mind blown.

“That’s great to make them on your computer, but you need them on Google Docs to work with docappender. Wasn’t that the point?”

Google Drive allows me to sync certain folders to my computer. The script doesn’t know if I’m in a local folder or one synced from the web. So, all I have to do is cd to the right spot in Terminal and run the script and BOOM – they’re all in Drive, ready to be used.

It’s. awesome.

Right now, it creates a Word document, which can be synced through the Drive folder. I’d like a way to create a Google Document locally, but that takes a little more computing power than I have time for tonight. Perhaps at a later date…