Improving Questions, Part 1

I’ve been thinking over an idea since I saw this tweet from David Wees which linked a very well-written article defending the Common Core standards.

I found myself a little jealous that a group of math teachers was using StackExchange as a forum for education discussion. If you’re not familiar with the platform, it is a question/answer forum that relies on two things: credibility and crowdsourcing. Anyone may ask a question, and as you engage with the community, you are granted more and more privileges. For instance, to leave a comment on a question, you have to earn a certain number of reputation points. Answering questions, accepting answers, up- or down-voting questions and answers – each privilege comes from having a good reputation within the community.

This helps the community both ask good questions (interesting questions will be up-voted and have higher visibility) and provide reliable feedback (you can’t troll on the site because you need points to interact).

What does this have to do with education?

The Research

Observations in the research literature show that teachers are usually the ones asking the questions while students respond. Additionally, when students do ask, their questions tend to be “informative” or “unsophisticated” (Harper, Etkina, & Lin 2003; Hofstein, Navon, Kipnis, & Mamlok-Naaman 2005). This can be boiled down to the fact that students “are schooled to become masters of answering questions and to remain novices at asking them” (Dillon 1990).

I think part of the struggle in helping students become better askers of questions is that we, as teachers, are rarely trained in methods which can be used to help develop those skills.

Dori & Herscovitz (1999) set out to improve questioning with an inquiry-based approach. Students were given a problem to solve (achieving cleaner air) by asking questions to guide their learning. The researchers used the jigsaw method so that no single group faced an overwhelming number of items to handle. At the end of the experiment, they saw a significant increase in the number of high-level questions asked by students. Interestingly, there was no difference between the groups in the number of low-level questions asked.

This illustrated that the inquiry group did not move entirely away from low-level, informative questions, but rather added higher-level questions in their exploration. The students were also not asked to identify why certain questions were high-level and others were not. Metacognition is important to the learning process, so there needs to be an additional layer of intervention.

Koch & Eckstein (1991) performed a study which analyzed physics students’ ability to comprehend written text using questioning as a basis for learning. They note that the typical question/answer format – in which students are given questions during or after reading the text – is “suitable only if there is a teacher or other guide available to formulate the questions.” To “prepare students to assume a more active role in the learning process,” Koch & Eckstein developed the “question formulation strategy.”

This is broken into two parts: answer/questioning (A/Q) and peer feedback (PF).


Answer/Questioning (A/Q)

Developed specifically for teaching questioning skills, this method asks students to summarize a text, but the summary consists of questions rather than the facts they’ve been collecting for years and years. The students create three columns:

  1. Column 1 – Questions with answers in the text and the student believes they understand. The student also answers these questions on another piece of paper (or whatever medium you’re using).
  2. Column 2 – Questions with answers in the text but the student does not understand the idea.
  3. Column 3 – Questions related to the text but not discussed in the text itself.

Do you see what’s happening? There are three processes:

  1. Identifying what they understand and what they don’t (metacomprehension).
  2. Identifying content explicitly stated in text.
  3. Identifying causes for their lack of understanding (metacognition). From the authors, “…was the answer never stated in the text and not understood, or was the answer not given in the text at all?”

Peer Feedback

Questions can be asked at length, and students can use the A/Q method explained above to improve their questioning habits, but there is little external quality control in the process. Traditionally, the teacher has been responsible for giving quality feedback, but peer feedback can be just as helpful; it spreads the workload and helps students expand on ideas more rapidly.

In short, students would read their questions out loud to the group and receive feedback on the spot. This process not only “helped students clarify fuzzy questions” (Koch & Eckstein, 1991), but it also increased comprehension through discussion.

The Implications

The study showed that students who used the A/Q format through the course performed significantly better on assessments than the control group. A second experimental group layered PF on top of A/Q and made statistically significant gains over both of the other groups. Used in conjunction, these methods help students not only ask better questions, but also give and receive quality feedback on the questions they ask of the teacher and their peers.

I think StackExchange can work as a peer-feedback platform for questions identified in the A/Q process.

Discussion in class is a powerful process. But questions come up when the class is not meeting. With question and answer forums like StackExchange, students can push the peer feedback portion of the process into an asynchronous environment. They are able to maintain the open forum and also – as a group – decide on the most important questions based on the voting process outlined earlier.

There is definitely an argument that this advocates using technology for its own sake, and in some cases I would agree. But with the expansion of access points for students, using an online platform not only to ask questions but also to develop the quality of the questions they’re asking becomes a much more compelling use of technology: one that expands classroom opportunities rather than merely substituting for them.


Dillon, J. T. (1990). The practice of questioning. London: Routledge.

Dori, Y. J., & Herscovitz, O. (1999). Question‐posing capability as an alternative evaluation method: Analysis of an environmental case study. Journal of Research in Science Teaching, 36(4), 411-430.

Harper, K. A., Etkina, E., & Lin, Y. (2003). Encouraging and analyzing student questions in a large physics course: Meaningful patterns for instructors. Journal of Research in Science Teaching, 40(8), 776-791.

Hofstein, A., Navon, O., Kipnis, M., & Mamlok‐Naaman, R. (2005). Developing students’ ability to ask more and better questions resulting from inquiry‐type chemistry laboratories. Journal of Research in Science Teaching, 42(7), 791-806.

Koch, A. (2001). Training in metacognition and comprehension of physics texts. Science Education, 85(6), 758-768.

Sound for the Eyes

I read an article recently on the need for an “Instagram of Sound” – the idea being that I can record a short audio clip and then immediately share it to a network. After searching and searching, I couldn’t find the one I read, but there’s a good article on Motherboard which makes the case.

In the absence of such an app, I decided to try it with Instagram itself. I came up with a couple of rules:

  1. The video had to be still (or have as little movement as possible)
  2. I couldn’t appear in the video
  3. The video had to be shot straight up – as if my phone was just watching from the table

I think it’s really interesting to think about how sound can communicate space, action, or surroundings. Our phones are with us everywhere…in reality, this is their experience. What do we miss if all we focus on is the visual? In fact, by playing videos on mute by default, I think Instagram is eroding the experience that sound can bring. I’m wondering if pushing the opposite direction will teach me something.

You can see the few I’ve done below, follow me on Instagram, or search #soundaroundme through the app (their silly API doesn’t allow for web searches).

It’s a Twitter Robot!

Update 8/20/2014 8:30 AM: I have deactivated the script for this Twitter bot. It was fun, and the process is below if you want to read more. But, the Twitter feed is now inactive.

I’ve been fascinated by the proliferation of non-spammy Twitter bots in the last year. Chatbots have been around for a long time (remember SmarterChild on AIM? Anyone?) and they’ve migrated to Twitter. One of the more famous (and, in the end, decidedly disappointing) chatbots on Twitter was @horse_ebooks. It would tweet non sequiturs at various intervals, and even though the account is no longer active, it still has 203,000 followers. Twitter isn’t just for things with fingers anymore.

creative commons licensed ( BY ) flickr photo shared by JD Hancock

I think bots are fun because we can make them close to sounding normal, yet slightly…off. A turn of phrase is correct, but it doesn’t sit right. It’s a look into what we could come up with ourselves, but weren’t quite clever enough to pull off. In fact, “chatterbots” have been around since the 1990s, and there is an annual competition, the Loebner Prize, which is based on the Turing Test for true artificial intelligence.

Twitter bots are subverting the way the larger population thinks about online communication, showing that computer scripts running at intervals can become not only really convincing, but incredibly entertaining parts of our daily experience.

I started by building a simple bot which would search for “Shakespeare” on Twitter using the Twython library from GitHub. Essentially, it lets you plug into Twitter’s REST 1.1 API using a Python script. You can check out @ShakeTheBard to see some of the early tweets. That wasn’t much fun, though, because it mostly pulled quotes from plays. So, I took it one step further.

A Markov chain can be used to generate random sequences (in this case, sentences) based on probability. In essence, the algorithm looks at a group of words – two or three at a time – and then determines a likely follow-up based on the frequency of those words and the text following them in the sample. From StackOverflow:

  1. Split a body of text into tokens (words, punctuation).
  2. Build a frequency table. This is a data structure where for every word in your body of text, you have an entry (key). This key is mapped to another data structure that is basically a list of all the words that follow this word (the key) along with its frequency.
  3. Generate the Markov Chain. To do this, you select a starting point (a key from your frequency table) and then you randomly select another state to go to (the next word). The next word you choose is dependent on its frequency (so some words are more probable than others). After that, you use this new word as the key and start over.

Sounds confusing, because it is.
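If it helps, those three steps can be sketched in a few lines of Python. This is a minimal illustration with my own function names and a toy text sample, not the bot’s actual code:

```python
import random
from collections import defaultdict

def build_table(text, order=2):
    """Step 1 & 2: tokenize and map each word pair (key) to the
    list of words that follow it. Repeats in the list act as the
    frequency count."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        table[key].append(words[i + order])
    return table

def generate(table, length=12, seed=None):
    """Step 3: walk the chain. Pick a random key, choose a likely
    next word (weighted by how often it followed the key), then
    slide the key forward and repeat."""
    rng = random.Random(seed)
    key = rng.choice(list(table))
    out = list(key)
    for _ in range(length - len(key)):
        followers = table.get(key)
        if not followers:  # dead end: restart from a random key
            key = rng.choice(list(table))
            followers = table[key]
        out.append(rng.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)
```

Because duplicates are kept in each follower list, a plain `random.choice` already gives the frequency weighting the StackOverflow answer describes.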

I have a text document with every sonnet Shakespeare wrote. All 154. So each time the program runs, it chooses a starting point at random and generates a unique line of poetry based on the frequency of that choice as it goes through the algorithm. Finally, it tweets that line.
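The run-once wrapper is simple. Here is a hypothetical sketch (the function names are mine, the credentials are placeholders, and the actual Twython call is left commented out so nothing posts by accident):

```python
# Hypothetical run-once flow: generate a line, fit it to a tweet, post it.
# The Markov generation itself is elided here.
# from twython import Twython  # pip install twython

def fit_to_tweet(line, limit=140):
    """Truncate a generated line at a word boundary so it fits a tweet."""
    if len(line) <= limit:
        return line
    return line[:limit].rsplit(" ", 1)[0]

def post_line(line):
    tweet = fit_to_tweet(line)
    # twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
    # twitter.update_status(status=tweet)
    return tweet
```

Run from cron (or a scheduler on the Pi), each invocation produces and posts one fresh line.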

A lot of people use their own Twitter archive to make bots of alternate-reality selves, but I haven’t gotten that deep into using the Twython library and pairing it with the Markov Chain library I found. So, for now, Bill is tweeting sonnet mashups. Some are pretty good, others not so much. But that’s the fun.

One of my favorites from testing (unfortunately, not tweeted) was:

Upon thyself thy thought, that thou shouldst depart.

In other words, “I thought to myself: ‘I’d better scram.’” Shakespeare is rolling over in his grave right now.

I see this as a 21st century version of giving 100 monkeys typewriters and infinite time to reproduce Shakespeare’s work. But, I don’t have 100 monkeys, and typewriters are inefficient. I’ll stick with the Pi.

creative commons licensed ( BY ) flickr photo shared by Matiluba

I’m not expecting a ton of followers, and I’m not even sure I’ll leave the account active for any significant period of time. There is a lot of optimization I could do in the code, but I’m just exploring at this point. I’m not planning on posting the script, but if you want to see it, leave a note in the comments and I’ll get a link up.