Planning the AR Study

Some initial thoughts on my action research design as I get ready to write up the study methods and timeline:

  • Since I already have data to look through, I'm planning an exploratory mixed-methods study: analyzing past data and teacher feedback to plan future sessions for comparison.
  • I think an exploratory design is more beneficial in the long run because I'm interested in the mechanisms and structures that increase teachers' implementation of ideas, not just in explaining why they do or don't implement.
  • We're finishing workshops this year and already planning for summer work. If I can identify patterns and structures and correlate them with levels of implementation, we'll have a good starting point for aligning all PD, not just my teams', to the new structures using data-backed conclusions.
  • Gathering consent forms right now is difficult given that we're coming up on spring break and the testing windows. Analyzing aggregate, anonymized data will allow us to draft a descriptive letter before the summer PD series begins, and we can make informed consent part of the workshop instead of a mass email.

Revising the Question…Again

I started a series of professional development workshops with teachers this week: half-day work sessions with full departments, focusing on active learning and assessment techniques all centered on literacy within the content area. It's really a part two to the full-day conference we held for teachers earlier this month, and my task (and goal) is to make sure teachers are equipped with the how after hearing the why at the kickoff.

My original question was framed as a negative: Why don't teachers implement learning from professional development? I think this has an inherent bias, assuming that teachers don't even try to use what they've learned. Based on my work this week (and looking ahead), there is definitely a desire to act, and it seems that a lack of planning time with colleagues is a bigger cause of inaction than a lack of effort.

I'm going to adjust my question: How can my role effect change through professional development?

I want to move away from what other people do and toward how I can help shape their habits through strong professional development. I'm still not thrilled with the wording, but I'm interested in what structural components make a program effective when it comes to implementing ideas. To start, I brainstormed some gut-feeling indicators and questions that (I hope) will guide some of my research.

  • Relationships: I know my teachers and they trust me and my instruction.
  • Instructional focus: Everything I do has an instructional lens or context. I do not rely on technology gimmicks to increase buy-in.
  • Application: All of my workshops bring a heavy focus on in-the-classroom application of ideas through modeling or case study examples.

Some other related questions:

  • How does continuity of study (i.e., a PD sequence rather than a one-off workshop) affect implementation?
  • Is there an ideal timing? How frequently should sessions in a series occur to be effective?
  • What does the interim between workshops look like?
  • Are volunteers more likely to implement training? Or are whole groups, even when they were selected to attend by leadership?
  • How does the group dynamic affect buy-in or implementation after the fact? Would establishing norms at the outset remove stigma?

The featured image is IMG_6750, a flickr photo by classroomcamera shared under a Creative Commons (BY) license

The Why Loops

I spent some time last week running through some "why" loops to home in on the reasons behind my potential research question. I think the question is broad enough to allow for several avenues of exploration, but it was insightful to run through the cycle several times (below). We've actually used this mechanism as an instructional coaching team in the past, and being familiar with the process helped me focus on larger issues. Granted, some of the issues contributing to the behaviors we see are well beyond my specific purview and definitely outside the scope of my AR project.

Below is a straight copy/paste of my brainstorming. I think items two and three are most within my realm of influence. I can use my time to focus on teachers who have recently participated in PD and provide that instructional support. I can also work proactively with principals, helping them follow up with staff members who are learning new methods or techniques and recognize those teachers either with informal pop-ins to see students in action or with public recognition in front of their colleagues.

Why don’t teachers implement the training they’ve received in PD?

  1. Teachers don’t put their training into practice   
    • There are good ideas presented, but no time to work on building their own versions.   
    • The PD was focused on the why, not enough on the how   
    • Teachers don’t understand why they need to change practice   
    • The district’s communication about the offered PD is lacking clarity   
    • There is a lack of leadership when it comes to instructional vision.
  2. Teachers do not show evidence of putting training to use with students.
    • Teachers don’t know how to implement ideas they’ve learned in the workshop   
    • There are so many demands on their time, planning new lessons falls to the back burner   
    • In-building support systems are lacking   
    • The district is strapped for money and hiring instructional coaches isn’t a priority.
  3. Teachers do not put learning from PD into practice.   
    • There is no outside pressure to implement ideas learned in training   
    • Principals are spread too thin to pay close attention to the inservice sessions teachers are attending
    • Principals do not know what to look for after teachers attend inservice.   
    • Teacher evaluations are based on outdated criteria and promote superficial expectations.
  4. Teachers do not communicate implementation of learning   
    • Workshops in the district are often standalone with no formal structure for long term support   
    • The resources committed to PD for several years were focused on one-off training   
    • The district lacked a vision for teacher development as a continual process   
    • District leadership did not see the value of instructional support as a formal position in the district.
  5. Teachers do not implement learning from workshops   
    • No one follows up on the learning from the PD   
    • There was no formal method for recognizing PD   
    • There is no formal expectation of implementation from supervisors (principals, etc)

"Loop" by maldoit https://flickr.com/photos/maldoit/265859956 is licensed under CC BY-NC-ND

Deconstructing Hattie’s Effect Sizes

When taking the necessary in-depth look at Visible Learning with the eye of an expert, we find not a mighty castle but a fragile house of cards that quickly falls apart.

Source: Bergeron, "How to Engage in Pseudoscience with Real Data: A Criticism of John Hattie's Arguments in Visible Learning from the Perspective of a Statistician," McGill Journal of Education / Revue des sciences de l'éducation de McGill

Hattie's effect sizes are often thrown around as catch-all measurements of classroom methods. This reminds me of the learning-styles discussions from several years ago. Both approaches carry the same critical danger: reducing teaching and learning habits to single styles or single measures of effect is bad practice.

The ideas of learning styles or instructional effects are fine in themselves, but not when they're presented as scientific fact. A statistical breakdown of Hattie's effect sizes shows this clearly, as evidenced by this line:

Basically, Hattie computes averages that do not make any sense. A classic example of this type of average is: if my head is in the oven and my feet are in the freezer, on average, I’m comfortably warm.

Aggregating each category into a single effect size calculation disregards all of the other confounding variables present in a given population or individual. Learning styles have the same reductionist problem. In the mornings, reading works better for me. By the end of the day, I'm using YouTube tutorial videos for quick information. The style changes with the context, and the idea of a single best style ignores that context.
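
As a rough illustration of the aggregation problem (a hypothetical sketch with invented numbers, not Hattie's data or his actual synthesis method): if one subgroup shows a strong positive effect and another a strong negative one, a simple pooled average lands near zero, and that single number hides both extremes.

```python
# Hypothetical illustration of how averaging effect sizes can mask
# opposite effects in subgroups. All numbers are invented; this is not
# Hattie's data or the Bergeron critique's analysis.

def pooled_average(effects, weights):
    """Weighted mean of effect sizes (e.g., Cohen's d), ignoring heterogeneity."""
    return sum(d * w for d, w in zip(effects, weights)) / sum(weights)

# Two invented "studies": a strong benefit for one group, a strong harm for another.
effect_sizes = [0.80, -0.60]   # per-subgroup effect sizes
sample_sizes = [100, 100]      # weights (study sizes)

overall = pooled_average(effect_sizes, sample_sizes)
print(f"Pooled effect size: {overall:+.2f}")  # prints +0.10
# The aggregate reads as "a small positive effect" while masking that the
# intervention helped one subgroup substantially and hurt the other.
```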

Use descriptors and measurements with care. Recognize the deficiencies and adjust for context as needed.

A Case for Better Course Design

Campus Technology published an article last week about a biomed course that saw mixed results from flipped instruction. The full article is open access (CC-BY 4.0) and available to read. I’ve read and annotated the original article, and I’m going to distill a couple of points from both the published report and the CT article.

The Report

The authors state right up front that there “were no statistically significant differences in examination scores or students’ assessment of the course between 2015 (traditional) and 2016 (flipped).” Campus Technology (and other publications) often latch on to the grade implications rather than the qualitative student feedback on the efficacy of flipping. To the researchers’ credit, they do recognize the higher retention and application that students reported on feedback surveys.

The biggest red flag for me was in the definition of flipping. As Robert Talbert regularly points out, many research articles limit flipping to “video at home, discussion in class.” The article elaborated on the at-home experience in the methods section. From the article:

Students were introduced to new material each week by completing assigned readings from textbooks and journal articles, then by watching recorded lectures given by faculty experts at MSPH on one of 10 core epidemiology topics. Next, students completed short online graded assessments of their understanding of the new concepts presented in these media based on the Just-in-Time Teaching (JiTT) pedagogy…

Students were also able to submit questions to instructors prior to the in-person meeting, and those questions would be addressed at the start of the session. The article also notes that doctoral students and instructors monitored questions via email or office hours between in-person meetings.

So, students watched a lecture (with no discussion of the format, length, or content of that lecture), read some articles, and then began to apply the material in preparation for the in-person session. More on this later.

Students reported confidence in their learning and ability to apply the material, with a slight increase in the flipped cohort (84.1%) vs. the traditional one (80.6%), though the difference was not statistically significant.
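
For a rough sense of why a gap that size might not clear the significance bar, here's a back-of-envelope two-proportion z-test (a sketch with invented cohort sizes, not the study's actual enrollments): at roughly 100 students per cohort, 84.1% vs. 80.6% stays well below the conventional 1.96 threshold.

```python
# Back-of-envelope two-proportion z-test. Cohort sizes are invented for
# illustration only; the study's actual enrollments differ.
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic for p1 vs. p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(0.841, 100, 0.806, 100)  # hypothetical n = 100 per cohort
print(f"z = {z:.2f}")  # roughly 0.65, far below 1.96 (p < .05, two-tailed)
```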

Campus Technology’s Interpretation

The opening sentence proclaims:

A study at Columbia University’s Mailman School of Public Health found that in a health science course following the flipped classroom model, there was no statistically significant differences in test scores or students’ assessments of their course, compared to a traditional lecture course.

CT does not note that the study took place over two years (two different groups of students), but it did report positive impacts from students’ freedom to watch lectures when they wanted (improved flexibility). CT also included an insightful quote from one of the authors about the lack of time to process information in the traditional design, where discussion came immediately after the lecture, while noting that flipping doesn’t allow for “[direct engagement] with the lecturers.”

The Bigger Picture

The research study and the ensuing report highlight two things for me:

  1. Grades are often the motivating factor when flipped classrooms are studied, which limits discussion of student impact, and
  2. the perceived importance of course design is negligible when studies are conducted or reported.

Students reported higher satisfaction with the class because of the flexibility and because they felt more confident in the material. Time to process information is important, and they were better able to contribute to discussions after having time to think through the lecture. But all the CT article focused on was the grade. It isn’t a secret that few practitioners (K-12 or higher ed) actually read the reports unless they’re actively planning their own study. News outlets and blogs have a responsibility to include gains beyond the final exam score.

How did students grow beyond the test? What improvements did instructors see in the cohort? These are important factors that should be included in follow-up interviews if not in the research report itself. The researchers did have the six instructors fill out surveys, but those responses were not reported in the results alongside the student feedback.

Secondly, course design is critical if we want to improve student performance. Several of the citations were quite old (early-to-mid 2000s) and in a similar vein, looking at student exam scores rather than course design and teaching methodology (granted, several of the cited articles were paywalled, so I couldn’t do a full evaluation of each).

If we simply repackage courses and reverse when the interaction happens, why would we expect students to improve on exams? This article shows that the course is consistent, if nothing else, with no change in student exam performance. How would results have changed if students had explored the material before the lecture, as in Ramsey Musallam’s or Dan Meyer’s work? How would students have benefitted from interactive items at the beginning of the discussion period rather than a rehash of the lecture from the instructor?

While the research makes some interesting points, it is far from conclusive on the efficacy of flipping. The authors make concessions at the end, but we need to keep pushing the discussion away from a particular technology solution and toward analyzing our instructional methods as the real turning point in student learning.


Featured image is Lecture Hall, Chairs flickr photo by Dustpuppy72 shared under a Creative Commons (BY-NC-ND) license