What does the research say about flipped learning?
Last month I had the opportunity to visit Indiana University, to talk about flipped learning with their Center for Innovative Teaching and Learning. It was a great time, and I thoroughly enjoyed working with the CITL and faculty there — one of the hardest working and most active teaching/learning centers I’ve seen, and a really engaged group of faculty.
The main reason I was there was to give a talk on research on flipped learning – mainly addressing the question, What does the research say about the effectiveness of flipped learning? It’s an important question, and one that I get asked on a regular basis in email and social media. I did a high-level overview of this question for my book, but as I detailed here and recently updated here, flipped learning research is expanding at an incredible pace, and a book written mostly in 2016 is going to be somewhat out of date by now. So I appreciate the chance to revisit that question.
First of all, here are the slides from the talk:
Background
At the beginning of my talk, I mentioned that to understand flipped learning research, you first have to have a sound definition of flipped learning; and to do that, you have to examine where flipped learning came from and what problems it was invented to address. I’m not going to go through the history part in this post, although it’s interesting. Suffice to say that flipped learning first appeared as an organized concept, as opposed to something an individual professor did and kept to himself, in the late 1990s with Eric Mazur’s work with peer instruction [1], and then 4-5 years later three other groups — Lage, Platt, and Treglia at Miami University (Ohio), J. Wesley Baker at Cedarville University, and the SCALE-UP team at NC State — all rediscovered flipped learning independently of each other and of Mazur’s work. (The silo effect strikes again.)
The reason I mention this is (1) the early instances of flipped learning motivate the definition that I use for this concept, which is then useful for determining when something is “flipped learning” research and when it’s not; and (2) all of these early pioneers, except for Baker, published their own research on the results, and this constitutes the small, early body of literature on this subject.
By “research” in this article and elsewhere, I mean peer-reviewed publications that methodically explore a research question. This includes peer-reviewed journal articles, books, and conference proceedings that attempt to explore a question about the effects of a flipped learning intervention on something. It does not include things like blog posts, op-eds, or articles that appear in non-peer-reviewed publications.
I talked with Mazur about his early work with peer instruction when I was writing my book, and he agrees that peer instruction these days would be considered a kind of flipped learning, although he himself never used that term or anything like it in the large body of peer instruction research that developed during the 2000s. I’ve chosen not to include peer instruction research with flipped learning research for that reason; peer instruction is so much its own thing that the research done on PI ought, in my view, to be treated as separate from research on flipped learning that is not peer instruction.
As for the others, Lage and her colleagues published a couple of articles on their work in the early 2000s (Lage, Platt, & Treglia, 2000; Lage & Platt, 2000) that mainly look at qualitative measures — they surveyed students on their experiences in the model and reported back the results. The SCALE-UP group did their own research later that decade, which I summarized here; it was more quantitative and methodologically rigorous, using quasi-experiments to compare student outcomes between SCALE-UP sections and traditional sections. Baker, while never publishing data on his work, gave a lot of conference talks and workshops (e.g. Baker, 2000), and it was in those public appearances that the term “classroom flip” was first coined.
What is current flipped learning research like?
Like I said above, flipped learning research has been growing at an exponential pace since 2012 — to be precise, 61% per year according to my model of data through 2017. (I sketch what goes into an estimate like that at the end of this section.) So there’s a lot of it, which makes it hard to pinpoint any one paper as being exceptionally significant. From my vantage point as a person who samples the research output and peer-reviews the raw input on a regular basis, the current body of literature has some overall characteristics:
- It is mostly done by faculty members (usually individuals, sometimes groups of 2-4) who are using flipped learning in their own teaching, collecting some data, and reporting back the results. On balance, this is a very good thing. It means that flipped learning research is never far removed from actual classroom practice, and there is a very wide variety of disciplines, courses, course levels, institutional types, and so on represented.
- However, it can also be kind of a bad thing, because very often in my experience the authors of these research articles don’t have the best training in doing educational research and end up introducing methodological flaws into their work that cloud the results. For example, a common mistake is to have an implicit bias in favor of quantitative techniques and against qualitative techniques — using a quantitative method because it’s familiar (and maybe more socially acceptable among our peers) when a qualitative approach would be better. I’ve also seen researchers fail to control for somewhat obvious confounding variables, or try to use their findings (however tepid) to “sell” flipped learning, putting the strongest possible pro-flipped spin on results rather than being objective and scientific about them.
- Current flipped learning research also tends to be quite limited in scope, precisely because it’s being done by one professor in their own sections (usually just one or two), which makes generalizing the results tricky.
- Insofar as the research is quantitative, it tends to measure metrics native to the class: course grades, final exam grades, clicker question response rates, and so on. Insofar as it’s qualitative, it tends to involve homemade surveys given to students at the end of the semester on their preferences about flipped instruction. This is another limitation of the results we see. I don’t think course and exam grades are good proxies for learning, and I’d like to see more use of validated survey instruments rather than home-cooked Google Forms.
In short, generally speaking the research on flipped learning reminds me of rock and roll made by garage bands. It’s fun, has a lot of heart, and has some promising results. It’s also unpolished, and some of it is, well, pretty bad. But then again, a significant portion of any research is going to be pretty bad. Despite the fact that flipped learning has been around for about 15 years, the vast majority of research on it is less than three years old, so the field lacks a rigorous research framework and major results to use as guides. But I’m personally willing to live with this for now, given the momentum that the field currently has.
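About that growth estimate: here is a minimal sketch of the arithmetic behind a figure like “61% per year.” The idea is to fit a line to the logarithm of yearly publication counts and convert the slope into an annual growth rate; the counts below are made up purely for illustration, not my actual data or my exact model.

```python
# Illustrative sketch only: fit a straight line to the logarithm of yearly
# publication counts, then convert the slope back into an annual growth rate.
import math

# Hypothetical yearly counts of flipped learning publications; made up for
# illustration, not real search data.
years  = [2012, 2013, 2014, 2015, 2016, 2017]
counts = [60, 95, 150, 240, 390, 630]

xs = [y - years[0] for y in years]       # years since 2012
ys = [math.log(c) for c in counts]       # log-transform so growth looks linear

n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))

annual_growth = math.exp(slope) - 1      # 0.61 would mean 61% more papers each year
print(f"Estimated annual growth rate: {annual_growth:.0%}")
```

With made-up counts like these, the fit comes out around 60% per year; the point is simply that “exponential pace” means each year’s output is roughly 1.6 times the previous year’s.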
What does the research say about…
The main reason you’re probably reading this post is because you want to know what this wildly-expanding body of work actually says. Again, there’s so much of it right now that it’s hard to point out specific findings. So I am going to lean on two earlier literature reviews: O’Flaherty and Phillips (2015) and Bishop and Verleger (2013) [2]. These are perhaps the two most commonly cited literature reviews on flipped learning out there. But do bear in mind that a lot has also happened since 2015, and there’s a strong need for newer literature reviews than these.
So, here’s what the flipped learning research (as of 2015) says:
- About measurable student learning outcomes: Here we mean quantitative measures of learning like course grades, exam scores, pre/post test gains on concept inventories, and so on. The vast majority of the papers reviewed show either higher scores for flipped learning students than for students in traditional settings, or else differences that are not statistically significant. My personal sense is that these two outcomes (higher scores and no significant difference) are about equally common. Also, when flipped learning students outperform traditional students, the effect size is usually fairly modest — the improvements are significant, but only rarely do they break through to the levels of something like Richard Hake’s “6000 student” study (Hake, 1998); I say a word below about what that benchmark means. Only rarely do you see students in flipped learning performing significantly worse; one prominent example of this is Jeremy Strayer’s Ph.D. thesis (Strayer, 2007), another early landmark in flipped learning research. Even then, you can often work backwards in the paper and see some issues with the way the model was implemented in the classroom; with Strayer, for example, there was not sufficient support for students as they worked with an intelligent tutoring system.
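For context on that benchmark: Hake’s study measured learning with the average normalized gain on a standardized concept inventory,

\[
\langle g \rangle = \frac{\langle \text{post} \rangle - \langle \text{pre} \rangle}{100\% - \langle \text{pre} \rangle},
\]

that is, the fraction of the possible pre-to-post improvement that a class actually achieves. In Hake’s data, interactive-engagement courses posted normalized gains roughly double those of traditional lecture courses; that is the sort of large, across-the-board effect that flipped learning studies only rarely approach.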
Pause here to say that a main takeaway from the research is that flipped learning tends to do no harm. I know this is hardly an inspiring, movement-making claim. We’d prefer to have game-changing blockbuster positive results that render the science settled on flipped learning — but we don’t have those, yet. So at this point, I’m happy just to consider this to be a small win.
- About student engagement: That term “engagement” is deceptively slippery; I’m using it to refer to student behaviors that indicate active, intentional involvement with their learning processes. There are many ways to measure this. One such way is class attendance (hard to be engaged with class work if you’re not there), and on that score flipped learning does very well: one of the most consistent results in flipped learning research is that it is strongly correlated with improved class meeting attendance, with one study reporting an increase from 30% attendance to 80% between traditional and flipped sections. Another, far more commonly studied engagement issue is whether students are doing the pre-class activities. On this issue, the results are mixed and widely varied; it seems to depend on implementation. Students tend to complete the pre-class activities and learn from them when the activities are structured, have clear value to the in-class activities, and are integrated into those activities rather than being additional busywork. Conversely, if the pre-class activities simply tell students to watch videos and maybe do some exercises, without structure or any kind of feedback loop, they tend not to get done.
- About student preferences and attitudes: A lot of flipped learning research focuses on whether students prefer flipped learning to the traditional approach (and, sometimes, digging deeper to find out exactly what they prefer or don’t prefer and why). The results here are, to be honest, frustrating: Students tend to show higher satisfaction with flipped learning than with traditional methods, especially its focus on active learning, and they like that flipped learning gives them more time for group work, more experience with communicating their ideas, more interaction with their friends, more attention from the instructor, and a heightened sense of ownership and empowerment. But — and here’s the big caveat — these benefits only tend to sink in over time. Students often have highly negative views on flipped learning when it is first introduced, and some students persist with those views throughout the course. Additionally, and to me this is the most frustrating part, students hold these strongly negative beliefs about flipped learning even as they acknowledge the above benefits. In other words, students realize the benefits of flipped learning and even agree that they are benefits; but they still want to go back to traditional lecture anyway.
I’m not sure what to tell you about this, other than, as I have said elsewhere, the key to succeeding with flipped learning is communication with students. When teaching with flipped learning, you have to be clear about your expectations for student work and why the class is set up the way it is; persistent in making sure students know what they are supposed to do and why they are doing it; constant in soliciting student feedback and acting on it; and unwavering in extending support to students and helping them succeed. There is no shortcut around this. If you’re thinking about flipped learning and can’t commit to this level of communication with students, put yourself on the one-year plan and think carefully about whether this method is for you.
Continuing on:
- About the use of technology: There don’t seem to be a lot of studies specifically about the use of technology in flipped learning environments. What’s there tends to say that the type or quantity of technology used in flipped learning courses isn’t as important as simply using technology in the first place to promote student learning. In other words, if there’s technology you can use in the course to make the enhanced in-class active work more fruitful, then use it; but there’s no special combination that makes it maximally effective. I would add, however, from personal experience that you should be careful not to overload students with technology, because the cognitive load imposed by flipped learning can already be quite high for some students.
- About instructors and institutions: There are some interesting studies about how flipped learning affects instructors and the institutions they work for. We’ve already mentioned that implementation is crucial; poorly designed flipped learning courses will almost always yield bad results. However, the research tends to say that the differences in student outcomes among different implementations of flipped learning are small in comparison to the differences in outcomes between flipped and traditional classes. That is, the specific way you implement flipped learning is not as important as the fact that you are flipping in the first place, offloading direct instruction into students’ individual spaces and using the class time for active learning. Also, research confirms common sense when it says that flipped learning incurs major costs in time and effort, especially when first starting up, and as a result instructors need support from their institutions (time, money, training, risk abatement in the form of protections in promotion and tenure, etc.).
What the research doesn’t say
What is really striking to me is not so much what the research says as what it doesn’t say. In particular, the research data show no significant differences, in either qualitative or quantitative studies, for any of the following situations:
- Whether the course being flipped is introductory vs. advanced
- Whether the course is undergraduate vs. graduate
- Whether the course has a lot of technology in it vs. not a lot
- Whether the course is a small section vs. a large section
- Whether all of the course is flipped vs. only part of it
- Whether videos are used vs. not used
To be fair, some of this is because there is not a lot of research that targets those scenarios in the first place. And there might be newer studies that say something definitive about these. But, as far as the studies here indicate, flipped learning works just as well in any one of these scenarios as in its opposite.
What holes are there in the research right now?
Looking over the current research literature, I can see some gaps where little to no research has been done yet, and so these are great opportunities for others to jump in:
- Flipped learning and accessibility. I’ve not seen any study that looks into how students with special access needs — including physical disabilities like vision impairments as well as technological access issues — experience or cope with flipped learning environments. For example, what about students who are blind and can’t watch a video?
- Flipped learning and students with learning disabilities. Relatedly, it’s not at all clear whether flipped learning is effective with, or even appropriate for, students with learning disabilities like ADHD. On the one hand, a student with ADHD should benefit from being able to control the pacing of direct instruction; on the other hand, controlling that pacing requires high-functioning self-regulation, which is an issue for those students. This question is the subject of one of my sabbatical studies, and as I reported earlier, there are some interesting and encouraging findings from the world of online and blended learning; but it’s wide open for flipped learning specifically.
- Longitudinal studies. I mentioned that students tend to have strongly negative reactions to flipped learning at first but then often settle down and see the benefits. But do those benefits actually persist after the course is over? Does flipped learning truly develop long-lasting improvements to content mastery or self-regulation skills? We need longitudinal studies for this, and I don’t know that there have been any yet.
- Determining best practices for flipped instructional design. I think we have enough data from flipped learning at this point to start asking what design principles lead to flipped learning materials and course designs that work the best with most students. I’m thinking of my own Guided Practice approach to pre-class work; this seems to work well for me, but are there better models?
Conclusions
My first thought looking over all these results is that we really need to update them for 2018. I have seen some newer literature reviews come across my Google Scholar alerts, done as class projects or M.S. theses, but I haven’t read them (yet). Additionally, we could certainly use a major, rigorous meta-analysis of flipped learning research results, like the one done in the PNAS study (Freeman et al., 2014), to see what we have.
My second thought is that flipped learning research is really an awesome thing to watch unfold. It seems different from research in other areas of education: less jargon-filled, less dry, more practical and in-the-trenches, done by ordinary professors (which has its good and bad points, as I said earlier), and quite often inspiring for my own practice. It feels like something that anybody could jump in and do, and perhaps the big breakthroughs we are looking for will come from somebody reading this post who decides to do just that.
References
Baker, J. W. (2000). The “Classroom Flip”: Using Web course management tools to become the guide by the side. In J. A. Chambers (Ed.), 11th International Conference on College Teaching and Learning (pp. 9–17). Jacksonville, FL.
Bishop, J., & Verleger, M. (2013). The flipped classroom: A survey of the research. Proceedings of the Annual Conference of the American Society for Engineering Education, 6219. http://doi.org/10.1109/FIE.2013.6684807
Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410–8415. http://doi.org/10.1073/pnas.1319030111
Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66, 64–74. Retrieved from http://pdfserv.aip.org/AJPIAS/vol_66/iss_1/64_1.pdf
Lage, M. J., Platt, G. J., & Treglia, M. (2000). Inverting the classroom: A gateway to creating an inclusive learning environment. The Journal of Economic Education, 31(1), 30-43.
Lage, M. J., & Platt, G. (2000). The internet and the inverted classroom. The Journal of Economic Education, 31(1), 11.
O’Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. Internet and Higher Education, 25, 85–95. http://doi.org/10.1016/j.iheduc.2015.02.002
Strayer, J. (2007). The effects of the classroom flip on the learning environment: A comparison of learning activity in a traditional classroom and a flip classroom that used an intelligent tutoring system (Doctoral dissertation). Ohio State University.
Image credit: https://pixabay.com/en/survey-opinion-research-voting-fill-1594962/
1. Someone at the IU talk pointed out that the concept of flipped learning showed up even earlier, in the 1998 book Effective Grading: A Tool for Learning and Assessment in College by Barbara Walvoord and Virginia Johnson Anderson. I mentioned this in my book, actually, and there I also explain why I don’t consider their work an early instance of flipped learning, although of course that book is a classic in its field.
2. Fair warning: I am not a huge fan of Bishop and Verleger because they explicitly state that a course must use video in order to be considered a flipped learning environment. In my view this is a pointless restriction that ignores the historical development (for example, Mazur didn’t use video) and excludes a number of excellent instances of flipped learning (like Lorena Barba’s numerical methods MOOC). I’m including it here because so many others find it useful.