Weekly Reflection #6

Photo credits: University of Victoria Education Technology

Are there any important additional considerations to evaluating educational technologies that are not included in the SAMR or Triple-E frameworks?

When I started working through these materials this morning, I had no idea what the SAMR and Triple E frameworks were; I had never heard of them. I now know that, when evaluating educational technology, the two frameworks help us ask very different but complementary questions.

First, the SAMR framework asks what technology is doing to a specific task: it focuses on how technology changes the structure of a learning activity. It describes four distinct levels:

  • Substitution: Technology replaces a traditional tool, with no functional change.
  • Augmentation: Technology replaces a tool, but adds some improvement.
  • Modification: Technology allows significant task redesign.
  • Redefinition: Technology enables tasks that were previously impossible.

When applying these levels to a specific task, like writing an essay in Google Docs, it would look a little like this:

  • Substitution: Students type an essay in Google Docs instead of handwriting it.
  • Augmentation: Students type in Google Docs and use spell-check and built-in commenting tools for peer feedback.
  • Modification: Students collaborate in real-time on a shared document, providing synchronous feedback and revision.
  • Redefinition: Students publish their writing to a global audience, embed multimedia, hyperlink sources. 

On the other hand, the Triple E framework shifts the focus from the task itself to the learner, asking how much a specific technology actually helps learning. It asks whether the tool:

  • Engages: Helps students focus on the learning goal
  • Enhances: Deepens understanding or scaffolds thinking
  • Extends: Connects learning beyond the classroom

When applying these criteria to a specific technology, like the H5P video feature within WordPress, it would look like this:

  • Engages: Students answer embedded questions while watching instead of passively viewing.
  • Enhances: Immediate feedback clarifies misconceptions and supports deeper understanding.
  • Extends: Students apply learning by creating their own interactive video or connecting concepts to real-world examples.

After reflecting on both frameworks, I believe there are several critical factors missing from these models that educators must consider before adopting educational technologies.

First is financial cost, and whether a tool is worth that cost. Neither the SAMR nor the Triple E framework addresses money, but it is an incredibly important factor. A tool might sit at “redefinition” on the SAMR ladder and score high in “enhancement” and “extension” on Triple E, but if it costs thousands of dollars annually, that matters (especially when district and school budgets are so thin). You need to take into account things like the cost per student and the costs the new tool might eliminate. A free or lower-cost tool that scores slightly lower pedagogically may well be more practical and equitable in a real school setting.

Second is the cost in time. A tool that looks transformative in theory may fail in practice if it requires excessive professional development or troubleshooting. Teachers are already overworked, so a tool may not be practical if it takes hours upon hours to learn.

Third is data and privacy. In this era of online proliferation, student privacy should be a top concern for teachers, and it is yet another consideration both frameworks fail to address. Many tools monetize user data, which could compromise student privacy and raise all sorts of ethical concerns.

Lastly is accessibility and inclusion, an incredibly important consideration today. A tool might redefine learning and enhance understanding, but if it is not compatible with screen readers, lacks captions or sufficient colour contrast, or is hard for certain students to navigate, it will end up excluding them. This, too, is a concern neither framework addresses.

Please reflect on the H5P instructional video you created a few weeks ago. What are the strengths and weaknesses of your H5P video when evaluating it using the SAMR & Triple-E Frameworks?

A few weeks ago, I created an H5P interactive instructional video using the screen-recording feature in Zoom and the H5P feature in WordPress. The video walked viewers through how to navigate the National Hockey League (NHL) website: a tutorial showing how to find scores, standings, team information, and other key features, such as fantasy hockey. Looking back at it now through the lenses of the SAMR model and the Triple E Framework, I can better identify both its strengths and its limitations.

When it comes to the SAMR framework, my video likely sits between augmentation and modification. Some of its strengths included:

  • It was a reusable digital tutorial.
  • It included embedded interactive questions throughout the video.
  • Students could pause, rewind, and rewatch sections as needed.
  • It provided immediate feedback on embedded questions.

These strengths move the video into the category of augmentation, possibly approaching modification, since the task was redesigned to include built-in formative assessment. It was not just a substitution, a simple recording of a lecture.

On the other hand, there were also several weaknesses. The task itself did not fundamentally change. Students were still learning how to navigate a website. The technology improved delivery and flexibility, but it did not redefine the learning experience. In sum, the tool enhanced instruction, but it did not transform it. 

To reach the level of redefinition, I could have done several things, including:

  • Asking students to analyze website design and usability.
  • Integrating real-time data comparisons or analytics tasks.

When it comes to the Triple E framework, my video engaged learners and enhanced the learning experience, but it did not really extend it in any way. For enhancement, H5P allowed viewers to pause and replay, learn at their own pace, and receive immediate feedback on questions. For engagement, the interactive questions embedded in H5P helped prevent passive watching. Instead of simply observing the NHL site walkthrough, learners had to respond to prompts.

Extending was where my video was weakest and needs the most improvement. The video did not significantly connect classroom learning to real-world application beyond simply navigating the NHL site. To better meet the “extend” criterion, I could have asked students to compare how different sports leagues present data, or had them analyze how statistics influence fan engagement.
