A word about “cheating”, please.

I joined the #EdTech world almost 5 years ago. Fresh from nearly a decade of digital advertising and marketing automation, I was thrilled to have the chance to do something meaningful and mission-driven – to work on products that would change lives for the better.

My first few years at Reading Plus were about supporting and refining our flagship product, a silent reading intervention used by thousands of students around the world to improve their visual-motor skills, vocabulary, and fluency. Then, as now, we were growing steadily, adding districts, sites, and users at a good clip. We experienced the usual pains of expansion – having to scale while keeping up with feature requests.

Occasionally, we would learn of an exploit that students in the field had discovered – some way to game the system, to “complete” their assignments without actually performing the work as it was designed. We would patch our application as needed, engaged in a digital arms race between our product and our end users.

In the fall of 2013 we released a new version of our product, version 4.0, which fulfilled three major goals for our organization: a new plugin-free client interface, an overhauled content library with engaging, leveled stories, and a benchmark assessment to place and track student progress. All of this was built in an extensible framework, with a strong pedagogical basis that was aligned with Common Core. Reports from the field indicated that users enjoyed the new story selections, the fresh interface, and the overall program experience. And there was much rejoicing.

Since the release, our work has been refining and adding features, scaling our infrastructure, and yes – learning about and patching exploits that students have discovered to get around completing their required number of assignments per week. On social media, we keep tabs as students trade the various ways they believe they can game the system and avoid doing the work as it was intended.

We are not unique as an educational publisher and provider, or even within the American education landscape. Cheating is everywhere, from Harvard to Stuyvesant High School. With ever more technology used for instruction and assessment, the issue continues to grow. The reasons for cheating vary, but the reactions are generally the same – reprimands and ever more stringent countermeasures to prevent future occurrences. Welcome to The Educational Arms Race.

As this year has progressed and our team has played cat-and-mouse with users and their cheats, I have come to shift my thinking about “cheating” and our responses to it. I propose that we step back from accusing and classifying the actions of students as such, and instead use this information, learn from it, and improve our engagement of students.

Three years ago, when we first started work on the initial iteration of our product, we created a pair of pragmatic personas to represent the students who would use our application: Krystyn, an “at-risk” student, and Paul, an “unmotivated” student. By building out these personas we identified pain points, desired features, and general characteristics of the software we were building. We also built empathy within the team for our users themselves. We focused on building an application that would improve users’ reading comprehension by developing their efficiency and capacity, and on fostering motivation through aspects of the design. For users who are stimulated by the motivational components of the experience, and inspired and supported by their educators, the program builds better readers (see my previous post re: Ron Clark Academy). But when our motivational system runs up against some of the realities of student life, it can present a problem. This manifests as students doing anything other than what the program expects of them.

Student exploits in our system take one of three main forms:

– altering the interface markup (a.k.a. graffiti),

– seeking to gain an advantage to improve their scores, and

– paying or allowing other students or relatives to do their work.

Our application interface is built in HTML5, JavaScript, and CSS. We chose these technologies to work within the parameters of the open web, and to make the program run on as many platforms as possible. Because of this, the markup can be modified by anyone with sufficient knowledge of web technology and a browser with dev tools. For instance, I can modify the EdTech Review navigation using the Chrome Developer Tools and take a screenshot:

[Screenshot: the EdTech Review navigation, modified via Chrome Developer Tools]

This tactic is mostly for fun, a way for students to alter the interface, take a screenshot, and show it to their instructor, as a way of pointing out that our app is “broken”. Or a way to change their name on their home screen to Beyonce. Yes, that’s a thing, apparently.
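To be concrete about how little it takes: the whole trick is a line or two typed into the DevTools console. A minimal sketch of the idea – the selector and names are hypothetical, and a plain object stands in for the DOM node so the snippet runs outside a browser:

```javascript
// A minimal sketch of interface "graffiti": a purely client-side DOM edit.
// In a real browser console the first line would be something like
//   const nameplate = document.querySelector('.student-name');
// ('.student-name' is a hypothetical selector). A plain object stands in
// for the DOM node here so the idea is visible outside a browser.
const nameplate = { textContent: 'Jordan S.' };

// What a student effectively types into Chrome DevTools:
nameplate.textContent = 'Beyonce';

console.log(nameplate.textContent); // prints "Beyonce" – locally only; a page refresh undoes it
```

Nothing is saved to our servers; the “hack” lives and dies in that one browser tab, which is why a screenshot is the only trophy.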

In terms of the second point, students have sought ways to make the interface give hints about correct or incorrect responses to items. Our original design called for making the interface snappy by optimistically using Ajax (Asynchronous JavaScript and XML) to confirm answer responses with a minimum of server interaction. A quick-fingered user could, with luck, gain a clue to a correct answer in some cases. We have had to modify the application to close off these strategies, at the cost of some of the interface’s responsiveness.
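The underlying fix is a standard one: ship no answer key or early confirmation to the browser, and let the client learn right or wrong only after the server has judged the submission. A minimal sketch of that pattern – the item IDs, answer store, and function names are hypothetical, not our actual code:

```javascript
// Server side: the answer key lives only here, never in markup or client JS.
const answerKey = new Map([
  ['item-101', 'b'],
  ['item-102', 'd'],
]);

// The only thing the client can do is submit and await a verdict.
// No optimistic pre-confirmation means no timing or payload clues to mine.
function scoreResponse(itemId, response) {
  if (!answerKey.has(itemId)) {
    return { ok: false, error: 'unknown item' };
  }
  return { ok: true, correct: answerKey.get(itemId) === response };
}

console.log(scoreResponse('item-101', 'b')); // { ok: true, correct: true }
console.log(scoreResponse('item-101', 'a')); // { ok: true, correct: false }
```

The trade-off is exactly the one we felt: every answer now costs a round trip to the server, so the interface is honest but a touch less snappy.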

The third issue – the “stand-in” – is more complicated. I can see how technologies like facial recognition could be used to verify the identity of the user. Centralized authentication systems and Single Sign-On will make students less likely to give away their credentials so another student can do their work. And as we make our system more adaptive, it will eventually be possible to figure out whether the person driving is actually the authentic user. No more punching someone else’s timecard.

Cheating a system undermines the validity of the work it measures, while simultaneously hurting users who are engaged in honest work on themselves. This is the most troubling aspect of the situation – some students diligently working to complete their assignments while their classmates circumvent those same responsibilities.

Calling something a “cheat” is pejorative – it implies something negative about the person doing it. But how else are we to classify the actions of those who are determined to work around the intentions of the program? Even when, in some cases, performing the exploit is actually *more* complicated than simply completing the task?

What if instead of calling these students “cheaters”, we used something with more sizzle, like “disruptors”? They are thinking outside the rules, working beyond the constraints of the system. What can we learn from their actions and patterns? Could we find a way to make what was a “cheat” into a learning experience?

What if cracking open the HTML using a developer tool revealed an Easter egg in the source code, like a short blurb about Tim Berners-Lee, and a way to answer a quick question to win a secret badge?
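Many sites already greet anyone who opens the console with a hidden message, and the same channel could pose the question. A hypothetical sketch in plain JavaScript – the greeting, question, and badge are all invented for illustration:

```javascript
// A hypothetical DevTools Easter egg: greet the student who opened the
// console, teach one fact, and offer a secret badge for a right answer.
const easterEgg = {
  greeting: 'You found the developer console! Tim Berners-Lee invented ' +
            'HTML and the World Wide Web in 1989-1991.',
  question: 'What do the letters HTML stand for?',
  check(answer) {
    const correct = answer.trim().toLowerCase() === 'hypertext markup language';
    return correct ? 'Secret badge unlocked!' : 'Not quite - look it up!';
  },
};

console.log(easterEgg.greeting);
console.log(easterEgg.question);
console.log(easterEgg.check('HyperText Markup Language')); // Secret badge unlocked!
```

The student who pokes at the markup is, by definition, curious about how the web works – this would meet that curiosity with a reward instead of a reprimand.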

There are ways to use “cheating” as the proverbial “teaching moment” – we as EdTech innovators need to discover and create them, and I suspect that the students can help us. And that would be the best anti-cheating strategy of all.

