Assessing Analytic and Evaluative Skills Using Multiple Choice

Photo by Pixabay. Image description: A black and white photo of a room with seven identical white doors. Large black and white patterned wallpaper covers the wall around the doors.

AAPT “How We Teach” July 14, 2021

Jennifer Szende

In July 2021, I led a virtual workshop session on multiple choice testing in philosophy as part of the AAPT series on “How We Teach”. This is an adapted version of the handout I distributed with the presentation. Multiple choice testing is sometimes dismissed as too easy for students, too open to dishonesty, or too difficult to design for instructors. Here, I give an argument in favour of multiple choice testing, I respond to some concerns, and I offer some tips and best practices resources for effective multiple choice testing in philosophy. Much of what I say here will be relevant to other academic disciplines and other testing scenarios in addition to academic philosophy.

Why choose multiple choice?

There are many good reasons to include multiple choice within a balanced assessment portfolio. I focus on effectiveness, fairness, and efficiency.

Effectiveness: Multiple choice questions can be an effective way to assess the learner’s ability to recall, understand, apply, analyze, and evaluate. Standardized tests typically use case studies and sight passages to assess students’ understanding, application of concepts, analysis, and evaluation of novel information. So, to an extent, many of us are familiar with multiple choice testing that is designed to assess skills beyond information recall. One frequent objection to those types of tests is that they assess ‘test-taking ability’ or ‘familiarity with the test format’ rather than assessing analysis or understanding. Keeping this worry in mind, I have tended to design my tests as open book and without a time limit. Open book, because I am very happy to build formative assessments that force students to look at the course material in a new light, and give students the opportunity to examine what they find in that new light. If they read a passage for the first time, or re-read it for a subsequent time, in order to answer the question, the test has done its job.

Fairness: Sometimes, assessing students can be unavoidably subjective. For essays and presentations, the bias and subjectivity are mostly located at the stage of marking or assessing student work, with some bias and subjectivity located in the design of the assignment. Rubrics can help standardize the subjectivity across students (thereby increasing fairness), but a level of subjectivity remains. Think of cases where TAs and instructors standardize each other’s ‘A’ paper, ‘B’ paper, and ‘C’ paper, or cases where students appeal a grade by comparing marks and assignments with other students in the class. For multiple choice, the subjectivity of assessment is located at the stage of writing questions, rather than at the stage of marking questions. As a result, the subjectivity and bias are more fairly distributed across all test takers (Loftis 2019). I have taken Rob Loftis’s advice seriously, and have taken to offering students a space in which to explain their answers. I don’t read these explanations for correct answers, but I find I am often able to give partial or full credit to students who misunderstood the question but demonstrate understanding of the material; other times these responses help me to recognize and rectify (with full credit) questions that were unintentionally ambiguous.

Efficiency: Multiple-choice tests can cover a large scope of material in a relatively short assessment, and they are easy to mark, even for large and online courses. See David DiBattista’s argument here. In some cases, the Learning Management System (LMS) or scantron system can mark the test automatically, or mark it pending instructor review and approval. In particular, the cognitive burden of marking is reduced, and that is no small feat, even if much of that burden shifts to the stage of test design. When I have large (90+ student) classes, these tests allow me to save some of myself for other types of student engagement and assessment. With LMS tests, automatically generated feedback can give learners an immediate explanation of the correct answer.

So, why would philosophers avoid using MC?

It’s too difficult for the instructor! Constructed answer questions are much easier to produce (DiBattista and Kurzawa 2011). The easiest multiple-choice questions to produce assess information recall, and many teachers aren’t interested in assessing information recall. Genuinely challenging, formative multiple-choice questions, especially those that assess understanding, analysis, application, or evaluation, can be difficult and time-consuming to write and design.

  • Practice writing questions in a variety of styles, for a range of skills.
  • Use some of the question-writing tips offered here or in the further resources linked below.
  • Pace yourself throughout the term. Write 1-3 questions per week, or per lecture. Schedule time to write questions after each lecture, when the material and discussion are fresh in your mind.

Multiple Choice is too easy for my students, or too low on Bloom’s taxonomy (DiBattista and Kurzawa 2011; DiBattista 2008; Loftis 2019). Many instructors worry that students will just use a search function to find the answers. The solution is to design the test/write questions with this worry in mind.

  • First, ask yourself: “What is the purpose of the test?” Are you assessing whether students have attended lecture/read the material? Whether they have understood the material? Whether they can apply a concept to a novel situation? It might turn out that you want to assess information recall in a particular instance. But, if so, it might be an appropriate occasion on which to set a time limit (with appropriate extensions for students who need it), or it might work best for an in-class test. If, however, you want to assess understanding, analysis, or application, remove the time limit and design questions to be open book. Invite students to take the time to look up the answers. You may wish to use paraphrasing to avoid searchable terms. Alternatively, you may actually choose to have your students look it up, perhaps using a search function. If they haven’t reviewed the material very closely yet, maybe the test is a good way to get them to read key passages.
  • MC can be formative, medium to high on Bloom’s taxonomy, and can provide a valid measure of student achievement.
  • Skills that can be tested with MC: recall, understanding, application, analysis, and (perhaps) evaluation (Loftis).

Academic dishonesty. Lots of worries arise on teaching forums about students paying someone else to write the test, working together, or copying each other. If that is your worry, design with it in mind. But also, learn a bit more about triggers of academic dishonesty, and try to design your evaluation to avoid these.

  • Again, consider: ‘What is the purpose of the test?’ Choose an appropriate assessment strategy for the thing being tested. Multiple choice tests can be formative, and the purpose of testing might be to familiarize students with key concepts. The process of looking up the answer and reading through the questions might be exactly what you want to test. Consider giving students explicit permission to work on these questions together, along with an unmarked fill-in-the-blank: ‘I worked on this test with the following person/people….’
  • Use low stakes multiple choice testing. Frequent (open book?) tests worth 2-5% with the lowest marks dropped are less likely to lead students to feel under pressure than one-time exams worth 30-40%.
  • Use randomization. Learning management systems such as D2L/Brightspace, Blackboard, or Canvas allow multiple forms of randomization in testing. Build question ‘pools’ or ‘banks’ with a larger number of questions on each topic than will appear on the test. The LMS will randomly generate a set of questions, and will randomize the order that they appear in for each student (within parameters set by the instructor or test designer). The LMS can even randomize the order that the options appear in within each multiple choice question, which encourages closer reading of the question.
  • Consider using untimed and/or open book tests. Design a test that will require looking up (some? most?) answers, and give students time and permission to do so. If the test is designed to be open book, looking up the answer will not constitute cheating.
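The pool-and-shuffle behaviour described above can be sketched in a few lines of Python. This is an illustrative toy, not the implementation of any particular LMS; the pool contents and the idea of a per-student seed are my own assumptions for the sketch:

```python
import random

def build_quiz(pool, n_questions, seed):
    """Illustrative sketch of LMS-style randomization: sample n questions
    from a larger bank, shuffle their order, and shuffle each question's
    answer options. A per-student seed yields a per-student quiz."""
    rng = random.Random(seed)
    questions = rng.sample(pool, n_questions)  # draw a subset from the bank
    quiz = []
    for stem, options in questions:
        opts = options[:]          # copy so the bank itself is untouched
        rng.shuffle(opts)          # randomize option order within the question
        quiz.append((stem, opts))
    return quiz

# A tiny hypothetical bank: (stem, [options]) pairs.
pool = [
    ("Q1 stem", ["A", "B", "C", "D"]),
    ("Q2 stem", ["A", "B", "C", "D"]),
    ("Q3 stem", ["A", "B", "C", "D"]),
    ("Q4 stem", ["A", "B", "C", "D"]),
]
quiz = build_quiz(pool, n_questions=2, seed=42)
```

Because the generator is seeded, the same student (seed) always sees the same quiz, while different students see different draws and orderings — which is roughly the property that makes pooled questions useful against copying.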

General best practices for Multiple Choice:

Some MC question-writing strategies:

What follows are a few question-writing strategies that I have used in the past to generate questions. I keep this list handy when I am trying to generate 1–2 questions each week based on the discussion. I review my lecture notes or slides, and any examples discussed in class – especially those raised by students – and try to write a question stem and the correct answer before generating distractor responses, also drawn from lecture, discussion, or written material.

  1. Paraphrase, and use the paraphrase rather than quotations in the stem or the multiple choice options:
    1. Paraphrase the thesis of an article.
    2. Paraphrase definitions for key terms.
    3. Paraphrase key objections.
  2. Use key terms and key concepts in multiple-choice, but try to use them in novel situations/case studies/ examples.
  3. Use comparisons/contrasts/lists drawn from course material or discussions.
  4. What does the example show?
    1. Example from the reading: What point is Author making when they use X?
    2. Example from the news/ film/ popular culture: What would Author say about X?
    3. Example from the news/film/popular culture: Which Author would make which of the following claims?
  5. Who (which Author) would agree with [paraphrase]?
  6. How might Author A respond to Author B’s question/quote/example/concern?
  7. Author A and Author B agree about X.
    1. True or False?
    2. Which reason would each give for X?

Leave a comment

Filed under Instructional Design

Yet another biased algorithm

Image of two pencils and a notebook on a wooden surface. Photo by Skitterphoto.

Every year in England and Wales, there is some sort of A-level controversy. Some years, controversy has arisen when too many students take supposedly “easy” A-levels. In other years, the concern has been that too many students are achieving top grades. These exams are the culmination of a student’s secondary education, and are designed to comprehensively evaluate students on two years of work. They are taken to be a demonstration of skills and knowledge. They also allow universities and future employers to compare (and rank) students and schools.

The news story related to A-levels has rarely been explicitly about bias, even when it has often been implicitly about socio-economic or gender ‘patterns’ appearing in exam results. (“Look! A pattern! How did that get there?”) This year’s controversy, on the other hand, wears its bias on its sleeve.

In 2020, for Covid-19 related reasons, the written exams were cancelled. Instead, the government opted to use an algorithm that purported to calculate how a student would have (might have? could have?) done on their exams. Grades were ‘assigned’ according to an algorithm that used 2017-2019 historic grade distribution at a particular school, the entering grades (GCSE) of the particular class, and (in order to bridge the two) a calculation of how well historic entering grades (GCSE) correlated with historic grade distribution in the 2017-2019 exams.
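To see how a procedure like this embeds its historic inputs in its outputs, here is a deliberately simplified caricature in Python. This is not Ofqual's actual model: the rank-matching rule, the student names, and the data are invented for illustration, and the toy uses only letter grades A–D (no A*) so that alphabetical order is best-first.

```python
def moderate(historic_grades, current_cohort_rank):
    """Toy caricature of moderation-by-historic-distribution: assign
    this year's students grades drawn from the distribution earned by
    previous cohorts at the same school, matched by within-class rank.
    The students' own exam work never enters the calculation."""
    # Letter grades A-D sort alphabetically best-first in this toy.
    dist = sorted(historic_grades)
    n = len(current_cohort_rank)
    assigned = {}
    for i, student in enumerate(current_cohort_rank):  # index 0 = top-ranked
        # Map the student's rank onto a position in the historic distribution.
        j = round(i * (len(dist) - 1) / max(n - 1, 1))
        assigned[student] = dist[j]
    return assigned

# No previous cohort at this (hypothetical) school earned better than a B ...
historic = ["B", "B", "C", "C", "D", "D"]
# ... so even this year's top-ranked student cannot be assigned an A.
grades = moderate(historic, ["Asha", "Ben", "Cam"])
```

The toy makes the structural point visible: the ceiling and floor of each student's possible grade are set entirely by what other people did in other years, which is exactly the unfairness discussed below.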

It turned out that the algorithm adjusted 40% of grades downwards, and that the top students from state-run schools in deprived areas were disproportionately affected by downgrading. Eventually, the algorithm-generated marks were overturned, and the use of the algorithm itself is being challenged as discriminatory, but not without leaving its mark. Of course, the discriminatory pattern was only discovered after the algorithm’s results had been published. At that point, the university admission process proceeded to secondary admissions (through a process called ‘clearing’). A lot of damage was done in a very short period of time as a result of lapsed judgment.

That the algorithm would turn out to be unfair was predictable, and that is part of the basis of the political furore. Ministers and high-level civil servants have resigned or been fired, at least partly because there is evidence that they should have or could have predicted the unfairness: two of the components in the algorithm essentially calculate a student’s mark based on how other students – in previous years – performed on their exams. As my 6-year-old will tell you, it is a basic form of unfairness to give someone credit or blame for what someone else has done. Even the algorithm component that tied the estimated grade to the particular student used GCSE results, so that how a student had done on another, unrelated exam two years previously held significant sway over how the algorithm predicted (suggested?) that they would have (might have? could have?) done on an exam that didn’t, in fact, take place. All of this suggests that anyone who stopped to think about the components of the algorithm could have anticipated the results being unfair.

That the algorithm might turn out to exhibit patterns of bias, however, required a bit more insight and understanding of how these exam results normally work. But it, too, was predictable to anyone familiar with historic and recent patterns of bias in the exam results.

A-level results are fundamental to the university admission process. They are seen as a meritocratic – and therefore neutral, unbiased – ranking system. Some offers of university admissions are conditional on receiving certain results, and conditional admissions are revoked if a certain grade isn’t achieved, at least in a non-pandemic year. So, public outcry number one occurred as soon as it was revealed that some of the published results had been the result of downgrading by an algorithm: many students had their university offers of admission withdrawn, and the spots were offered to students on a wait list, who promptly accepted. Within a matter of hours, the university places were no longer available to the original student whose results had been downgraded by the algorithm. Again, the students whose results were downgraded and whose offers of university admission were rescinded were disproportionately from state-run schools in deprived areas. This was a tangible loss resulting from the use of the algorithm, and it had a disproportionate effect on disadvantaged students. Universities, exam boards, schools, and the governing body for the exam process have been scrambling to repair the damage. Many are hopeful that they will arrive at a solution through a combination of deferred university admissions and extending additional offers of university admissions. But since A-levels have always been used to rank students for university admissions, there is an extent to which the use of the algorithm was an attempt to preserve the ranking function of A-levels in the wake of the cancelled exams. Yet, the algorithm preserved more than that; it also preserved the socio-economic patterns of the outcomes.

In addition to undergraduate admissions, A-levels are also used as part of postgraduate admissions processes. How you did in your undergraduate degree might be weighed against (or outweighed by) how you had previously done in your A-levels.

A-levels are also used as part of many job applications, including professional job applications throughout a career. How you did on your A-levels might be taken into consideration when you apply for a job as a barrister, university lecturer, architect, or medical doctor, notwithstanding the fact that all of these jobs require you to have completed further qualifications beyond your A-level results.

These various uses of A-levels reveal that the exams serve a gatekeeping function. If you don’t achieve the right set of marks, the gate will remain closed. Only those who pass a certain threshold will be allowed to pass through the gate of university admissions, or of postgraduate admissions, or of professional qualifications.

My worry is that the effects of A-level bias have broader and longer-lasting implications than the annual controversy suggests. A-levels are taken to demonstrate more than just a snapshot of a student’s skills and knowledge on a particular subject. They are interpreted as an objective, merit-based ranking of students’ abilities. They are treated as having predictive qualities. The suggestion is that students who achieve high marks are not merely good at taking tests; rather, high achievers at A-levels possess “wisdom” or “knowledge” or “understanding” or even “genius”. These marks are used to compare and rank students or job applicants, and to determine who falls above (and who below) various thresholds. And so, the test’s gatekeeping function persists well beyond university admissions week.

A-level results are sticky. Via job applications, they can stay with an individual for decades. As an academic with a PhD who occasionally applies to jobs in the UK, I have filled out job applications that still ask about my (non-existent) A-level results. What could A-level results possibly reveal about someone applying for a position as a university lecturer? Well, in my case, they reveal that I am a foreigner, since I didn’t complete any A-levels. In Brexit Britain that is no small thing to have to admit at the start of an application process.

In more typical domestic cases, however, A-levels in job applications subtly reveal exactly what the algorithm controversy is about: they reveal class and social markers by naming where A-levels were completed. This is true of the various Old Etonians in government, for example. A-level results might, in this subtle way, reveal geographic or class origins; at least, they tend to reveal whether a student attended a state-school in a working class neighbourhood or a £30,000 private school. And in either case, the implicit information could and likely would taint the ‘neutral’ merit ranking of using A-levels in hiring decisions.

But, implicit bias itself is sticky, so, once a hiring manager or admissions officer (or a hiring manager’s algorithm or AI) knows where you completed your schooling, the neutral merit ranking can also become tarnished by prestige and other accompanying forms of bias.

Barocas and Selbst point out that AI or Data Mining programs in the employment context “tend to assign enormous weight to the reputation of the college or university from which an applicant has graduated, even though such reputations may communicate very little about the applicant’s job-related skills and competencies. If equally competent members of protected classes happen to graduate from these colleges or universities at disproportionately low rates, decisions that turn on the credentials conferred by these schools, rather than some more specific qualities that more accurately sort individuals, will incorrectly and systematically discount these individuals” (Barocas & Selbst 689).

Although the A-level algorithm was a simple (non-AI) algorithm rather than machine learning, many of the concerns that I’ve previously raised about bias in machine learning are present in this case involving a simple algorithm. AI typically ‘learns’ any patterns present in existing data, whether or not the pattern is obvious to the programmers. This algorithm was just a little more explicit – and transparent – about using historic data’s predictive implications to substitute for current judgment.

One lesson that we might draw from the controversy is the reminder that algorithms do exactly what we ask them to do. If there are embedded assumptions in our programming, or if there are embedded biases in the data we feed them, the output will retain those biases. In this case, though, we might also remember that we can use the outputs of an algorithm more and less responsibly. Since the algorithm wears its bias on its sleeve, I am hopeful that the 2020 A-level results will be taken with a grain of salt. But an even better outcome would be a deep reckoning with the uses and purposes served by asking for A-level results.

Leave a comment

Filed under Uncategorized

Waiting to exhale


Image of brightly coloured pencils, paints, stamps, paper, scissors. Image source: Pexels/Pixabay.

In early July, our daycare announced that it would be reopening at the end of the month.

Back in June, the daycare had polled parents about priorities for reopening, and about whether we would send our kids back should a spot become available. I wrote, at the time, that the circumstances under which the staff felt safe to return would be the circumstances under which we would be happy to send our kids back, provided there was a space for our kids. My partner wrote to tell them that we would not want to take a spot away from a front line worker, nor from a child whose family needed it more than we did.

The daycare listened to our concerns. After a few weeks of planning and reorganizing the space for physical distancing, they contacted us again. They emphasized that the health and safety of both children and staff were their utmost concern. They would be reducing the number of places in the daycare by more than half. There would be 8 kids with 2 staff members in each age cohort. The daycare prioritized spaces for children of front line workers; the next set of spaces would be prioritized based on need; and they would hold a lottery for remaining spots.

They would be removing all carpets and soft or porous toys, and covering all couches with plastic sheeting for ease of cleaning. They would seat children at tables with plexiglass dividers for meal and snack times, and set out individual rather than shared art materials or crafts.

There would be no singing. No mixing of cohorts. No sharing of snacks. No hugs. Obviously.

We got the email. They had a spot for each of our kids! Our kids would be returning to daycare. We would be returning to some semblance of our old life. I took a deeper breath.

Perhaps I could start applying for jobs again? Maybe do a little bit of writing? Hopefully, working in August wouldn’t be the constant stream of interruptions that it has been since March 13. Perhaps our respective careers would, after all, survive the pandemic. Or, maybe at least one of our careers would. So many possibilities seemed to open back up with that one email.

We thought about what it might mean for our kids. A chance to see and play with kids their own age, notwithstanding the ban on hugs. Or the ban on sharing toys. Our kids’ bedtime might revert to a quasi normal time! Fewer tearful bedtimes? Fewer tantrums. Fewer days spent in the blue or red zone. A chance to spend some time in the care of a trained professional who cares for children, but isn’t as invested in everything as a parent. A chance to try to balance work and life once again.

Since March, I now realized, I have been holding my breath. Waiting for something to change. At some point, I noticed that I had stopped even checking the weather forecast. Because, what would be the point? We take it one day at a time. One tantrum at a time. One book at a time. One bout of despair at a time. One crisis at a time. Living in the moment. But, also: stuck in the moment.

I exhaled. It felt good.

Naturally, we started to plan, just a little.

My unexpected reaction to the daycare reopening was to feel the weeks and months ahead opening up to possibility. I started to look forward, after months of looking down. Of being stuck in one place. Of watching my feet, glued to the ground.

Many philosophies – ranging from Aristotelian accounts of Practical Wisdom, through Kantian accounts of Good Action to Existentialist and Libertarian (opposing) accounts of the True Meaning of Freedom – suggest that some portion of the meaning of life is to make choices, develop a direction, and plot a route from here to there. Making a plan is part of having a Life. A good life requires good choices, good aims. We live a meaningful life by working towards meaningful goals; a meaningless life by working towards superficial goals, or by not working towards goals at all. First, we consider our options; next, we set goals; we make appropriate plans for achieving them; we follow through. Completed goals or “achievements” may be considered and evaluated in an attempt to answer the question “How is my life going?” We judge ourselves, and are judged by others, through an accounting of the quality of these choices.

Many modern philosophers add a condition about living an authentic and self-directed life. Externally imposed goals, even if fulfilled, don’t count as “achievements” or don’t count in the same way. Death and taxes are not goals. For most of us, they are not what makes our lives meaningful. External constraints, such as the pandemic, don’t change the underlying calculus. We have to do our own choosing, bounded by whatever constraints the world imposes on us. We have to act for ourselves.

In her book Self, Society, and Personal Choice, Diana Tietjens Meyers nicely explains the connection between living authentically and living an autonomous life. Meyers explains: “to be in control of one’s life is … to live in harmony with one’s true – one’s authentic – self.” (19). In the pandemic, I am not in control of my life. None of us are. But also – as a result – perhaps I am not being true to myself? I am losing my concept of who I am, of who I want to be. Stuck in the moment is also stuck in tension with one’s true self.

Although I continue to set goals, my ability to work towards them is extremely limited. In the pandemic context, I am not in control of my life plan. I am not even in control of my daily plan. The pandemic – and pandemic parenting in particular – precludes so many different types of planning. The constant stream of interruptions? It is so much harder on me than I recognize on a daily basis.

Meyers continues, “Completing a part of one’s life plan does not simply add an item to a person’s roster of accomplishments; fulfilling a particular plan insinuates itself into the individual’s personality by weakening or reinforcing some of the individual’s traits, by modifying the relations among them, or by engendering new ones.” (60) I am no longer the type of person who moves forward. I am stuck, and this is becoming Who I Am. Holding my breath. Waiting. Looking down.

Part of my pandemic problem, then, is that my life plans – and my ability to fulfill them – have been taken completely out of my control. We are collectively struggling over whether, when, or how to reopen schools and universities. (Not to mention bars). How can any of us work towards our goals in the context of so much uncertainty?

But even to the extent that I can still formulate goals, there is one set of goals that takes precedence: the parenting problem. Even if I want to wallow in the uncertainty of it all, my kids have other plans. They certainly have up and down days. But their days are my days. I no longer get to choose my own bad days.

Meyers explains that: “Autonomous people must be able to pose and answer the question ‘What do I really want, need, care about, believe, value, etcetera?’; they must be able to act on the answer; and they must be able to correct themselves when they get the answer wrong.” (76) Here is the pandemic dilemma. We have time for introspection. We have moments of deep recognition of our wants, needs, desires, values. But the pandemic makes acting on the answer next to impossible. Pandemic parenting certainly means that someone else’s wants, needs, desires, and values take precedence. Maybe this is true of pandemic life in general.

In The Ethics of Ambiguity, Simone de Beauvoir wrote: “It is apparent that the method we are proposing […] consists, in each case, of confronting the values realized with the values aimed at, and the meaning of the act with its content.” What we claim to aim for is important, but what we do – and how it relates to our aims – is fundamental. It demonstrates our true values. What we Really want. Our True Choice, and our True Selves.

Now, to put Beauvoir and Meyers together: what we do affects, or builds, our identity. What we do defines what we want, need, care about, believe, value. In short, what we do is who we are.

Striving for a goal, and genuinely taking steps towards that goal, is part of living a meaningful life. Daycare’s planned reopening allowed us the space to take those steps for the first time in a long time.

Three days after daycare offered us a spot, they sent another email. They would not, after all, be reopening. There was not enough interest. Other parents had faced the possibility of sending their kids back to daycare, and had decided to keep muddling through. Many are front line workers who may worry about the risk they pose to the rest of the daycare. Many have preexisting conditions and other forms of vulnerability within their family bubbles. All have good reasons for their decisions. We were nonetheless heartbroken. The hoped-for August disappeared. The plans evaporated. The self that I was starting to see on the horizon receded back into the fog.

In the coming days, many of our school districts, states, and provinces will make decisions about whether, or how, to reopen schools in the fall. They will be just as conscientious and well thought out as our daycare’s plans. With any luck, they will be backed up by funding promises. But, even with a plan, the pandemic may shift the goal posts once again. A second wave, or a sudden surge in cases, will certainly force us to reconsider any plan.

My attempt to form a plan sits in the shadow of our collective efforts at forming a plan. Each plan sits enmeshed with other people’s plans. With institutional plans. With government plans. And that means that each layer of the plan remains out of any individual’s control.

I find myself once again holding my breath. Perhaps you do, too. After all, we are all in this together.

1 Comment

Filed under Uncategorized

Socratic method or online pedagogy? An E-learning lit review


In the autumn of 2019, I wrote some pieces advocating for online courses in philosophy (here and here). I wasn’t prescient, and I could not have imagined how important online pedagogy would suddenly become to the entire enterprise of higher education in 2020, but my experience with online teaching suggested that there was something worthwhile in online pedagogy, and moreover that there was something worthwhile about teaching philosophy online.

Here’s the thing: good philosophers are not inherently or innately good pedagogues. This is sometimes as a result of gaps in training, and sometimes as a result of the empirically false assumption that “tradition” or “what we have always done” is necessarily best. I have lots of guesses about why this might be the case, but number one is Socrates. Socratic method is what we need to know to teach philosophy, right?

Well, studying philosophy is not the same as studying pedagogy, and there are many reasons to worry that a significant portion of the methods we use in teaching philosophy are neither state-of-the-art pedagogy nor necessarily effective for what we are trying to achieve in the philosophy classroom. What’s worse, many of the tools we use to evaluate the quality of teaching do not, in fact, track quality of learning. But, in any case, many of the tools of face-to-face pedagogy are not available to us – and would not have the same effects – in the online classroom. The immediate back-and-forth of the Socratic method is just not going to work for most of us in the online classroom.

The online context, and the pandemic-necessitated switch to online learning in particular, lays bare the extent to which some portions of our teaching may be by habit rather than by conscious design.

Very few of us had taught online before March 2020, and even fewer had designed a course by ourselves, from the ground up, for online delivery. I would suggest that we are all getting a crash course in philosophy pedagogy, instructional design, and educational technology all rolled into one. And each one of those is a difficult ask. This post is another attempt to disentangle some more strands of The Problem of Teaching (Philosophy) Online. Below is my attempt to organize some resources relevant to online pedagogy, summarize some of my takeaways, and help many of my colleagues figure out what they are going to do in 6 weeks’ time.

First, a general theme that comes up in the online pedagogy literature, but that I have not highlighted in my annotations because it is so basic: online education is designed for adult learners. There are of course systems in existence for high school courses to be run remotely, and the Covid-19 Pandemic has generated additional non-ideal systems for remote primary education, but much of what is said in this literature – and much of what I will say here – assumes that those individuals accessing online education are adult learners. That turns out to be an important assumption given the tendency for many university professors to use infantilizing techniques for avoiding academic dishonesty, to use social media to mock earnest questions from students, or to turn “It’s in the syllabus” into a meme (or t-shirt slogan). So, if you have only one takeaway from this post, treat your students as adult human beings, capable of being challenged, worthy of your empathy, and embarking on a steep learning curve of their own – one where they are trying to learn how to learn online, and at the same time learn the course material.

In any case: on with the show. Here is my lit review:

General Advice for online course design:

  1. Mary Burns offers a very clear list of 10 techniques for effective online courses. I take this as a clear summary of common practice and received knowledge in the e-learning industry.
  2. “How to be a Better Online Teacher: Advice Guide” by Flower Darby in The Chronicle is a really thoughtful guide to better, more enjoyable online teaching. Darby writes: “The teaching suggestions in this guide are not revolutionary. Once you read them, they’ll probably seem like common sense. But that’s just the point. Professors often fail to make the connection between what we do in a physical classroom and what we do online. This guide aims to make that connection explicit — to help you think about what you do well in person so that you can do those things in your online classes, too.” The advice includes:
    1. Be present in the online class. “Schedule the same amount of time each week to be visibly present and engaged in your semester-long online class.”
    2. Be yourself. “Capture your personality and your passion in ways that are different from what you might do in person, yet authentic.”
    3. Be empathetic. “Try to envision how your students are experiencing the class.” And design for it.
    4. Explain your expectations. “provide as much meaningful support as you can — without going overboard — so that students don’t have to guess what you want them to do.”
    5. Scaffold learning activities.
    6. Provide examples.
    7. Commit to continuous improvements.
  3. Knowlton, Dave S. “A Theoretical Framework for the Online Classroom: A Defense and Delineation of a Student-Centered Pedagogy” New Directions for Teaching and Learning no. 84, Winter 2000.
    • Knowlton explains and advocates for a student-centered approach to learning, especially for the online classroom, then presents (in part III) a practical example of a student-centered course design for an online learning environment. Knowlton explains that “the faculty role is reconceptualized to allow maximum independence among students” (11). Clearly and explicitly delineating goals, objectives, and learning outcomes for the course is part of centering the student: it makes possible student independence and gives students control of their own learning.

Universal Design and Accessible Design:

  1. Kavita Rao, Patricia Edelen-Smith & Cat-Uyen Wailehua (2015) “Universal design for online courses: applying principles to pedagogy,” Open Learning: The Journal of Open, Distance and e-Learning, 30:1, 35-52, DOI: 10.1080/02680513.2014.991300
    • The basic argument for universal design in online education (just like in face-to-face pedagogy) is that you have to design your course before you know who is going to be in that course. The students may turn out to have a range of accessibility concerns, and universal design allows everyone – or, at least, a wide range of students – to access the course. Universal design “[creates] environments that provide options, learning scaffolds and structures for students with non-apparent disabilities, while also increasing clarity and choice for all learners in the course.”
  2. Ashman, A. (2010). “Modelling inclusive practices in postgraduate tertiary education courses.” International Journal of Inclusive Education, 14, 667-680.
    • Ashman (2010) is a reflection on two post-graduate courses using universal design in an online professional development course… about accessible pedagogy in education. Clear discussion of what elements were included in the course design, how they worked, and why they were included. Models the types and frequency of communication with students.
  3. Silver, Patricia, Andrew Bourke and K. C. Strehorn “Universal Instructional Design in Higher Education: An Approach for Inclusion” Equity & Excellence in Education 31: 2 (September 1998): 47-51.
    • Basic explanation of Universal Instructional Design. UID takes the burden away from students to identify themselves and request case-by-case accommodations, and instead builds flexibility and accessibility into course design for all students. “With UID, students may find that many of the instructional accommodations they would request are already part of the faculty members’ overall instructional design. Furthermore, these approaches may benefit all students in the class.”
  4. Jacquart, Scott, Hermberg, and Bloch-Schulman (2019) “Diversity is Not Enough: The Importance of Inclusive Pedagogy” Teaching Philosophy 42 (2) 107-139.
    • This article comes out of a workshop organized by the American Association of Philosophy Teachers, and highlights the difference between (merely) diversifying a syllabus and adopting inclusive pedagogical techniques. Although some of the techniques discussed relate to use of the (physical, face to face) classroom space, the principles are all highly relevant to building an inclusive online learning environment. My argument for online pedagogy in philosophy is primarily about accessibility, so this examination of what ‘Inclusive Pedagogy’ means in philosophy is absolutely relevant even though it is based on the face-to-face classroom.

Specific Pedagogical Design goals: Design for building community; designing for academic integrity; Designing appropriate assessments

  1. Dolan, Joane, Kevin Kain, Janet Reilly, Guarav Bansal (2017) “How Do You Build Community and Foster Engagement in Online Courses” New Directions for Teaching and Learning, no. 151, Fall 2017. Designing a good online course can be more time consuming than designing a good classroom course, but with that caveat, the authors suggest (and find empirical evidence) that higher levels of community are possible with appropriate instructional design. Building community requires:

    • Establishing Teaching Presence: regular communications, regular feedback to learners, presence in discussion groups, modelling discussion participation, use of authentic voice and communications: “Modeling desired behavior and creating a safe and supportive environment are two additional ways instructors can facilitate discussions to improve community. In the early stages of an online course, it is important that the instructor appropriately model discussion responses.”

    • Forging Social Presence: Include additional discussion forums for water-cooler chat (pets! hobbies! etc.) and humanize the experience; include introductions; allocate grades/marks for a community building exercise; reward group interactions; require netiquette standards
    • Enhancing cognitive presence: pose challenging questions, use of authentic discussions; problem-based learning
  2. Patricia McGee (2013) “Supporting Academic Honesty in Online Courses” Journal of Educators Online 10 (1).
    • “While there is a perception that more academic dishonesty occurs in online environments, there is little evidence to support that this is the case.”
    • “The author takes the position that if instructors and designers construct courses strategically, they can promote a culture of ethical behavior while making cheating and plagiarism unattractive, difficult to achieve, and apparent to the student.”
    • “Five forms of academic dishonesty are evident in online courses: collusion, deception, plagiarism, technology manipulation, and misrepresentation.”

    • McGee advocates: Make academic integrity expectations clear; select appropriate forms of assessment for what is being assessed; low stakes assessments are less likely to trigger cheating; make the most of the technology including randomized testing and confirming test-taker identity; and use pedagogical strategies such as engaging the learner to make choices and take responsibility for their learning;
    • One strategy that McGee advocates, and that I have found immensely helpful: have students apply personal experience or current events to the material.
    • Institutional policies, such as honor codes and orientation to academic integrity policies, help support academic integrity both on and off campus.
    • Note that plagiarism detection software is imperfect, and may have the unintended consequence of orienting students towards certain types of academic dishonesty other than straight copy-and-paste plagiarism.
  3. Gulbin Ozcan-Deniz, “Best Practices in Assessment: A Story of Online Course Design and Evaluation” 2017 Assessment Conference Drexel University, Conference Proceedings.
    • Ozcan-Deniz notes that learning outcomes will be the basis for the design of learning assessments, and ought to be designed first, and made explicit to students. Exams worth a large percentage of the grade (20-30% or more) might not sufficiently track progress over time. Smaller, more frequent assessments throughout the semester do a better job of tracking progress over time. Formative assessments rather than summative assessments work best for online learning environments.

Specific Pedagogical Techniques

  1. Norm Friesen “The Lecture as a Transmedial Pedagogical Form: A Historical Analysis” Educational Researcher Vol. 40, No. 3 2011, pp. 95-102.
    • Argues that ‘The Lecture’ is an adaptable pedagogical form, and that it can bridge oral communication with technology and written communication. Something I found relevant and useful to the online teaching question of lecture length: Lectures are a pre-printing press way of disseminating text. They were historically transcribed, and require significant note taking. When you record or assign a long (asynchronous, pre-recorded) lecture, you are effectively assigning a long text.
  2. Rob Loftis “Beyond Information Recall: Sophisticated Multiple Choice Questions in Philosophy” AAPT Studies in Pedagogy Online First December 20, 2019.
    • Loftis argues that multiple choice tests should be used in philosophy wherever essay evaluations are used, and that multiple choice tests can be very good for formative assessment. Loftis’s argument is that “Multiple-choice questions should be a part of a diversified portfolio because 1) they consolidate problematic subjectivity in a way that makes it easier to manage fairly, 2) they increase the diversity of evaluation portfolios in a way that balances out the virtues and vices of writing assignments, and 3) by increasing the diversity of the evaluation portfolio they increase the inclusiveness of the course.”
    • My hot take: Multiple Choice questions – which are auto-marked by course management software – are extra useful for managing medium to large online courses. Here’s why: first, they can provide immediate feedback to students, which (according to Dolan et al.) means that they help build community and establish teacher presence in the course. Secondly, they can test skills that are higher on Bloom’s taxonomy, yet remain low stakes and part of a larger pool of questions, which means that (according to McGee) they can be part of a strategy for disincentivizing academic dishonesty. And thirdly, for the instructor, they locate the subjective and high stress part of student assessment at the stage of designing the test, and are therefore MUCH less onerous to mark, and more fair and equitable to students.
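To make the immediate-feedback point concrete, here is a minimal sketch of how auto-marked multiple choice works in principle. This is an illustration only – course management software does all of this for you – and every name in it (`MCQuestion`, `grade`, the sample question) is invented:

```python
from dataclasses import dataclass

@dataclass
class MCQuestion:
    prompt: str
    options: list      # answer options, in display order
    answer: int        # index of the correct option
    feedback: str      # shown to the student immediately, right or wrong

def grade(question, chosen):
    """Auto-mark a response the moment it is submitted: the subjective,
    high-stress work happened earlier, when the question was designed."""
    mark = 1 if chosen == question.answer else 0
    return mark, question.feedback

q = MCQuestion(
    prompt="Which method is most associated with Socrates?",
    options=["Elenchus", "Dialectical materialism", "Phenomenology"],
    answer=0,
    feedback="See the week 1 discussion of Socratic method.",
)
mark, note = grade(q, chosen=0)  # marked instantly; no grading pile
```

The design choice worth noticing is that the feedback string travels with the question: every student gets a pointer back to the course material the moment they answer, regardless of class size.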

Higher Education Policy and Online Learning

  1. Christopher Hill and William Lawton (2018) “Universities, the Digital Divide and Global Inequality” Journal of Higher Education Policy and Management 40:6, 598-610.
    • Explores global inequality with respect to higher education, and examines the disruptive potential of online higher education. Hill and Lawton find that online and distance education do not fulfill their potential to disrupt inequalities, but rather maintain the status quo.
  2. Feenberg, Andrew (2017) “The Online Education Controversy and the Future of the University” Foundations of Science 22, 363-371.
    • Feenberg worries that neo-liberal economic pressures on higher education are driving the move to online education. Although he supports technology use to support academic life, he argues that economics and technology are driving the agenda, rather than educational and academic values being supported by technology. He ends with this reminder: “it is up to faculty and students to steer educational technology in a direction that enhances rather than degrades higher education. They must resist attempts to change the very meaning of education to accommodate the limited features and capabilities of the available technology and instead pursue the ‘art of living with technology’ creatively.” (370). Feenberg is particularly concerned about the suggestion that some administrators and administrations in higher education view online education as cost saving and income generation.




Reinventing the Wheel and DIY Online Instructional Design

Even for those of us who have taught online courses in pre-pandemic times, and those of us who have crashed our courses online mid semester in 2020, many of us now find ourselves in a third completely new situation with respect to online teaching: suddenly teaching an entire course load online, and with inadequate timelines and insufficient supports for adequate course development.

We have not done this before. Indeed, virtually no one has done this on this scale before. We are not in a position to create the best online content. We do not have a semester of course release to develop each course. Moreover, we are designing multiple courses at the same time, and doing so in non-ideal circumstances. (Cough. No child care.) Instructional design specialists are in demand, overworked, and too few and far between. Rather than months of one on one support from an instructional designer, many of us are scrambling our way through YouTube tutorials, zoom seminars, and online user manuals for instructional design. Many of us are not paid over the summer. The development of these courses alone is already overwhelming. The circumstances in which we are doing so can make it feel impossible. There are lots of reasons to be wary of what is going on, and to be cautious about how we proceed. To highlight one that carries over from my previous posts: fully supported online course design typically takes months per course – and should be compensated as such. In this situation, it will not be.

In a nutshell, the central reason that I advocate for online pedagogy is its accessibility. Or, at a minimum, its potential for accessibility. Online education has its origins in distance education, and the basic principle of distance education is that the learning comes to the learner. It also, as it turns out, comes to the instructor. That has proved useful in the pandemic context where many of us are also teaching from home. We each access the entirety of the course in our own space, in our own time, and on our own devices. That’s one way that distance education has a starting point that is more broadly accessible than face to face education, which requires us to fit ourselves into other people’s spaces, at scheduled times, with all of the limitations entailed by those physical spaces. Accordingly, the first principle of online instructional design is, and should remain, accessibility.

But, given that most of us have little to no experience of designing a course for online delivery, this post is an attempt to offer a preliminary framework for thinking about accessible and student-centered instructional design for online environments, and to provide some questions to think through as we start this daunting process.

1. Learning Outcomes

A really helpful way to stay focused on a student-centered paradigm is to think explicitly about learning outcomes. Think carefully about these at the outset of course design, because you can (and should) ultimately use them to guide every subsequent choice: from course material, to assessments, to lecture formats. Depending on the course, the educational outcomes you focus on are likely to include some blend of content, skills, and competencies. What do you want students to achieve – what do you want them to be able to DO – by the end of the course? Write an essay? Think critically about certain material? Use truth tables? Build connections between the material and real life experiences? Identify real world examples of fallacious reasoning?

Whatever outcomes you are aiming for, you will be building a path towards those outcomes starting from your initial design choices. Every design choice – from your choice of course materials, to the format and frequency of assessments, to your systems of contact with the class – is part of the students’ route to the learning outcome. So, start by defining an end point, then break down the steps that will guide your students towards that end point.

Even if you have never (explicitly) started with learning outcomes before, I think that this is a good place to start for designing your first (and second, and third) online course. Focus on the end point, and reverse engineer a path towards it. Think about how each piece of content – each reading, each video, each quiz and each writing assignment – builds towards the learning outcomes. There may come a time when you have to edit your course – use the learning outcomes to structure this editing process. For each element of the course, ask yourself how it serves the learning outcomes. What other purpose does it serve? Can the desired outcome be achieved in another way? Is it already being achieved in other ways? (Hint: duplication is not a bad thing. Accessible design may include multiple and flexible routes to achieving the same outcomes.)

In a face to face learning environment, we would be doing this implicitly. We would be guiding our students towards certain educational outcomes – partly through our course design, partly through our allocations of contact time, and partly through the feedback we offer on each assignment. But we might not do it quite as deliberately or consciously as we are forced to for online learning. In online learning, it will not happen by accident.

Moreover, in face to face contexts, we would have a certain flexibility and agility that we will not have in an online learning environment. We might have the capacity for changing the direction or method of a particular lecture if (gasp!) it were not going as planned, or for granting the whole class an extension when lecture is cancelled for a fire alarm or a snow day. Course corrections are still possible in online environments, but – again – online course components can be more unwieldy, and course corrections will take more effort.

In online learning environments, a significant portion of the ‘guiding’ towards educational outcomes is designed into the very shape of the course: it is built into the pattern of each week and the relationship between weeks; it is in the relationship between each piece of material, the pieces that preceded it, and the pieces that will follow. Best practices in online education suggest that we should have small content blocks with frequent low-stakes evaluations or self-evaluations to help students make progress through the material, and also to account for flexibility in course delivery. (Remember the fundamental possibility of accessibility?) These same techniques help students pace their way through the course.

What are the content components that build towards the outcomes you have in mind? Think about the rhythm of the course and the overall shape of the course. Think of small pieces of content as the building blocks. (Can you tell that I’ve been without child care since March 13?). Use small components to build a larger structure. Short readings and short videos. Provide questions to think about as you read – sometimes called ‘guided readings’. Notice that you need some (foundational) competencies before others. You need a stable base of content and competencies before you can start to build ‘up’.

Brainstorm what types of small building blocks will make up your course, and then start to build them into patterns. Some readings or understandings will have to come before others. This is how we often build face to face courses, but in those cases our ‘content blocks’ may be 1.5 hour lectures or 40 page readings. In this case, think much smaller. Think of 10 – 20 minute activities. Larger numbers of shorter readings. Choices for students. (Perhaps: read any 3 of the following list of articles and podcasts by the same author.) It may be helpful to build pass/fail or 2 mark tasks to demonstrate completion of a certain number of readings before moving on to more complex tasks. All the better if the quiz can auto-generate immediate feedback for the learner. Gamification principles might help. Or, simple checklists built using course management software might be enough of a nudge to keep students focused on end goals and progress.


2. Steer the Skid.

Focus on where you want to end up, but be aware of what you are trying to avoid.

The three biggest worries that I hear about teaching online are marking burden, disengagement (by learners or by instructors), and academic dishonesty. These are all legitimate worries. Many of them are born of experience. My advice here is first and foremost to recognize what you are trying to avoid. Whatever you are most worried about – flag it for yourself. Be honest with yourself that “this is what scares me”. Then, design your course to help you avoid it. Build it into your end point and your choices of path, along with learning outcomes.

So, let’s take the three worries in turn:

To manage your marking burden, front load your work (and mental load) by writing sophisticated multiple choice questions that auto-mark and give immediate feedback. The next way to alleviate marking burden – again by front loading the work – is to write a detailed descriptive and numeric rubric. Pre-release the rubric to students so they can see what you will be looking for. Many pieces of course management software allow you to pre-load feedback forms. Take the time to do this. But – to the extent that academic dishonesty is also a worry – set both your multiple choice and your short answer quizzes to randomize questions from a question bank (of questions worth equal marks). It may serve your pedagogical (outcome oriented) purposes to give students a 2 question ‘comprehension’ quiz at the end of each content block – but if you can build up 4 or 5 different questions for your question bank, the students will write variations on the same quiz rather than identical quizzes.
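The question-bank idea can be sketched in a few lines. This is a minimal illustration, not how any particular course management system actually works; all names here (`question_bank`, `build_quiz`, the sample questions) are invented:

```python
import random

# Hypothetical sketch of quiz randomization: each quiz 'slot' draws one
# question at random from a pool of equally weighted alternatives, so two
# students write variations on the same quiz rather than identical quizzes.
question_bank = {
    "slot_1": [
        "Give one reason Loftis offers for multiple choice testing in philosophy.",
        "What does Loftis mean by a 'diversified evaluation portfolio'?",
        "How can multiple choice questions consolidate problematic subjectivity?",
    ],
    "slot_2": [
        "Which forms of assessment does McGee say are less likely to trigger cheating?",
        "Name two of McGee's five forms of academic dishonesty in online courses.",
    ],
}

def build_quiz(bank, seed=None):
    """Draw one question per slot. A per-student seed makes each student's
    quiz reproducible (useful for regrading) but likely different from a peer's."""
    rng = random.Random(seed)
    return [rng.choice(pool) for _, pool in sorted(bank.items())]

quiz_a = build_quiz(question_bank, seed="student-001")
quiz_b = build_quiz(question_bank, seed="student-002")
# Both quizzes have the same length and mark value; the questions may differ.
assert len(quiz_a) == len(quiz_b) == 2
```

Because every question in a pool is worth the same marks, randomization changes which questions a student sees without changing what the quiz is worth.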

These two strategies (auto marked multiple choice and rubric-assisted marking through course management software) will help you deliver meaningful feedback, and to do so quickly and efficiently. They make good use of the modalities of online course design.

One important strategy for maintaining engagement with students is meaningful communication. That doesn’t necessarily mean constant synchronous availability. But regular and predictable presence in the learning environment can keep everyone engaged. If you keep showing up for your students – by checking in, by sending out general announcements, by replying to questions on course discussion forums, by providing feedback, and by making yourself available for office hours – you set up the possibility of engagement. You set up the possibility that they will show up for you.

Then, make a schedule for yourself to do all of these. Send out personal emails and make general announcements. Recognize or acknowledge local and world events as they are relevant to the course – as you would in a face to face class. You are, after all, still a flesh and blood professor interacting with real live students in a real world, albeit mediated through the computer and technology.

You may find that in an online environment it takes more conscious effort to use an authentic voice in your communications with students. Start by making a genuine attempt to empathize. From Flower Darby’s Chronicle guide to online teaching comes a reminder to put yourself in the students’ shoes. Do so at the stage of course design, and remind yourself to do so regularly throughout the course.

When you are trying to avoid academic dishonesty, start by being clear about your expectations. Pre-releasing your rubric, or perhaps even using a specifications grading system, may help. Clarify relevant features of academic integrity policies as they apply to a particular assignment, unit, or the course as a whole. If students are invited to discuss with a peer at a certain stage but not at others, explain that. And then – yes – use question banks for quizzes and tests, avoid overly broad essay questions whose answers are more likely to be easily available for purchase, and use anti-plagiarism software such as SafeAssign and TurnItIn. But do all of this openly and honestly through reasonably transparent communications. If you are worried about academic dishonesty, don’t attempt to entrap your students. Rather, try to be clear about what constitutes academic dishonesty in this context, and about what your pedagogical purposes are for a particular assignment.


3. Size Matters

If you have 14 students in an online class, it may be feasible to assign long essays, review drafts, and allow re-writes. Be warned, though, that screen fatigue is real, and that even in a small class you may want to have your students complete assignments in ways that allow you to print them out and mark on paper, offline.

When you have 220 online students, regardless of your level of TA support, responding to email enquiries may take up a measurable portion of each week, and marking may be incredibly time consuming. Downloading or uploading 220 text boxes for marking takes more time than flipping over 220 sheets of paper.

All of this is to say that the numbers matter in online education, perhaps more than in face to face education. (And for any university administrators that may advocate for removing caps in online courses, feel free to read that again.) They matter for your experience of teaching the course, for the students’ experience of taking the course, and for the feasibility of different design options. So, take stock of the expected size of a given course, and of the total number of students you will be teaching across all courses, and design each course with size and numbers in mind.

Then, structure your time – your weekly plan – accordingly. Remind yourself that there are only so many hours in the day, that there will be technical glitches along the way, and that self care and rest should be part of everyone’s schedule.

Help structure your students’ expectations in a number of ways. Be up front with your students about your email policy. I tell students that it may take up to 2 working days for me to get back to them. I tell students that I will check email regularly between 9 and 5, Monday to Friday (at least, that was my policy pre-pandemic; I may have to revise it for the foreseeable future), and that I will check irregularly or not at all overnight and on weekends. Set up systems – often a discussion board will be helpful – to allow students to ask each other where to find something in the syllabus.

May Day image: 8 Hours for Work; 8 Hours for Rest; 8 Hours for What We Will.

In the context of planning for Fall 2020 online courses, there is a temptation to reinvent the wheel: to try out all sorts of new technology while trying to keep the teaching (and learning) as familiar as possible. My suggestion is to think about Marshall McLuhan, and remember that the medium changes everything. But that’s okay. Online education cannot be the same as face to face, and it should not be. Online education is not new – it was not invented in the pandemic context – and it can serve our purposes, if we allow it to.

Don’t try to reinvent the wheel. Rather, in this context, try to aim for where you want to end up, and let that goal guide you.


Avatars and representation


Before the pandemic, I was already a regular Zwift rider. In a recent women-only group ride, the chat turned to the limited range of available avatars on Zwift. One rider lamented the fact that recumbent bikes and hand bikes are not available as avatar bikes in Zwift, even though a number of cyclists use the game with these and other accessible stationary trainers. The chat eventually turned to the related question of representation at ZwiftHQ. The assumption was that Zwift HQ must be predominantly male and able bodied, and that (an assumed) lack of diversity in the design room must be at the root of the lack of diversity amongst design elements for avatars. I have no insight into the design room at ZHQ, but I have thought a bit about how digital design decisions get made, and about the pros and cons of more representative avatars.

We know that lack of diversity in a given setting in the real world has detrimental effects on those with marginalized identities. And we also know that the online context does not automatically reproduce real world social contexts. Rather, the online context is an artefact, and it is the product of multiple choices – or, indeed, omissions of choice.

But, we cannot solve a problem if we do not recognize it. Feminist epistemology teaches us that we all make observations, and ask questions, from a situated perspective. From positions of privilege, some of us may make the mistake of assuming that our perspective is shared or universal. It is not. A helpful takeaway from feminist epistemology is that all perspectives are limited, but some perspectives are better situated to recognize these limitations. From positions on the margins, we have better levels of recognition that our perspective is on the margins, and is not universal.

In the design room, false universalizing has implications. The design team has set up a range of options, and they presumably have reasons for setting up the avatar choices as they have. But the worry raised by the in-game chat is that some of the choices are merely omissions. The possibility that is troubling me – and was troubling a number of participants in the group ride over the weekend – is that the choices on offer do not seem limited from the design team’s own perspective. If there is no one in the room to alert the design team to the problem, they may be assuming that their perspective is universal. This is where the in-game chat could be useful.

The failure to offer a more diverse set of avatar options is a choice on the part of the game designers. The failure on the part of the designers at ZHQ – whether it is an omission or a deliberate choice – has implications for the choices available to users.

There are social game-related reasons to have a more diverse set of avatar options. It is damaging to women, people of color, and individuals with diverse abilities to feel excluded from the outdoor and fitness industries. Damaging in the sense of exclusion from fitness, and damaging in the sense of exclusion from community. In this case, users are customers who pay a subscription. And if the game were more diverse, it might be more welcoming to more people. The outdoor industry has been pushed to recognize the problem, but Zwift is an odd hybrid between the indoor and outdoor fitness worlds.

There are game-related reasons to include recumbent bikes: they are more aerodynamic on the downhill, but can be more difficult to manoeuvre on the uphill. So, they could offer a game-specific accessibility lesson.

At the user end, it can make you vulnerable to appear as yourself, and it can make you vulnerable to appear different. So, many are faced with a dilemma. It may be safer for many players to opt for a non-representative avatar rather than to out themselves as ‘different’. It can be safer to blend in. Anonymous or pseudonymous social interactions can be protective to those with marginalized identities. Being ‘different’ or appearing as ‘Other’ can make you a target.

Avatars are only ever problematically representative. They may allow affirming options to self-identify in various ways, and to thereby seek out community within the game. They may also create passing privilege and the ability to blend in, to be less vulnerable in the virtual world.

My take? The choice to appear as yourself, with the vulnerability that accompanies a representative avatar, ultimately has to be the user's. Zwift is doing us a disservice if our identity is not available as an avatar. A more diverse array of avatar options would allow the user to make that choice for themselves.

Image description: A screen capture of my Zwift avatar in side view. Wearing a Betty Designs cycling jersey with skull and crossbones icons on the arm and leg. Image taken near the end of the second lap of the TFC Mad Monday Workout Series in virtual Harrogate.



Digital Divide: bridging the gap

In the context of the Covid-19 pandemic, we are all getting a very tangible lesson on the digital divide: on how deep (and wide) the technological divisions within our society – and between societies – really are. The fact that there are digital ‘haves’ and digital ‘have-nots’ is having much more profound implications for many people than it ever has before – and, ultimately, for how difficult bridging these divides continues to be.

One component of the digital divide is the geographic one: remote regions may lack the infrastructure that is a pre-condition for accessing high-speed internet. The missing bridge might be a lack of high-speed cable, or the price of connecting to the internet via satellite. A rural region may be too sparsely populated to make the infrastructure investment economically viable. The physical distances require a combination of innovation and investment, and they cannot be overcome within a short time frame. A region may not have reliable enough electricity to run the servers, or to justify the investment. For these reasons, the digital divide has sometimes been depicted in spatial terms: as a problem that can be ‘mapped‘. Of course, no map is perfect, and all map makers must choose which information to include and which to omit.

So, even in this depiction of the digital divide, the geography has never been the whole story. There are political decisions being made in determining what threshold counts as an adequate speed, and which regions are recognized as underserved. For a small minority, living and working in a remote region may be a matter of choice and even a matter of privilege. And the existence of choice – even if only for a few – can be used to add credence to individual and consumer-focused solutions. These typically come at the expense of broader policy and regulatory changes.

But for many, geographically remote living is a barrier to digital access. That barrier might be overcome with time and money, or it might be overcome with significant policy changes, but it will not be overcome quickly in the current crisis.

The concept of the digital divide also references a social and economic inequality, and this aspect has also become more pronounced in the pandemic context. Libraries and schools serve as a portal to the internet for many, and coffee shops or public spaces serve as wifi portals for others. In the current pandemic, and as a result of social isolation and physical distancing practices, many have been cut off from digital access completely. In this sense, the digital divide has been made worse by social isolation policies in particular.  

Although smart phones had already replaced landlines for many, in the current context ‘access’ has moved very quickly beyond smart phone capabilities – or, for that matter, beyond what a whole household can do with any single device (whether it be a smart phone, a tablet, a laptop, or a desktop). Of course, for those living with food or housing insecurities, reliable digital access remains elusive. Even a single smart phone is out of reach for many. Precarity is real, and individuals or families may have to choose between digital access and food or shelter. The calculation that made sense a few weeks ago may be hard to revisit at this point, and its implications may have changed. For many, internet access has become a prerequisite to food access in the context of a quarantine.

Finally, the digital divide can include an aspect of digital literacy. As many of us scramble to download new apps and access new platforms, several vulnerable populations are left out. The very old, the very young, or the recent migrant may not have either the physical hardware or the digital literacy (or, indeed, the relevant language literacy) to be able to access essential services and communities. Simultaneously, the emergency-augmented level of digital literacy that many users are now practicing on a daily basis leaves other communities behind – for example, when we fail to caption our videos, describe our images, or use a screen-reader-compatible font.

Whereas the digital divide has often been treated as a symptom of systemic inequality, in the current context it might better be understood as a midstream causal node: it has both upstream causes, and downstream implications. Upstream, the digital divide has come about because many social, cultural, and economic inequalities have implications for digital inequality, and many have been viewed as low-priority.

But downstream, the digital divide has implications in terms of educational and health access, and ultimately in terms of educational and health outcomes.

As each service moves online, as each new ‘digital equivalent’ gains popularity, we have to keep asking: who is being left behind?

There has been a push for access to education technology as an equity issue. And, of course, the digital divide in the current pandemic means that many of those with inadequate digital access may simply be unable to access health care.

There is an imperative to keep investigating the digital divide and its multiple dimensions. We can only narrow or eliminate the digital divide to the extent that we understand who is being left behind, and in what way.



Pandemic reflection

Where were you on September 11, 2001? On 9/11, I was an undergraduate student at McGill. I turned off my radio just as the announcer said something about ‘New York’ and ‘plane crash’. I made it in to class without understanding the significance of those words. I emerged onto the Arts steps to an entirely different world. Nearly 20 years later, I can still tell you in alarming detail how I spent the morning, even though I was in Montreal, and I was, all things considered, okay.

I searched for news, but every news site trailed off after a couple of sentences with, “more to come”. I sent off emails, but no one was checking. I made phone calls, but no one was answering. I can remember looking to make contact. I went home. I can remember the panicked moments in my apartment, alone. My roommates were in class. My mother was in another city, and my partner was in another country and time zone. In 2001, we depended on landlines to contact each other, and we depended on News Media to dispense the information, but there was no one to contact, and no further information to dispense.

In the context of Covid-19, now as a parent and teacher, I find myself thinking about my 9/11. It feels the same, and it feels different. It feels apocalyptic. It feels like it is all happening too slowly, and too fast. It feels like the world is ending, again. It feels mundane, but significant. It is exhausting.

The fact that I am replaying 9/11 in my mind, and comparing and contrasting, has taught me two things that I did not know a week ago:

(1) I still carry trauma from 9/11. I did not have a difficult 9/11, as far as the range of 9/11 experiences goes. But I nonetheless carry trauma that was buried so deeply that I needed another trauma to reveal it to me and,

(2) This is also traumatic. We are collectively experiencing something traumatic, and we can’t do it together in the ways that we otherwise would. Most of my (and many of our) coping mechanisms are exactly the opposite of social distancing.

In many ways, I am not okay. But, as a parent, I have to maintain a level of control. I am trying to keep up strength and resolve and empathy and understanding. As a teacher, I am trying to pay attention to the trauma. I am worrying about what everyone will remember, what I will remember. I am exhausted. I alternate between falling asleep while reading bedtime stories, and staying up too late as I try to read all the news, and try to parse this new world.

So, as a parent and a teacher, I am trying to pay attention to the memories being etched, to the lessons being inadvertently learned. I am trying to stay calm, but – the emotional labour required is astounding.

Many of us will remember. Many of us will remember March 2020 in alarming detail. The way that we were jolted out of whatever security and routine we have in our precarious lives, and, if we are lucky, the way that we were made to feel secure in a suddenly different routine. The way that we are listened to, or not. The way that we make connections, or not. The way we feel lost, or found.

Whether you are a student or a teacher, a parent or a child, a young adult or young at heart, what you do will have an impact on others. In this time of jarring news bulletins  half an hour apart, your fear and panic, or your calm and resolve, may be etched in memory. But it will also impact those around you. Know this. Try to acknowledge it as a parent or a teacher or a child. As a brother or sister.

It is exhausting trying to carry on as though everything is fine. It is not fine. Something has to give. For me, it is my expectations. It is my work. And I am trying to hold my emotions together for the kids. I am grieving, but I am not quite sure what for. I am grateful for any emotional connection I am able to make. For all of the check-ins I have received. For responses to any of my attempts to reach others.

So, know that you are not alone. Keep reaching out. Keep checking in. Acknowledge where you are, and consider acknowledging it to others. Many of the ways that we would attempt to connect are not available. Social distance is hard. Physical distance is difficult. It can also be revealing.

It is revealing of ableist assumptions that this is a choice. It is revealing of an often false assumption that home is a comfort, or that home is a safe place. It is revealing of the extent of the digital divide.

It can be revealing of the depth of our empathy. So, take care. Take care of yourself and your others. Take care of your students and your neighbours.


Online Teaching on an emergency basis

My two previous posts on online teaching in philosophy were written in a context of reflection about the accessibility benefits of online teaching. I believe that online learning is a good thing, and that it can be enjoyable and intellectually stimulating for both students and instructors. But, of course, everything that I have learned about online teaching, and how it differs from face to face teaching, I have learned over time. I did not learn any of it (or very little, in any case) the first time I taught online. I learned it by reflecting on my experience. And I learned to do it better with each iteration of an online or hybrid course. Building an effective and pedagogically thoughtful online course takes time.

In the current pandemic push to online teaching, we do not have that time. In the last week, and especially the last 2-3 days, dozens of universities and colleges in the US have decided to close their campuses and move to remote learning models for an indefinite time period, effective immediately.

In many ways, my previous arguments in favour of online pedagogy in general and online philosophy courses in particular still apply:

  1. Online and distance education foregrounds accessibility. It meets the student where they are. (It is designed to go to the student, rather than requiring the student to come to the classroom).
  2. It pushes us to think about the pedagogical purpose of each piece of content, and to think about the best delivery methods for each of those pieces of content.

But these virtues are not innate to the medium of delivery. We can deliver drastically inaccessible courses online. Moving online does not force us to think about pedagogy; it merely nudges us in that direction. Indeed, many of the suggestions and resources that are being offered – especially the oft-repeated suggestion that we use Zoom for everything from virtual meetings and conferences to seminars to lectures to office hours – seem to prioritize replicating the ‘feel’ of face-to-face learning, at the expense of attention to what is distinctive about online learning. (NB – if you decide to use Zoom to allow students to synchronously attend a live lecture, you will also have to set up an autocaptioning program, and will likely have to record and download the live lecture, edit to correct mistakes in the captioning, then upload again.)

Switching from face-to-face to online delivery midstream is something of a Herculean task. This is because so much of best practices in online teaching occur at the design stage, prior to any interaction with students. Some of the things that we might have done differently if we had expected to switch to online part way through include: choice of content; choice of readings; choice of methods of assessment; registration caps.

There is no part of switching to online delivery that is automatic. For the most part, it cannot be automated except to the extent that existing content was already being delivered in a hybrid format. Every single piece of course content that is moved online needs to be re-thought. So, in this pandemic moment, when university administrators are requesting that we just switch delivery formats, they really are asking a lot. Ideally, they ought also to empower us to rethink the remainder of our course material and start again, if need be. They ought also to empower students to be involved in this process.

My two cents about what to ‘keep’ as we move online: try to keep the learning outcomes constant, and be willing to radically change the content, style of delivery, and methods of assessment in order to do so. Switching formats without paying attention to the learning outcomes will tend to change or undermine existing learning outcomes, lose student interest and engagement, or overburden instructors.

Despite what your university may be advocating, 1 hour lectures synchronously delivered via Zoom are not best practices for online teaching. Online conferencing software such as Zoom may work in some cases to replace some sizes and styles of in-class discussion, but its capacity to work for anyone may be compromised by the simultaneous overwhelming demands being put on servers, and on the Zoom platform in particular. This is your chance to re-think your use of, and the format of, lectures.

Some best practices for online content:
  • Scheduling: Don’t assume synchronous login. Design for the likelihood that some students will have to access content in their own timing.
  • 10 to 15 minute content blocks, ideally accessible to students at times of their own scheduling. Add a checklist to each module or group of content blocks.
  • You may need shorter, more accessible readings and videos than you might otherwise have relied on.
  • Matt Crosslin’s Emergency Guide to Getting this Week’s Class Online in About an Hour suggests devoting some time to searching for this content in a pre-existing format, rather than building content pieces from scratch.
  • Amongst your 10-15 minute content blocks, vary the activity format. Some text-based (reading), some text-based (writing), some video (captioned).
  • Frequent check-ins with students! “Here is what I am going to be doing today, here is what I hope you will find the time to do in the next 3 days, here is what I am hoping you will have done by the end of the week. If any of this is going to be a problem for any reason, I am available in the following ways at the following times.” These supplement your checklists, but also help students gauge expectations.
  • Lots of small low-stakes or no-stakes assignments. Some of my favorites are:
    • 240 character summary of the article. Share with a friend in the class/compile into a document for all students in the class.
    • Short, opinion based writing prompts. Compile via a shared google doc or post to Learning Management Software Discussion Boards.
    • Open book multiple choice question auto marked by course management software: which of the following best approximates the thesis/central claim of the article? (These questions take time to write well, but force students to do a close reading.)
    • Paraphrase a key claim that you (as instructor) want to emphasize, and ask students to find a quotation (correctly cited) that matches the paraphrase. Have students email you or (better yet) set up a discussion board such that other posts remain hidden until a student posts their (initial) response. This might be too tech-y for those who have not used an online LMS before.
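As a footnote to the multiple choice suggestion above: the auto-marking itself is conceptually simple. Here is a minimal Python sketch of the kind of scoring an LMS performs behind the scenes; the question, options, and answer key are invented for illustration, not drawn from any real course or platform.

```python
# A minimal sketch of auto-marking an open book multiple choice question.
# The question, options, and answer key below are hypothetical examples.

def mark_responses(question: dict, responses: dict) -> dict:
    """Score each student's chosen option against the answer key."""
    return {student: 1 if choice == question["correct"] else 0
            for student, choice in responses.items()}

question = {
    "prompt": ("Which of the following best approximates the central "
               "claim of the article?"),
    "options": {
        "A": "A close but incomplete paraphrase of the thesis.",
        "B": "An accurate paraphrase of the thesis.",
        "C": "A claim the article explicitly rejects.",
        "D": "A claim from a different article on the syllabus.",
    },
    "correct": "B",
}

responses = {"student_1": "B", "student_2": "A"}
print(mark_responses(question, responses))  # {'student_1': 1, 'student_2': 0}
```

The pedagogical work, of course, is in writing distractors like option A: close enough to the thesis that only a careful reading rules them out.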

Here are some great resources, some of which are already linked above:

  • Focusing on the principles of effective instructional design for online courses, rather than the micro-level mechanisms.
  • This is the best single resource that I have found explaining very concretely how to effect accessible online pedagogy. Online teaching can be accessible, and should be accessible. Includes an incredibly helpful sample questionnaire to check in with students.
  • Concrete techniques to avoid being overwhelmed by the tasks involved in ‘switching’ to online teaching. Includes a link to an accessibility check tool that can be added to your browser.
  • This offers very nearly the opposite of my advice above, but we agree on a lot of things. We agree on frequent check-ins with students, on involving students in the conversion process, and on taking stock of available resources. Betsy Barre advocates doing your best given your experience, your skill set, and your starting point mid-semester, and I wholeheartedly agree with the sentiment. She writes: “The resources I shared above, and that will be shared in the coming days, will have a lot of information about how to teach an online class well (yes, it can be done!). Some of these tips might be helpful for you, but most will be far beyond the scope of what you can or should do in this situation. You’re not going to teach a well-designed online course in this scenario. And that’s OK.”

I’ll add more emergency online pedagogy and best-practices in online pedagogy links as I find them.


On knowing the ‘why’ and the impossibility of ethical AI

Within normative ethics and philosophy discourse, ‘morality as a system of rules’ has come to feel like a straw man position. Yet, it is one that is frequently invoked in many applied ethics contexts, even while its controversial status is acknowledged. Health care ethics often invokes a ‘Four principles‘ approach to ethics. Media ethics continues to invoke a similar set of principles. Digital ethics sometimes follows the trend, and AI ethics also seems set to follow. All of these cases rely on statements of shared or universal values and effectively invoke the idea of morality as a system of rules. Consider some major AI developers’ statements on ethical artificial intelligence (italics added):

Microsoft claims that “Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values“.

IBM claims to understand that ethics would have to be embedded in the design and development process from the outset, and that it is not something that can be ‘added in’ after the fact. Their document on AI Ethics explains: “An ethical, human-centric AI must be designed and developed in a manner that is aligned with the values and ethical principles of a society or the community it affects. Ethics is based on well-founded standards of right and wrong that prescribe what humans ought to do, usually in terms of rights, obligations, benefits to society, fairness, or specific virtues.”

Google’s DeepMind has a commitment to “researching the ethical and social questions involving AI” in order to “ensure these topics remain at the heart of everything we do.”

The commitment to an ideal of ethical AI is important, but it might also be a contradiction. My worry is that ‘ethical AI’ might rest on a deep misunderstanding of moral philosophy, and on a mistaken definition of what constitutes the ‘ethical’.

So, why have ‘moral theory’ and ‘ethics’ moved away from principle-based approaches even while digital and applied ethics are embracing the idea of moral principles? The answer, on both sides, could be one and the same: ethics is too complicated. That ethics is complicated is a reason to want to simplify things. It is also a reason to reject simplification as inherently flawed. AI ethics, like many applied ethics contexts, tends to rely on the former, while moral philosophy moves us towards the latter.

Consider Margaret Olivia Little’s explanation of the theory called ‘moral particularism’: “it argues, (in)famously, that the moral import of any consideration is irreducibly context dependent, that exceptions can be found to any proffered principles, and that moral wisdom consists in the ability to discern and interpret the shape of situations one encounters, not the ability to subsume them under codified rules” (32).

Moral particularism argues that understanding how context works is an essential part of understanding rules. It argues that what it is for something to be a moral rule is for it to be contextually dependent. Understanding moral generalizations requires recognizing paradigmatic contexts and distinguishing them from deviant contexts. Moral generalizations and moral rules, according to particularism, are subject to reversals or ‘valence flipping’: every putative moral rule or moral generalization could become its opposite, given the right context. Lying is wrong, but lying to the Gestapo to protect a friend is the right thing to do. It is not right in spite of the lying. It is right because in the context, lying is right. Context matters.

Since moral decision-making requires close attention to context, it is not hard to see why tech ethics and AI ethics would be in trouble. AI notoriously has difficulty with context. 

The deeper problem for AI and digital ethics is that, according to moral particularism, paradigmatic contexts cannot be defined through statistical patterns. Statistically speaking, most contexts could be deviant with respect to a particular piece of moral wisdom. Moral understanding would require understanding what makes them deviant, not that deviance is a general pattern. Part of learning the rules is learning when to break them.
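To make the point concrete, here is a deliberately crude toy ‘learner’ in Python, written purely for illustration (the examples and feature names are all invented). It predicts the moral status of an action by majority vote over labelled contexts – a caricature of purely statistical learning – and so it misses exactly the feature that makes the deviant context deviant.

```python
from collections import Counter

# Toy statistical "learner": it predicts the majority label for an action,
# ignoring the contextual feature that flips the verdict.
training_data = [
    ({"action": "lying", "protects_victim": False}, "wrong"),
    ({"action": "lying", "protects_victim": False}, "wrong"),
    ({"action": "lying", "protects_victim": False}, "wrong"),
    ({"action": "lying", "protects_victim": True},  "right"),  # the deviant case
]

def majority_label(data, action):
    """Return the most frequent label attached to an action."""
    labels = [label for context, label in data if context["action"] == action]
    return Counter(labels).most_common(1)[0][0]

# The statistical pattern says lying is wrong...
print(majority_label(training_data, "lying"))  # wrong

# ...so this learner would also label the Gestapo case "wrong": it tracks
# frequency, not the feature (protecting a victim) that makes the context
# deviant. Knowing the "why" is exactly what the pattern leaves out.
```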

As Little explains, “When we issue a generalization to the effect that something has a certain feature, sometimes what we really want to say is not that such a connection always, or even usually, holds, but that the conditions in which it does hold are particularly revealing of that item’s nature… [We] are taking as privileged, in one way or another, cases in which the item has the feature specified” (37). 

And yet, that is how machine learning works. AI looks at huge swathes of data, and its neural networks ‘learn’ the patterns. Jason Pontin writes:  “Deep learning is math: a statistical method where computers learn to classify patterns using neural networks.” AI essentially does pattern recognition extremely well. The problem presented by moral particularism is that ethics is not a pattern, and so ethics cannot be recognized by AI. Representing ethics as a pattern is a reductionist misunderstanding of ethics. Ethics is as much about providing morally appropriate reasons as it is about doing the right thing. On a Kantian moral theory, we are tasked with doing the right thing for the right reason. Little draws on Aristotle to explain that “the person of moral wisdom must know the ‘why’, not just the ‘that'” (32).

We have known for a long time that ethics could not, and should not, be reduced to an algorithm. Even rule-focused moral theories suggest that moral wisdom requires more than blind rule-following. Distinguishing right from wrong requires judgment. Doing the right thing adds more judgement. Determining which rule applies, or which of two conflicting rules takes precedence, relies on a deeper understanding. These add up to a non-codifiability thesis. It has often been argued that ethics is not, in fact, codifiable. In the context of digital ethics, this has frequently led to the conclusion that ethics cannot be programmed.

But AI is not programmed in the same way. Neural networks and their programming function as “black boxes, whose outputs cannot be explained, raising doubts about their reliability and biases.”

AI is different from previous forms of programming. AI does not codify the rules. AI – or machine learning in particular – asks a computer system to look at huge swathes of complex data and ‘learn’ them as a pattern. It develops the ‘ability’ to follow the pattern, even while it is not asked to explain its predictions. It is not necessarily ‘taught’ about the reasons, so its ability to generate an explanation is limited. Sometimes AI gets it spectacularly wrong. In part, this is because: “their statistical way of learning makes their talents narrow and inflexible. Humans can think about the world using abstract concepts, and can remix those concepts to adapt to new situations. Machine learning can’t.”
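One way to see the ‘black box’ worry on a very small scale is to train the simplest possible model and then ask it for its reasons. The toy perceptron below is my own illustrative sketch, not anyone's production system: it learns the logical AND pattern from examples, but all it can offer afterwards is a list of weights – a ‘that’ without a ‘why’.

```python
# Toy perceptron, for illustration only: it learns the AND pattern from
# labelled examples, but its "explanation" is just numbers.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule on two inputs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - pred
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

# The trained model classifies every example correctly...
assert all((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
           for (x1, x2), target in samples)

# ...but asking it "why?" yields only learned weights, not reasons.
print(w, b)
```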

And while next generation AI may well aspire to be more context-responsive, the worry about AI doing ethical decision-making is a deep one.

In a recent piece examining the impossibility of ethical AI, Tom Chatfield explains that “there is no such thing as ethical A.I, any more than there’s a single set of instructions spelling how to be good — […] our current fascinated focus on the “inside” of automated processes only takes us further away from the contested human contexts within which values and consequences actually exist.”

I would go further. Taken together, the decontextualized nature of ‘AI ethics’ and the deeply contextual nature of moral reasoning suggest that ethical AI is impossible. This conclusion follows from the very idea of moral wisdom, which requires not only knowing what to do, but why it is the right thing to do. Perhaps the impossibility of ethical artificial intelligence is better highlighted if we shift our goal to ‘ethical artificial wisdom’. Artificial wisdom seems like a contradiction, but it highlights something important that is being missed in the quest for ethical AI: the reasons matter. ‘Knowing the why’ is an essential part of ethics. Ethical AI will, unfortunately, remain an impossible goal as long as AI remains composed of outcome-oriented black boxes.

Works cited:

Little, Margaret Olivia, “On Knowing the ‘Why’: Particularism and Moral Theory” Hastings Center Report 31, No. 4 (2001): 32-40.
