Bias all the way down

A recent study on racial bias in health algorithms found evidence of bias in a widely used algorithm: Black patients with the same level of risk as White patients were less often identified by the algorithm as needing additional care.

When asked, ‘how can we prevent things like this in the future?’, the study’s senior author Sendhil Mullainathan suggests that more questions need to be asked at the prototype stage.

I disagree: the prototype stage is far too late to start worrying about bias.

By the time we reach the prototype stage, bias is potentially too deeply embedded to be fixed. Post hoc fixes will work only in exceptional cases. Why? Because bias goes all the way down. Bias enters AI well before machine learning starts. Indeed, bias in tech is both foundational and layered. Each level of bias is preceded by another, and some of the layers will simply be too deep for redesign to succeed.

Mullainathan’s solution is to retrain the machine learning model. But this solution is available in only limited circumstances. The choice of data set itself exhibits bias, and a change of data set will exhibit a different bias. In some cases, a data set will be available that exhibits a minimal or acceptable bias. But this will not be the case for many data sets, some of which are deliberately biased towards the user. Additional limitations on this type of solution are plentiful.

One level of bias enters at the design stage. Airbnb’s business model requires trust between hosts and guests who have never met. Their designed solution was to manufacture trust through the use of profile photos and real names. Yet this design choice neglected the history of racism in the hotel industry, and famously enabled and reproduced existing racist practices. Their post hoc re-design solutions are improving things, but are moving slowly, and remain not altogether satisfactory. There are, I suggest, a number of reasons why the problem is so pernicious.

Another level of bias enters tech at the coding stage. For example, we encode categories of data for AI to learn. How we define the categories in an AI or machine learning algorithm determines how the data is sorted. Facial recognition software has been criticized for relying on binary classifications of gender. The resulting errors are predictable, but also fundamentally difficult to resolve. Retraining the AI is not an option when the categorization has been made salient but the relevant category has been omitted.

Defining the categories is a choice by human beings, and this choice is not free from bias. Choice of categories is not value-neutral. Choice of categories embeds a history. In these and other ways, determining which categories are salient is a choice, one that can be affected by bias, and one that has further implications for downstream biases.
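The point about category choice can be made concrete with a minimal sketch. The labels and data below are purely illustrative (not from any real system): once a binary schema is fixed at the coding stage, inputs outside the schema cannot be represented, and no amount of retraining can recover the omitted category.

```python
# Illustrative sketch: a label schema chosen at the coding stage constrains
# what a model can ever learn. The schema and labels here are hypothetical.

GENDER_LABELS = ["male", "female"]  # a binary schema fixed before any training

def encode(label: str) -> int:
    """Return the category index for a label; the schema admits no third option."""
    if label in GENDER_LABELS:
        return GENDER_LABELS.index(label)
    # The omitted category cannot be represented, by design, not by training error.
    raise ValueError(f"'{label}' has no category in the schema")

print(encode("female"))        # fits the schema
try:
    encode("non-binary")       # retraining cannot fix this: the category is absent
except ValueError as err:
    print(err)
```

The error here is structural: it lives in the schema, not in the learned weights, which is why post hoc retraining cannot reach it.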

But perhaps the foundational level of bias in tech is bias in hiring, because the diversity in the room (or the lack thereof) has implications for which questions are asked, which solutions are considered, and how the ideas under consideration are evaluated.

Bias in tech is a problem, and something we all agree we want to avoid. Yet bias in tech is also ubiquitous, and moreover pernicious. Each layer of bias affects the subsequent layers. Many of the fixes are post hoc, and seem themselves to be oblivious to the depth of the problem. Bias is deeply embedded in tech, as in other areas of enterprise. But when we have bias all the way down, tech is not going to be its own solution. The layers of bias are interconnected and intertwined, and to the extent that we can even identify the source of bias, we will rarely – if ever – be able to disentangle it.


Leave a comment

Filed under Uncategorized

Online Philosophy Part II: On Design

A couple of weeks ago, I posted here about the virtues of online teaching for accessibility. One portion of my argument boiled down to this: online teaching is significantly more accessible than Face to Face (F2F) teaching, and provided that we keep this in mind when designing our online courses, the greater accessibility gives us reason to prefer online teaching. Amongst the reasons to reject online teaching in philosophy, two candidates derive from an aspiration to have online courses ‘feel like’ their F2F counterparts. This aspiration, I argue, is a mistake; a better aspiration is to hold ‘learning outcomes’ and ‘pedagogical aims’ constant across formats. Design your online course to keep the learning outcomes constant, and do so by relying on best practices in online pedagogy.

A couple of comments online and offline have reminded me to foreground and examine some implications of my previous post. One implication is that our latent pedagogical aims may become apparent to us in the process. If your fundamental aim in teaching philosophy is to develop critical thinking, that can be designed into your course. If, however, a significant pedagogical aim (perhaps alongside critical thinking) is to prevent cheating and academic dishonesty, you will find that this, too, has to be designed into your online course. But the good news is that it can be done.

But this response, like much of what I said in the previous post, requires a lot of effort and time at the design phase. So a second implication is that I take my emphasis in design to imply that course instructors ought to be course designers and vice versa, and that course designers ought to be compensated for their work and time. I have been in the position of teaching an online course designed by someone else, and I fell into all of the traps: Too much marking, too many emails, high student attrition, and a low success rate for the course. The experience opened my eyes to the existence of best practices in online teaching, and to recognition that they differ in significant ways from the traditional F2F lecture format.

In my case, part of the overwhelm arose because the course had been designed with a small registration in mind, and an administrator (who was very far removed from the praxis and motivations of the course designer) had significantly changed – then eventually removed – the registration cap. So, a course that was designed for a 40-student registration cap eventually ended up with 178 students registered. My experience was not unique, and the problem persists. A friend was recently hired on short notice as an adjunct, and taught an online course that had not been adequately designed with the instructor’s perspective in mind. When you are hired on short notice as an adjunct (ask me how I know), whether for a F2F course or an online one, you typically do not have enough time to make any significant impact in redesigning the course. You can be bound quite significantly by how the course has been taught previously. In an academic Facebook group of which I am a member, a discussion arose about how anyone could be expected to teach an online course that they did not design. Although I have done it, I tend to agree. Or, at least, to the extent that anyone is the teacher for an online philosophy course, they become the de facto designer. I ended up on the Ship of Theseus, trying to keep the course in motion while redesigning it constantly. The course I ended up teaching was very different from the one I inherited, but the university viewed the one I inherited as the *true* course (or Ship of Theseus).

The economic model of higher education that most strongly incentivizes the shift to online teaching and online learning (one in which each student pays to register for each course, but instructors and teachers are seemingly viewed as expensive administrators rather than as playing an essential pedagogical role) does not tend to view ‘course design’ as an ineliminable part of the online course instructor’s role. And the economic model tends not to compensate (or not to compensate adequately) the course design part of the role. I’m careful here not to suggest that compensation is exclusively in the form of payment. Payment and employment as a course designer, inasmuch as it can be distinguished from payment or employment as a course instructor, might also be a mistake. Course release might be appropriate in some cases. A default guarantee that the paid designer will be the instructor – or that they will retain copyright of the online course or online materials – might also be appropriate. Employment, payment, or designated time allocated as a course designer for every iteration of course instructorship would be appropriate.

But when none of those options is available, it is important to recognize that design takes time, that design is a process, and that in a strong sense the design is the course. So, to the extent that my previous post advocated for design as solving many of the problems with online pedagogy, I want to foreground that design is a process, design takes time, and design ought to be compensated. And that the employment conditions of online instructors and online course designers, and the economics of online pedagogy, are also the product of problematic design choices.

So, I still think that Academic Philosophy should embrace online philosophy courses, and should do so for accessibility reasons. But Academic Philosophy nonetheless ought to seriously examine the conditions under which online philosophy operates. Online philosophy is a reality and should be an opportunity. But it can be designed (at the institutional level, at the departmental level, or at the individual level) badly, and we ought to guard against those possibilities.



In Praise of Online Philosophy

Over the past several years, I find myself increasingly coming to the defence of online teaching, and especially to the defence of online philosophy teaching. The defence usually arises in response to concerns falling into one of two categories. Neither of these worries is entirely without merit:

  1. It will not be “the same as” F2F for the students. This objection may include worries that it will not be as rigorous, or as comprehensive, or ‘feel’ the same for the students as F2F.
  2. It will not be “the same as” F2F for the instructor. It will be an overwhelming burden to teach, and it will be without emotional connection. It will take the fun out of teaching.

The two worries are related, and unfortunately push in opposing directions. Many attempts to remedy (1) – such as keeping standard F2F assignment structures or lecture style – lead to an overwhelming burden of constant email and marking for the instructor. The more that we attempt to build emotional connection through personalized email responses and extensive feedback, the greater the burden and time commitment for the instructor. Attempts to remedy (2) can depersonalize the online experience for students, and lose many even if not all of the advantages and innovations of online pedagogy. Both are valid concerns, but both sets of shortcomings can be avoided by designing a course using best practices in accessible, online pedagogy.

I have come to believe that using best practices for online pedagogy, we can achieve an academically rigorous, challenging, and interesting course for students, while maintaining manageable workloads for instructors. The key is letting go of the phenomenological identity requirement. An online course will not be or feel the same as F2F for anyone, nor should it aspire to be. Ultimately, the differences are a good thing, and are the reason we should embrace online pedagogy. 


As university administrations increasingly push online and blended course offerings, philosophy departments are increasingly under pressure to develop and redevelop their online offerings. From an administrator’s perspective, these courses can easily be portrayed as a net benefit to the university. After all, once the course and content are up and running, multiple sections can be offered in a given time period, registration caps can be raised or even removed, and the course can be offered well beyond the geographic boundaries of the university. Despite many valid misgivings about online teaching in philosophy, I have come to believe that online philosophy courses do offer a net benefit and many new possibilities, but they ultimately require rethinking philosophy pedagogy from the ground up. In particular, adapting ‘traditional’ face-to-face (F2F) content and format for online is unlikely to achieve the same outcomes or course aims; instead, the switch to online teaching requires us to think explicitly about learning outcomes and course aims for each component of the course, and to hold these constant as we shift formats – to treat course aims and outcomes as fundamental in our online course design.

Some F2F formats and assignments survive the shift, but to the extent that The Lecture persists as a pedagogical form, or that a teacher-centred pedagogical approach remains appropriate, these necessarily undergo a transformation in online and distance education.

The first step is to familiarize yourself with online pedagogy, and with available techniques and practices. At this point, if not before, it should become apparent that a one-hour lecture video is never appropriate for an online course. (A didactic lecture of an hour or more may not be appropriate in F2F contexts either, but that is a discussion for another time and venue.) Think in terms of small, bite-sized pieces of content such as:

  • Lecture videos of 10 to 15 minutes.
  • Short ‘guided reading’ documents with questions to think about alongside the readings.
  • Discussion prompts.
  • Shorter readings, including academic blog posts.

Your students will not all engage with the content in the same way, at the same time. Design your course around this fact. Allow for individual scheduling, and try to make it easy and clear by making use of tools like checklists and gamification.

To that end, organize these bite-sized pieces into ‘modules’, perhaps thematically. These may or may not correspond to something like ‘weekly’ expectations, but should involve roughly the same time commitment and structure for the sake of student/learner expectations.

I have moved towards unproctored, open-book quizzes, tests, and exams for online students. I have become a fan of multiple-choice forms of assessment ‘automatically’ marked by the course management software. It takes time to develop a good (challenging, rigorous) bank of questions, but it dramatically eases the marking burden. Create as large a question bank as possible, and set up the test or quiz to randomize questions.
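The randomized-bank idea can be sketched in a few lines. This is an illustrative sketch, not the behaviour of any particular course-management system; the question bank, function names, and per-student seed are all hypothetical.

```python
# Hedged sketch of a randomized quiz drawn from a question bank: each student
# gets a distinct random subset, so no two quizzes need be identical.
import random

# Hypothetical bank; in practice this would hold hundreds of questions.
question_bank = [
    {"q": "Who authored the Meditations on First Philosophy?", "a": "Descartes"},
    {"q": "What does Mill's harm principle limit?", "a": "Interference with liberty"},
    {"q": "What does 'a priori' mean?", "a": "Knowable independently of experience"},
    {"q": "Who wrote the Republic?", "a": "Plato"},
    {"q": "What makes an argument valid?", "a": "True premises guarantee a true conclusion"},
]

def draw_quiz(bank, n_questions, seed=None):
    """Draw n_questions distinct questions from the bank at random."""
    rng = random.Random(seed)  # a per-student seed makes each quiz reproducible
    return rng.sample(bank, n_questions)

quiz = draw_quiz(question_bank, 3, seed=2024)
for item in quiz:
    print(item["q"])
```

The larger the bank relative to the draw, the less overlap between any two students' quizzes, which is what makes the unproctored, open-book format workable.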

And absolutely, fundamentally, design the course with accessibility and diversity of learners in mind. I first taught an online course at a time when I had an infant at home and limited ability to predict my schedule, let alone leave the house. That turned out to offer me insight into the motivations of students in distance education – insight that is not always available to online educators, and can easily be lost or hidden from the view of instructors and administrators. I think there are genuine accessibility reasons that students choose online courses over F2F even when distance to campus is not one of them. To canvass a few: full-time work commitments or irregular part-time shift work, primary caregiving for children or parents or spouses, diagnosed and undiagnosed mental health concerns, and chronic illness with unpredictable flare-ups. Accessibility is, I argue, the ultimate reason we should embrace online philosophy. And on that basis, accessibility should be at the forefront of our thinking about online pedagogy and design. (This is obviously true in F2F as well as online.)

Of course, online courses can be designed in inaccessible ways if we lose sight of principles of universal design. For example, we should ensure that all videos are captioned, all text is readable by text-to-speech software, and that time limits for quizzes or tests can be adjusted on an individual basis. Better yet – do away with timers, and design thoughtful open book quizzes and assignments where the thought is the effort, and measuring using a timer is not necessary. 

So, embrace the fact that online courses will not feel the same as F2F, and design them in recognition of the uniqueness of online pedagogy. It is not inherently more work for the instructor, nor inherently less personal, and it can be deeply and genuinely engaging to students for whom F2F is not.




Bias, AI, and the Ethics of Design

Artificial Intelligence is sometimes proposed as a way of removing or managing bias in decision-making. AI is increasingly anticipated to be involved in the justice system through products like facial recognition software and machine learning for bail decisions. It is increasingly used in the health care system to prevent bottlenecks and streamline some diagnostic processes. And AI is fundamental to self-driving cars, smart houses, and predictive text.

But AI, like other forms of technology, is a product of design. The worry is that AI’s biases are built in at coding, programming and design levels, and these are so fundamental that no amount of redesign or re-programming can erase AI’s biases. 

On one level, bias may be designed into the code through a lack of diversity in the design room.

Further bias is built in to machine learning and AI through the data available to it. If AI is being used to make bail decisions, but bail decisions have a history of penalizing people of color, machine learning will learn this bias through the data.
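A deliberately tiny, fabricated example can show the mechanism. The records below are invented for illustration, and a simple frequency model stands in for machine learning; the point is only that a model fit to biased historical decisions reproduces the disparity in those decisions.

```python
# Illustrative sketch (fabricated data): a model 'trained' on biased historical
# bail decisions learns the historical disparity, not some neutral truth.
from collections import defaultdict

# Hypothetical historical records: (group, bail_denied)
history = ([("A", True)] * 70 + [("A", False)] * 30 +
           [("B", True)] * 30 + [("B", False)] * 70)

def fit_denial_rates(records):
    """Learn each group's denial rate from the historical labels."""
    counts = defaultdict(lambda: [0, 0])  # group -> [denied, total]
    for group, denied in records:
        counts[group][0] += int(denied)
        counts[group][1] += 1
    return {group: denied / total for group, (denied, total) in counts.items()}

rates = fit_denial_rates(history)
print(rates)  # the 'model' mirrors the historical disparity between A and B
```

Nothing in the fitting procedure is malicious; the bias arrives entirely through the labels, which is exactly why swapping algorithms without changing the data changes little.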

Unethical and unjust design does not happen by accident. Design and choice are involved at every step, from initial concept through product development, testing, and programming, to marketing and implementation. Inclusivity is also a product of design, choice, and leadership: hiring decisions, choice of product team, research and development, and product testing are all responsibilities and capacities of organizations. Yet the technology and design industries continue to suffer from a lack of diversity. These sets of choices are increasingly recognized to embed biased, unethical and unjust implications into our technology, our practices, and our every-day lives, and to do so at the most fundamental level. Protests and walkouts including the Google walkout in November 2018 and #gamergate draw attention to the severity of problems arising from the lack of diversity in various digital industries, but the problems run deep.

This lack of diversity has important implications for both the industry and the public at large: few if any dissenting voices or opinions throughout the design process, corporate unresponsiveness to sexual harassment and discrimination, and a hostile or toxic work environment for those who make it through. In spite of all of these concerns, we continue to look towards technology as though it were neutral and inherently unbiased. The evidence shows that it is not, and the sooner we acknowledge the multiple layers of bias in AI, the better. 





Cyberstalking and gender

One question that has been bothering me is the variety of ways that cyberspace is gendered. I began thinking about the issue in the context of cyberstalking, so that will be my focus here. In order to get the question started, we need some definitions. ‘Gender’ can be explained as the socially defined norms of masculine and feminine associated with sex characteristics of male or female. ‘Cyberstalking’ can be defined as the internet analogue to stalking, but it has several dimensions and versions. It seems to involve, in almost all cases, a gathering of private information about the victim. So the initial element of cyberstalking is a privacy violation that may be harmful in itself. But it then tends to add the additional dimension of communicating using that private information, either with the victim directly, or via third parties by assuming the identity of the victim in order to induce others to communicate with the victim. Both cases have the net effect of violating the perceived privacy of the victim, and (relatedly) violating the perception of safety of the individual. Hence they likely qualify as the criminal offence of harassment.

One gendered dimension of cyberstalking that is explored briefly by Alison Adam (2002) is whether women and men have different initial expectations of privacy, and therefore whether the level at which their privacy is perceived to have been violated may differ. Adam suggests that women have a lower sense of personal privacy, and relatedly a sense of fewer rights within the private sphere. But there is also evidence that the opposite may be true: young women may be more likely to seek (sexual) attention on the internet. Both ideas are compatible, of course: girls may have a lower expectation of privacy, and social pressure may therefore lead them to easily give up a right that they don’t perceive is there in the first place.

A second gendered dimension of cyberstalking involves the gendered roles played or performed through cyberstalking. The stalker uses the power associated with holding private information to manipulate the victim. So an interesting background is how the powers associated with knowledge and agency have traditionally been associated with a masculine gender (e.g. Lloyd 1984). Many familiar examples of cyberstalking involve male stalkers with female victims. And even the exceptional case mentioned by Alison Adam of a woman cyberstalking a male relative follows norms of feminine behaviour in that “She experienced considerable shame and guilt over her feelings, i.e., she knew that they were wrong in her culture” (2002: 138). Entrenched patterns of masculine behaviour as active and powerful, and feminine behaviour as passive and weak, seem to be emphasized in the patterns of behaviour functioning here. But a quick examination of more recent examples seems to add more evidence to the theory that cyberstalking has a base-level gendered dimension to it. A recent case of a woman cyberstalking another woman nonetheless involved “sexually suggestive” emails, and invoked a form of masculine sexual power.

So, the worry remains that gender and sex, and in particular, masculine power being used to aggress over feminine (perceived) passivity, is an underlying component in cyberstalking. Does it therefore qualify as a ‘gendered crime’? 


Adam, Alison (2002) “Cyberstalking and Internet Pornography: Gender and the Gaze” Ethics and Information Technology 4: 133-142

Lloyd, Genevieve (1984) The Man of Reason: “Male” and “Female” in Western Philosophy University of Minnesota Press. 



Security vs. Freedom and the Cyberethics debates

The tension between various forms of freedom and various forms of security is at the heart of cyberethics, or so I want to argue. The conflict between net neutrality and cybersecurity is one example of this tension (Ammori and Poellet 2010), but the conflict between free speech and pornography, and the conflict between free speech and copyright are other examples that seem to me to exhibit the same conflict. Each version of the conflict arises on a different interpretation or application of the ambiguous concepts of ‘freedom’ and ‘security’. This conflation of different forms of security, on the one hand, in tension with the conflation of varieties of freedom, on the other, has led to a Canadian file sharing and copyright protection act being called the ‘Protecting Children from Internet Predators Act’, and to a re-invigoration of old debates regarding the harms of pornography and cyber-bullying as conflicting with the benefits of internet freedom generally and free speech in particular. I want to argue that disentangling the variety of forms of freedom from the variety of forms of security is an essential first step in understanding cyberethics.

On the freedom side of the tension we have issues such as free speech, the marketplace of ideas, and net neutrality. A central component of freedom intertwining these claims is that unfettered freedom fosters creativity and economic growth, and censorship in any form is at least undemocratic, and at worst a violation of a fundamental human right. ‘Freedom’ is taken to be a social good, and therefore worthy of maximization and protection. Free speech has historically been defended on the grounds that the good consequences of unfettered free speech outweigh the bad consequences (Scanlon 1972: 205). And defenses of free speech on largely consequentialist grounds are part of the implication of protecting speech for the sake of democracy, the economy, and creativity. If we view the spread of democracy as a good thing, and if we furthermore view the free flow of ideas as essential to the spread of democracy, then the U.S. foreign policy of advocating internet freedom abroad becomes perfectly understandable (Ammori and Poellet 55). The same argument can be made for the spread of capitalism and economic growth. An underlying claim here is that the sharing of ideas is essential to both democracy and innovation, and when the spread of ideas and opinions – even bad ones – is limited through censorship, the personal liberty and autonomy that underlies democratic citizenship becomes impossible. Global examples of constraints on freedom that undermine the potential for democratic citizenship include the limitations on Google, Facebook, and Twitter still in place in China, and the temporary disruptions to social media in Syria, Iran, and Egypt at moments of public political protest over the past several years.

Sharing of all types of information might be interpreted as essential to realizing the democratic potential of the internet. Examples here start with social media for the ease with which it allows the sharing of information within a community, but might also include peer-to-peer file sharing, wikileaks, or the Pirate Party in Sweden and elsewhere. The interpretation of freedom that links the three revolves around a belief in the essential benefits of free speech. The more information (and art, and opinions) that is freely available, the better both individuals and collectives will be at discerning truth, making decisions, and acting on decisions. The consequentialist argument for free speech aims at the promotion of truth and knowledge. That is, the good consequence of exposure to others’ free speech is that we get to test our opinions against contrasting information, and better formulate knowledge and truth. And in particular, practical knowledge is advanced. If the point of free speech is to test our beliefs for truth content (someone will tell us that we are wrong whenever we are), then free internet speech is epistemically important. But moreover if access to truth is important to becoming an autonomous decision-maker, then it will also be important for developing autonomy, citizenship and civic responsibility.

The complication that arises with the advent of internet file sharing is that the free (as in unpaid) dissemination of someone else’s ideas also becomes extremely easy. So, where historically free speech might have been envisioned as protecting an individual’s ability to test her own opinions against the world by saying whatever she wanted to whoever would listen, technology allows us to freely access (without payment) the ideas of others, and moreover to pass them onwards without modifying or contributing to them in any way. The truth-testing disappears, as do the original lineages of authorship. Hence the legal objections that are increasingly being made to piracy and copyright violations. But the improvements in autonomous decision-making and citizenship remain; hence the tension internal to the notion of freedom with respect to internet free speech.

Net neutrality is one important form that cyber freedom might take. Net neutrality or open internet rules forbid Internet Service Providers from interfering with the flow of information over the internet (Ammori and Poellet 51). In Canada, this tends to primarily take the form of throttling of bandwidth, but with certain employers and ISPs also using IP blocking. Throttling of bandwidth occurs when an ISP restricts bandwidth for a particular user or for certain protocols. IP blocking prevents users of a certain ISP from accessing certain websites.

But notice that the varieties of freedom at stake are not interchangeable. Net neutrality, when used to download the latest Hollywood blockbuster, doesn’t contribute to citizenship in the same way that access to social media during political protest does. And Aaron Swartz’s liberation of academic journal articles and court documents was certainly meant to be more like the latter, but got caught in nets designed for the former. Different arguments are required for protecting different types of freedom, and different values are at stake in protecting each type of freedom. The ambiguous nature of the ideal of ‘freedom’ therefore requires disentangling specific arguments for specific types of freedom before we can tackle many issues in cyberethics. Similar sorts of ambiguities arise on the security side of the tension.

On the security side of the tension, we have issues such as personal security, cybersecurity, and information security being blended into one. While the danger of ‘attacks’ and espionage are real and not to be minimized, there are many other security dangers that arise in cyberspace. As the Amanda Todd case shows, harassment and sexual exploitation are made much easier by social media and multiple dimensions of connectivity. Moor calls this the problem of “greased data”: information that may have technically been public for a long time, now travels much more easily, and is much more difficult to control access to (Moor 1997: 27). Cyberbullying can affect every aspect of an individual’s life, and can follow an individual wherever they go. So, personal security is affected by the ease with which an individual can be attacked and harassed, and especially by the ways in which the anonymity of the perpetrator is protected by privacy provisions.

Personal security can also include information security. On the one hand, identity theft is a common personal security concern, and the protection of private information generally (individual, corporate, or state) is an important part of cybersecurity. On the other hand, a common tactic of hacktivism has been to publicly identify and release the personal information of anonymous but unethical users such as cyberbullies and trolls, and additionally to release government secrets as in the wikileaks controversy. So privacy has both positive and negative effects on security. Privacy is an important component of protecting individuals from identity theft or even certain marketing practices, and privacy is also important to corporations and states in protecting various forms of valuable secrets (which may, again, include individual personal information stored in databases). But on the other hand, there are dangers associated with internet anonymity, and especially the impunity with which individuals act under the cover of anonymity. So the value of security can provide arguments both for and against privacy, and the specificity of the case will be important in determining whether a particular form of privacy is valuable or not.

So while not all issues in cyberethics can be explained in terms of tensions between freedom and security, a surprising number of issues have some aspects that fall under this heading. Hence, in many cases, disambiguation of freedoms and disambiguation of securities will be an important first step in determining which values are at stake, and how best to proceed in protecting them.


Ammori, Marvin and Keira Poellet (2010) “‘Security versus Freedom’ on the Internet: Cybersecurity and Net Neutrality” SAIS Review Vol. 30, No. 2, pp. 51-65.

Scanlon, Thomas (1972) “A Theory of Freedom of Expression” Philosophy and Public Affairs. Vol. 1, No. 2. pp. 204-226.

Warwick, Shelly (1999) “Is Copyright Ethical? An Examination of the Theories, Laws and Practices Regarding the Private Ownership of Intellectual Work in the United States” Boston College Intellectual Property and Technology Forum 

