Abby Everett Jaques, PhD '18, is a postdoctoral associate in MIT Philosophy and serves as the Ethics of AI Project Lead for the MIT Quest for Intelligence. She is also a Research Fellow in Digital Ethics at the Jain Family Institute, a think tank in New York. Her research is centered in moral and political philosophy and the philosophy of action—the areas where we ask, “What are we doing?” and “What should we do?” She is particularly interested in our relationship to technology and hopes to help society figure out what to do about AI before AI manages to decide for itself.
You’re teaching a new course this spring, “Ethics of Technology,” within MIT’s Department of Linguistics and Philosophy. What inspired you to create it?
In this age of self-driving cars and machine learning, the questions feel new, but in many ways they’re not. Philosophy offers powerful tools to help us answer them. The department already offers great courses on subjects like the ethics and politics of food and climate change; a course on technology is a logical addition. I mean, who hasn’t been creeped out by Facebook’s friend suggestions, or YouTube’s autoplay choices?
Who should take this course?
Anyone and everyone, I hope: engineering students may want to reflect on the things they create, and non-engineers may want to come because tech is such an important part of our world. We’re all involved in the system, and we all have an interest in making sure we get it right.
What sorts of ethical dilemmas will you explore?
I mentioned Facebook. Honestly, we could do a whole course just on that—Facebook is a gateway to questions about privacy and surveillance; fake news and the erosion of an agreed-upon set of facts; social media’s effects on our psychological wellbeing and human connections; the use of big data in politics, lending, criminal justice, medicine, hiring, and beyond.
The key thought is that the promise of technology always comes with risks. So how do we gain the benefits tech offers while protecting against the accompanying harms? Some of those harms are the kinds of things sci-fi movies are built on, with robot overlords and all that, but some appear more innocuous (and are already here). What do we do about them?
You’re also teaching “Workshop in Ethical Engineering” with MIT Media Lab postdoc Edmond Awad and philosophy PhD candidate Milo Phillips-Brown during MIT’s Independent Activities Period (IAP). What’s this class about?
Engineers need — and want (as the recent Google walkouts showed) — to understand and manage the ethical dimensions of their work. How do we help? Well, engineering ethics is usually taught by starting either from abstract theories or from professional licensing requirements and regulations. But theories are hard to apply, and regulations don’t get at the real issues.
So we’re doing something different: integrating ethical thinking within engineering practice. Engineers need concrete tools they can use while they are making things to identify, address, and communicate about the ethical aspects of their projects. In this hands-on course we’ll teach an ethics protocol, a step-by-step process that the students in the workshop will apply to projects of their own.
AI poses a diverse set of risks: predictive policing concerns may differ from those related to self-driving cars or personalization algorithms. How can one protocol handle it all?
The protocol is general enough for all kinds of engineering practice, but it can be taught in ways that are tailored to particular coursework. A team of us is creating modules for MIT’s New Engineering Education Transformation (NEET) program, customized for each of the four threads the program offers: living machines, autonomous vehicles, clean energy machines, and advanced materials machines.
But in every version, we focus on the skills needed to understand what one is building, which includes communicating with stakeholders. As the recent experience of MIT researchers who worked with the city of Boston to optimize bus schedules illustrated, changing a system to serve most people better can still create chaos and controversy without adequate communication.
Both the tech industry and the discipline of philosophy have run into criticism that they are too white and too male. Why does diversity matter?
We have good empirical data that diverse teams are more innovative and better at problem solving. So it’s not just a question of fairness, though it is that, too. If we want to do good work, both in tech and in philosophy, we need contributions from all kinds of people. In tech, many of the problems that have led to recent scandals could have been avoided if the teams had been less homogeneous. In philosophy, I’ve seen through my work with students from underrepresented groups how quickly old stalemates dissolve when new voices enter the conversation.
As an MIT doctoral student you co-founded PIKSI-Boston, a one-week summer institute that helps people from underrepresented groups pursue PhDs in philosophy. What have you learned?
Making an effort makes a difference. We graduated our fourth cohort this summer, and many of our students are now in graduate school. The tech industry can change, too: when people know they’re wanted, they’ll come, and they’ll stay, and they’ll contribute in ways you can’t anticipate.
This story was originally published as part of an MIT SHASS series on Ethics, Computing, and AI.