CFP: 17th Annual Information Ethics Roundtable (Northeastern Law)

17th Annual Information Ethics Roundtable
Justice and Fairness in Data Use and Machine Learning

Friday–Sunday, April 5–7, 2019
Northeastern University
909 Renaissance Park
Boston, MA

Overview

The 17th annual Information Ethics Roundtable will explore the relationship between the normative notions of justice and fairness and current practices of data use and machine learning.

Artificial intelligence is now a part of our everyday lives. It allows us to easily get to a place we have never been before while avoiding traffic and road work, to communicate with friends in China when we don’t share a common language, and to carry out complex but mind-numbing, repetitive jobs in factories. But such artificial intelligences can also exhibit what we might call “artificial bias”; that is, machine behavior that, if produced by a person, we would say is biased against particular groups, such as racial minorities. Machine learning on large data sets is one means of achieving AI that is particularly vulnerable to producing biased systems, because it learns from data on human behavior that is itself biased. A number of tech companies, such as Google and IBM, and computer science researchers are currently seeking ways to correct for such biases and to produce “fair” algorithms. But a number of fundamental questions about bias, fairness, and even justice still need to be answered if we are to solve this problem. (See below for some examples.)

In the 2019 edition of IER, we seek proposals that approach these questions from a variety of disciplinary perspectives through the lens of information ethics.

Registration is free and the conference is open to the public. We therefore invite you to attend regardless of whether you are formally presenting or discussing a paper.

Proposals

Suggested Topics:

  • What concepts of fairness and justice in philosophy and other disciplines are most useful for understanding fairness, equality, and justice in data use and machine learning?
  • To what extent is it possible to operationalize (or computationalize) different conceptions of fairness and justice within different machine learning techniques?
  • Should machine-learning-based decision-making systems be held to a higher or different standard of fairness and justice before being implemented in industry (e.g., lending) or social services (e.g., child protective services) than currently accepted practices?
  • What is the role of data scientists and computer programmers in correcting for bias? How can machine learning be used in this role?
  • Not all biases are problematic; indeed, some are very helpful. What sorts of bias are unjust and why?
  • What can modern day programmers of “classifications” learn about avoiding bias from the experience of other disciplines devoted to classification, such as librarianship?
  • What can normative research in other areas – for example, with respect to police profiling or immigration/refugee screening – teach us about when or under what conditions profiling with machine learning is acceptable?
  • What is the relationship between explainability/interpretability in machine learning decision-making and the just use of machine learning in different contexts?

Proposal Requirements

We invite three types of proposals:

  1. Papers: Please submit a 500-word abstract of your paper. If accepted, you will be expected to submit a detailed outline of your talk to the Roundtable, giving your commentator a chance to prepare comments in advance.
  2. Panels: Please submit a 1,500-word description of your panel. The description should include: i) a description of the topic, ii) biographies of the panel members, and iii) the organization of the panel. Panels must focus tightly on a specific emergent topic, technology, phenomenon, policy, or the like, with clear connections between the presentations.
  3. Posters (for undergraduate and graduate students only): Please submit a 500-word abstract of your poster and an outline of the major sections.

Commentators

We are also interested in receiving expressions of interest to serve as a commentator/discussant for another person’s paper. Each author with an accepted proposal will be paired with a commentator who will provide formal feedback and comments during the conference. Expressions of interest should be sent to Katie Molongoski at k.molongoski@northeastern.edu by March 10, 2019.

Sponsors

  • Northeastern Ethics Institute
  • Northeastern University College of Social Sciences and Humanities
  • Northeastern Humanities Center
  • CLIC at Northeastern Law (Center for Law, Innovation and Creativity)