Call: Cooperative Intelligence Challenge: A Composium for Algorithmic Decision Making in Public Administration


iCourts: the Danish National Research Foundation's Centre of Excellence for International Courts at the University of Copenhagen, in coordination with the Department of Computer Science at the University of Copenhagen (DIKU) and the Independent Research Fund Denmark project PACTA: Public Administration and Computational Transparency in Algorithms.

Background

The rise of algorithmic decision making (ADM) is well documented and has raised numerous concerns about human rights and social justice. From automated cars, search engines, and personalised ads to medical diagnosis and the public administration of social benefits, many studies have demonstrated that the use of ADM offers significant value but also carries significant risk. Proposed solutions to these risks often downplay the value of ADM or reach for general, overarching ethical principles in response. Such responses tend to reiterate abstruse notions like trust, fairness, and human dignity, elevating them to "principles" that are thought to guide the use of ADM in general. However, this often leaves legal scholars, policy makers, and AI developers with just as many questions about how to incorporate these principles as they had to begin with.

Central to many of these ethical concerns, in one form or another, is how to retain some type of humanness in machine decisions. Humanness, in this perspective, is meant to balance the demands of legal equality (applying a general rule or law consistently across a broad population) against discretion for specific needs or exceptions, such as arise in a public administrative or judicial context. Some, like the regulatory ethics-based approach above, aim to preserve humanness by arguing that such principles should be mandatory (Ethical AI). Others focus on design issues such as user interaction, fairness by design, or maintaining human (or at least human-like) discretion. One popular candidate solution is keeping a human somewhere in the decision-making process, broadly known as human in the loop (HITL). This approach tries to marry the raw computational power of ADM with the contextual sensitivity of a human decision maker. However, the mere presence of a human in the loop guarantees neither that the system is optimised nor that any deficiency is effectively ameliorated. The specific make-up of a HITL system remains an open question and must be tailored to the decision-making task at hand.
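To make this concrete, one simple HITL configuration is a confidence-gated review queue: the algorithm applies recommendations it is confident about and escalates the rest to a human caseworker. The Python sketch below illustrates the idea; the names, threshold, and cases are illustrative assumptions, not part of the challenge scenario.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An algorithmic recommendation for a single administrative case."""
    case_id: str
    decision: str      # e.g. "grant" or "deny" a benefit application
    confidence: float  # model's self-reported confidence in [0, 1]

def route_case(rec: Recommendation, threshold: float = 0.9) -> str:
    """Auto-apply high-confidence recommendations; escalate the rest
    to a human caseworker for discretionary review."""
    if rec.confidence >= threshold:
        return f"{rec.case_id}: auto-applied '{rec.decision}'"
    return f"{rec.case_id}: escalated to human caseworker"

# Only the confident case is decided without human involvement.
for rec in (Recommendation("A-101", "grant", 0.97),
            Recommendation("A-102", "deny", 0.62)):
    print(route_case(rec))
```

Even this toy gate raises exactly the design questions the composium is after: where the threshold sits, what context the human reviewer sees, and whether escalated decisions feed back into the system.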

How it works

Rather than approach this issue from a particular academic discipline, Cooperative Intelligence is seeking multidisciplinary teams to address it through what we are calling a composium: a 72-hour competition and collaboration of ideas inspired by idea incubation sessions and hackathons. To avoid the generality of traditional academic conferences and workshops, the composium will centre on a concretely defined problem and a synthesized literature review, in the form of a paper developed by the hosts. This paper will lay out a scenario of a recommendation/ADM system used for a legal decision-making process within a public administrative body.

Participants sign up as teams (see below for details and for possibilities to apply without a pre-existing team). Over three days, the teams will develop their solution to the scenario, from data input strategies through the decision-making process to the final user interaction design. Teams will be provided with a data set with which to develop a working prototype. However, in the interest of producing exceptional ideas, teams are more than welcome to use their own data sets.

Call: Cooperative Intelligence Challenge: A Composium for Algorithmic Decision Making in Public Administration (pdf)