Double ABRI Lunch seminar
Speakers: Olgerta Tona and Lisen Selander (University of Gothenburg, Sweden)
Location: Vrije Universiteit Amsterdam, HG-02A24, Amsterdam
Date and time: March 05, 2024, 11:00 - 13:30
The ABRI lunch seminar with speakers Olgerta Tona and Lisen Selander (University of Gothenburg, Sweden) is organized by ABRI and the KIN Center for Digital Innovation. This lunch seminar consists of two parts and registration is required.
Speaker: Olgerta Tona
Title
Algo-Political Work: Challenging Injustice in Algorithmic Decision-Making Systems
Abstract
While organizations deploy algorithmic decision-making (ADM) systems in pursuit of greater efficiency and effectiveness, mounting evidence suggests that these systems can reproduce injustice against structurally disadvantaged populations. With its focus on preventive technical solutions and accountability frameworks, scholarship has paid less attention to what happens after injustice has already occurred: how such systems can be challenged and how actions to address that injustice can responsibly be initiated. Applying the theoretical lenses of political responsibility and brokering in combination to the case of an ADM system deployed in an Australian government agency, the paper introduces the notion of algo-political brokering to explain how moral agents collectively took it upon themselves to challenge the system and help its victims through brokering initiatives. This form of action is “algo” in that one party in the brokered relation is an algorithmic system; it is “political” in that it draws citizen participation and public action to transform systems into engagement with the justice- and fairness-related political questions raised by such systems; and it is “brokering” because it facilitates rectifying the connections between the ADM system and the victims of its unjust decisions. The concept has important applications for research and practice: scholars can draw on it to interpret the societal implications of current and future political technologies, policymakers oriented toward it can better develop and apply proactive measures and rules to govern such systems, and designers can “inscribe” a rectification vision in future algorithmic tools.
***
Speaker: Lisen Selander
Title
Algorithmic Discovery Work as Collective Action
“Imagine you stand on an isolated island and see some traces of an accident on the ocean surface, you see pieces of debris floating around, but you are not sure if it is an airplane or a boat, or the severity of the accident, but you understand that you are obliged to do something, and that you need to investigate what happened. That was the sensation that I had.”
Abstract
Individuals in civil society are increasingly impacted by algorithmic decision-making but are often unaware that decisions targeting them are delegated to machines. Such unawareness is particularly problematic in the case of faulty decisions and institutional transgressions. How do individuals begin to suspect that they have been targeted by an algorithm, and how do they uncover the hidden nature and opacity of these systems (their inputs, process, and outputs)? In resource-scarce environments, such as the public sector, noticing such transgressions and tracing them to the algorithm is crucial for protecting social justice and public trust in institutions. In this manuscript, we build on the work of von Krogh (2018) on the discovery process of algorithmic decision-making and expand this theory to a non-user perspective, exploring algorithmic decision-making from the perspective of the targets of its decisions.