Chair for Statistics and Data Science in Social Sciences and the Humanities (SODA)

Automated Decision-Making, AI, and Robots

Team lead: Christoph Kern

Team: Sarah Ball, Unai Fischer Abaigar, Sofia Jaime, Patrick Oliver Schenk, Jan Simson

CAIUS: Consequences of AI for Urban Societies

AI systems help to allocate scarce public resources efficiently and are at the core of many smart city activities. Yet the same systems may also have unintended societal consequences, particularly by reinforcing social inequalities. CAIUS will identify and analyze such consequences. Using agent-based models (ABMs), we analyze how AI-based decisions affect macro-level indicators of social inequality, such as income disparity. The ABMs draw on both Open Government Data and our own surveys. The goal is to train AI systems to account for their social consequences within specific fairness constraints; this synthesis of ABMs and fair reinforcement learning lays the groundwork for what we call "impact-aware AI" in urban contexts. Partnering with the Rhine-Neckar Metropolitan Region, we investigate smart city applications and their impact on local populations. The results will contribute to research on human-AI interaction and will be condensed into general guidelines for decision-makers on the ethical implementation of AI-based decision-making systems in urban contexts.
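To make the modeling idea concrete, the following is a minimal, self-contained sketch of an agent-based simulation in which an allocation rule with a hard group-fairness constraint is compared against a purely score-based rule via the Gini coefficient of the resulting income distribution. All names, parameters, and dynamics are illustrative assumptions for exposition, not the project's actual model.

```python
# Toy ABM sketch (illustrative assumptions throughout, not the CAIUS model):
# agents with unequal incomes compete for a scarce resource allocated by a
# score-based rule, with or without a proportional group quota.
import numpy as np

rng = np.random.default_rng(42)

def gini(x):
    """Gini coefficient of a 1-D array of non-negative incomes."""
    x = np.sort(x)
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def simulate(n_agents=1_000, n_steps=50, budget=100, fair=False):
    # two hypothetical groups with unequal starting incomes
    group = rng.integers(0, 2, size=n_agents)
    income = rng.lognormal(mean=10 + 0.3 * group, sigma=0.5)
    for _ in range(n_steps):
        # the allocator ranks agents by a noisy score that tracks income
        score = income + rng.normal(0, income.std(), n_agents)
        if fair:
            # constraint: split the budget proportionally across groups
            # (floor division may leave a small remainder unallocated)
            winners = []
            for g in (0, 1):
                idx = np.where(group == g)[0]
                share = budget * len(idx) // n_agents
                winners.extend(idx[np.argsort(-score[idx])[:share]])
            winners = np.asarray(winners)
        else:
            winners = np.argsort(-score)[:budget]
        income[winners] *= 1.05  # receiving the resource raises income
    return gini(income)

print(f"Gini, score-based rule:      {simulate(fair=False):.3f}")
print(f"Gini, group-fair constraint: {simulate(fair=True):.3f}")
```

In the project itself, the allocation policy would be learned under fairness constraints via reinforcement learning; the hard-coded quota above merely stands in for such a learned policy.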

Project team: Frauke Kreuter, Christoph Kern, Ruben Bach, and Frederic Gerdon

Publications:

  • Kern, C., Gerdon, F., Bach, R. L., Keusch, F. and Kreuter, F. (2022). Humans versus Machines: Who is Perceived to Decide Fairer? Experimental Evidence on Attitudes Toward Automated Decision-Making. Patterns. https://doi.org/10.1016/j.patter.2022.100591
  • Gerdon, F., Bach, R. L., Kern, C. and Kreuter, F. (2022). Social Impacts of Algorithmic Decision-Making: A Research Agenda for the Social Sciences. Big Data & Society. https://doi.org/10.1177/20539517221089305
  • Gerdon, F., Theil, C. K., Kern, C., Bach, R. L., Kreuter, F., Stuckenschmidt, H. and Eckert, K. (2020). Exploring impacts of artificial intelligence on urban societies with social simulations. 40. Kongress der Deutschen Gesellschaft für Soziologie, Online.

FairADM: Fairness and discrimination in automated decision-making processes

The project "Fairness in Automated Decision-Making (FairADM)" by Prof. Dr. Frauke Kreuter, Dr. Ruben Bach, and Dr. Christoph Kern deals with discrimination and fairness in algorithm-based decision-making processes (automated decision-making, ADM) in the German public sector. "While ADM systems optimize bureaucratic procedures through automation, their use also raises new social and ethical questions," says Prof. Dr. Frauke Kreuter. There are concerns that ADM could reinforce existing social discrimination. For example, ADM systems are already being used in the U.S. to assess defendants' risk of recidivism in legal proceedings. A particularly sensitive field of application in the European context is the assessment of job seekers' chances on the labor market, e.g., for the allocation of training resources, as recently proposed by the Austrian Public Employment Service (AMS). There is a risk that sensitive characteristics such as gender, age, or marital status enter the algorithmic decision-making process and thus influence the distribution of resources. To shed more light on this and to empirically investigate methods for correcting unfair algorithms, the project develops and evaluates an ADM system based on administrative labor market data.
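As a concrete illustration of the kind of disparity the project studies, the sketch below audits a hypothetical profiling model on synthetic data, comparing selection rates (statistical parity) and true positive rates (equality of opportunity) across a sensitive attribute. The data-generating process and all variable names are assumptions for exposition, not the project's actual data or model.

```python
# Fairness audit of a hypothetical profiling model on synthetic data
# (illustrative only; not the FairADM data or estimator).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # sensitive attribute, e.g. gender
y = rng.binomial(1, 0.5 + 0.15 * group)  # true outcome: reemployment
# model score that leaks the sensitive attribute on top of the outcome signal
score = y + 0.5 * group + rng.normal(0, 1, n)
pred = (score > np.quantile(score, 0.5)).astype(int)  # e.g. "high chances"

for g in (0, 1):
    m = group == g
    sel_rate = pred[m].mean()            # statistical parity comparison
    tpr = pred[m & (y == 1)].mean()      # equality-of-opportunity comparison
    print(f"group {g}: selection rate {sel_rate:.2f}, TPR {tpr:.2f}")
```

A gap in selection rates or true positive rates between the groups signals exactly the risk described above: the sensitive characteristic shapes who receives resources, even when it is not used as an explicit input.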

Project team: Frauke Kreuter, Christoph Kern, and Ruben Bach

Publications:

  • Kuppler, M., Kern, C., Bach, R. L., Kreuter, F. (2022). From fair predictions to just decisions? Conceptualizing algorithmic fairness and distributive justice in the context of data-driven decision-making. Frontiers in Sociology. https://doi.org/10.3389/fsoc.2022.883999
  • Kern, C., Bach, R. L., Mautner, H., and Kreuter, F. (2021). Fairness in Algorithmic Profiling: A German Case Study. arXiv. https://arxiv.org/abs/2108.04134
  • Kuppler, M., Kern, C., Bach, R. L., Kreuter, F. (2021). Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There? arXiv. https://arxiv.org/abs/2105.01441

Fairness Aspects of Machine Learning in Official Statistics

In this joint project with the German Federal Statistical Office (Destatis), we explore topics at the intersection of machine learning, official statistics, and algorithmic fairness. The project's research packages focus on the reliable and subgroup-sensitive use of machine learning in official statistics and on investigating coverage and representation errors in various forms of (training) data and their downstream fairness implications. This work is motivated by the increasing use of new forms of data that enable cost-efficient and timely collection of detailed information but also suffer from strong selectivity, as different social groups have different propensities of being included. High-quality microdata from official statistics can be used to check data from heterogeneous sources for coverage problems, e.g., by comparing socio-demographic distributions at fine-grained regional scales. On this basis, systematic data audits can identify and document coverage problems. This work responds to the growing significance and availability of new data sources and to calls from the fair ML community for standardized evaluation and documentation of training data quality.
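A minimal sketch of the distribution-comparison idea behind such an audit is shown below, using synthetic stand-ins for official microdata and a new data source. The column names, categories, and the choice of total variation distance as the dissimilarity measure are assumptions for exposition, not Destatis specifications.

```python
# Sketch of a per-region coverage audit on synthetic data (illustrative
# column names and measure; not the project's actual audit pipeline).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

def shares(df):
    """Share of each age group within each region."""
    counts = df.groupby(["region", "age_group"]).size().unstack(fill_value=0)
    return counts.div(counts.sum(axis=1), axis=0)

# synthetic stand-ins for official microdata and a new data source
official = pd.DataFrame({
    "region": rng.choice(["A", "B", "C"], 30_000),
    "age_group": rng.choice(["18-29", "30-49", "50+"], 30_000,
                            p=[0.25, 0.40, 0.35]),
})
new_source = pd.DataFrame({
    "region": rng.choice(["A", "B", "C"], 5_000),
    # younger users are overrepresented in the new source
    "age_group": rng.choice(["18-29", "30-49", "50+"], 5_000,
                            p=[0.45, 0.40, 0.15]),
})

# total variation distance per region; large values flag coverage problems
tvd = 0.5 * (shares(new_source) - shares(official)).abs().sum(axis=1)
print(tvd.round(3))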

Project team: Frauke Kreuter, Christoph Kern, and Patrick Oliver Schenk

Improving Inference from Non-Random Data for Social Science Research

New types of data from digital traces and new forms of access to data from administrative processes make it possible to observe individual and social behavior, as well as changes in behavior, at high frequency and in real time. The caveat with these new data is their (usually) unknown quality. At the same time, traditional survey data collection faces rising costs, and many social science researchers are tempted to abandon expensive probability-based surveys in favor of less expensive data collections. The latter are often cheaper because they are collected from volunteer samples of unknown populations, with unknown selection mechanisms and unknown inclusion probabilities. Misrepresentation of societal groups in digital trace data and other alternative data sources can severely limit the utility of such data for deriving valid inferences and accurate predictions for a given target population. The usefulness of alternative data sources thus depends on the effectiveness of bias mitigation techniques that correct for self-selection. This research project combines methodology from social science and computer science to account for misrepresentation in data, developing and comparing pseudo-weighting and post-processing techniques to improve inference from various data sources.
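To illustrate the pseudo-weighting idea, here is a minimal sketch on synthetic data: a volunteer sample is stacked on a probability-based reference sample, sample membership is modeled with logistic regression, and volunteer cases are weighted by their inverse inclusion odds. The inverse-odds weight shown is one common variant, not necessarily the estimator developed in the project (which, for instance, also studies boosted kernel weighting; see the publications below).

```python
# Propensity-based pseudo-weighting sketch on synthetic data (one common
# variant, for illustration only; not the project's specific estimator).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# synthetic population: the outcome depends on age
N = 100_000
age = rng.uniform(18, 80, N)
y = 0.1 * age + rng.normal(0, 2, N)

# probability-based reference sample (simple random sample)
prob_idx = rng.choice(N, 2_000, replace=False)
# volunteer sample: younger people self-select much more often
p_vol = 1 / (1 + np.exp(0.08 * (age - 40)))
vol_idx = np.where(rng.random(N) < 0.05 * p_vol)[0]

# stack both samples and model membership in the volunteer sample
X = np.concatenate([age[vol_idx], age[prob_idx]]).reshape(-1, 1)
z = np.concatenate([np.ones(len(vol_idx)), np.zeros(len(prob_idx))])
p = LogisticRegression().fit(X, z).predict_proba(
    age[vol_idx].reshape(-1, 1))[:, 1]
w = (1 - p) / p  # inverse-odds pseudo-weights for the volunteer cases

print(f"population mean:       {y.mean():.2f}")
print(f"volunteer sample mean: {y[vol_idx].mean():.2f}")
print(f"pseudo-weighted mean:  {np.average(y[vol_idx], weights=w):.2f}")
```

Because young (low-outcome) respondents are overrepresented among the volunteers, the raw sample mean is biased downward; downweighting cases with high estimated inclusion propensity moves the estimate back toward the population value.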

Project team: Frauke Kreuter, Christoph Kern, Anna-Carolina Haensch, Jacob Beck, Unai Fischer Abaigar

Publications:

  • Kim, M. P., Kern, C., Goldwasser, S., Kreuter, F. and Reingold, O. (2022). Universal Adaptability: Target-Independent Inference that Competes with Propensity Scoring. Proceedings of the National Academy of Sciences of the United States of America (PNAS) 119(4). https://doi.org/10.1073/pnas.2108097119
  • Kern, C., Li, Y., and Wang, L. (2020). Boosted Kernel Weighting – Using Statistical Learning to Improve Inference From Nonprobability Samples. Journal of Survey Statistics and Methodology. https://doi.org/10.1093/jssam/smaa028