Deep AR Law Enforcement Ecosystem


The DARLENE multi-layered ethical and legal oversight mechanism: Ethics-by-design in action

The EU-funded Horizon 2020 project DARLENE investigates and develops innovative augmented reality (AR) tools to improve the situational awareness of law enforcement agencies (LEAs) and their capacity to make well-informed, rapid decisions in real time when responding to live criminal and terrorist incidents. In combining innovative AR smart-glass technology and powerful computer vision algorithms with 5G network architectures, DARLENE partners apply an iterative and inclusive ethics-by-design approach to proactively embed ethical principles in the development of the DARLENE technology and other project tasks. The aim is to mitigate ethical risks or to prevent them from materialising in the first place. Two years in, the ethics-by-design approach has produced concrete results, some of which this post sheds light on.

Research ethics in DARLENE

Research activities involving human participants, such as surveys and co-creation workshops with potential end-users, testing sessions of the DARLENE technology with LEAs, and the DARLENE pilots, form a major part of the project. They are subject to legal and ethical oversight to ensure compliance with research ethics principles such as reliability, honesty, respect, and accountability.

In the first layer of oversight, DARLENE partners developed a concrete plan for each research activity and consulted their ethics committees to obtain official approval for its implementation. In the second layer, the project partners responsible for legal and ethical compliance, Trilateral Research and KU Leuven, provided advice and guidance to co-develop sound informed consent procedures, as well as health and safety procedures, with the other partners. In the third and final layer, partners involved DARLENE's external Ethics Advisory Board (EAB) to obtain its opinion on research with humans in the project, as well as concrete improvements to the informed consent procedures. As a result, the procedures incorporate three crucial aspects: (1) detailed information on what participation in the respective activity entails, including potential risks and participants' rights; (2) an explicit emphasis on the voluntary nature of participation; and (3) the requirement that consent be continuous to remain valid and may be withdrawn at any time.

Fairness and non-discrimination – DARLENE’s data augmentation technique

Bias and discrimination in machine learning (ML) models are well-known and widely discussed issues. Biased training data can inadvertently embed historic bias in an ML system, producing, and even reinforcing, unintended discriminatory outcomes. Many algorithmic solutions, for instance, have been found to produce flawed correlations (e.g., false positive matches for involvement in crime) at a higher rate for minority groups. Although the DARLENE technology does not include any facial recognition features (i.e. the features that trigger the most severe privacy and discrimination concerns), steps still needed to be taken to prevent algorithmic bias and discriminatory output in the project. Therefore, in consultation with the EAB, project partners have been using data augmentation and synthetic data generation techniques to artificially rebalance DARLENE's training, testing, and validation datasets with respect to race, gender, and age when training for activity/movement and object recognition. This way, the data's representativeness, relevance, accuracy, and completeness can be ensured and the risks of discrimination minimised.
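To give a flavour of what such rebalancing can look like in practice, the sketch below oversamples under-represented groups by appending augmented copies until every group matches the largest one. It is a minimal illustration only: the attribute name, the `augment` callback, and the toy dataset are assumptions for demonstration, not DARLENE's actual pipeline.

```python
import random
from collections import Counter

def rebalance(samples, attr, augment, seed=0):
    """Grow each under-represented group to the size of the largest group
    by appending augmented copies of randomly chosen members."""
    rng = random.Random(seed)
    counts = Counter(s[attr] for s in samples)
    target = max(counts.values())
    balanced = list(samples)
    for group, n in counts.items():
        pool = [s for s in samples if s[attr] == group]
        # Add (target - n) augmented copies drawn from this group.
        balanced.extend(augment(rng.choice(pool)) for _ in range(target - n))
    return balanced

# Toy dataset: group "B" is under-represented.
data = [{"group": "A", "img": i} for i in range(6)] + \
       [{"group": "B", "img": i} for i in range(2)]
balanced = rebalance(data, "group", lambda s: {**s, "augmented": True})
counts = Counter(s["group"] for s in balanced)
```

In a real pipeline the `augment` step would apply transformations such as flips, crops, or synthetic data generation rather than a simple copy, but the balancing logic is the same.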

Upcoming ethics-by-design processes – measuring the stress level of LEA officers

Forming part of the DARLENE wearables, a smart band will measure physiological signals, particularly the stress level, of the LEA wearer; these signals will be processed by the DARLENE ML system to provide personalised content, delivered to the wearer through the AR glasses. The project's ethical and legal partners have already identified DARLENE's stress-level measurement feature as bearing potential privacy and discrimination risks, for example if such measurements incidentally reveal an officer's health status, or if content personalisation through the AR glasses puts some officers in more advantageous positions than others. Through dialogue sessions with the technical partners, design choices have been made to link the measured data to pseudonymised user IDs only, rather than to identifiable individuals, and to process the measured data exclusively on the fly, instead of storing it. The technical, ethical, and legal partners will hold further dialogue sessions to address all outstanding issues and prepare for testing the stress-level measurement feature in a confined environment. If you would like to read more about real-time stress-level measurement for personalised, context-aware applications, check out this publication by FORTH, one of the DARLENE partners.
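The two design choices described above, pseudonymised IDs and on-the-fly processing, can be sketched as follows. This is an illustrative assumption only (the salted-hash scheme, the exponential smoothing, and all names are invented for demonstration), not DARLENE's real design.

```python
import hashlib

def pseudonymise(officer_id: str, project_salt: str) -> str:
    """Replace a direct identifier with a salted hash (a pseudonymous ID)."""
    return hashlib.sha256((project_salt + officer_id).encode()).hexdigest()[:16]

class StressMonitor:
    """Processes readings on the fly: only an exponentially smoothed stress
    level per pseudonymous ID is retained; raw readings are never stored."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self._level = {}  # pseudonymous ID -> current smoothed level

    def update(self, pid: str, reading: float) -> float:
        # Blend the new reading into the running estimate, then discard it.
        prev = self._level.get(pid, reading)
        self._level[pid] = self.alpha * reading + (1 - self.alpha) * prev
        return self._level[pid]

pid = pseudonymise("officer-7", "per-deployment-salt")
monitor = StressMonitor()
monitor.update(pid, 0.5)
level = monitor.update(pid, 0.9)
```

The key property is that neither the officer's real identity nor the raw measurement history leaves the update step: only the pseudonymous key and the current smoothed value persist.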

Building on our work so far and the ethics-by-design results that we have generated, we will continue to follow the DARLENE multi-layered ethical and legal oversight mechanism to update as necessary and monitor all DARLENE research activities and technological developments in light of ethical and legal compliance.

If you would like to learn more about the DARLENE project, please contact us. For updates about the project, sign up for our newsletter via the DARLENE website and follow us on Twitter, Facebook, LinkedIn, and YouTube.

Fabienne Ufert,
Trilateral Research