New Technologies at Work

The opacity of machine-learning (ML) systems is one of the key barriers to realizing the full benefit of these technologies, especially in high-risk operations such as the railways, which are subject to strict regulatory oversight. The challenge is exacerbated in collaborative work processes that involve several human operators, possibly from different occupational domains, alongside several ML technologies. The project addresses the challenge of designing explainable AI within an overarching design framework for socio-technical integration, based on the recently proposed concept of networks of accountability. This concept captures the interdependencies that arise between technology developers, organizational and individual users, and regulators through the continuous production and use of data in ML-supported decision-making. The project will develop methods for eliciting the different explainability requirements and for supporting their implementation, using a multi-method approach comprising expert interviews, participant observation, design workshops, and work-process simulation. The project will be carried out with several partners at SBB and Siemens, focusing on solutions for application domains in traffic control and operations, inspections, and predictive maintenance. For more information, please contact Lena Schneider.
 
