The Human Factor in AI Safety

Morteza Saberi

AI-based systems are widely used across industries to support decisions ranging from operational to tactical and strategic, in both low- and high-stakes contexts. Gradually, the weaknesses and issues of these systems have been publicly reported, including ethical issues, biased decisions, unsafe outcomes, and unfair decisions, to name a few. Research has tended to optimize AI; less has focused on its risks and unexpected negative consequences. Acknowledging these serious potential risks and the scarcity of research, I focus on unsafe outcomes of AI. Specifically, I explore this issue through a human-AI interaction lens during AI deployment. It will be discussed how the interaction of individuals with AI during its deployment raises new concerns, which demand a solid and holistic mitigation plan. It will be argued that the safety of AI algorithms alone is not enough to make their operation safe: the end-users of AI-based systems, and their decision-making archetypes during collaboration with these systems, should also be considered in AI risk management. Using real-world scenarios, it will be highlighted that users' decision-making archetypes should be treated as a design principle in AI-based systems.
