Blog: Responsible Artificial Intelligence in Practice
Written by Pascal Wiggers for Amsterdam Data Science
27 Nov 2020 15:00 | HvA Expertisecentrum Applied Artificial Intelligence

AI has created a wealth of opportunities for innovation across many domains. However, along with these opportunities come unexpected and sometimes unwanted consequences. For example, algorithms can discriminate against or lead to unfair treatment of groups of people. This calls for a responsible approach to AI.
Understanding AI in context
Responsible AI means different things to different people. For us, responsible AI starts with the realization that AI systems impact people’s lives in both expected and unexpected ways. This is true of all technology, but what makes AI different is that a system can learn the rules that govern its behaviour and that this behaviour may change over time. In addition, many AI systems have a certain degree of agency: they can reach conclusions or take actions without human intervention.
To better understand this impact, one needs to study an AI system in context and through experiment. In addition to an understanding of the technology, this requires an understanding of the application domain and the involvement of the (future) users of the technology.
AI is not neutral
There has been much attention on bias, unfairness and discrimination in AI systems; a recent example is the problem with face recognition on Twitter and Zoom. What we see here is that data mirrors culture, including prejudices, conscious and unconscious biases and power structures, and that AI systems pick up these cultural biases. Bias, then, is a fact of life, not just an artifact of some data set.
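To make this concrete, here is a minimal sketch of how a model can inherit bias from its training labels even when the sensitive attribute is withheld. The scenario, data and feature names are entirely hypothetical and invented for illustration; the "demographic parity gap" printed at the end is one common, simple fairness metric.

```python
# Hypothetical sketch: a model reproduces a bias encoded in historical labels.
# All data below is synthetic and invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute: group membership (0 or 1), NOT given to the model.
group = rng.integers(0, 2, size=n)
# A skill score with the same distribution in both groups.
skill = rng.normal(size=n)
# Historical labels encode a human prejudice: at equal skill, group 1 was
# hired less often. The bias lives in the labels, not in the skill feature.
hired = (skill + rng.normal(scale=0.5, size=n) > 0.8 * group).astype(int)

# A proxy feature correlated with group (think: postcode) leaks the bias
# back into the model even though `group` itself is excluded.
proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Demographic parity gap: difference in positive-prediction rates per group.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"selection rate group 0: {rate_0:.2f}")
print(f"selection rate group 1: {rate_1:.2f}")
print(f"demographic parity gap:  {rate_0 - rate_1:.2f}")
```

Running this prints a clearly lower selection rate for group 1: the model has learned the prejudice from the labels via the proxy feature, which is exactly the pattern described above.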