Merel Noorman has started as associate professor in Responsible AI

The world of artificial intelligence is developing rapidly, raising important questions. How can we ensure that society retains control over technological innovation? Merel Noorman, recently appointed associate professor in Responsible AI at the Amsterdam University of Applied Sciences (AUAS), is researching how we can design and use technology responsibly. With an eye for ethics, democratic values, and practical application, she wants to work with students, researchers, and partners to maintain control over the role of AI in our daily lives.
Grip on technological innovation
In her new position, Merel will build on the innovative research conducted by the Responsible AI lab and the Responsible IT research group. “For me, responsible AI means that we develop and use technology based on a good understanding of what we can and cannot do with it, or what we should be able to do with it. So there is also an ethical dimension to this. How do we make AI accessible to everyone and ensure that it makes a positive contribution to people, society, and the planet?”
Merel emphasizes that society must maintain control over technological innovation. “How do we maintain democratic control over these increasingly complex systems? This requires interdisciplinary, innovative, and practice-oriented research in collaboration with various parties in the field, such as municipalities, companies, social organizations, and citizens. Existing partnerships with other colleges, universities, and practice partners within the Responsible AI lab, such as the RAAIT program and the AI, Media, and Democracy lab, offer plenty of opportunities to further expand this research. The ultimate goal is to offer perspectives for action, especially to our students, with regard to responsible AI in a rapidly changing technological world.”
Social challenges
The rapid development of artificial intelligence raises numerous questions about the proper use and responsible application of this technology. Consider how generative AI, such as ChatGPT, is now influencing how people do their work, for instance in education. The increasing use of ChatGPT by students requires different ways of testing and of developing lessons. Existing roles and responsibilities are being called into question and need to be redefined.
The use of AI also carries risks. For example, it can lead to unequal treatment of different groups, and AI systems are often difficult to understand and explain. Moreover, these systems introduce a certain degree of unpredictability. More and more companies, organizations, and individuals are aware of these challenges but are unsure how to use AI responsibly. By conducting research in collaboration with various parties in the field, the Responsible AI lab offers practical solutions in the form of methodologies, tools, workshops, and education. “I want to build on this research and put it into practice.”
Dependent on technology companies
A related problem that Merel wants to tackle is the increasing dependence on technology companies when it comes to public interests. “More and more decisions and tasks are being left to these smart but increasingly complex systems. Think, for example, of decisions about optimizing traffic flows in cities or about what can and cannot be said in social discussions on social media. But such decisions are often value-laden, and people can have different ideas about what constitutes a good decision.
“The complexity of the systems and the opaque dependencies between the public and private organizations that manage them make it more difficult for society to maintain democratic control over decisions that affect everyone. Although AI can exacerbate these problems, it also offers opportunities to find solutions that benefit society and enable joint decision-making. Responsible embedding of AI in society therefore means AI that works for everyone and is in line with a democratic constitutional state.”
Unique research platform
Merel sees her appointment as associate professor as an opportunity to further develop her research agenda on Responsible AI in a more practice-oriented context and to share her knowledge and expertise with students at the AUAS. But her appointment also gives her, she says, a unique platform to critically examine digital technology through practice-oriented research and to develop tools that align this technology with social values. “Over the next four years, I want to further strengthen and expand the existing networks of the Responsible IT research group and the Responsible AI lab. In addition, I want to deepen and broaden knowledge of and experience with Responsible AI both within and outside this network, so that the focus is on both developing and building responsible AI systems and on the governance of these systems.”
Warm welcome
Pascal Wiggers, professor of Responsible IT, is delighted with Merel's appointment: “I am very pleased that Merel is joining the Responsible IT team as associate professor of Responsible AI. With her interdisciplinary background and knowledge of democratization and (ethical) decision-making around digital technology, she adds an important and necessary dimension to our research in this era of rapid AI development.”
For more information, see Merel's profile page on the AUAS website.