AI legislation at least as impactful as EU privacy rules

Companies desperately need to prepare better, as shown by a graduation research project for the Master’s programme in Applied AI

7 Sep 2023 13:18 | Faculty of Digital Media and Creative Industries

Jacintha Walters recently graduated from the AUAS Master’s programme in Applied Artificial Intelligence with a final project on a topical subject: the European AI Act, which will come into effect at the end of 2023. She investigated preparations for these regulations among a number of large and small companies. These preparations still fall short in a number of ways, she notes in her thesis and in a paper to be published soon. “Better preparation is desperately needed, because this is going to have just as much impact as the GDPR.”

Walters’ extensive analysis of 15 large and small companies, and a supplementary literature review, show that in many organisations preparations for the AI Act are still far from ideal. Training on the risks of bias in data collection and AI models is still often lacking. Also, while the AI Act identifies specific requirements for technical documentation, there are often no guidelines yet for such documentation.


Walters notes that it is not surprising that companies are not yet adequately prepared, as much is still unclear regarding the European regulations, which are worded in rather general terms. “Companies have been working on this for two years, because the basis for the AI Act was already established in 2021. The Act is very specific on certain matters, particularly when it comes to very high-risk AI, such as models that determine what benefits someone is entitled to. But the Act is not very clear on other matters, such as the high risk involved in incorporating AI into your ‘critical infrastructure’. What the Act will mean in practice on those matters is still ambiguous.”

Walters was supervised by researcher Diptish Dey (CMI HvA), who also co-authored the paper.


Nevertheless, preparation is essential, because the regulations are going to have far-reaching consequences, Walters believes. “This is going to have just as much impact on companies as the GDPR, if not more. Most companies are already using AI in one form or another, but sometimes they don’t even know where or how it is used. If you run your own webshop, you might not even realise that AI is used in a recommendation system, for example.”


What also makes the impact of this European legislation so great is that, unlike privacy measures, compliance cannot simply be added on later. “What’s tricky about the AI Act is that you have to carry out risk assessments on rights and discrimination, and you have to be able to demonstrate that you’ve thought it all through properly. You also have to use certain methodologies, and companies do not know which ones yet. The GDPR is much more concrete because it deals mostly with anonymisation or pseudonymisation of personal data. So this is going to be a much greater challenge for organisations.”


For her research, Walters highlighted relevant sections of the AI Act and translated them into 90 survey questions. A variety of organisations participated, after which Walters assigned scores for how often a company or organisation performs a particular required action (rarely, sometimes, regularly or always). “The average score was 58 percent, so there is definitely room for improvement.”

Scores were particularly low for technical documentation. “Companies do not yet have protocols for writing technical documentation, whereas the AI Act calls for very specific things in this area,” according to Jacintha.

Knowledge of the risks of ‘internal AI models and data’ is also still lacking. “Most of the companies surveyed stated that they had not encountered any risks in the use of their datasets in the last two years. This is unlikely because all datasets and models involve risks. Therefore, more training is needed in this area.” The companies did, however, score well on keeping their AI models up to date.


“What really surprised me during this Master’s programme is that there are so many risks involved in the use of AI,” Jacintha says. “Bias is a major issue; there are so many groups to consider. But the more variations you add, the less effective a model becomes. Therefore, as an organisation, you have to keep asking yourself whether it’s better to automate things, or whether the risks involved are so great that it’s better to refrain from doing so. There will be many more dilemmas about this in the near future, including for municipalities in the Netherlands, which are increasingly using AI.”


Jacintha graduated from the Master’s programme in Applied AI through the Centre for Market Insights, under the supervision of researcher Diptish Dey and professor Jesse Weltevreden. Jacintha collaborated with AUAS researchers Diptish Dey, Debarati Bhaumik and Sophie Horsman on the scientific paper ‘Complying with the EU AI Act’, which is set to be published soon.

As a result of her insights, Jacintha decided to start her own consultancy, Babelfish, to assist organisations in the responsible use of AI models and to support them in preparing for AI regulations.

Companies and government authorities in the Netherlands are currently preparing for the EU Artificial Intelligence Act, which is to come into effect at the end of 2023. This European regulation is intended to curb high-risk use of AI by restricting and even banning certain applications, for example certain forms of facial recognition in public spaces. European law distinguishes between high-risk AI and lower-risk applications of artificial intelligence.