
The quandary of AI legislation

Read time: 6 minutes

The news lately shows, on the one hand, families devastated by the AI-based systems the Dutch Tax and Customs Administration used to identify fraudsters. On the other hand, it shows someone leaving the hospital visibly relieved after the MRI scanner’s AI-driven algorithm caught their cancer at a very early stage, making it possible to intervene in time and with minimal damage. Both are examples of the use of AI, and both would rarely or never happen again if stricter AI legislation were introduced. The Dutch tax authorities would then, fortunately, no longer be permitted to use profiling software, but the flipside is that it might become illegal to use medical data to further improve the MRI scanner’s algorithm. Crafting sound legislation therefore puts us in an impossible quandary: how do we throw out the negative sides of AI while hanging on to the positives? As Professor of Decision Sciences Marc Salomon puts it: “It is precisely this quandary that makes putting in place the right legislation so very tricky.”

An interview with Marc Salomon, professor at the University of Amsterdam.

Date: 15 Nov 2022

Legislation: logical principles

The principal legislative foundation for AI in the Netherlands and other EU Member States is the European Commission’s Regulatory Framework Proposal for Artificial Intelligence, better known as the AI Act. As Salomon points out, “The AI Act is still a proposal at this point and has not yet been enacted into law. The regulation is designed mainly to minimize the harm AI algorithms can cause in terms of safety and security, the economy, health, and various social issues, which is obviously a very commendable goal.”

The new legislation divides AI applications into several categories. Some applications will be banned outright, while others will be classified as either “high risk” or “low risk.” High-risk algorithms require, at the very least, that precautionary measures be implemented. This means, for example, that before an algorithm’s outcomes are communicated, they must be reviewed by a human being, who can intervene before anything goes wrong. Compare it to the pilot of a commercial aircraft, who can press the autopilot disconnect button (override switch) if they think the autopilot is malfunctioning.
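To make that review gate concrete, here is a minimal sketch of how such a human-in-the-loop check could be wired in. The risk tiers mirror the categories described above, but the class and function names are illustrative assumptions, not anything the AI Act prescribes:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright under the proposal
    HIGH = "high"              # allowed, but with precautionary measures
    LOW = "low"                # allowed, with lighter obligations

@dataclass
class Outcome:
    subject_id: str
    decision: str

def release_outcome(outcome: Outcome,
                    tier: RiskTier,
                    human_review: Callable[[Outcome], bool]) -> Optional[Outcome]:
    """Communicate a high-risk outcome only after a human has reviewed it.

    `human_review` stands in for a real review step (a queue, a dashboard,
    a four-eyes sign-off); returning False overrides the algorithm, much
    like a pilot pressing the autopilot disconnect button.
    """
    if tier is RiskTier.PROHIBITED:
        raise PermissionError("application falls in a banned category")
    if tier is RiskTier.HIGH and not human_review(outcome):
        return None  # overridden: nothing is communicated
    return outcome
```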


Legislation: three barriers to implementation

  • Salomon feels that certain terms used in the legislation, including ‘fairness’ and ‘algorithmic bias,’ are somewhat subjective. “There’s MIT’s ‘Moral Machine,’ for example. Visitors to this website are asked to program the software of a driverless car, giving a human perspective on moral decisions made by machine intelligence in various tricky situations. Suppose the brakes on your car fail and you have to swerve to avoid a pedestrian. Do you save the life of the driver or that of the pedestrian? A younger person’s life or an older person’s? The responses reveal that people have different ideas about these scenarios, depending in part on their personal situation, including factors such as age and cultural background. It demonstrates that there are no clear-cut answers to the question of what is ‘fair’ or ‘unfair.’ This is the first barrier.
  • A second barrier is translating the law into mathematics. How, for example, do you define the concept of bias in mathematical terms? There are many ways of going about this, but no one-size-fits-all solution. Yet you do need to parse ‘real-world’ language into math, because that is the only way you can prove that an algorithm is biased (one common formalization is sketched just after this list). It will take a lot more research before we find the right answers to these questions.
  • A third barrier, which is more practical in nature, is that checking whether an algorithm complies with the law takes a lot of work, and you need to go back and revisit it every so often. Software itself may be able to handle this down the line, either during the design of the application or after the fact. But first things first: we still have a ton of work to do before the new AI Act Proposal is enacted.”
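As an illustration of the second barrier, one of many possible mathematical definitions of bias is the demographic-parity difference: the gap in favourable-outcome rates between two groups. The sketch below is one such formalization; the example data and any threshold you might flag on are our assumptions, not anything the draft legislation specifies:

```python
def demographic_parity_difference(decisions, groups, group_a, group_b):
    """One possible formalization of 'bias': the difference in the rate
    of favourable decisions (1 = favourable) between two groups.

    decisions: list of 0/1 outcomes
    groups:    list of group labels, aligned with `decisions`
    """
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return rate(group_a) - rate(group_b)

# Illustrative use: a gap of +0.33 means group "a" receives favourable
# decisions 33 percentage points more often than group "b".
gap = demographic_parity_difference(
    decisions=[1, 0, 1, 1, 0, 0],
    groups=["a", "a", "a", "b", "b", "b"],
    group_a="a", group_b="b",
)
print(f"Demographic parity difference: {gap:+.2f}")
```

Equalized odds, calibration, and individual fairness are competing definitions, and results in the fairness literature show they cannot in general all be satisfied at once, which is precisely the no-one-size-fits-all problem Salomon describes.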

Responsible business

Do you like this article so far? Articles like this one are part of the third issue of our Data and AI in the Boardroom magazine. Get your copy now.

We've asked leading figures in different sectors how their organizations are developing into more sustainable businesses, what goals they have set for themselves, what challenges they face along the way, and how using data in innovative ways helps them make responsible decisions.

In this issue you’ll catch a glimpse of the approaches taken by leading figures at organizations such as Schiphol Airport, PostNL, and Eosta.

Download your magazine

What to do right now?

Irrespective of what form the new legislation eventually takes, Salomon believes organizations would do well to start thinking now about their own algorithms. At the very least, they should know where each algorithm can be found, what data it uses, who is responsible for it, and what exactly it does. Salomon: “I would focus on four items:

  1. I would start by thinking about a governance structure for algorithms in which the duties and responsibilities are clearly assigned, similar to the governance structures many organizations already have in place for data.
  2. Secondly, as an organization I would define my own ethical framework, setting out a number of ethical principles that algorithms must satisfy. For example, an insurance company might introduce a rule that premiums for people with disabilities must never be higher than for those without disabilities; that smokers should never pay lower premiums than non-smokers; or that insurance products must never generate excessive profits.” Salomon argues that it’s important not only to document these principles, but also to use clear language in communicating about them (the first sketch after this list shows how such rules can be written as executable checks): “The algorithms created in workplaces often don’t chime with the company’s values and principles at all.
  3. I believe the third step should be to make an inventory of all algorithms and classify them based on risk, starting with the high-risk algorithms, in an Algorithm Database, which is what the City of Amsterdam has done. The City has made this database publicly available, but organizations can of course also decide to make their databases accessible only to people within their organization, or to block certain parts from external access.
  4. The fourth step is to think about how you’re going to monitor the algorithm at regular intervals and how you will manage the reports and communications.” Although experience in each of these areas is still somewhat limited, experts all over the world are working hard on solutions, which is why Salomon expects major progress in the years ahead. That progress is also vital to public acceptance of AI. The second sketch below illustrates a simple inventory with a periodic-review check, covering the third and fourth items.
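Returning to Salomon’s second item: an ethical framework only has teeth if its principles can be checked mechanically against what an algorithm actually produces. Below is a minimal sketch using the insurance examples above; the `Quote` fields and the comparison logic are illustrative assumptions, not an established compliance method:

```python
from dataclasses import dataclass

@dataclass
class Quote:
    premium: float
    disabled: bool
    smoker: bool

def violated_principles(quote: Quote, reference: Quote) -> list[str]:
    """Compare a quote against a reference quote that differs only in the
    attribute under test, and return the ethical principles it breaks.
    The two rules come from the insurance examples; the encoding is ours.
    """
    violations = []
    # Premiums for people with disabilities must never be higher.
    if quote.disabled and not reference.disabled and quote.premium > reference.premium:
        violations.append("higher premium for a person with a disability")
    # Smokers should never pay lower premiums than non-smokers.
    if quote.smoker and not reference.smoker and quote.premium < reference.premium:
        violations.append("lower premium for a smoker")
    return violations

# Illustrative use:
print(violated_principles(Quote(120.0, disabled=True, smoker=False),
                          Quote(100.0, disabled=False, smoker=False)))
# -> ['higher premium for a person with a disability']
```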
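And for the third and fourth items, a sketch of a simple internal algorithm inventory with risk classification and a periodic-review flag. The fields are loosely modelled on what public registers such as Amsterdam’s record; the review cadences are assumptions rather than legal requirements:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class Risk(Enum):
    HIGH = "high"
    LOW = "low"

@dataclass
class AlgorithmRecord:
    name: str
    owner: str               # who is responsible
    data_sources: list[str]  # what data it uses
    purpose: str             # what it does, in plain language
    risk: Risk
    last_review: date
    public: bool = False     # whether the entry is externally visible

# Illustrative review cadences; the law does not prescribe these numbers.
REVIEW_INTERVAL = {Risk.HIGH: timedelta(days=90),
                   Risk.LOW: timedelta(days=365)}

def due_for_review(record: AlgorithmRecord, today: date) -> bool:
    """Flag algorithms whose periodic check (the fourth step) is overdue."""
    return today - record.last_review >= REVIEW_INTERVAL[record.risk]
```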

About Marc Salomon

Professor Marc Salomon holds several positions, including Professor of Decision Sciences at the University of Amsterdam and Chairman of the Amsterdam Business School. Prior to joining the University of Amsterdam, Salomon was COO at law and notary firm Stibbe (2004-2013), COO and Director of Research at strategy consultancy McKinsey & Company (1998-2004), and Director of the Center for Applied Mathematics at Rabobank (1996-1998). From 1996 to 2005, he also served as Professor of Operations Research at Tilburg University. Salomon studied Econometrics at VU University Amsterdam and received his PhD in Business Administration/Operations Research from Erasmus University Rotterdam.