The principal legislative foundation for AI in the Netherlands and the other EU Member States is the European Commission’s proposed Regulation on Artificial Intelligence, better known as the AI Act. As Salomon points out: “The AI Act is still a proposal at this point and has not yet become law. The regulation is designed mainly to minimize the harm AI algorithms can cause in terms of safety and security, the economy, health, and various social issues, which is obviously a very commendable goal.”
The new legislation divides AI applications into several categories. Some applications will be banned outright, while others will be classified as either “high risk” or “low risk.” High-risk algorithms require, at the very least, that precautionary measures be put in place. For example, before the outcomes of an algorithm are communicated, they must be reviewed by a human being who can intervene before anything goes wrong. Compare it to the pilot of a commercial aircraft, who can hit the autopilot disconnect switch if they suspect the autopilot is malfunctioning.
Irrespective of what form the new legislation eventually takes, Salomon believes organizations would do well to start thinking about their own algorithms now. At the very least, they should know where to find each algorithm, what data it uses, who is responsible for it, and what exactly it does. Salomon: “I would focus on four items:
1. Where can the algorithm be found?
2. What data does it use?
3. Who is responsible for it?
4. What exactly does the algorithm do?”
Professor Marc Salomon holds several positions, including Professor of Decision Sciences at the University of Amsterdam and Chairman of the Amsterdam Business School. Prior to joining the University of Amsterdam, Salomon was COO at the law and notary firm Stibbe (2004-2013), COO and Director of Research at the strategy consultancy McKinsey & Company (1998-2004), and Director of the Center for Applied Mathematics at Rabobank (1996-1998). From 1996 to 2005, he also served as Professor of Operations Research at Tilburg University. Salomon studied Econometrics at VU University Amsterdam and received his PhD in Business Administration/Operational Research from Erasmus University Rotterdam.
The news lately shows, on the one hand, families devastated and financially ruined by the AI-based systems the Dutch Tax and Customs Administration used to identify supposed fraudsters. On the other hand, it shows someone leaving the hospital visibly relieved after the MRI scanner’s AI-driven algorithm caught their cancer at a very early stage, making it possible to intervene in time and with minimal damage. Both are examples of the use of AI, and both would happen rarely, if at all, if stricter AI legislation were introduced. The Dutch tax authorities would then, fortunately, no longer be permitted to use profiling software, but the flip side is that it might also become illegal to use medical data to further improve the MRI scanner’s algorithm. Crafting sound legislation therefore confronts us with a difficult quandary: how do we eliminate the negative sides of AI while hanging on to the positives? As Professor of Decision Sciences Marc Salomon puts it: “It is precisely this quandary that makes putting in place the right legislation so very tricky.”
An interview with Marc Salomon, professor at the University of Amsterdam.