
Algorithms pave the way for objectivity

Reading time: 6 minutes

In recent years, there has been a lot of talk about technologies that allegedly reinforce inequality, as evidenced by books on algorithms like Weapons of Math Destruction. Discussions about facial recognition software that does not recognize non-white people and major issues such as the Dutch benefits scandal have taken center stage in this debate - and rightly so. The reputation of ‘the algorithm’, it seems, has taken a hit. Undeservedly so, if you were to ask Gerrit Timmer: “Algorithms pave the way for objectivity, giving us the opportunity to move in the right direction.”

An interview with Gerrit Timmer, Co-founder and Chief Science Officer at ORTEC

Date: 25 Oct 2022

Because of his expertise in this field, ORTEC’s Gerrit Timmer has been asked by the Dutch Data Protection Authority to share his thoughts on a potential ‘Algorithm Watchdog’. But would a watchdog be useful, and how would it operate? Before answering those questions, Timmer has some general remarks on algorithms to share.

Automated decision-making

“Algorithms are not good or bad per se”, Timmer begins, “they’re simply a tool. Essentially, an algorithm is nothing more than a few lines of code. The problems start when algorithms are fed incorrect data and are used improperly, so making ‘algorithms’ in general into a target is an exercise in futility. Rather, we should focus on ‘automated decision-making’, in which algorithms are only one of many moving parts, and generally not the most worrisome part at that. Automated decision-making requires a rather careful approach: you have to put a lot of thought into the design, what it will be used for and what data to use.”

According to Timmer, it is particularly important to test whether automated decision-making will not have undesirable consequences. “And there’s another big caveat when it comes to machine learning algorithms”, he adds. “I do not fundamentally oppose machine learning algorithms, but I do oppose the real-life use of algorithms that have never been tested. After the test, algorithms can keep learning all they want, but all algorithms used in practice should be tested first. In other words, if machine learning produces an improved version of the algorithm, the new version will have to undergo extensive testing before replacing the older, tested version.”
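Timmer's rule (test first, then promote) maps naturally onto a simple deployment gate. The sketch below is a minimal illustration under assumed names, not ORTEC code: the Model class, the acceptance suite and the function names are all invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Sequence, Tuple

@dataclass
class Model:
    """Stand-in for any trained model; `predict` is its scoring function."""
    version: str
    predict: Callable[[Sequence[float]], int]

def passes_acceptance_tests(model: Model,
                            cases: Iterable[Tuple[Sequence[float], int]]) -> bool:
    """Run the model against a fixed, held-out acceptance suite."""
    return all(model.predict(features) == expected for features, expected in cases)

def promote_if_tested(live: Model, candidate: Model, cases) -> Model:
    """A retrained candidate replaces the live model only after it passes
    the same acceptance tests; otherwise the tested incumbent stays."""
    return candidate if passes_acceptance_tests(candidate, cases) else live
```

The retrained model can keep learning offline; only versions that clear the suite ever reach production, which is exactly the distinction Timmer draws.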


Timmer stresses that it’s important to remember the user. “Cars have to pass countless tests before they leave the factory, but you still need a license to drive one. The driver also has to meet certain requirements, and rightly so. Even if the car is safe, an inexpert driver will get into accidents. Similarly, people who play a role in applying automated decision-making should also meet certain requirements.” As Timmer sees it, any future regulator could require parties to draw up policies and determine what requirements such policies should meet. “In a nutshell, there should be requirements for the design and use process, as well as for anyone interacting with the algorithm.”

Shedding light on bias

For decades, people have put a tremendous amount of energy into preventing bugs in systems and algorithms in order to avoid objective errors. The prevailing standard now is that algorithms should be entirely beyond reproach, Timmer says. Bias, however, always has its roots in the data itself. “Medical researchers have known this for some time, but it’s a relatively novel idea when it comes to anti-fraud applications. You first have to figure out how to detect bias before you can check whether you have actually managed to avoid it. That means picking through the results. The government has often been criticized on this front, but it’s not that they do not want to do better, it’s that they cannot do better. At the same time, it is important to realize how tremendously difficult things can be. Authorities like the Employee Insurance Agency (UWV) have a duty to review every application, which is a sheer impossibility without the aid of algorithms. Undoubtedly, things go wrong every now and again, but the alternative - having people do all the work - is simply not feasible. And even if it were possible, the outcome might be worse. People are biased, which invariably results in injustices.”

"Bias always has its roots in the data itself. You have to figure out how to detect bias first."

The Dutch Benefits Scandal - which vilified many innocent parents in the Netherlands - was an example of institutional discrimination. “It was a tragedy, of course, but the same biases might have been in play well before algorithms were introduced. The advantage of algorithms is that they shed light on biases, at which point they start standing out and you can go about solving them. You can never quite tell why humans do what they do, but algorithms are more objective and transparent, making it easier to identify discrimination in the first place. You have to be aware of biases before you can eliminate them.”
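One concrete way to pick through results and make bias visible, in the spirit of what Timmer describes, is to audit decision outcomes per group. A minimal sketch with invented data; the four-fifths threshold is a common fairness heuristic, not something taken from the interview:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs from past cases."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def flag_disparities(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    best-treated group's rate (the common four-fifths heuristic)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if rate < threshold * best}

# Invented example: group "B" is approved far less often than group "A".
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(flag_disparities(decisions))  # {'B': 0.3333333333333333}
```

This is precisely the kind of audit that is hard to run on opaque human judgment but becomes straightforward once decisions are automated and logged.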

Learning from mistakes

With the Benefits Scandal, the criticism of the algorithm actually diverted attention from the underlying cause, Timmer continues. “People are now casting stones at an algorithm that has ended up with biases that reflect past practices. Admittedly, algorithms should not be biased, but why would they be any worse than a human being with exactly the same biases?”

Timmer is sold on the benefits of algorithms but acknowledges that they are not perfect yet. In some cases, it comes down to choosing between two evils. “You have to pick the lesser evil, realizing that the solution you end up with will never be flawless. You might want it to be, but demanding perfection is pointless. In recent years, we’ve taken the notion that risks should no longer exist too far. Nowadays, the norm seems to be that nothing can go wrong, ever. However, you can never eliminate risk entirely. New technologies are being banned because they’re not flawless, but what if the old method was far worse?”

This fear of making a mistake is also hampering innovation and progress, Timmer believes. “Algorithms can contribute to high-quality decisions. It’s understandable that we’re worried about discriminatory practices: no one wants individuals to fall victim to the whims of an algorithm, but there will be discrimination even without algorithms. Human beings are not perfect. We now have the computing power to determine and test how decisions are made. Making one mistake with an algorithm is no big deal. It’s a problem when things keep going wrong: you have to learn. The policy should be: try your best to avoid mistakes and fix any mistakes you find.”

"Making one mistake with an algorithm is no big deal. It’s a problem when things keep going wrong: you have to learn."

Cycle of continuous improvement

For Timmer, a good place to start would be to look at automated systems as if they were people. “Large companies and institutions have sophisticated policies for dealing with people: they set requirements for job applicants, for candidate selection, for internal training and for decision-making in teams, and they document which decisions need what approval. They’re set up to ensure that no single individual can make a decision that will harm the organization. This approach is relatively rare when it comes to technological systems. Often, none of these safeguards and procedures exist yet, even though technology is supposed to improve our decision-making processes. Designers, users and administrators should look into what kinds of problems emerge when people are left to decide, before translating their findings into an automated system.” This requires a structured approach from the very beginning of the chain (algorithm design) to its end (algorithm use), as well as continuous evaluation. “It’s no different with people: you’re ultimately looking for a cycle of continuous improvement. We’re more forgiving of imperfect decisions made by people and simply do everything we can to gradually improve their decision-making capabilities. We should adopt the same approach for automated decision-making.”

Policy oversight

According to Timmer, regulators should not examine individual algorithms. “There are so many algorithms out there already that checking each one would be practically impossible. It is primarily up to the producers and users of automated decision-making to ensure that their algorithms are developed and used responsibly. This calls for policy, just like the policy these parties have in place to safeguard the quality of their employees and the decisions they make, including things like a hiring policy, decision-making procedures such as the four-eyes principle, and ex-post evaluations.”

"With people, we do everything we can to gradually improve their decision-making capabilities. We should adopt the same approach for automated decision-making."

“Similar care is needed when developing and applying automated decision-making”, Timmer says. “Policy and procedures should preferably be based on sector-specific guidelines, for which regulators can set certain requirements. Companies could be required to mention their policy and enforcement practices in their annual report, for instance. At the same time, algorithm developers and users could use protocols developed for human decision-making, which ultimately serve to maximize the chance of a high-quality decision.”
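The four-eyes principle Timmer mentions carries over to automated decisions as well. Here is a hypothetical sketch, with the impact threshold and the Reviewer interface invented purely for illustration: a high-impact automated rejection never takes effect on the model's verdict alone.

```python
from typing import Optional

HIGH_IMPACT = 10_000  # assumed impact threshold, e.g. an amount in euros

class Reviewer:
    """Stand-in for an independent human second pair of eyes."""
    def confirms(self, case: dict) -> bool:
        raise NotImplementedError  # a real reviewer decides here

def final_decision(case: dict, model_rejects: bool,
                   reviewer: Optional[Reviewer] = None) -> str:
    """Apply the four-eyes rule: a high-impact automated rejection is
    queued for human review instead of taking effect immediately."""
    if model_rejects and case["amount"] >= HIGH_IMPACT:
        if reviewer is None:
            return "pending_review"  # held for a second pair of eyes
        return "rejected" if reviewer.confirms(case) else "approved"
    return "rejected" if model_rejects else "approved"
```

The design choice mirrors the human procedure Timmer describes: routine cases flow through automatically, while consequential ones require a documented second approval.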

In developing automated decision-making, involving qualified people is absolutely crucial. “They should have ample knowledge of algorithms and IT at the very least, as well as a healthy amount of relevant domain or business knowledge.”

About Gerrit Timmer

With more than 40 years of experience in applied mathematics within the top echelons of the business world under his belt, Prof. Gerrit Timmer has what it takes to separate fact from fiction when it comes to artificial intelligence, algorithmics and data science. He is widely recognized as an authority on applied mathematics and Operations Research and is one of the country's leading experts in applying algorithmics. To demonstrate to the world the power and potential of applied mathematics, he teamed up with several of his fellow students to found ORTEC in 1981, which has since become a global powerhouse of data-driven decision-making. He is currently ORTEC’s Chief Science Officer (CSO).