Reflections & Ideas

Fair, Accountable & Transparent AI: an interview with Professor Marc Salomon

Reading time: 6 minutes

“Algorithms and the increased demand for transparency are forcing companies to reconsider their ethical compass”

Prof. Marc Salomon is a professor at the University of Amsterdam, chairman of the Amsterdam Business School and co-founder of the Analytics Academy. In this interview, he shares his thoughts on Artificial Intelligence (AI) and recent developments in this field: “Algorithmic decision-making is forcing companies to reconsider their ethical compass. This is not a new development. However, society’s desire for transparent decision-making has, in recent years, become more pronounced. In my opinion, this is partly due to the popularity of AI, but it’s also simply a sign of the times. With society now demanding a greater degree of transparency than ever before, companies find themselves forced to show their hand. Does your algorithm serve only to optimize profits, or does it take social aspects, diversity and the environment into account as well? Transparent algorithms are becoming an increasingly clear indicator of your company’s true values.”

Interview with Marc Salomon, professor at the University of Amsterdam

Date: 22 July 2021

Expected Progress

Large sums of money are being put into AI in the Netherlands and around the world, through both public and private investment. The government invests mainly in fundamental research, while companies focus more on applied research. Salomon expects these investments to fuel tremendous progress, though the availability of the right, highly trained people will prove a bottleneck. These shortages can be addressed through high-quality education and through R&D into smart technology that allows people without a degree in mathematics or computer science to use AI as well.

Explainable AI and FAT (fairness, accountability & transparency) are major talking points nowadays, but who is responsible?

"It is incumbent on the government, businesses and the public to ensure that algorithmic decision-making stays verifiable and fair, and this system seems to be working. Being unable to explain to the public or the government what your algorithm does or which steps you have taken to protect people’s privacy is increasingly becoming a barrier. However, many companies out there still have quite some work to do if they are to meet this demand for transparency.”


Marc Salomon, professor at the University of Amsterdam

"Being unable to explain to the public or the government what your algorithm does or which steps you have taken to protect people’s privacy is increasingly becoming a barrier."

Ethical issues

Salomon explains the dilemmas companies may face in this area: “By what standard do you assess fairness? Any algorithm that makes decisions discriminates by definition. Making decisions means making choices, and making choices means discriminating: you do this, but not that. But does it discriminate more than we find ethically acceptable? That’s the question you need to ask. And you will have to accept that the concept of fairness can be dynamic. The things we found acceptable 10 years ago are by no means the things we find acceptable today. Moreover, ethical standards can change as soon as you cross a border. This is illustrated perfectly by MIT’s Moral Machine, which casts you in the role of a programmer coding software for an autonomous car. In this role, you get to decide which priorities the algorithm will set in an emergency. Do you spare the lives of young or old people, black or white people, disabled or healthy people? These preferences and priorities differ greatly across the world. In some countries, young people are prioritized over older people, while in others it is exactly the other way around. If you are a multinational company making software for autonomous cars, you have to keep in mind that ethical standards differ all over the world, even though you might not think so at first.”

Moral Machine

The Moral Machine is a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. It is a project of the Massachusetts Institute of Technology (MIT) and was active from January 2016 to July 2020. Visit the Moral Machine.

“Any algorithm that makes decisions discriminates by definition: making decisions is making choices and making choices is discrimination.”

A dilemma from MIT’s Moral Machine

How do companies go about creating fair, accountable & transparent AI?

“Realizing that these issues exist is the first step. Next, you have to constantly scrutinize and evaluate your ethical compass. Is maximizing profits your only goal, or do you tell people to only buy your product if they really need it, like Patagonia does? You have to think about your ethics carefully, because transparency is taking over. As I mentioned before, I think this is due in part to the enormous growth of AI, which is often described as a ‘black box’ that you feed data into without truly knowing what happens inside or what the output of the algorithm means. Governments and the general public are justifiably scared of such a technology, which is why transparency has now become a major issue. At the same time, this need for transparency is a sign of the times, with equality, health and the environment becoming increasingly important.”

Digital transition

The digital transformation is still in full swing at many organizations. Larger companies in particular appear to struggle to bring all their staff along through the required changes.

“Many companies do try to take their people on board throughout the digital transformation”, Salomon begins, “but some people simply lack the background or interest to familiarize themselves with such a technically complicated subject. Education, however, can play a key role in the success of a digital transformation. This includes academic education, of course, but also secondary vocational education, such as the Dutch MBO. MBO students may not go on to develop their own algorithms, but they would be perfectly capable of using applications that revolve around AI. I would therefore strongly recommend training MBO students in using AI algorithms. If we train and educate people properly, more people will be able to successfully make it through the digital transformation. Fortunately, things are already moving in the right direction. At the University of Amsterdam, we are seeing a vast increase in interest in business analytics, econometrics and artificial intelligence, with student numbers being considerably higher than they were, say, five years ago.”

“MBO students may not go on to develop their own algorithms, but they would be perfectly capable of using applications that revolve around AI. I would therefore strongly recommend training MBO students in using AI algorithms.”

How can education help businesses make the transition as smooth as possible?

“At the Analytics Academy, we spent a lot of time during the Big Data for Managers program teaching people about the issues at hand and about the people you need in order to navigate them. By doing so, we hoped to create ambassadors who would disseminate these ideas in their own organizations. Our teaching is strongly project-driven: feel free to bring your own project, and we'll brainstorm how you can leverage AI in your business. And it has worked. For example, we helped the Port Authority Rotterdam and DPG Media, among others, deploy AI within a few weeks. We are currently developing a training course on the ethical and privacy issues this article addresses, aimed at members of supervisory boards, upper management and anyone else interested in these issues. We will continue to develop innovative courses and programs, as meeting market demands is key for us. Besides, our lecturers are much more interested in doing something new than in rehashing old topics that they have already covered a thousand times.”

New 1-Day Masterclass: Effective Supervision on Artificial Intelligence

Would you like to gain more insight into the possibilities, trends and risks of data and AI? On October 5, The Analytics Academy is organizing a new 1-Day Masterclass, Effective Supervision on Artificial Intelligence, designed for supervisory board members, executives and managers. During the masterclass you will gain knowledge in the area of AI, with a special focus on the legal, ethical and commercial aspects. The masterclass enables you to make informed decisions about using AI in a way that creates value for your organization.

Join the Masterclass

Knowledge and Infrastructure

How has the pandemic impacted education over the past 18 months?

“We were able to experience what does and does not work with regard to online teaching. Remote learning works just fine for cramming the basics, but tutorials in which you try to build or create something together should definitely not be held online. I also learned that we should not position the Analytics Academy as an online teaching tool. Rather, we really excel at transferring high-quality knowledge to students face-to-face.”

Finally, a geopolitical point: in Europe, we build AI applications primarily on American and Chinese infrastructure. You think that's unwise?

“Unwise? It’s stupid! Microsoft is a U.S. company, which means the U.S. government can request that it disclose all the data it has. We simply cannot stay in control of our own privacy, and one only needs to think back to WWII to remember how badly this can go wrong. Besides, this development may, at some point, push all high-level jobs to China and the USA, causing us to lose a lot of talent. This should be avoided at all costs. We have to build our own European alternatives for the cloud and for AI algorithms. However, I am saddened to see that, despite projects like GAIA-X, Europe has nothing to offer. I am hopeful, though, that more people will realize that we are not headed in the right direction and that this realization will prompt them to build their own infrastructure.”

“Building AI applications on American and Chinese infrastructure may push all high-level jobs to China and the USA, causing Europe to lose a lot of talent.”

About the interviewee

Professor Marc Salomon holds several positions, including Professor of Decision Sciences at the University of Amsterdam and Chairman of the Amsterdam Business School. Prior to joining the University of Amsterdam, Salomon was COO at law and notary firm Stibbe (2004-2013), COO and Director of Research at strategy consultant McKinsey & Company (1998-2004) and Director of the Center for Applied Mathematics at Rabobank (1996-1998). From 1996 to 2005, he also served as Professor of Operations Research at Tilburg University. Salomon studied Econometrics at VU University Amsterdam and received his PhD in Business Administration/Operational Research at Erasmus University Rotterdam.

This interview was conducted by Robert Monné, Practice Lead Organizational Development, and interviewer Arjan Gras.

About The Analytics Academy 

The Analytics Academy is a collaborative venture between ORTEC and the University of Amsterdam. Together, they offer training courses and degree programs in business analytics for businesses, government agencies and non-profits, serving the needs of a wide range of positions and roles, from experienced ‘technical’ data scientists, data engineers and analysts to middle management, board members and front-line employees.

The Analytics Academy develops and implements a capacity-building strategy that delivers value to organizations and offers training courses that range from 1-day sessions to year-long programs.

Discover more
The Analytics Academy