Fair AI Insights from Experts with Mr Bernd Durrwachter, Principal, PsioLogic LLC

Q. Tell us about your experiences in the technology space, and how they led to your current venture in the field of responsible data and algorithms.

A. My motivation for starting PsioLogic LLC, my own consulting company focused on algorithmic accountability and AI governance, came from my 30 years of experience in technology projects. I have worked across the life cycle of technology, software, and data use, and have witnessed time and again the same patterns of risk and lack of responsibility, if not competence. So I set out to translate recent efforts on data governance and information ethics into training workshops, and to advocate for more algorithmic transparency, which is technically doable despite claims from the AI industry to the contrary.

I am partnering with the Center for Mind and Culture (CMAC) on the digital ethics training side, developing a curriculum for practitioners that translates the rather theoretical, aspirational, conceptual guidelines and existing niche codes of ethics into actionable recommendations for practitioners in data analytics, computational modelling, and algorithmic decision support & automation. My partnership with CMAC has taught me to integrate humanities aspects with engineering & science: a cross-disciplinary, holistic approach in which people of different backgrounds work together to see the bigger picture.

Q. Tell us about some examples of companies or organisations you have seen that have already demonstrated responsible use of data. Why were they able to do so?

A. Without speaking to concrete client experiences, for reasons of confidentiality, let’s say that I have observed a lot of data maturity in the healthcare and finance industries. This is largely due to early experiences with data mishaps.

The biggest remaining challenge is information security: not so much data governance in the sense of responsible use and privacy, but investing more effort and competency into safeguarding high-value data assets from bad actors (for example, identity thieves).

Q. What is the scope of the work in your current consulting firm? In other words, what sort of needs are you seeing now in the marketplace, be it now or in the future?

A. I think the societal attitude towards companies that use data & technology for decisions affecting broad swaths of society is going to shift from a freewheeling, entrepreneurial “move fast and break things” mindset toward a more cautious and responsible one. Especially in the United States, with a new political administration coming in on a platform of more human empathy, I see regulatory efforts emerging. And since the EU has been a thought leader in this space, with its culturally more embedded ethics (due to earlier unfortunate historic experiences to the contrary) and its cross-border trade & cultural exchange, I see much more scope for cross-pollination internationally.

Q. What are your views on the GDPR and its scope outside the EU?

A. The GDPR is just an intermediate milestone in a long tradition of personal privacy in Europe, spearheaded by Germany since the 1970s. To me as a German, these concepts are nothing new; I literally grew up with them. As it stands, the GDPR is widely seen as more cumbersome than effective in its intent. But the European Commission is already working on a successor, more inclusive of automated decision processes driven by machine learning algorithms and implementations of artificial intelligence. People should get engaged in the process; the Commission is listening to industry and citizens’ concerns quite openly.

Q. Tell us about your upcoming projects?

A. As I mentioned, I am currently working on an ethics training curriculum for digital decisions, to train practitioners in data analytics, computational modelling, and algorithmic decision automation (such as machine learning, AI, and autonomous systems) to embed ethical choices in their practices. Ideally this will become part of existing technology education curricula.

I am also refining my tools & methods for documenting, explaining, and training on existing algorithmic decision processes in corporations that suffer from high attrition, or where a change in leadership must deal with legacy investments their businesses still depend on, and which now face having to become more transparent about the functioning of their technology- and data-based decision processes. This kind of work is often motivated by infosec or compliance/audit mandates.

Q. In your view, what are the top 3-4 trends to watch out for in the space of responsible data and responsible technology? Going forward, which organisations would be the trend-setters in this context?

A. I don’t forecast the future, but I can speak to where I’d like to see things go, aspirationally…

We need to move away from the focus on technology as the solution to human problems. The real solution lies in human attitudes toward one another.

Ethics is about consideration for others, concern for how our behavior impacts them. Understanding other people’s perspectives, positions, and struggles is a humanities matter. I do hope that we balance the excessive focus in recent years on STEM education with updated humanities training, to better understand and appreciate the human condition.