Deciphering a Responsible Approach to AI

Many doomsday prophecies have been made about how modern technologies like artificial intelligence (AI) will render the human race redundant, or how they could be misused for malicious purposes. However, if we take a step back, we will realise that at this point it is not the technology itself that poses the risk.

There is a crucial need to put regulatory and ethical guidelines in place for AI so that it operates within defined boundaries. That, in essence, is the “responsible approach to AI”. Such boundaries should include both a framework defining what responsible AI means for a specific organisation or industry and guidelines on how to go about implementing it.

Institutionalised governance, ethical considerations, transparency in processes and continuous evaluation should be amongst the ingredients when designing, developing and implementing an AI-based algorithm. These elements can be institutionalised through set standards for uniformity, an ethics committee for review, periodic monitoring processes and ethics-based training for the technology developers. This includes periodic reviews, preferably weekly rather than at longer intervals, and processes for tackling biased results. A responsible approach to AI should also take into account the core values of the organisation, so that the technology is aligned with what the organisation itself stands for. It also implies taking a multi-stakeholder approach whilst implementing AI solutions, involving the organisation, society and customers.

All these elements would also help avoid initial exclusion errors or bias in the system, which may occur as the algorithm starts processing historical data sets that inherently include such human, decision-based biases. Apart from historical bias, another angle here is unconscious bias, which has existed for years owing to our societal norms and is thus embedded in us without us even being aware of it. We need to become aware of such biases so that we can counter them while developing the algorithmic solution; if left unchecked, those biases would weave themselves into the machine’s logic as well. In the worst case, it could even result in damaging negative publicity later on, a situation most companies would want to avoid.
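
To make this concrete, here is a minimal sketch of such a pre-development check. It is illustrative, not prescriptive: the groups, outcomes and warning threshold are hypothetical stand-ins, and a real review would cover many more attributes and metrics. The sketch simply measures the gap in favourable outcomes between groups in a historical data set, so that any inherited bias is surfaced before an algorithm is trained on it.

```python
from collections import defaultdict

# Hypothetical historical records: (protected_attribute, outcome).
# In practice these would come from the organisation's own data sets.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

def positive_rates(records):
    """Rate of favourable outcomes per group in the historical data."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(records)
gap = max(rates.values()) - min(rates.values())
print(f"Per-group favourable-outcome rates: {rates}")
print(f"Demographic parity gap: {gap:.2f}")

# A large gap signals a bias the algorithm would learn from the data;
# it should trigger an ethics-committee review before development proceeds.
if gap > 0.2:  # threshold is an assumption; each organisation sets its own
    print("Warning: historical data shows a significant outcome disparity.")
```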

And if an organisation is indeed serious about taking a responsible approach to AI, then these elements should be on the table right now, because new AI solutions are being developed as we speak.

Of course, the regulatory and ethical approach to making AI more responsible should not be restricted to the diagnostic, i.e. highlighting what is wrong in the system. Rather, it should move beyond the diagnostic to the prescriptive, i.e. finding specific, corrective actions that make the system more transparent, bias-free and accountable, and to the predictive, i.e. anticipating potential issues before the system is even built.
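
As one illustration of moving from the diagnostic to the prescriptive, the sketch below applies a standard reweighing technique (in the spirit of Kamiran and Calders) to the same hypothetical records: having diagnosed an outcome disparity, it assigns corrective training weights so that no group-outcome combination dominates the data the algorithm learns from. The records and groups are illustrative assumptions, not a recommendation for any particular system.

```python
from collections import Counter

# Same hypothetical records as before: (protected_attribute, outcome).
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

def reweighing(records):
    """Prescriptive step: training weights that equalise outcomes across groups.

    Each record receives weight (expected count / observed count) for its
    (group, outcome) pair, so over- and under-represented combinations are
    rebalanced before the model is trained.
    """
    n = len(records)
    group_counts = Counter(g for g, _ in records)
    outcome_counts = Counter(o for _, o in records)
    pair_counts = Counter(records)
    weights = {}
    for (group, outcome), observed in pair_counts.items():
        expected = group_counts[group] * outcome_counts[outcome] / n
        weights[(group, outcome)] = expected / observed
    return weights

weights = reweighing(records)
for pair, w in sorted(weights.items()):
    print(f"{pair}: training weight {w:.2f}")
```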

In the past we have seen countries or geographic regions emerge as bases for software services, hardware manufacturing or semiconductor-chip making. In the age of new technologies like AI, we are bound to see specific regions emerge as the harbingers of AI solutions. However, there is also an opportunity for regions to emerge as the harbingers of AI regulations and standards, not just of solutions.

And there is a strong possibility that companies adopting new technologies will prefer sourcing from regions that are front-runners in the regulatory ecosystem around a solution, not just in the solution itself.

All in all, the debate around responsible approaches to AI suggests that AI systems cannot be developed without some degree of human oversight and intervention; a purely machine-based approach is a recipe for disaster. The time to put a framework for a responsible approach to AI in place is now, so that all stakeholders benefit from the new technologies and solutions.