There seems to be a consensus across many of the world's markets that legislative action must be taken to tame the power of technology. However, there is no unified vision yet of how to design regulatory interventions. According to the World Economic Forum, the immediate challenge facing the field of AI governance is to use AI responsibly.
Despite the many challenges and ambiguities, there are signs that things are starting to move in the right direction to ensure that technology will do more good than harm.
In 2021, the European Commission released its proposal for the Artificial Intelligence Act, a comprehensive regulatory framework that classifies AI applications into four risk categories (unacceptable, high, limited, and minimal) and introduces specific requirements for each.
But there are questions about how effective the act can be and what its potential pitfalls are. For example, the EU classifies deepfakes as limited risk and simply requires notification of their use. The act also does not specify which use cases fall under which risk level.
“That inherent misconception permeates the regulatory framework and risks leaving outside its realm some essential features and consequences of the actual situation, which needs AI to be understood as a cycle and not as an end-product,” says Fernando Barrio, Senior Lecturer in Business Law at Queen Mary University of London.
“By doing that, it imposes an obligation on designer-developers for situations they can neither control nor anticipate, and it lets off the hook deployers that have the capacity, via use and data supply, to affect the outcome of the AI deployment.”
Though we’ve seen progress in protecting people’s data and privacy, the devil is in the details: the strength of protections, the resources for and commitment to enforcement, the incentives for compliance, and so on, according to Jolynn Dellinger, Senior Lecturing Fellow in Privacy Law & Policy at Duke Law School.
However, Dellinger emphasises that even though problems remain, well-designed, comprehensive legislation, responsibly enforced by the relevant agencies or authorities, is still going to be more effective at protecting consumers than the market has proven to be.
Consequence scanning, a practice in which development teams map out a product’s potential consequences before and during development, can help resolve some of the ambiguities in regulating AI. Consequences are classified as either intended or unintended, with the important caveat that intended consequences aren’t always positive and unintended consequences aren’t always negative. Its usefulness in assessing ethics in tech development varies case by case and depends on the inherent characteristics and definition of the specific technology.
“Some advanced AI systems are designed to evolve in such a way as to make some of the consequences unpredictable,” Barrio remarks.
“Some basic systems can be assessed in advance for their consequences, and seem harmless, but the actual deployment might result in harmful effects. Therefore, the treatment of AI systems as goods exposes the unsuitability of consequence scanning for AI systems.”
Barrio warns that the issue is more profound in relation to tech developments, as it takes us back to the 1990s, when technology development was the main aim of technology policy and was pursued regardless of its actual effects. He therefore advocates both ex-ante and ex-post consequence scanning, based on principles that have been democratically agreed upon by all stakeholders.
As regulations develop, companies too are starting to grasp the idea of responsible technology. For example, smaller companies like Signal (a messaging app), ProtonMail (an email service), and DuckDuckGo (a search engine) are reported to have made explicit commitments to promoting privacy. Tech giant Microsoft has indicated that it will provide GDPR-level privacy protections to all people regardless of where they live.
Pointing to the growing number of companies addressing responsible technology, Barrio says it is still not clear whether this stems from an understanding of the responsibility that comes with their immense power or is simply a reaction to investor pressure to pursue corporate responsibility policies.
This willingness to offer data protection marks the beginning of a big change in mindset. Now that we have proof that technology is not too complex to be properly regulated, it’s time to bring end-users into regulatory frameworks. Ethics and good governance ensure that technology is put to its best use: creating a fair, inclusive, and truly democratic society.
This story is part of our Tech ethics series.