Can algorithms promote equality and transparency?

Algorithms shape everything from traffic management to efforts to halt deforestation, but they can also amplify inherent bias. Now is the time to address the problem.

Algorithms have become part of our everyday lives. They’re used in everything from business forecasting to traffic management to personalised video streaming recommendations. But the growth in algorithmic decision-making has been accompanied by a rise in concerns about inherent bias skewing results.

At their most basic level, algorithms are simply sets of rules or instructions to follow in order to solve a problem, be it a cook-at-home recipe or a computer programme.
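To make that idea concrete, the short sketch below (written in Python, with an invented function name and example figures) shows what such a set of step-by-step instructions looks like once it becomes a computer programme:

```python
def average_daily_sales(daily_totals):
    """A simple algorithm: a fixed sequence of steps that turns raw figures into an answer."""
    # Step 1: add up every day's sales figure.
    total = sum(daily_totals)
    # Step 2: count how many days of data we have.
    days = len(daily_totals)
    # Step 3: divide the total by the number of days to get the average.
    return total / days

# Example: the average across one working week of (invented) sales figures.
print(average_daily_sales([120, 95, 143, 110, 132]))  # prints 120.0
```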

Their use gets more complex in relation to artificial intelligence (AI) and machine learning (ML), where computer algorithms are designed to allow automated systems to learn on their own through the huge amount of data now available from smartphones and other devices.

However, algorithms are only as smart as the humans who write their code, which can lead to algorithmic bias – systematic and repeatable errors in a computer system that produce unfair outcomes.

For example, a 2021 study from the University of Southern California demonstrated that Facebook’s online ad systems showed women ads for jobs that required fewer skills and qualifications compared to ads shown to men across a range of employment areas. The paper concludes that the Facebook platform “learns and perpetuates the existing differences in employee demographics.”

Insights ‘humans may never have thought of’

Alf Rehn, professor of innovation, design and management at the University of Southern Denmark, says there are two key benefits to using algorithms – new exploitation and new exploration.

The first refers to the fact that algorithms can simply process data so much faster than humans. So, for example, a company can use them to ensure all its accounting is kept continuously up to date, without any humans involved to slow down the process, thereby making systems more efficient.

The second benefit is somewhat more futuristic.

“Algorithms, when intelligently deployed, can explore relationships and inferences in data that humans may never have thought of, and through this find new paths, products and possibilities for the company,” Rehn explains. “Imagine a hyper-intelligent guide that can show you the real potential of your company, not just the few paths you’ve followed currently.”

In one recent example, University of Michigan researchers used an AI-based drug synthesis programme called Synthia to create ‘recipes’ for potential COVID-19 therapies and found new solutions for making 11 out of 12 compounds. Such novel approaches could help licensed drug suppliers quickly ramp up production of drugs to treat coronavirus.

Rehn points out that while algorithms are central to decision-making in some companies – in everything from private equity to traffic management – they are still not widespread. “Even in the most ‘algorithm-forward’ corporations, automated or AI/ML-supported decision-making is used in only a tiny fraction of cases,” he says.

But that could be about to change.

“Everything we know about this development indicates that we will see a veritable explosion of this in the future, and I would not be surprised if in the next five years we would see this increasing by a hundred-fold.”

Action needed to minimise the risk of bias

With widespread algorithm-based decision-making in business in its early stages, now could well be the ideal time to pre-emptively address issues of bias.

To avoid AI systems being encoded with the inherent views of the humans who train them, Rehn says that we need to ensure that the teams who design and train algorithmic solutions are diverse and inclusive, and that the data used is transparent.

“The problem arises when there are skews, exclusions or errors in the data. Then the algorithms, through no fault of their own, are not just prone to but even likely to amplify such issues – potentially creating negative consequences, such as fallaciously overestimating the importance of some clients/people over others.”
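To illustrate the point, the hypothetical sketch below trains a very simple ‘model’ on deliberately skewed historical decisions. Because one group appears less often and was approved less often in the past, the model reproduces that skew in its recommendations. All names and numbers are invented for illustration.

```python
# Hypothetical illustration: a "model" that learns approval rates per group from
# past decisions. If the history is skewed, the model reproduces the skew.
historical_decisions = [
    # (group, approved) - group "B" appears less often and was approved less often.
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True),
]

def learn_approval_rates(records):
    """Compute the historical approval rate for each group."""
    counts, approvals = {}, {}
    for group, approved in records:
        counts[group] = counts.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {group: approvals[group] / counts[group] for group in counts}

rates = learn_approval_rates(historical_decisions)
print(rates)  # {'A': 0.75, 'B': 0.333...} - the skew in the data becomes the model's view

# A naive decision rule built on those rates now systematically favours group "A",
# not because of anything about the individuals, but because of the skewed history.
def recommend(group, threshold=0.5):
    return rates[group] >= threshold

print(recommend("A"), recommend("B"))  # True False
```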

There also need to be stringent reviews of the data sets used to train systems, to guard against bias and “dark data” skew – “dark data” being the vast amounts of information organisations collect through regular business activity but rarely analyse for insights.

He suggests there should be algorithm governance processes that provide oversight of what algorithmic colleagues are doing within an organisation. “Tomorrow’s HR will need a Head of AR – Algorithm Resources!” Rehn says.

Key to building a sustainable future

Despite these problems, algorithms, if used correctly, could help build a more sustainable future. In Australia, researchers used an algorithm, alongside a range of other technologies, to map the Great Barrier Reef and protect this valuable marine ecosystem from bleaching. The richer data allowed scientists to better understand what was happening on the reef and to help mitigate the effects of bleaching.

Zürich-based non-profit GainForest uses AI to halt deforestation. Its algorithms analyse data to measure sustainable land use and unlock donations to forest communities when restoration milestones are met. As well as the obvious environmental benefits, this gives farmers a financial incentive to stop felling trees.

Algorithms are being increasingly used in sustainable investing, with companies and individuals alike looking for their money to make an impact as well as a profit. AI can help investors choose options that fit their investment strategy and their values.

AI-based systems are also being used in the workplace – in everything from recruiting staff to reducing delivery times and designing buildings. At their best, algorithms promise to make processes cheaper and more efficient, freeing up staff to concentrate on other more complex tasks. Measures need to be taken to ensure that this benefits everyone, not just those with the power to create the algorithms in the first place.

Written by
Natalie Marchant
Contributing Writer at Spoon Agency