Though technology has become a somewhat controversial word in recent years, most people would still agree that, overall, our lives have become better – and longer – thanks to the advancements made.
“We can use technology to do good on a scale never seen before. Medical technologies made our lives longer and healthier, transportation technologies enable us to travel and do business or charitable work worldwide, and digital technologies have permeated all aspects of our existence, from education to intimate relationships,” Tomislav Bracanović, senior research associate at the Institute of Philosophy in Zagreb, explains.
But these benefits come at a price, as technologies can sometimes fail us. Every social problem, every mistake people are capable of making, is something that technology can amplify.
“The magnitude and scale of every little decision that we make become so much bigger that it really becomes an issue of making sure we get it right before the mistake happens. This requires more thinking ahead, more consideration of the impact of our actions,” says Brian Patrick Green, director of technology ethics at the Markkula Center for Applied Ethics.
For these reasons, the conversation is turning towards effectively controlling technology and preventing it from being used in ways that could harm people and society as a whole. In practice, this means tech companies are increasingly embedding ethical considerations in the design of their products.
Technologies – especially AI and related fields like robotics – are rapidly developing and need to be regulated to prevent various types of harm to human beings.
“Big data can be used to profile individuals and thus violate their privacy and safety. Machine learning algorithms are increasingly used in social service and legal processes, but they might be biased and discriminate against certain groups or individuals. From toy robots to care robots, social robotics products may generate adverse psychological effects like alienation, emotional addiction, or deception,” Bracanović outlines.
The problem is that such pitfalls are not easily anticipated – even by engineers and designers who have the best intentions. Social media is a prime example of this.
“Mark Zuckerberg wanted a way to find other people at Harvard, and within ten years it's causing all sorts of problems such as the spread of misinformation about climate change and other important issues. So, the magnitude and scale are important reasons as to why responsible technology should become a priority now,” Green adds.
This is the rationale behind frequent demands for technology that is “ethical by design” or designed in a “value-sensitive” way. We need to think about the positive effects of new technology, but also about negative impacts and whether these can be anticipated and prevented as early as the design phase.
According to Bracanović, AI developments such as autonomous cars raise the ethical problem of choosing between the lives of different road users. The development of social robots will generate ethical problems related to the various types of psychological harm such robots are likely to inflict upon humans – especially children and the elderly.
So how do we embed and operationalise ethics in tech? Bracanović believes that ethical reflection should become part of technological development and that its operationalisation should take place on several levels. A certain amount of precautionary thinking is necessary, though not so much that it obstructs technological innovation.
According to Green, a company has to make several decisions to ensure it produces ethical technology products:
“First, leadership needs to decide that it is important to take these issues seriously. The second thing is to get the theoretical commitment. The third step is to then operationalise them. What does it mean to be fair? What does it mean to be reliable? What does it mean to be safe and inclusive? That specification stage is difficult, and that's what a lot of companies are running into right now. And so, they hire people to do this kind of ethics work.”
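To see what Green’s “specification stage” can look like in practice, here is a minimal sketch of operationalising one value – fairness – as a measurable check. The metric (demographic parity difference), the 0.1 threshold, and the sample data are illustrative assumptions only, not a method attributed to anyone quoted here.

```python
# A minimal sketch of turning the abstract value "fairness" into a
# measurable check. The metric, threshold, and data are assumptions
# chosen for illustration, not an industry standard.

def demographic_parity_difference(outcomes, groups):
    """Largest absolute gap in positive-outcome rates across groups.

    outcomes: list of 0/1 model decisions (1 = favourable outcome)
    groups:   list of group labels, one per decision
    """
    rates = {}
    for label in set(groups):
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        rates[label] = sum(decisions) / len(decisions)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical decisions from a screening model, labelled by group.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"Demographic parity difference: {gap:.2f}")

# One possible operationalisation: flag the model for review if the
# gap exceeds an agreed threshold (0.1 here, chosen arbitrarily).
if gap > 0.1:
    print("Fairness check failed: escalate for human review.")
```

The point is not this particular metric, which is one of many competing fairness definitions, but that specification forces a company to commit to something concrete and testable.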
For Green, in the end, it’s all about leadership. Leaders of tech companies don’t necessarily realise how important ethics is to what they’re doing, but they are starting to come around. A few powerful figures are setting a good example, which makes other companies say: “if Microsoft is doing it then we should be doing it, too”. They can also look at the negative examples and try to avoid the same fate for their own company.
Finally, education will play a key role in operationalising ethics in technology.
Adam Thompson, Assistant Director at the Kutak Ethics Center (University of Nebraska-Lincoln), says it’s not a matter of encouraging individuals to design and use technology with integrity, but rather of embedding ethics in the training of computer scientists and engineers. This training should support their efforts to develop and use technology ethically.
“Central to responsible development and deployment of AI is intervening in STEM education and ensuring that all students undergo training in moral reasoning with philosophers and those professionally trained in the study of ethics — including social-political philosophy,” Thompson concludes.