The brave new world of the Internet of Things promises smart, connected devices that communicate and share data in order to provide intelligent services to support us. From smart fridges that tell you when you are running out of milk, to driverless autonomous cars, technology is becoming increasingly important and omnipresent in our lives.
While we have been enjoying this new era of computing technology, we have also started to see examples of how it can pose safety and surveillance risks to citizens, and how it influences our moral codes.
One example involves smart televisions: Samsung recently issued a warning about the voice activation feature on some of its sets:
‘Samsung may collect and your device may capture voice commands and associated texts so that we can provide you with the Voice Recognition features and evaluate and improve the features. Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.’
So, is this an Orwellian nightmare, or simply the new era of the Internet of Things?
Similarly, a recent experiment on Facebook manipulated the extent to which nearly 700,000 Facebook users were exposed to emotional expressions in their News Feed [article]. This experiment tested whether exposure to emotions led people to change their own posting behaviours, in particular whether exposure to emotional content led people to post content consistent with that exposure. In simple terms, some people were shown content with a preponderance of happy and positive words; others were shown content analysed as sadder than average. And when the week was over, these ‘manipulated’ users were more likely to post especially positive or negative words themselves. If you think that Facebook is being unethical, and that you shouldn’t be subjected to manipulation, then perhaps you are wrong, because Facebook’s terms and conditions state that ‘user data can be used for data analysis, testing and research’.
Recent news has also been dominated by autonomous cars and the ethics of the driverless car. In a recent podcast on ethics (part of the BBC digital human series) I listened to Professor Jason Millar, who described the following dilemma:
‘Imagine you are driving along in an autonomous car at a pretty good speed and approaching a one-lane tunnel. Suddenly, out of nowhere, a child stumbles into the road. How should the car react? Should it go straight and probably kill the child? Or should it hit the tunnel wall and likely kill you, the passenger?’
This is a classic thought experiment in moral philosophy: who should be making that decision?
So, how does this impact our lives? Is privacy loss inevitable? How can we ensure these new technologies are ‘ethical’? Should we be pushing for technology companies to establish ‘ethics boards’, as Google has, and even so, how do we ensure that these are not just set up to minimise legal risk for the companies themselves?
As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. So for all of us involved in the design of technology, how do we deal with the moral code of the digital world? How do we program ethics into human-computer interfaces, and how can we express them in a machine-readable format? How do we program ethics into machines? What are the new ethics of the Internet of Things? How can we build processes into the design phase that allow us to anticipate some of the ethical issues that might come up? How do we design technology whose social implications are made clear?
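To make the ‘machine-readable format’ question a little more concrete, here is a purely illustrative sketch (every name in it is invented for this post, not taken from any real system) of how competing ethical preferences might be written down as explicit, auditable rules and applied to the tunnel dilemma described above:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One action the vehicle could take, with its predicted consequences."""
    action: str
    harmed: int            # predicted number of people seriously harmed
    harms_passenger: bool  # does this action harm the car's own occupant?

def prefer_fewest_harmed(outcomes):
    """A utilitarian rule: choose the action predicted to harm the fewest people."""
    return min(outcomes, key=lambda o: o.harmed)

def never_harm_bystanders(outcomes):
    """A rule that forbids actions harming anyone outside the vehicle."""
    permitted = [o for o in outcomes if o.harms_passenger or o.harmed == 0]
    return min(permitted, key=lambda o: o.harmed) if permitted else None

# The tunnel dilemma, encoded as two possible outcomes.
tunnel = [
    Outcome("continue straight", harmed=1, harms_passenger=False),  # the child
    Outcome("swerve into wall", harmed=1, harms_passenger=True),    # the passenger
]

print(prefer_fewest_harmed(tunnel).action)   # the two rules disagree
print(never_harm_bystanders(tunnel).action)
```

The point of the sketch is not that either rule is right, but that once ethics is encoded this way, the choice of rule is a design decision someone has to make, and one that can, at least, be inspected and debated.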
(An interesting Twitter hashtag to follow if you want to find out more about these issues is #digiethics.)