Trust

How can we trust new technologies? With increasing attention to artificial intelligence and machine learning algorithms, the question arises whether these new tools for decision-making can be trusted. Machine learning algorithms are often presented as systems that learn automatically from data. Because these algorithms ‘learn on their own’, it has proven difficult, even for their developers, to explain the reasons behind the decisions they arrive at. Can and should we trust ‘machines’ we do not fully understand? According to a feature article in the science journal Nature, the answer is no (at least if you are a scientist): “before scientists trust [machine learning algorithms], they first need to understand how machines learn” (Castelvecchi 2016).

This answer provides us with a first definition of trust: trust is achieved by understanding how things work. For practical purposes, let us call this the engineering version of trust. While it seems a common-sense response, it is not the only possible definition. According to the sociologist Anthony Giddens, trust is precisely a belief held in spite of incomplete understanding. He defines trust as “the vesting of confidence in persons or in abstract systems, made on the basis of a leap of faith which brackets ignorance or lack of information” (Giddens 1991). We call this definition the sociological version of trust.

Rather than deciding which definition to adhere to, we can follow Science and Technology Studies and ask: which definition of trust is enacted in practice, and to what consequence? (For the concept of enactment, see Mol (2002) and Law (2004).)

One response to the issue of trust in machine learning algorithms has been the development of ‘explanation algorithms’. The authors of a popular ‘explanation algorithm’ called LIME write: “Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction” (Ribeiro et al. 2016). We can view ‘explanation algorithms’ as the enactment of an engineering version of trust. This version of trust directs organizational attention and resources towards developing an increased understanding of machine learning algorithms. But how do we ensure that people understand these explanations? What if people require additional explanation: would that initiate the development of algorithms to explain the ‘explanation algorithms’ (and so on, in a lovely infinite regress)?
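To give a concrete sense of what such an explanation looks like, the sketch below uses the open-source lime Python package released by the LIME authors, together with a scikit-learn classifier. The dataset, model, and parameter choices are illustrative assumptions rather than anything drawn from the text above; the point is simply that the ‘explanation’ is a short list of feature weights that locally approximate a single prediction.

```python
# A minimal sketch of an 'explanation algorithm' at work, using the
# open-source lime package (Ribeiro et al. 2016). The dataset, classifier
# and parameter choices are illustrative assumptions, not prescribed here.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs a single instance and fits a simple local model to the
# classifier's responses, yielding per-feature weights for that one prediction.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # e.g. [('worst radius > 16.8', -0.12), ...]
```

The output is not the model itself but a small, human-readable approximation of it around one decision, which is exactly where the question of whether such explanations actually produce understanding, and trust, begins.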

Another enactment of trust can be imagined following the sociological version. Giddens grounds his leap of faith in the experience that systems do what they are supposed to do (Giddens 2013). Following this approach, organizational work and resources could be directed towards evaluating the decisions of machine learning algorithms, rather than towards questioning how those decisions are made. On this understanding, trust is assessed after implementation. But how do we evaluate whether an algorithm does what it is supposed to do? Who decides what an algorithm is supposed to do? And what if someone disagrees? Since trust in this view cannot be established prior to implementation, a central question is how we can experiment with new technologies without destabilizing current work practices.

Whether we adhere to an engineering version or a sociological version of trust, we are faced with continuous work and dilemmas. Rather than asking how we can trust new technologies, we could ask which ideas of trust are at stake in our relationship with new technologies.


Author: Bastian Jørgensen

References

Castelvecchi, D. (2016). Can we open the black box of AI? Nature News Feature. Available at: https://www.nature.com/news/can-we-open-the-black-box-of-ai-1.20731

Giddens, A. (1991). Modernity and self-identity: Self and society in the late modern age. Stanford University Press.

Giddens, A. (2013). The consequences of modernity. John Wiley & Sons.

Law, J. (2004). After method: Mess in social science research. Routledge.

Mol, A. (2002). The body multiple: Ontology in medical practice. Duke University Press.

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144). ACM.
