“Don’t trust the man…or the machine” | by Dr. Adam Hart | Jan, 2024
I think one of the worst feelings in the world is being duped or betrayed. It makes us feel vulnerable and stupid, and, in the case of digital duplicity, it can have real financial and security implications: not only the immediate economic loss but ongoing identity theft.
If a person betrays us, we can take some direct action, legal or social. Social engineering is different: an anonymous bad actor pretends to be a trustworthy institution or authority to gain temporary trust while they milk us or our assets. The process of building authentic trust is bypassed in favour of a fake trust that is sufficient (and only temporarily necessary) to extract the asset through duplicity.
Trust must not be confused with authentication. When organisations interact with us, they use a previously established, strict multi-factor authentication (MFA) protocol of challenge and response to establish that yes, we are who we say we are. That doesn’t mean they trust us. They probably never will, nor is it necessary that they trust us to have us as a customer.
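To make the distinction concrete, here is a minimal sketch of that kind of challenge-and-response check, in Python using only the standard library. It follows the TOTP scheme of RFC 6238; the function names are mine, not any vendor’s API. Note what a successful check proves: possession of a shared secret at this moment, and nothing more.

```python
# Minimal TOTP sketch (RFC 6238). Illustrative names, not a production library.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared secret."""
    counter = int(time.time()) // step                    # 30-second time window
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10**digits).zfill(digits)

def verify_submission(secret: bytes, submitted: str) -> bool:
    """Constant-time comparison. Success proves possession of the
    shared secret -- it establishes identity, never trust."""
    return hmac.compare_digest(totp(secret), submitted)
```

The check is a pure pass/fail on secret possession; nothing in it models, accumulates or confers trust.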
The relationship between a successful claimant and an insurance company is not one of “we trust you suffered a loss” but “we are satisfied the loss is proven and within the contractual terms of the underwriting policy”. These organisations have an array of protocols and procedures to ensure that their institutional assets are protected.
But trusting individuals often play it by ear. Where the flow of authentication is reversed (fake bank staff call me, rather than me calling them), a much lower grade of authentication unfortunately leads to temporary trust, which the anonymous bad actor then abuses for fraud.
So we have a two-way handshake in an unequal relationship. When we contact the institution, we establish a social authentication with the organisation’s representative; they don’t trust us, and we have no choice but to trust them. When a fake bad actor contacts us, we have in some sense been historically groomed to trust, when the issue is in fact one of inaccurate authentication of a fake actor.
Governments set societal expectations of trust
The social order and cohesion, and the lawful tolerance of every citizen for each other and for their institutions, organisations and networks of systems, rely on the nebulous concept of trust, not just authentication. If you need a social security number, a passport, a tax file number, a driver’s licence or any other such document, an authoritative issuer that can be expected to be reliable and real must be involved.
If my house is on fire and I dial 999 (000 in Australia), I don’t just authenticate to the number; I expect that something will be done, and in a hurry. I don’t pay the fire brigade directly (except through taxation), and I may never have called them before, but I expect that they will come. A trustworthy fire brigade is one of the cornerstones of a civil society, and there are many others: police, military, hospitals, courts, libraries, councils and the like.
If something goes wrong, at the instant of a failed or delayed service we have experienced a local breach of trust. But are we nevertheless groomed to trust them in a global, permanent sense? If they failed to come in a timely manner when our house was on fire, will we just use the garden hose the second time? Possibly not.
However, what happens with local trust, good or bad, at the micro level doesn’t matter. What we are educated to give governments is a global, enduring trust in their institutions, such that we remain a compliant cohort that pays its taxes, does as directed by authorities in emergencies, doesn’t declare war on its neighbours or spread conspiracy theories, and so forth.
And is it this global trust expectation that opens the way for fake actors to dupe us?
Should trust apply to the design of machine and system experiences?
With systems and machines, authentication protocols have been vastly improved with passkeys, MFA and the like. But this applies to the flow of me connecting to a system, not of a system or machine (or institution!) connecting to me [1].
Bad actors seek to bypass this via social engineering and identity theft. Yet as a human user, before I go near any of these systems, the global trust that has possibly been groomed into me sets up a chain of expectations about me as an individual and about my passivity with respect to my role in that chain.
Can machines and systems of any kind be trusted? What is the human’s role, the human who has been groomed to believe in a kind of global trust to facilitate societal cohesion?
The old adage that machines are but tools and that failure is operator error is usually invoked, but in the case of AI copilots, LLMs masquerading as nannies, autonomous vehicles or drones, it doesn’t hold. We expect we will not be harmed. But does that mean we trust them? Of course not. We have prioritised their benefit and accepted their risks, per the terms and conditions. Trust isn’t involved. It’s a legally underwritten trade-off.
I can’t think of a single instance where I would trust a machine. Perhaps the only one is an AI detecting a life-threatening cancer, but that is also a copilot, or a co-bot of sorts, and I am undertaking a risk/reward trade-off anyway. I shouldn’t trust that the machine or system will infallibly identify the cancer and save my life.
The fundamental problem is that the Oracle of Delphi doesn’t exist. There is no ultimate, trustworthy external authority, all-knowing and all-seeing, of whom we can ask questions. And trust is not something that can ever extend to a machine or a system, no matter how many safety protocols are in place, because its context and community are probabilistic and subject to decay, noise and entropy in its data and networks.
Returning to the question: HCTD principles would say yes, we should design trusted systems that do no harm and serve their intended purpose or role. LLM creators would say yes, we want you to believe our LLM can help you in your work, so that you subscribe and buy it. Then we go all Asimov, and that’s a rabbit hole from which there is no exit.
From the brief assessment above, we have established that:
- Fraudulent and duplicitous agents take advantage of a reversal of the authentication flow, exploiting the fact that we have been broadly groomed to trust institutions and their agents, such that we are compliant citizens and maintain social cohesion; and
- Where the word trust is used, we are perhaps better off using the word expectation. For example: I don’t trust the justice system to deliver justice, but I expect it will, all things being equal; I expect that the fire brigade will try to extinguish the blaze in my home; I expect the autonomous vehicle won’t run into a light pole and kill me and my family.
As they used to say, “Oh, she/he/they have trust issues”. Is it really the machine and system experiences that we are expected to trust as users, or is it the global level of trust that has been groomed into us, a priori, before we engage with any machine or system, irrespective of the legality, the contractual terms of use and the limitations of liability?
In the case of digital fraud and bad actors abusing a chain of events, we could strengthen the digital channels so that the party on the other end is authenticated in the same way the individual is. If authentication fails, the bad actor doesn’t have a channel. But this is still not trust [2].
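As a hedged sketch of what reversing the flow could look like: the user’s app holds a public key pinned at enrolment, and any inbound contact must prove possession of the matching private key before it gets a channel. This is an illustration under my own assumptions, not any bank’s real protocol; it uses Ed25519 signatures from the Python cryptography package.

```python
# Sketch of reversed-flow authentication: the *institution* must prove itself
# to the user's app before an inbound contact is accepted. Illustrative names.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrolment, done once over a channel the *user* initiated:
bank_private = Ed25519PrivateKey.generate()   # held only by the institution
pinned_public = bank_private.public_key()     # pinned inside the user's app

def challenge() -> bytes:
    """The user's app issues a fresh nonce for each inbound contact."""
    return os.urandom(16)

def caller_response(nonce: bytes) -> bytes:
    """The institution signs the nonce with its private key."""
    return bank_private.sign(nonce)

def verify_caller(nonce: bytes, signature: bytes) -> bool:
    """The app checks the signature against the pinned key. Failure means
    the caller gets no channel; success is authentication only, not trust."""
    try:
        pinned_public.verify(signature, nonce)
        return True
    except InvalidSignature:
        return False
```

Even when such a check passes, whether to act on what the caller says remains, as argued above, a matter of expectation rather than trust.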
I feel that talking about trust in the context of machines or systems is a bad rabbit hole; there isn’t a way forward unless we choose to give up autonomy and become Matrix batteries or Neuralink babies.
Beyond trust
So, in this sense, trust seems to me to be a very human thing in which machines and systems cannot participate; it is best to assume they can never be trusted [3].
Beyond the reasoning that trust, in the context of user experiences, is simply and profoundly an expectation not to be harmed, or to reap the reward for the purchase price or the risk taken, the expectation of experience outcomes also relies on predictability, and on not abusing the global level of trust that has been groomed into us.
Part of the philosophy of beneficial HCTD could therefore include the harsh reality that the relationship between users and the owners and authors of machines or systems must necessarily be subordinate to users’ expectations of the global trust that has been created in the natural world for social cohesion [4], and that systems be designed from the ground up accordingly.
A system or machine designed like this would be transparent about who is operating it, from where it is speaking, and what benefit it gains from the user’s participation.
This is a more meaningful way forward than attempting to speak of a fake trust that cannot be created or enforced in any system or machine today, because there are no legal consequences for the system or machine itself: it cannot be imprisoned or fined.
Given the well-documented motives of BigTech, this is highly unlikely to be realized, but perhaps a brave VC would give it a go.
Footnotes
[1] I never hear from my bank, and on the one occasion an authentic bank representative actually called me, I didn’t believe it and ignored the call.
[2] And that was an internet dream that died many years ago due to privacy, implementation and autonomy concerns (like a fascist Internet).
[3] Just look at AI hallucinations.
[4] Admittedly, there exist parts of the world that are not socially cohesive.