Saturday, June 25, 2011

Familiarity trumps understanding (dealing with neophobes)

Arthur C. Clarke, the famous science fiction writer and futurist, once said,
"Any sufficiently advanced technology is indistinguishable from magic."

I believe we reached that point quite some time ago in our civilisation. While most people watch television, drive cars, use electrical appliances, fly in jet aircraft, use computers and surf the internet, few understand how any of these technologies actually work, or the science that sits behind them.

In some cases many in society actively deny or denounce the science behind their everyday tools while still partaking of its benefits. They simply don't recognise or understand the disconnect.

Over in the Gov 2.0 Australia Group, Stefan Willoughby recently stated, in reference to Eventbrite and other online tools,
I just don't understand why it is so hard to convince people that these tools are valuable and not nearly as risky as they think.

Many of us working in the online space have encountered similar attitudes over the last 10-15 years, often from otherwise highly intelligent people.

I can't legitimately call this behaviour 'risk-aversion'. Those refusing to consider the use of online tools or expressing concern over the 'risks' often have little or no understanding of whether there are any risks (and of what magnitude), or whether the risks of these tools are less than the risks of the tools they are using now.

It is simply a 'fear of things new to me', without any intellectual consideration of the relative risks and benefits. There is a name for this: neophobia, the irrational fear of anything new.

I've thought about this issue a great deal over the years and tried a number of tactics to educate people on the uses and actual risks of online tools.

After 16 years I've come to the conclusion that explaining how online tools work simply isn't the right way to overcome irrational fears in most cases.

People don't really want to understand how the tools of our civilisation function - they just want to feel confident that they work consistently and in known ways.

In other words, familiarity trumps understanding.

To begin experimenting with a technology, many people simply want assurance that 'others like me' have used it previously in a similar manner, safely and successfully. Their comfort with its use then grows the more they use the tool themselves and the less new it feels.

They don't really care about the science or machinery under the hood.

Therefore, as internet professionals, our task isn't to share knowledge on the mechanics of online tools. It is to build a sense of comfort and familiarity with the medium.

This doesn't mean we shouldn't use evidence, explain how online tools differ and can be used for different goals or effectively identify and mitigate the real risks. This remains very, very important in familiarising people with the online world.

However we should spend less time on the technical details, explaining the machinery of how information is transmitted over the internet, how servers secure data, or how dynamic and static web pages are written and published. These things 'just work'.

Instead we need to focus on helping people use the tools themselves, provide examples of use by others and demonstrate practically how risks are managed and mitigated. Support people in understanding and trusting that each time they push a particular button a consistent result will occur.

Once people are familiar with a particular online tool and no longer consider it new it becomes much easier to move on to an accurate benefit and risk assessment and move organisations forward. Even if they don't really understand how it all works.


  1. This fits the classic technology innovation process ('others like me'). It's also why I tend to talk about patterns rather than technology. Of course, at the other end of the spectrum: if I don't know it's new or different, then it's OK.

  2. I think the question at the core of these matters is about risk management. Experienced managers, and there are many good experienced managers, look carefully at risks. They want to know the likelihood of something occurring and the consequences if it does. Using any one of a number of likelihood/consequence matrices, these two factors then allow risks to be categorised from very low to extreme. Mitigation is then applied to reduce the risk to an acceptable level - and if it can't be, then the activity might not be undertaken.

    In the case of online tools, there are two risks of which most managers are aware - disclosure of information and "hacking". These are in the news every day. To the uninitiated, these seem incredibly common - in other words, highly likely, even inevitable. I'd bet that most people have first- or second-hand experience of viruses, identity theft, inexplicable computer crashes, etc. Their experience points to a very high likelihood.

    The consequences are harder to definitively predict. Often they won't be all that bad in a physical sense: things lost, etc. However, reputation damage is highly likely. Reputation risk is very real. If a government agency used Eventbrite or a similar tool and an incident occurred, the headline would read "Government loses data". The tool would be lucky to be mentioned in the 3rd paragraph. It certainly wouldn't be featured in the tweets!

    So managers want to know what the likelihood is, in the first instance. Telling them about the technology won't help. Nor will playing down the consequences. Managers understand reputation risk. Explaining to your boss that the risks aren't all that serious won't work if it's their bum that is on the line.

    What needs to be done is providing facts about likelihood. Demonstrate that the tool has never been hacked. Show the statistics. Explain who else uses it. When you have established this, then turn to mitigation. What can be done to reduce the risk? Is it possible to lower the likelihood? If less is exposed, will the consequences be reduced?

    It is necessary to speak to managers in their language. First hand experience is useful but not conclusive.

  3. Thanks John,

    I know many people across government would strongly appreciate if an agency provided official facts on likelihood and mitigation for them to use.

    Not everyone has the time or capacity to collect them for themselves (which would also be an enormous duplication of effort). Nor are the facts collected by officers always trusted.

  4. Craig, thanks,
    I realized a long time ago that explaining how things work is seldom useful. I didn't know why and it was a depressing realization because I didn't know about the role of familiarity. Now, I'll have to see if I can apply this new knowledge. 8-)
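The likelihood/consequence matrix described in comment 2 can be sketched in code. The five-point scales and rating assignments below are illustrative assumptions only - real organisations define their own categories and thresholds, often following a standard such as ISO 31000.

```python
# A minimal sketch of a qualitative likelihood/consequence risk matrix.
# The scale names and the 5x5 rating layout are assumptions, not a standard.

LIKELIHOODS = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCES = ["insignificant", "minor", "moderate", "major", "severe"]

# Risk rating indexed as MATRIX[likelihood][consequence].
MATRIX = [
    ["very low", "very low", "low",      "medium",  "high"],
    ["very low", "low",      "low",      "medium",  "high"],
    ["low",      "low",      "medium",   "high",    "high"],
    ["low",      "medium",   "high",     "high",    "extreme"],
    ["medium",   "high",     "high",     "extreme", "extreme"],
]

def assess(likelihood: str, consequence: str) -> str:
    """Look up the qualitative risk rating for a likelihood/consequence pair."""
    return MATRIX[LIKELIHOODS.index(likelihood)][CONSEQUENCES.index(consequence)]

# Mitigation works by lowering one or both factors, which moves the
# activity to a lower cell in the matrix:
print(assess("likely", "major"))    # rating before mitigation
print(assess("unlikely", "minor"))  # rating after mitigation
```

This captures the two-step process the comment describes: categorise the raw risk from likelihood and consequence, then test whether mitigation (reducing either factor) brings the rating down to an acceptable level.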