Thursday, November 22, 2012
Hindsight: Government by the people, for the people but not yet OF the people
This is a great video regarding innovations in participation, citizen engagement and deliberative democracy, with a panel discussion involving:
- Professor Archon Fung, Harvard Kennedy School
- Mr. Richard Dobson, Founder of Asiye Etafuleni, South Africa
- Mr. Robert Miller, Director of the Minneapolis Neighborhood Revitalization Program, USA
- Dr. Henry Tam, Deputy Director of Community Empowerment Delivery, UK
It took place back in 2008, but remains extremely relevant and current to the trends of today.
Tags:
community,
edemocracy,
gov2au,
strategy
Wednesday, November 21, 2012
Having a website crash due to high traffic is a failure of management, not load
Today has provided an interesting lesson for several organisations, with the crash of both the David Jones and ClickFrenzy websites in Australia.
But first, some background.
ClickFrenzy is a new 24-hour sale for Australian online retailers starting from 7pm on Tuesday 20 November.
Based on the US 'Cyber Monday' sale, which now attracts over 10 million buyers, ClickFrenzy was designed to entice Australian online shoppers to buy from local online retailers by offering massive discounts on product prices for a short period of time.
The event was announced over a month before it was due to start and was promoted through newspapers, online and in some retail stores, with the ClickFrenzy team expecting thousands of shoppers to log on and likening it to a "digital Boxing Day sale".
I'd kept an eye on the ClickFrenzy site and signed up to receive an email alert when the sale began.
Just before the sale started I hopped back onto the ClickFrenzy site to see how it was going, and saw only a basic page of text, with no graphics or formatting. Puzzled, I tried reloading - and the site wouldn't load at all.
That's when I hopped onto Twitter and learnt from the #clickfrenzy hashtag that the ClickFrenzy site had already crashed from the load and no-one had any idea when it would be back online.
This meant that the list of participating retailers (many of whom had been kept secret) was inaccessible. No shopper knew who had the specials, meaning few sales could occur. And of the retailers that were known to be participating, around two-thirds saw their sites crash too (including Priceline and Myer).
In competition with ClickFrenzy, David Jones had decided to run its own independent 24-hour sale over a similar time period. Its sale, named 'Christmas Frenzy', was to be run from its main website.
How did their launch go? Their site also crashed, and was down for several hours, taking down not only the shopping site but all their corporate information.
So we had two major online sales on the same day from Australian retailers, and both experienced crashes due to the volume of traffic.
What was to blame? Both claimed the failure was due to unprecedented demand. So many people tried to get onto both sites that their servers could not cope (the same reason given for the My School website issues at launch in 2010 and the CFA website issues during the Victorian bushfires in 2009).
Let's unpick that reasoning.
The World Wide Web is twenty years old. Amazon.com is 18 years old. The US 'Cyber Monday' sale is six years old.
David Jones is an experienced retailer, with significant IT resources and has been operating an online store for some time. Their Christmas Frenzy sale was planned and well promoted.
Click Frenzy was also run by experienced retailers. They built an email list of people interested in the event and widely promoted the sale. The retailers supporting them are well-known names and operate established online shopping sites as well.
In both cases the organisers had a wealth of experience to draw on: the growth of Amazon, the US Cyber Monday sales, their own website traffic figures and email list sign-ups, not to mention a host of public examples of how to manage web server load well, and badly, from media sites, social networks and even government sites (such as the My School and CFA examples above).
There are many IT professionals with experience on how to manage rapid load changes on web servers.
There are scalable hosting solutions which respond almost instantly to fast-increasing loads, such as during an emergency or with breaking news, and 'scale up' the site to support much larger numbers of simultaneous users. (Though in the case of Christmas Frenzy and Click Frenzy a large increase in load was expected, rather than unexpected.)
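To make the 'scale up' idea concrete, here is a minimal sketch of the kind of decision such a platform makes. It is illustrative only - the per-server capacity, headroom factor and limits are assumed figures for this example, not any particular provider's settings.

```python
import math

def desired_server_count(requests_per_second, capacity_per_server=200,
                         headroom=1.5, minimum=2, maximum=50):
    """Estimate how many servers are needed for the current load, keeping
    spare headroom so a spike can be absorbed while extra servers start."""
    needed = math.ceil(requests_per_second * headroom / capacity_per_server)
    return max(minimum, min(maximum, needed))

# Example: a quiet evening versus the opening minutes of a heavily promoted sale.
for load in (50, 500, 5000, 50000):
    print(f"{load:>6} requests/sec -> {desired_server_count(load)} servers")
```

A real autoscaling service simply runs this kind of calculation continuously against live traffic metrics and adds or removes capacity to match.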
There are even automated processes for testing how much load a website will be able to bear by simulating the impact of thousands or millions of visitors.
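As a rough sketch of what such a test does, the snippet below fires a batch of concurrent requests at a page and reports how many succeeded and how quickly. The target URL and visitor count are placeholders; a real exercise would use a dedicated load-testing tool, at far higher volumes, against a staging copy of the site rather than production.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TEST_URL = "https://example.com/"  # placeholder target, not a real test site
CONCURRENT_USERS = 100             # simulated simultaneous visitors

def fetch(_):
    """Request the page once, returning (succeeded, seconds_taken)."""
    start = time.time()
    try:
        with urlopen(TEST_URL, timeout=10) as response:
            response.read()
        return True, time.time() - start
    except OSError:  # covers connection errors and timeouts
        return False, time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(fetch, range(CONCURRENT_USERS)))

times = [t for ok, t in results if ok]
print(f"{len(times)}/{len(results)} requests succeeded")
if times:
    print(f"average response time: {sum(times) / len(times):.2f}s")
```

Run against a test environment with progressively higher visitor counts, this kind of exercise shows where response times degrade and at what point the site falls over - before real customers find out.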
In other words, there's no longer any technical reason why any organisation should have their website fail due to expected or anticipated load.
Load is not a reason, it is a justification.
We have the experience, knowledge and technology to manage load changes.
What the Click Frenzy and Christmas Frenzy failures illustrate is that some organisations fail to plan for load. They haven't learnt from the experience of others, don't invest in the right infrastructure and may not even test their sites.
They are, in effect, crossing their fingers and praying that their website won't crash.
A website that crashes under a load which could have been expected and planned for has crashed because of a failure of management.
The next time your agency's management asks you to build a website which is expected to have a big launch or large traffic spikes, ask them if they're prepared to invest the funds necessary for a scalable and tested website, built on the appropriate infrastructure to mitigate the risk of sudden large increases in traffic.
If they aren't then let them know to cross their fingers and pray - and that a website crash due to high traffic is a failure of management, not load.
You might even get a Downfall parody video to memorialise the failure - as Click Frenzy received within two hours of their launch crash.
Tags:
design,
development,
gov2au,
management,
policy
Monday, November 19, 2012
But we're the experts! Why the 'internal expert' democratic governance model is gradually failing and what can be done about it.
Most public sector agencies are designed as centres of expertise on policy and service delivery.
By gathering, or training, experts in a given topic area and marshalling and directing this expertise towards specific issues and goals, agencies are structured to design and deliver effective and sound policy and services to governments for communities.
Sure, these powerhouses of expertise consult a little on the fringes. They access academia and business to provide 'fringe' expertise they cannot attract into their agencies, and engage with NGOs, community groups and individual citizens to check that service delivery solutions meet the 'on-the-ground' needs of specific communities.
This is necessary for fine-tuning any policy or service solutions to meet specific needs, where cost-effective to do so.
However, the main game, the real policy powerhouse, is the government agencies themselves, which take on the roles of researcher, think tank, gatekeeper, designer and deliverer through their central pool of expertise.
This is a longstanding - even 'traditional' - approach to governance. It was designed and adopted in an era when geography, communication and education limited both the extent of expertise in a nation or community and access to it - an era when many people were disempowered politically and economically through limited access to information and knowledge.
Consider Australia at Federation in 1901.
The new Commonwealth Government, in addressing national issues, had to serve a population of 3.7 million people (smaller than Victoria's population today), with an average age of 22 years old, dispersed over 7.7 million square kilometres.
There was no telephone, radio, television or internet. However, the overland telegraph, which gave Australia high-speed communication with the world, was 30 years old, having been completed in 1872 and extended to Perth in 1877. While most communication travelled at the speed of a fast horse, train or ship, it was possible to share information across Australia at the speed of light, though at the rate of only a few messages at once. This telegraphic network served as Australia's communication backbone for almost another fifty years, until telephones became widespread after World War II.
In 1901 Australia had one of the highest literacy rates in the world (80%), with school compulsory to age 13, though attendance was not enforced, many remote communities didn't have access to schools and Indigenous Australians were excluded. Literacy meant basic reading and writing, with the ability to add and subtract - the books issued to 13-year-olds today would have been far beyond the ability of the majority of students in 1901.
The majority of Australia's 22,000 teachers hadn't attended a teacher's college, generally serving an apprenticeship as 'pupil teachers' and few 'technical colleges' existed to teach advanced students.
In 1901 there were only 2,600 students at Australia's four universities (0.1% of our population) and CSIRO wasn't even an idea (formed 1926). There wasn't a record of how many Australians had received a university education until the 1911 Commonwealth census, which reported 2,400 students at university and 21,000 'scholars' (with their level of education undefined).
In this environment, expertise was rare and treasured. Governments employed the cream of Australia's graduates and were almost the sole source of expertise and thinking on policy issues that the new nation had to address.
The 'government as centre of expertise' model made sense; in fact, it was the only viable way to develop a system capable of administering the world's smallest continent and one of the largest, and most sparsely populated, nations.
Jump forward a hundred and ten years, and Australia is one of the most connected nations on the planet, with 98% of our 22 million citizens having instant access to the world through the internet and virtually every Australian having access to telephones, radio and television.
Education is compulsory to 15 or 17, with the majority of teachers tertiary educated and school attendance strictly enforced - including for Indigenous Australians. We have about 41 universities, ten times as many as in 1901, as well as over 150 other tertiary institutions, with over 21% of Australians having received tertiary education.
As a result, the 'government agency as expert' model is failing.
Across our population there's far more expertise outside of government than within. Governments struggle to attract and retain talent in a global market, hamstringing themselves by restricting employment to Australian citizens, while the commercial sector internationally will happily take Australia's best trained minds and put them to use elsewhere in the world.
Despite this, government's basic model has barely changed. Agencies are structured and act as 'centres of expertise' on policy and service delivery topics.
True, there's a little more interaction with academia, with business and even with citizens. However, agencies remain structured as 'centres of expertise' for policy design and service delivery, designed to serve communities with limited communication or education - communities with limited capability to act for themselves.
This model may remain effective in certain parts of the world, in nations where literacy is low, geography remains a barrier and communication infrastructure is weak - like Papua New Guinea, regions in the Amazon and some other remote areas and developing nations.
However in developed nations, with high literacy, substantial tertiary education, where geography is no limit to communication and access to media and internet are almost universal, does the approach retain the same merit?
A 'centre of expertise' approach also carries many downside risks - risks that are built into the system itself, by-products of the separation and public trust that governments require to operate in this way.
For example, when government agencies structure themselves as the experts, they need to maintain a level of mystique and authority to justify public trust that they are providing the best advice and solutions.
Just as a religion has its special rituals, and restaurants rarely let diners see how their kitchens operate, in order to preserve the trust of worshippers and customers, government agencies conceal their day-to-day operations from scrutiny to maintain a mystique of expertise and create a clear separation between the 'agency business' of government and the external workings of society.
This often involves keeping policy development processes hidden behind a wall of secrecy, bureaucratic language and bizarre semi-ritualistic procedures.
This is also why, despite FOI and other approaches, governments largely remain secretive about their processes for designing policy. They can be messy, which may reduce trust and call into question the expertise of the agency or government.
As a result, in most countries ordinary citizens - anyone who isn't a policy insider - have little knowledge of how an agency has developed a given policy, who was involved (formally or informally) or why certain decisions were reached. These activities are done behind closed doors - in the confessional, behind the kitchen wall, backstage - with all their inherent messiness, testing of 'dangerous' ideas and economic modelling of who wins and who loses from any specific decision treated as confidential and secret knowledge.
This leads to a second issue and a rationalisation. As the public isn't aware of how a specific policy or service was developed, generally seeing only the final 'packaged' solution, agencies can reasonably and logically argue that the majority of the public have little to add to the policy process.
'Expert' policy officers can argue that the public doesn't have sufficient context, doesn't have all the facts, and doesn't understand the consequences of decisions or the trade-offs that had to be made.
And of course this is true. Because the public were not part of the process, they did not go on the same journey that the public sector 'experts' went on to reach a particular policy conclusion.
The public is told 'trust us, we're the experts', and again this is indeed true. Only the policy insiders had the opportunity to become the experts; all others were kept outside the process and therefore can never fully understand the outcome.
Success in implementing policies and services relies on the public trusting agencies and governments to be the experts - trusting them to do their jobs as the 'experts' who 'know better' than the community. However, the 'secret agency business' of policy and service design can feed on itself. Governments may attempt to keep more and more from their citizens because, from their perspective, the more they reveal, the less the public trusts agencies.
This is a tenuous approach to trust in modern society, where scrutiny is intense and every individual has a public voice.
If the agency policy experts, in their rush to meet a government timetable, overlooked one factor, or misunderstood community needs, a policy can quickly unravel and, like an emperor with no clothes, the public can rapidly lose faith and trust in government to deliver appropriate solutions.
In this situation it is rare for an expertise-based agency or government to publicly admit that it misunderstood the issue, convene the people affected and the expertise in the community, and discuss it until there is a workable solution. It does happen, but it is the exception, not the rule.
Instead, the first reaction to external scrutiny is often to protect their position and justify why the public should trust them. They may draw the wagons round, seeking to bluff their way through ('you don't understand why we made these decisions, but trust us'), to 'hide' the failure ('it was only a draft, here's the real policy'), or to tell the public that the agency will fix the issue ('trust us this second time').
In some extreme examples, governments may even cross lines to protect their perceived trust and reputation - concealing information or discrediting external expertise in order to justify the expertise inside their walls and try to regain public trust.
There are other risks to the government as expert model as well. Policy experts who have worked in the field a long time may not accept the expertise of 'outsiders' who appear to be interloping on their territory ('who are they to tell us what we should do?'). Agency experts may become out of date through not having worked in a field practically for a long time, agencies may hire the wrong experts, or may not hire experts at all and attempt to create them.
In all these cases, agencies have a strong structural need to preserve public trust and their integrity - which may often exhibit itself as 'protecting' their internal experts from external scrutiny, or otherwise attempting to prevent any loss of reputation through being exposed as providing less than good advice.
These risks mean that the government as expert model is under increasing pressure.
A more educated and informed citizenry, with high levels of access to publication tools means that every public agency mistake and misstep can be identified, scrutinised, analysed and shared widely.
Each policy failure, each example of a government agency protecting itself at the expense of the community, and each allegation of corruption, fraud or negligent practice - whether at local, state or national level - contributes to a reduction in trust and respect that affects most, if not all, of government.
Of course this government as expert model hasn't completely failed. There are areas that the community isn't interested in, where the government is indeed the expert or where it would be dangerous to release information into the public eye - where we do have to trust the governments we elect to act in our best interests without the ability to scrutinise their decisions. These areas are shrinking, but some are likely to always remain.
However the model started fraying around the edges some time ago and we see it represented today in the increasing lack of respect or trust in government.
Citizens don't compartmentalise these failures in ways that governments hope they will, often seeing them as systemic failures rather than individual issues.
As a result citizens trust governments less, have less faith that governments can develop appropriate policies and services and turn even more scrutiny onto agencies - even when unwarranted.
The failures of the government as expert model are only likely to grow and extend, with greater scrutiny and greater pressure on agencies to perform. This, unfortunately, is likely to lead to more errors, not fewer, as governments seek to make faster decisions with fewer internal resources, fewer experts and less time.
So how does this failing model get resolved? What alternative approaches can governments adopt to remain effective, relevant and functional in a society with high literacy, education, access to information and an almost universal capability to publicly analyse government performance?
In my view the main solution is for governments, except on specific secure topics, to turn themselves inside out - changing their approach from being policy and service delivery 'centres of expertise' to being policy and service delivery 'convenors and implementors'.
Rather than seeking to hire experts and design policy and services internally, agencies need to hire people who can convene expertise within communities and among stakeholders, marshalling it to design policy and codesign services, and to focus on supporting this process with their own expertise in structuring these approaches to fit the realities of government and in implementing the necessary solutions.
This approach involves an entirely transparent design process (for both policies and services), making it possible to inform and engage the community at every step.
Within this approach, government agencies gain the trust of the community through managing the process and outcomes, not through being the expert holding the wisdom. The community doesn't need to trust a black box process, it comes on the journey alongside the agency, developing a deeper and richer trust and support for the outcomes. As a result, the energy of the community is aligned to support the agency in making the policy succeed, rather than being disengaged, or actively opposing the policy and leading to failure.
Government becomes an active participant and enabler of the community, reducing the cost of communicating information and influencing citizen ideas, as citizens are influenced through their participation in, or observation of, the process.
This approach does require substantial education - both within government and within the community - to ensure that all participants are aware and actively engaged in their new roles. It can't, and shouldn't, be introduced into all agencies overnight and there are some policy requirements where security should take precedence and processes cannot be as fully revealed.
However the approach could be introduced relatively easily (and some governments around the world have done this already). For instance, a government could select three to five issues and put together taskforces responsible for taking a collaborative approach to deliver specific policy or service solutions.
These taskforces provide a 'public secretariat' for managing community and stakeholder involvement, acting as facilitators, not operators, to marshal community engagement in the design process.
This could even be done at arms length from a government, with taskforces drawing on expertise from outside public sector culture to avoid accidental imposition of elements of a central command and control model, provided they include core skills from the public sector necessary to ensure the policies or services developed can be effectively and practically implemented by government.
This process would test the public policy design model, capturing learnings and experiences - not from a single process run once, but from a parallel process, with multiple taskforces running at the same time to test the real-world impact in a reduced timeframe. Learnings from the taskforces would be aggregated and used to build a more complete understanding of how to adopt the approach more widely within agencies.
Provided governments committed to the outcomes of these processes, and selected issues of interest to the community, this approach would provide solid evidence for the effectiveness (or otherwise) of a facilitation and implementation approach to governance, rather than an internal expertise one.
Alongside this moderated approach to public policy development, a complementary citizen-led policy engagement approach could be introduced using an ePetition or ePolicy methodology.
Mirroring the approach taken in other jurisdictions, where the community is given a method to propose and develop citizen policy and legislation and have it debated in parliament, this would provide another route for citizens to engage with and understand the complexity of policy development, and build an alternative path for high-attention issues on which governments are not prepared to take immediate action.
This approach has been adopted in several forms overseas, such as the ePetition approaches in the US and UK, where any petition with sufficient votes receives the attention of the government and, in the case of the UK, is debated in Parliament.
A more rigorous model is used in Latvia, where citizens are supported to design actual legislation online and, if they can marshal sufficient support, their bills go to parliament, they get to speak on them and the parliament votes them up or down.
This last model is beginning to be introduced in Scandinavian countries and Switzerland has long had a similar process, pre-dating the internet, which allows greater participation by citizens in decisions.
So, in summary, the 'internal expert' model, designed for use in nations with limited literacy and education and poor communication, is failing to serve the needs of highly educated and connected nations such as Australia, leading to increasing citizen concern and plummeting trust in governments.
To address this, governments need to adapt their approaches to suit the new realities - environments where there are more experts outside of government than inside and where citizens can universally scrutinise governments and publish facts, analysis and opinions which serve to increasingly force governments into difficult and untenable positions.
The key change governments need to make is to turn themselves 'inside out' - exposing their policy and service delivery design and development processes to public scrutiny and engagement, and becoming facilitators and implementors of public policy, rather than the expert creators of it.
While some areas of governance need to remain 'black boxes', many can be opened up to public participation, building trust with communities by bringing citizens on the journey with agencies to reach the most practical and appropriate solutions.
This will rebuild trust in governance and allow governments to improve their productivity and performance by tapping a greater range of expertise and building an easier path to implementation, where citizens support agencies, rather than oppose them.
Friday, November 16, 2012
Are organisations failing in their use of social media and apps as customer service channels?
Guy Cranswick of IBRS has brought my attention to a media release about a new report from Fifth Quadrant, a leading Australian customer experience strategy and research consultancy, on social media and smartphone app customer service enquiries.
The report looked at how many Australian consumers had used these channels for customer service enquiries and why they'd used, or not used, them.
The figures make for quite grim reading...
The study (of 520 participants) indicated that only 16% of Australian consumers have ever used social media for a customer service enquiry and less than one in 10 Australians had used this channel for customer service in the last three months. Gen Y ran 'hotter', with 29% having ever used social media for a customer service enquiry.
Why didn't people use social media for these enquiries? The survey broke down the reasons as follows (multiple reasons allowed):
- 32% said it isn't personal,
- 30% said they did not know that they could,
- 30% said they were concerned with security issues,
- 22% said they thought it would take longer than a phone call, and
- 20% said they did not think it would be a good experience.
The reasons for not using apps were similar to social media:
- 41% said they did not know they could,
- 21% said they thought it would take longer than a phone call,
- 16% said they thought it would make the process slower to talk to a customer service representative,
- 15% said they did not think it would be a good experience, and
- 13% said that they did not think it would be easy to use.
However, other research suggests that this may not exactly be the case.
Fifth Quadrant’s 2012 Customer Service Industry Market Report (with 120 business participants) found that 69% of Australian based organisations had implemented social media and 23% had implemented smartphone apps for customer service. This is a small sample, but still statistically significant.
In other words, while 69% of organisations will accept customer service enquiries via social media, only 16% of Australians have used this approach and while 23% accept these enquiries via smartphone apps, only 15% of Australians have used these channels.
So if organisations are offering these channels, why do so few Australians use them?
More of Fifth Quadrant's research offers a clue...
How many times should a customer have to contact an organisation to resolve a customer service issue?
Fifth Quadrant reports that the level of 'first contact resolution' (where a customer only needs to contact an organisation once to have their query resolved) is much lower for social media and smartphone app contacts than for phone contacts:
- Phone: 78% of queries handled in one contact
- Social media: 59%
- Smartphone app: 51%
This significantly increases the cost of the interaction to the organisation and the customer and reduces customer satisfaction.
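To see why this matters for cost, here's a rough worked example. The flat per-contact cost and the assumption that every repeat contact resolves at the same rate are illustrative simplifications, not figures from the report (in practice, per-contact costs also differ by channel).

```python
# Rough illustration of how first contact resolution (FCR) drives cost.
# Assumes each contact resolves the issue with the same probability
# (a geometric model) and a flat, hypothetical cost per contact.
COST_PER_CONTACT = 10.0  # hypothetical dollars per interaction

fcr_rates = {"Phone": 0.78, "Social media": 0.59, "Smartphone app": 0.51}

for channel, fcr in fcr_rates.items():
    expected_contacts = 1 / fcr  # mean of a geometric distribution
    cost = expected_contacts * COST_PER_CONTACT
    print(f"{channel:>14}: ~{expected_contacts:.1f} contacts per issue, "
          f"~${cost:.2f} per resolution")
```

On those assumptions an app-originated query costs roughly 50% more to resolve than a phone query - which is the gap behind the cost and satisfaction point above.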
So what's the issue? Poor organisational implementation of social media and app channels.
Fifth Quadrant's Director, Dr Wallace said,
“There is no question that social media and mobile channels will be important in the next few years as the percentage of consumers who use these channels for customer service doubles year on year. Rather, it is a question of how effectively organisations address the supporting business processes and skill levels of social media customer service representatives.
The challenge for Australian business is that they typically do not consider Multi-channel Customer Experience as a strategy, hence these new channels lack integration, they do not have accurate revenue and cost models and there is poor data analytics. This has resulted in a sub-optimal channel deployment and as the research shows, ultimately, a sub-optimal customer experience.”
So let's go back to the reasons again...
- There was an awareness issue (social media: 30%; apps: 41%). Organisations need to integrate information about the ability to engage them through social media and apps into their promotion, packaging and engagement.
- There was a speed/perceived speed issue (social media: 22% (take longer); apps: 21% (take longer) and 16% (slower)). Organisations need to integrate these channels with their other customer contact points, building the protocols and processes to make it faster and easier to engage online than by phone.
- There was an experience/usability issue (social media: 30% (not personal), 20% (experience); apps: 15% (experience) and 13% (easy to use)). Organisations need to codesign their channels with customers, putting extensive work into the upfront experiential design to make them easy-to-use services with a great user experience. The investment in design is more than offset by the long-term cost savings of moving people from high-cost phone to low-cost online service channels.
- There was a security issue (social media: 30%). Organisations need to take the same actions as ecommerce companies did to reduce this to a minimum, providing context, clear security measures, and escalation and rectification mechanisms that assure users they won't be disadvantaged by any security problems.
Overall, organisations need to run these channels as part of their customer service framework, not remotely via communication, marketing or IT teams.
Want to learn more about the research and report? See Dr Wallace's blog, Your call.
And here are some of the key findings from Fifth Quadrant’s 2012 Customer Service Industry Market Report (n=120):
Social Media:
- In Australia, the predominant share of the 22 million daily customer interactions handled by contact centres is still dealt with by live agents (52%). Despite industry increasingly implementing social media as a customer service channel, the share of contact handling by social media channels is just 0.2%.
- Amongst organisations that offer social media as a channel for customer service, 67% report that the marketing department is responsible for managing it.
- 63% of organisations in the study have only had social media as a channel for customer service implemented for 1 to 2 years.
- Amongst organisations that currently have social media as a customer service channel only 29% reported their contact centre has the ability to escalate a social media query through to a customer support application that links through to an agent.
- Past three months usage of social media as a customer service channel has doubled in the past 12 months (4% 2011; 8% 2012).
- The proportion of consumers who believe they will be using social media more often in the future has also nearly doubled from 4% in 2011 to 7% in 2012.
- When asked whether they had received a response from an organisation via a Social Media network to comments they had made through Social Media, only 7% of consumers reported that they had. About 5% of consumers claim to have received essential information posted via a Social Media network. 14% of consumers report they have received information from an organisation via social media about new products and services.
Smartphone Apps:
- Amongst organisations that offer smartphone apps as a channel for customer service, 50% report that the marketing department is responsible for managing it, with a further 33% reporting that IT is responsible.
- 50% have only had smartphone apps as a channel for customer service implemented for one to two years, with 33% reporting their smartphone app has been available for less than 12 months.
- Amongst organisations that do not currently offer a smartphone app as a channel for customer service, 25% report they have no plans to do so.
- Beyond the existing 8% of consumers who have used a smartphone app for customer service, a further 33% report that they are likely to use a smartphone app for a customer service enquiry in the next 12 months.
- Amongst Gen Y consumers, 29% report that they will be using smartphone apps for customer service issues more often in the next 1-2 years. This is significantly higher compared to Baby Boomer (8%) and Silent (4%) generations.
Wednesday, November 14, 2012
How will augmented reality shape society's future and the expectations of government?
Augmented Reality, or AR, involves the projection of information onto our physical landscape through some form of assistive device, such as the heads-up displays (HUDs) used in many aircraft, a mobile device whose camera view of a location is overlaid with additional information, or the upcoming Google Glass, which promises a wearable AR experience.
There are many, many potential uses for this approach.
Doctors could monitor a patient's vitals and view an x-ray or CAT image over the area they are operating on; emergency workers could see a map of a building's interior, which tells them where to go to get around obstacles or even where people are trapped; business people and politicians could access public details of individuals they meet so they're never short of a name or small talk; street workers could view all the conduits under a road, or builders the wires and pipes in walls and floors, to guide their activities.
Even tourists could use AR productively, viewing historical information on landmarks and tour routes as they travel around a city or country.
The potential for global information at one's eyeballs may even be a more profound leap forward than the internet's now established concept of global information at one's fingertips.
This isn't even new technology. Our grandparents were the first to have access to augmented reality devices, before computers, microwave ovens or mobile phones, albeit in a limited way.
The first HUD was invented in 1937, when the German air force developed the reflector sight, an approach that used mirrors to reflect a gunsight, adjusted for airspeed and turn rate, onto the glass in front of a fighter pilot's eyes. This improved pilots' accuracy and effectiveness in air combat and began a race by other nations to develop similar approaches.
However, the first electronic HUD wasn't created until the mid 1950s, when the British developed the Blackburn Buccaneer, a low-flying bomber with the world's first inbuilt HUD. While the prototype flew in 1958, the production aircraft didn't enter service until 1968, then served until 1994, seeing use as late as the Gulf War.
Although the HUD was originally intended only for targeting, it was noticed that it improved pilots' general abilities, and it was expanded to provide a range of additional information to help pilots.
The modern HUD, featuring a standard interface to aid pilots switching between planes, was developed by 1975 by a French test pilot. Around the same time HUDs were first expanded into use on civilian planes, and in 1988 the Oldsmobile Cutlass Supreme became the first production car to feature a HUD, followed around ten years later by the first motorcycle helmet offering a heads-up display.
Experimental HUDs have been developed for ski goggles, scuba divers, personal battle armour and firefighters' goggles, as well as many other applications, with some of these very close to production ready.
Augmented reality is being integrated into computer, console and mobile games, many of which feature some form of virtual HUD. Our televisions display information on the screen about programs and channels, and our mobile devices, with the right apps, can use their cameras to place additional information over real-time video.
With the range of uses for augmented reality supported by these devices, and the widespread exposure society has now had to the concept, the next step will be very interesting.
Once appropriate mobile augmented reality devices come onto the market, such as the product Google is working on, there will be a market ready to adopt them.
How will they be used in society? What policy challenges will they create?
A group of Israeli film makers has produced a seven-minute short film, Sight, which showcases some of the potential uses of augmented reality and some of the challenges and risks that societies may have to face.
Tags:
gov2au,
movie,
virtual worlds