Thursday, March 30, 2023

Is Artificial Intelligence safe? It depends who you ask...

Over the last few days there have been significant calls for a slowdown or pause in artificial intelligence research and development, or at least for a pause on public implementations of AI.

There is significant rationale behind this. With the arrival of GPT-4 (which I've been experimenting with extensively), we have seen another huge leap in AI capability.

We've also seen many large companies announce they are working to integrate this level of AI into their services in ways that allow the public to use (or misuse) this capability.

Some of this is extremely valuable - such as integrating a writing, formula and presentation assistant into Microsoft's suite of tools. Some appears risky - such as Snapchat's release of an AI 'friend' into its paid service in February, which, as the video The AI Dilemma (linked) demonstrates, can incidentally be used to help sexual predators groom children (watch from 47 min to 49 min for this specific segment).

We've also seen over a thousand AI luminaries and researchers call for a pause on AIs more sophisticated than GPT-4 (letter here, article about it here). This has received particular attention because Elon Musk signed it, but it is actually notable for the calibre and breadth of the other industry experts and AI company CEOs who signed it.

Whether or not government is extensively using AI, AI is now having significant impacts on society. These will only increase - and extremely rapidly.

Examples like using the Snapchat AI for grooming are the tip of the iceberg. It is now possible - with as little as three seconds of audio (via a Microsoft research system) - to create a filter mimicking any voice, making voice recognition security systems useless.

In fact, there have already been several cases where criminals call individuals to capture their voicemail message (recorded in their voice) or their initial greeting, then use that audio to pass voice authentication on the individual's accounts and steal funds.

This specific example isn't new - the first high-profile case occurred in 2019.

However, the threshold for accessing and using this type of technology has come down dramatically, putting it within reach of almost anyone.

And this is only one scenario. Deepfakes can also mimic appearances, including in video, and AIs can be used to simulate official documents or conversations with organisations to phish people.

That's alongside CV fakery, using AI to cheat on tests (in schools, universities and the workplace), and secretly outsourcing your job to an AI, which may expose commercially sensitive information to external entities.

And we haven't even touched on the risks of an AI that, in pursuit of its goal or reward, resorts to means such as replicating itself, breaking laws or coercing humans to support it.

For governments this is an accelerating potential disaster, and it needs the full attention of key teams to ensure they design systems that cannot be exploited - for example, by a person asking an AI to read a system's code and identify potential vulnerabilities.

Equally, the need to inform and protect citizens is becoming critical - as the Snapchat example demonstrates.

With all this, I remain an AI optimist. AI offers enormous benefits for humanity when used effectively. However, with the proliferation of AI - to the extent that it is now possible to run a GPT-3-level model on a laptop (using the Alpaca research model) - governments need to be proactive in their approach to artificial intelligence.
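For the technically curious, here is a minimal sketch of what "a GPT-3-level AI on a laptop" can look like in practice. It assumes the open-source llama-cpp-python bindings and a separately downloaded, 4-bit quantised Alpaca model file - the file name below is a placeholder, not something distributed here.

    # A minimal, illustrative sketch - not a supported recipe.
    # Assumes: pip install llama-cpp-python, plus quantised Alpaca
    # weights obtained separately (model_path below is a placeholder).
    from llama_cpp import Llama

    # Load a 4-bit quantised 7B model; small enough for laptop RAM.
    llm = Llama(model_path="./ggml-alpaca-7b-q4.bin")

    # Alpaca-style instruction prompt, capped at 128 generated tokens.
    result = llm(
        "### Instruction:\nExplain what a deepfake is.\n\n### Response:\n",
        max_tokens=128,
    )

    print(result["choices"][0]["text"])

That a single package install and a downloaded weights file is roughly all it takes illustrates just how low the barrier to entry has become.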

