Friday, March 31, 2023

An example of GPT4 at work

Given that many folks may be observing but not actually using AIs such as GPT3 and GPT4 as yet, I wanted to give a quick example of how these AIs perform in practice.

Here's an example where I asked an AI to write a treatise on the Westworld notion that a human consciousness only requires 10,247 lines of code.

My prompt:

Hi ChatGPT. In the TV series Westworld, it is suggested that a human consciousness can be defined in 10,247 lines of code. Please write a treatise on why this would be the case and a process that a researcher or psychologist might use to understand and produce this code.

GPT4 response:

Title: The Quest for Human Consciousness: Decoding the 10,247 Lines of Code

Abstract:

The television series Westworld presents the idea that human consciousness can be distilled down to 10,247 lines of code. While this is a fictional concept, it raises intriguing questions about the nature of human consciousness and how it might be replicated or understood through computational means. In this treatise, we explore the theoretical underpinnings of this concept, discuss the possible reasons for this specific number of code lines, and outline a potential process for a researcher or psychologist to decipher and generate this code.

Introduction

The concept of condensing human consciousness into 10,247 lines of code is a fascinating idea that requires an understanding of the human mind's complexity and the computational approaches used to mimic it. This treatise aims to explore this notion, touching upon the interdisciplinary fields of cognitive science, psychology, neuroscience, and artificial intelligence.

Theoretical Underpinnings

The idea that human consciousness can be expressed in a specific number of code lines is built on several assumptions:

a. Reductionism: This perspective argues that complex phenomena, such as consciousness, can be reduced to simpler, more fundamental components. In this case, it implies that human consciousness can be explained and replicated by breaking it down into discrete elements represented by lines of code.

b. Computational Theory of Mind: This theory posits that the human mind operates like an information processor, with mental processes akin to computational algorithms. If human consciousness adheres to this model, it might be possible to represent it as a set of algorithms or code lines.

c. Artificial Intelligence: The rapid advancements in AI research and neural networks support the idea that replicating human-like thinking, learning, and problem-solving abilities is possible through computational means.

Why 10,247 Lines of Code?

The specific number, 10,247, is likely an arbitrary choice made by the creators of Westworld to illustrate the complexity and uniqueness of human consciousness. Nonetheless, the number serves as a captivating starting point for discussions about the feasibility of representing consciousness in code.

A Process for Decoding and Producing the Consciousness Code

To understand and generate the 10,247 lines of code representing human consciousness, a researcher or psychologist might follow this process:

a. Mapping Mental Processes: Start by identifying the key cognitive processes and mental states that contribute to human consciousness. This could involve categorizing various cognitive functions, emotions, and subjective experiences.

b. Investigating Neural Correlates: Examine the brain regions and neural networks responsible for the identified cognitive processes, using techniques such as fMRI, EEG, and MEG.

c. Developing Computational Models: Develop algorithms and computational models that replicate the observed cognitive processes and neural activities. This could involve utilizing artificial neural networks, machine learning, and other AI techniques.

d. Code Generation: Translate the developed computational models into code, keeping track of the number of lines used. Iterate and optimize the code to reduce its length without sacrificing its accuracy in replicating human consciousness.

e. Verification and Validation: Test the generated code by comparing its outputs with empirical data collected from human subjects, adjusting and refining the code as needed.

Conclusion

While the notion of human consciousness being distilled down to 10,247 lines of code is a fictional idea, it sparks fascinating discussions about the nature of consciousness and the potential for replicating it through computational means. By exploring interdisciplinary research in cognitive science, psychology, neuroscience, and artificial intelligence, we can attempt to unravel the mysteries of the human mind and push the boundaries of what is possible in understanding and emulating consciousness.


My follow-on prompt:

So as a neuroscientist, mathematician and psychologist, using the treatise above, write the actual code for a human consciousness.

GPT4 response:

As an AI language model, I cannot provide you with the actual code for human consciousness, since it is a highly complex and currently unsolved problem in neuroscience, psychology, and artificial intelligence. The idea of condensing human consciousness into a specific number of lines of code, as presented in the TV series Westworld, is purely fictional.

Moreover, creating an actual code for human consciousness would require a deep understanding of the human brain, which we have not yet achieved. It would also necessitate advancements in artificial intelligence, neural networks, and cognitive modeling that are currently beyond our reach.

However, I can outline some of the components that would potentially be part of the code, based on the current understanding of human cognition and artificial intelligence research:

Sensory input processing: Code to process and interpret inputs from various sensory modalities such as vision, audition, touch, taste, and smell.

Memory systems: Code to implement short-term and long-term memory systems, enabling the storage and retrieval of information.

Attention mechanisms: Code to allocate cognitive resources to relevant stimuli and filter out irrelevant information.

Emotion processing: Code to simulate emotional states and their influence on cognition, decision-making, and behavior.

Language processing: Code to comprehend and generate natural language, enabling communication.

Decision-making and problem-solving: Code to evaluate options and make choices based on goals, constraints, and available information.

Learning mechanisms: Code to adapt and improve cognitive processes through experience, including reinforcement learning, supervised learning, and unsupervised learning.

Self-awareness and introspection: Code to model self-awareness and the ability to reflect on one's own mental states.

Social cognition: Code to understand and interpret the mental states, intentions, and emotions of others, enabling social interaction.

Conscious experience: Code to simulate subjective experiences and the phenomenological aspects of consciousness.

It is important to note that these components are only a rough outline of what might be involved in replicating human consciousness, and the actual implementation would be far more complex and intricate. Developing a comprehensive code for human consciousness remains a formidable challenge that will likely require many more years of research and collaboration across multiple scientific disciplines.
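To make the outline above a little more concrete - purely as an illustration of structure, not anything approaching actual consciousness code - here's roughly what a skeleton of those components might look like in Python. Every class, method and parameter name below is a hypothetical placeholder of my own, not something GPT4 produced:

```python
# Illustrative skeleton only: a structural sketch of the components GPT4 listed above.
# Nothing here implements consciousness; every name is a hypothetical placeholder.

class SensoryInput:
    def process(self, stimulus):
        """Interpret raw input from vision, audition, touch, taste or smell."""
        raise NotImplementedError

class Memory:
    def __init__(self):
        self.short_term = []   # limited-capacity working memory
        self.long_term = {}    # durable storage, keyed by item

    def store(self, item, durable=False):
        if durable:
            self.long_term[item] = True
        else:
            self.short_term.append(item)

class Attention:
    def filter(self, stimuli, goal):
        """Allocate cognitive resources to goal-relevant stimuli only."""
        return [s for s in stimuli if goal in str(s)]

class ConsciousAgent:
    """Ties the components together in a simple perceive-attend-remember loop."""

    def __init__(self):
        self.senses = SensoryInput()
        self.memory = Memory()
        self.attention = Attention()

    def step(self, stimuli, goal):
        relevant = self.attention.filter(stimuli, goal)
        for item in relevant:
            self.memory.store(item)
        return relevant
```

Even a toy loop like this makes one thing obvious: the hard part isn't the scaffolding, it's what would go inside each method - which is exactly the unsolved problem GPT4 points to.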


Thursday, March 30, 2023

Is Artificial Intelligence safe? It depends who you ask...

Over the last few days there have been significant calls for a slowdown or pause in artificial intelligence research and development, or at least for a pause on public implementations of AI.

There's some significant rationale behind this - with the arrival of GPT4 (which I've been extensively experimenting with) we have seen another huge leap in AI capability.

We've also seen many large companies announce they are working to integrate this level of AI into their services in ways that allow the public to use (or misuse) this capability.

Some of this is extremely valuable - such as integrating a writing, formula and presentation assistant into Microsoft's suite of tools. Some appears risky - such as Snapchat's release of an AI 'friend' into its paid service in February which, as the video The AI Dilemma (linked) demonstrates, can incidentally be used to help sexual predators groom children (watch from 47 min - 49 min for this specific segment).

We've also seen over a thousand AI luminaries and researchers call for a pause on AIs more sophisticated than GPT4 (letter here, article about it here) - a letter which has received particular attention because Elon Musk signed it, but which is actually notable for the calibre and breadth of the other industry experts and AI company CEOs who signed it.

Whether or not government is extensively using AI, AI is now having significant impacts on society. These will only increase - and extremely rapidly.

Examples like using the Snapchat AI for grooming are the tip of the iceberg. It is now possible - with just three seconds of audio, using a Microsoft system - to create a filter mimicking any voice, rendering voice recognition security systems effectively useless.

In fact there have already been several cases where criminals call individuals to capture their voicemail message or initial greeting (in their own voice), then use that recording to pass voice authentication on the individual's accounts and steal funds.

This specific example isn't new - the first high-profile case occurred in 2019.

However the threshold for accessing and using this type of technology has dramatically come down, making it accessible to almost anyone.

And this is only one scenario. Deep fakes can also mimic appearances, including in video, and AIs can also be used to simulate official documents or conversations with organisations to phish people.

That's alongside CV fakery, using AI to cheat on tests (in schools, universities and the workplace) and the practice of secretly outsourcing your job to AI, which may expose commercially sensitive information to external entities.

And we haven't even gotten to the risks of AI that, in pursuit of its goal or reward, uses means such as replicating itself, breaking laws or coercing humans to support it.

For governments this is an accelerating potential disaster, and it needs the full attention of key teams to ensure they are designing systems that cannot be exploited - for example, by someone asking an AI to read a system's code and identify potential vulnerabilities.

Equally the need to inform and protect citizens is becoming critical - as the Snapchat example demonstrates.

With all this, I remain an AI optimist. AI offers enormous benefits for humanity when used effectively. However, with the proliferation of AI - to the extent that it is now possible to run a GPT3-level AI on a laptop (using the Alpaca research model) - governments need to be proactive in their approach to artificial intelligence.
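To give a sense of how low that threshold now is, here is a minimal sketch of running a small local language model on an ordinary laptop. It assumes the open-source llama-cpp-python bindings and an already-downloaded Alpaca/LLaMA-family model file - the file path below is a placeholder, not a real model:

```python
# Minimal sketch: running a small local language model on a laptop.
# Assumes the open-source llama-cpp-python bindings (pip install llama-cpp-python)
# and a quantised model file already downloaded to disk.
# The model path below is a placeholder, not a real file.
from llama_cpp import Llama

llm = Llama(model_path="./models/alpaca-7b-q4.bin")

response = llm(
    "List three risks of making AI chatbots freely available to the public.",
    max_tokens=256,
)
print(response["choices"][0]["text"])
```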



Monday, March 06, 2023

Artificial Intelligence isn't the silver bullet for bias. We have to keep working on ourselves.

There's been a lot of attention paid to AI ethics over the last few years due to concerns that use of artificial intelligence may further entrench and amplify the impact of subconscious and conscious biases.

This is very warranted. Much of the data humans have collected over the last few hundred years is heavily impacted by bias. 

For example, air-conditioning temperatures are largely set based on research conducted in the 1950s-70s in the US, on offices predominantly inhabited by men and folks wearing heavier materials than worn today. It's common for many folks today to feel cold in offices where air-conditioning is still set for men wearing three-piece suits.

Similarly, many datasets used to teach machine learning AI suffer from biases - whether based on gender, race, age or even cultural norms at the time of collection. We only have the data we have from the last century and it is virtually impossible for most of it to be 'retrofitted' to remove bias.

This affects everything from medical to management research, and when used to train AI, the biases in the data can easily affect the AI's capabilities. For example, consider the incredibly awkward period just a few years ago when Google's image AI incorrectly labelled photos of black people as 'gorillas'.

How did Google solve this? By preventing Google Photos from labelling any image as a gorilla, chimpanzee, or monkey - even pictures of the primates themselves - an expedient but poor solution, as it didn't fix the bias.

So clearly there's a need for us to carefully screen the data we use to train AI, to minimise the introduction or exacerbation of bias. And there's also a need to add 'protective measures' on AI outputs to catch instances of bias - both to exclude them from outputs and to use them to identify remaining bias to address.
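As a trivial illustration of what 'screening the data' can mean in practice, here is a minimal sketch that audits how well different groups are represented in a training dataset. The attribute names and threshold are hypothetical examples - real bias auditing is far more involved than a head count:

```python
# Illustrative sketch only: flag attribute values that are under-represented
# in a training dataset. Column names and the threshold are hypothetical examples.
from collections import Counter

def audit_representation(records, attribute, min_share=0.3):
    """Return attribute values whose share of the dataset falls below min_share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: round(count / total, 3)
            for value, count in counts.items()
            if count / total < min_share}

# Tiny made-up example dataset
training_records = [
    {"gender": "male", "age_band": "30-39"},
    {"gender": "male", "age_band": "40-49"},
    {"gender": "male", "age_band": "30-39"},
    {"gender": "female", "age_band": "20-29"},
]

print(audit_representation(training_records, "gender"))
# {'female': 0.25} - flagged because it falls below the 30% threshold
```

The same idea applies on the output side: sampling an AI's outputs and checking them against the groups and categories you care about, rather than assuming the model is neutral.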

However, none of this work will be effective if we don't continue to work on ourselves.

The root of all AI bias is human bias. 

Even when we catch the obvious data biases and take care when training an AI to minimise potential biases, it's likely to be extremely difficult, if not impossible, to eliminate bias altogether. In fact, some systemic unconscious biases in society may not even be visible until we see an AI emulating and amplifying them.

As such no organisation should ever rely on AI to reduce or eliminate the bias exhibited by its human staff, contractors and partners. We need to continue to work on ourselves to eliminate the biases we introduce into data (via biases in the queries, process and participants) and that we exhibit in our own language, behaviours and intent.

Otherwise, even if we do miraculously train AIs to be entirely bias free, bias will get reintroduced through how humans selectively employ and apply the outputs and decisions of these AIs - sometimes in the belief that they, as humans, are acting without bias.

So if your organisation is considering introducing AI to reduce bias in a given process or decision, make sure you continue working on all the humans who remain involved at any step. Because AI will never be a silver bullet for ending bias while we, as humans, continue to harbour biases ourselves.


Wednesday, March 01, 2023

Does Australia have the national compute capability for widespread local AI use?

There's been a lot of attention on the potential benefits and risks of artificial intelligence for Australia - with the Department of Industry developing the Artificial Intelligence (AI) Ethics Framework and the DTA working on formal AI guidelines.

However, comparatively less attention is placed on building our domestic AI compute capability - the local hardware required to operate AIs at scale.

The OECD also sees this as a significant issue - and has released the report 'A blueprint for building national compute capacity for artificial intelligence' specifically to help countries that lack sufficient national compute capability.

As an AI startup in Australia, we've been heavily reliant on leveraging commercial AI capabilities out of the US and Europe. This is because there are no local providers of generative AI at commercial prices. We did explore building our own local capability and found that the cost of physical hardware and infrastructure was approximately 10x the cost of the same configurations overseas.

There have been global shortages of the hardware required for large AI models for a number of years. Unfortunately, Australia tends to be at the far end of these supply chains, with even large commercial cloud vendors unable to provision the necessary equipment locally.

As such, while we've trained several large AI models ourselves, we've not been able to locate the hardware or make a commercial case for paying the costs of hosting them in Australia.

For some AI uses an offshore AI is perfectly acceptable, whether provided through a finetuned commercial service or custom-trained using an open-source model. However, there are also many other use cases, particularly in government with jurisdictional security requirements, where a locally trained and hosted AI is mandated.

A smaller AI model, such as Stable Diffusion, can run on a laptop; however, larger AI models require significant dedicated resources, even in a cloud environment. Presently, few organisations can bear the cost of sourcing the hardware and expertise to run these within Australian jurisdictions.

And that's without considering the challenge of locating sufficient trained human resources to design, implement and manage such a service.

This is an engineering and production challenge, and will likely be resolved over time. However, with the rapid pace of AI evolution, it is a significant structural disadvantage for Australian organisations that require locally hosted AI solutions.

If Australia's key services have to rely on AI technologies that are several generations behind world standards, this will materially impact our global competitiveness and capability to respond.

As such, alongside worrying about the ethics and approaches for using AI, Australian governments should also reflect on what is needed to ensure that Australia has an evolving 'right size' national compute capability to support our AI requirements into the future.

Because this need is only going to grow.


Friday, February 24, 2023

To observe future use of AI, watch Ukraine

I'm not given to regularly commenting on global events, such as major wars, through this blog. However, given the active use of advanced technology by both Ukraine and Russia in the current war, I am making a brief exception to comment on the use of these technologies, including AI.

The Russian invasion of Ukraine has seen some of the most advanced uses of technology on the battlefield in history. Some of this involves AI, some other technologies, but all of it demonstrates the practical potential of these technologies and should be considered within any nation's future defense planning.

Now drones have been used in previous conflicts in Afghanistan and elsewhere, with militia forces modifying off-the-shelf products into aerial scouts and IEDs. Meanwhile, US forces have used custom-built, human-controlled Predators for over ten years for the pinpoint targeting of enemy commanders and units.

However, the war in Ukraine represents the largest deployment and the most versatile use of drones to-date by regular armies opposing each other in the field.

For example, both Ukraine and Russia have successfully used autonomous watercraft in assaults. These unmanned surface and underwater drones have been used to inflict damage on opposing manned vessels and to lay and clear mines at a fraction of the cost of building, crewing and maintaining manned vessels.

The Ukrainian attack on Russia's Black Sea Fleet in port exhibits signs of being one of the first uses of a drone swarm in combat, with a group of kamikaze surface and underwater drones, potentially with aerial drone support, used in concert to damage several Russian ships.

While perhaps not as effective as was hoped, this is a prototype for future naval drone use and heralds that we are entering an era where swarms of relatively cheap and disposable drones - partially or completely autonomous to prevent signal blocking or hijacking - are used alongside or instead of manned naval vessels to inflict damage and deter an attacker, acting as a force magnifier.

The use of autonomous mine layers offers the potential for 'lay and forget' minefields astride enemy naval routes and ports, to again limit and deter enemy naval movements. Again the lower cost and disposability of these drones, compared to training explosive handling human divers and building, maintaining, defending and crewing manned naval minelaying and removal vessels, makes them a desirable alternative.

We've also seen extensive use of aerial drones by both sides in the conflict to spot and target enemy combatants, allowing more targeted use of artillery and manpower and helping reduce casualties by lifting aspects of the fog of war. Knowing the numbers and strength of your opponent on the battlefield greatly enhances a unit's capability to successfully defend or assault a position.

While many of these drones are human controlled, there's been use of AI for situations where direct control is blocked by enemy jamming or hacking attempts. AI has also been used for 'patrol routes' and to 'return' home once a drone has completed a mission - complete with ensuring that drones fly an elusive course to conceal the location of their human controllers.

The Ukrainian war has even seen the first public video recorded aerial drone on drone conflict, with a Ukrainian drone ramming a Russian drone to knock it out of the sky and remove the enemy's best source of intelligence.

These examples of drone use aren't totally new to warfare. 

Hundreds of years ago unmanned fire ships were used in naval combat - loaded with explosives and left to drift into enemy warships and explode. 

Similarly, early aerial combat in World War One saw pilots take pistols and bricks aloft to fire at enemy planes or drop on enemy troops below - just as drones today are being modified to carry grenades to drop on infantry positions or used to ram opposing drones.

The war will help advance the technology and improve the tactics. If nothing else Ukraine is a test ground for learning how to effectively use drones within combined forces to improve overall military effectiveness and reduce casualties.

And artificial intelligence is becoming increasingly important as a control alternative when an enemy blocks signal or attempts to take control of an army's drone assets.

We need to put these learnings to use in our own military planning and acquisitions, so that Australia's military becomes capable of fighting the next war, rather than the last.

