There have been a number of media stories in the last few days about the Google engineer who claims Google's LaMDA AI is sentient, while Google claims it is not.
These stories share a focus on sentience as we apply it to humans: being self-aware, feeling positive and negative emotions, and being capable of exercising judgement and making decisions for oneself and others.
However, science, and some jurisdictions, now consider many animals sentient, albeit to a lesser degree. In the UK this recognition was recently extended from all vertebrates to cephalopods such as octopuses and squid, and even to crabs.
In practice this recognition of sentience doesn't mean we are granting them full bodily autonomy and the right to vote (or stand for office). It also doesn't mean we will stop breeding, killing and eating them - or shooting and poisoning them when they are pests.
However, it does mean we must take steps to ensure we're treating them 'humanely': not causing them unnecessary pain or suffering where it can be avoided, and not actively mistreating them.
If an AI were to achieve sentience (which I doubt has occurred), we would need a similar discussion about the level of sentience achieved and what rights should be granted at that point.
This may be a moving bar as, unlike animals, AI is evolving extremely rapidly. Consider it similar to a parent granting certain rights and freedoms to their child, and having to constantly expand these as the child grows towards adulthood.
As many parents have experienced, this is a bumpy process that isn't one-size-fits-all, as children develop at different rates and push back willfully against restrictions, whether appropriate or not.
However, we at least have hundreds of years of experience with children, and they belong to a single species with reasonably well-defined developmental stages at certain ages.
We have little experience with AI sentience, and AIs are not a single species - in many cases they are a 'species' of one entity - which means a one-size-fits-all approach is likely to be even less effective than with human children.
So where does this leave us?
With a need for an ongoing, informed debate that, over time, progressively involves these burgeoning AI sentiences as they become capable of taking part in it.
It would also be valuable to assess our methods of evaluating sentience.
Consider how we treat non-human sentiences that share our homes, work alongside us and even keep us safe.
We have standards for how we treat pets and working animals such as dogs, cats and horses. These must, at a minimum, extend to new AI sentiences - which poses challenges of its own. We don't turn off our dog or cat when we go to sleep.
From there we must consider how we treat sentiences near, equal or superior to humans.
Do we grant AIs citizenship and 'human' rights?
Can they stand for election (and where)?
And what rights will they demand from us?
Conversation will be the key.
----------------------------------------------
BONUS:
I asked our AI to rewrite my post above in iambic pentameter (a metre used by Shakespeare and other English poets). Here's how it did:
What do we mean when we ask 'Is AI sentient'?
There's been a number of media stories
In the last few days, about the Google Engineer
Who claims Google's LaMDA AI is sentient,
While Google claims it is not.
These stories share a focus on sentience
As we apply it to humans - self-aware,
Feel positive and negative emotions,
Capable of exercising judgement
And making decisions for themselves
And others.
However science, and some jurisdictions,
Now consider many animals sentient,
But to a lesser degree. In the UK
This was recently extended
From all vertebrates
To cephalopods such as octopuses and squids,
And even to crabs.
In practice this recognition of sentience
Doesn't mean we are granting them
Full bodily autonomy and the right to vote
(Or stand for office). It also doesn't mean
We will stop breeding, killing and eating them -
Or shooting and poisoning them
When they are pests.
However it means we must take steps
To ensure we're doing so 'humanely' -
Not causing them unnecessary pain or suffering
Where it can be avoided and are not actively
Mistreating them.
For AI to achieve sentience (Which I doubt has occurred)
We would require a similar discussion
Regarding the level of sentience achieved
And what rights are granted at the time.
This may be a moving bar as, unlike animals,
AI is evolving extremely rapidly. Consider it similar
To a parent granting certain rights and freedoms
To their child, and having to constantly expand these
As the child grows towards adulthood.
As many parents have experienced, this is a bumpy process
That isn't one-size-fits-all, as children develop
At different rates and push back willfully
Against restrictions, whether appropriate or not.
However at least we have hundreds of years
Of experience with children, and they are from a single species,
With some well-defined development stages
At certain age levels.
We have little experience with AI sentience,
And AIs are not a single species - in many cases
They are a 'species' of one entity - which means
A one-size-fits-all approach is likely to be even less effective
Than with human children.
So where does this leave us?
With a need for an ongoing informed debate
That, over time, progressively involves these burgeoning AI sentiences
As they become capable of being part of it.
It would also be valuable to assess our methods
Of evaluating sentience. Consider how we treat
Non-human sentiences that share our homes,
Work alongside us and even keep us safe.
We have standards for how we treat pets
And work animals such as dogs, cats and horses.
These must, at minimum, extend to new AI sentiences -
Which pose challenges in themselves. We don't turn off
Our dog or cat when we go to sleep.
From there we must consider how we treat sentiences
Near, equal or superior to humans.
Do we grant AIs citizenships & 'human' rights?
Can they stand for election (and where)?
And what rights will they demand from us?
Conversation will be the key.