Friday, November 22, 2024

Submission regarding under 16 access to social media networks

I’m not terribly sympathetic to the Australian Government’s attempt to ban social media access for under-16s.

There are many reasons why a blanket approach is bad, from removing parental control to cutting off support networks for at-risk and diverse teens.

However, if they’re going to propose it, they should at least have a solid plan for implementing it effectively, in a way that might do some good. The current bill doesn’t really offer one.

So I worked this morning with ChatGPT to write a submission addressing the failings of the amendment bill. This would be my submission if there were a way to make one (which there is not).


Submission to Online Safety Consultation 

By Craig Thomler

Subject: Response to Proposed Social Media Age Restrictions under the Online Safety Amendment (Social Media Minimum Age) Bill 2024

Introduction


The Online Safety Amendment (Social Media Minimum Age) Bill 2024 is a step in the right direction but risks doing more harm than good in its current form. While its intention to protect children online is commendable, the proposed measures are overly simplistic and disproportionately punitive, failing to address the nuances of a complex digital landscape.

Blanket restrictions won’t stop harm; they’ll push it into unregulated spaces. Small platforms face extinction under these penalties, while big tech barely flinches. The amendment sacrifices privacy in the name of safety, creating new risks instead of reducing them.

To achieve its goals, the amendment must shift from broad-brush penalties to precision policy by integrating smarter, fairer solutions that work for all stakeholders. Through tailored compliance, strengthened privacy protections, and parental involvement, we can build a safer digital environment without alienating users or stifling innovation.

Harms Caused by the Proposed Amendment


1. Negative Impacts on Smaller Platforms
  - Disproportionate Burden: The flat penalty of 30,000 penalty units fails to consider the operational and financial capacity of small-to-medium-sized platforms. These businesses often lack the resources to implement complex age-verification systems, potentially forcing them out of the market.
  - Stifling Innovation: Smaller platforms, many of which cater to niche communities or serve educational purposes, may cease operations due to compliance costs and risks of high penalties.

2. Insufficient Privacy Safeguards

  - Data Mismanagement Risks: Requiring platforms to collect and store sensitive age-verification data increases the risk of breaches, identity theft, or misuse by malicious actors. The amendment does not include clear guidelines on data minimisation or secure destruction.

  - Intrusiveness: Intrusive mechanisms, such as uploading government IDs, can deter users and create additional risks if such data is mishandled.

3. Inadequate Addressing of Online Harms
  - Overfocus on Age Restrictions: By limiting access based solely on age, the amendment fails to address broader issues such as exposure to harmful content, algorithmic manipulation, and the role of content moderation.
  - Circumvention Risks: Children can easily bypass age restrictions by creating fake accounts or using anonymisation tools (e.g., VPNs), undermining the efficacy of the measures.

4. Alienation and Inequity
  - Exclusion from Social Development: Adolescents (13–16 years) rely heavily on social media for peer interaction, cultural participation, and identity development. Blanket restrictions risk isolating them from their social circles, causing feelings of exclusion and rebellion.
  - Reduced Parental Autonomy: The amendment removes discretion from parents and guardians, who are better positioned to decide when and how their children engage with social media.

5. Unfair Application of Penalties
  - Ineffectiveness for Larger Platforms: For major platforms (e.g., Meta, TikTok), the flat penalty is negligible compared to their revenue. This reduces the amendment’s deterrent effect and gives larger platforms a competitive advantage over smaller networks.

Proposed Improvements


1. Revised Penalty Structure

  Introduce a sliding-scale penalty system based on platform revenue, user base, and level of non-compliance (a rough calculation sketch follows the tiers below):

  - Platforms with annual global revenue under $5 million AUD: Maximum fine of 1,000 penalty units.
  - Platforms with annual global revenue between $5 million and $100 million AUD:
    - Base penalty: 2% of Australian revenue.
    - Additional penalties for repeated breaches: Public accountability measures (e.g., public notices or restrictions on ad targeting in Australia).
  - Platforms with annual global revenue exceeding $100 million AUD:
    - Base penalty: 5% of Australian revenue or $10 million AUD, whichever is greater.
    - Additional penalties for repeated breaches:
      - Public reporting of non-compliance on government and platform websites.
      - Platform-wide visibility restrictions in Australia (e.g., throttling algorithmic content recommendations for non-compliant platforms).
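
For concreteness, here is a minimal sketch of how these tiers might translate into a calculation. The penalty-unit dollar value and all revenue figures are assumptions for illustration only, not drawn from the bill:

```python
PENALTY_UNIT_AUD = 330  # assumed Commonwealth penalty unit value as at late 2024

def base_penalty(global_revenue_aud: float, australian_revenue_aud: float) -> float:
    """Sketch of the proposed sliding-scale base penalty, per the tiers above."""
    if global_revenue_aud < 5_000_000:
        # Small platforms: capped at 1,000 penalty units
        return 1_000 * PENALTY_UNIT_AUD
    if global_revenue_aud <= 100_000_000:
        # Mid-tier platforms: 2% of Australian revenue
        return 0.02 * australian_revenue_aud
    # Large platforms: 5% of Australian revenue or $10 million AUD, whichever is greater
    return max(0.05 * australian_revenue_aud, 10_000_000)

# Example: a major platform with $50 million of Australian revenue
print(base_penalty(2_000_000_000, 50_000_000))  # 10000000 (the $10m floor applies)
```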

2. Verified Parent/Guardian Discretion
  Introduce a Parental Consent Framework to allow parents/guardians to grant access to age-restricted platforms for children under 16:
  - Consent must be verified through secure processes, such as:
    - Linking to a verified parental account.
    - Submitting proof of guardianship alongside consent forms.
  - Platforms must enable granular parental controls, allowing guardians to monitor and manage their child’s activity.
  - Require platforms to provide education resources for parents on managing online safety risks.

3. Government-Endorsed Digital Age Verification System (DAVS)
  - Develop a Digital Age Verification System (DAVS) that:
    - Verifies user ages anonymously through tokens or hashed data.
    - Provides a standardised, privacy-focused solution for all platforms, reducing compliance costs.
  - DAVS integration to be mandatory for large platforms, while optional (but subsidised) for smaller ones. (A rough sketch of the anonymous token flow follows below.)
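
To illustrate what "verifying user ages anonymously through tokens" could mean in practice, here is a minimal sketch using only Python's standard library. It is purely illustrative: a real DAVS would use asymmetric signatures so platforms never hold the signing key, and every name below is hypothetical.

```python
import hashlib
import hmac
import secrets

VERIFIER_KEY = secrets.token_bytes(32)  # held by the hypothetical DAVS authority

def issue_age_token(is_over_16: bool):
    """Issue an anonymous signed token asserting only 'over 16', with no identity."""
    if not is_over_16:
        return None
    nonce = secrets.token_hex(16)  # random, unlinkable to the person verified
    claim = f"over16:{nonce}"
    sig = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return f"{claim}:{sig}"

def platform_verify(token: str) -> bool:
    """A platform checks the token's signature without learning who the user is."""
    claim, _, sig = token.rpartition(":")
    expected = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_age_token(is_over_16=True)
print(platform_verify(token))  # True
```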

4. Strengthened Privacy Protections
  - Mandate data minimisation practices:
    - Platforms must only collect data strictly necessary for age verification.
    - All verification data must be encrypted and destroyed within 14 days of account approval (illustrated in the sketch below).
  - Introduce regular privacy audits overseen by the Office of the Australian Information Commissioner (OAIC), with public reporting of results.
  - Prohibit platforms from using age-verification data for advertising or algorithmic purposes.
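
A minimal sketch of what encrypt-then-destroy handling of verification records could look like, assuming the third-party cryptography package; the 14-day deadline mirrors the proposal above, and all field names are illustrative:

```python
from datetime import datetime, timedelta, timezone

from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # in practice, held in a managed key store
vault = Fernet(key)

# Store only the minimum needed, encrypted, with an explicit destruction deadline.
record = {
    "ciphertext": vault.encrypt(b"age_check=passed"),
    "destroy_by": datetime.now(timezone.utc) + timedelta(days=14),
}

def purge_expired(records):
    """Drop any verification record past its 14-day destruction deadline."""
    now = datetime.now(timezone.utc)
    return [r for r in records if r["destroy_by"] > now]

print(len(purge_expired([record])))  # 1 (not yet due for destruction)
```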

5. Non-Monetary Penalties
  - Public Notices of Non-Compliance: Published on the eSafety Commissioner’s website and the platform’s Australian homepage.
  - Operational Restrictions: For repeated breaches, restrict platform visibility (e.g., content distribution) until compliance is demonstrated.
  - Mandatory User Notifications: Platforms must notify all Australian users about their non-compliance and remediation steps.

6. Grant Support for Smaller Platforms
  - Establish a Small Platform Support Fund to assist platforms with annual global revenue under $5 million AUD in adopting compliant age-verification systems:
    - Grants covering up to 80% of compliance costs (e.g., DAVS integration or equivalent).
    - Eligibility contingent on adherence to a simplified compliance framework.

7. Education and Awareness Campaigns
  - Fund national education initiatives to promote:
    - Digital literacy for children and parents.
    - Awareness of online safety resources and the amendment’s goals.
  - Collaborate with schools to integrate digital safety into curricula.

Conclusion


The Online Safety Amendment (Social Media Minimum Age) Bill 2024, while well-intentioned, risks significant harm if implemented in its current form. The proposed measures disproportionately impact smaller platforms, fail to adequately protect privacy, and overlook the broader risks of online harm and exclusion.

To create an effective and equitable framework, I strongly urge the adoption of the proposed improvements in this submission. These include proportional penalties, a government-endorsed age verification system, strengthened privacy protections, and parental involvement. By addressing these gaps, we can achieve a balanced policy that protects vulnerable users without stifling innovation or overburdening businesses.

I look forward to seeing a revised and refined framework that aligns with these principles.

Thank you for considering this submission.




Wednesday, April 05, 2023

What does AI mean for entrants into the job market?

Last week my youngest daughter graduated from university with an IT-related degree (with second-class honours; she did better than me!).

At her graduation ceremony, the occasional speaker, who was a public and not-for-profit sector CTO, congratulated graduates for being some of the last to graduate without the support and integration of large language model AIs in their degrees. She also commented that her speech was partially written by ChatGPT.

While I've used generative AIs in my work and personal life for more than two years, these comments really struck home for me how integrated these AIs will become in everyone's lives.

No-one being educated today should be denied access to AI support to help them to grasp concepts and elevate their work to the highest possible standard.

No-one working in an office should be denied access to AI support to help them perform at their best every day.

Organisations that seek to cut off access to these AIs because it's somehow 'cheating' should note that calculators were once seen by teachers as tools for cheating, and that some workplaces resisted the introduction of computers because executives and managers didn't type themselves; they dictated to typing pools.

Indeed I remember, within the last 20 years, working for a Secretary of a very large Federal government department who did not have a computer in her office. She had her EA print all her emails and responded via dictation, which her EA typed up. To present a new-look website to her, we had to print dozens of pages, and she'd point to a button and ask what happened if the user clicked it.

This doesn't cut the mustard in the AI Age we've now entered. We need to give our students and workers access to the most modern tools to help them perform at peak.

So when I saw the following tweet, I had to ask GPT-4 for a response, using her tweet verbatim as the prompt.


This is what GPT-4 generated:

As an AI language model, I can understand why some members of Gen Z might be concerned about the impact of AI on their careers and future prospects. However, I believe that AI is not necessarily a threat but can be an opportunity if approached in the right way. Here are a few tips that might help:

  1. Develop skills that are complementary to AI: Instead of competing with AI, focus on developing skills that complement it. For example, skills that require human judgment, empathy, and creativity are still in demand and will continue to be valuable even as AI advances.
  2. Stay curious and keep learning: AI is still a relatively new field, and there is a lot to learn. Keep up with the latest trends and developments, take online courses or attend workshops to learn new skills, and stay curious about how AI is being applied in different industries.
  3. Collaborate with AI: Rather than seeing AI as a competitor, try to work alongside it. AI can be a powerful tool for automating tasks and making processes more efficient, so be open to collaborating with AI in your work.
  4. Embrace change and be adaptable: The job market is constantly changing, and the skills that are in demand today may not be in demand tomorrow. Be prepared to adapt to new technologies and trends, and don't be afraid to take on new challenges and learn new skills.
  5. Develop a growth mindset: Finally, cultivate a growth mindset and embrace the idea that learning and development are ongoing processes. See challenges as opportunities for growth and be open to new experiences and ideas.

Overall, it's important to remember that AI is still in its early stages, and it's difficult to predict exactly how it will impact the job market. However, by staying curious, learning new skills, and embracing change, you can position yourself to thrive in the new world of work.


Now Jenny felt this was a trifle impersonal to use without a human edit, which is fair enough.

AI shouldn't be used to automate engagement, but to help streamline it. There's always going to be a role for humans to review and adjust messages to match the audience, using the context a human has that an AI does not.

However, it is possible to get AI to add personality - you simply have to tell it what personality you want it to add.
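
In practice that can be as simple as a persona instruction in the system prompt. A minimal sketch, assuming the openai Python package's pre-1.0 ChatCompletion interface (current at the time of writing); the function name and prompt wording are mine, not a product feature:

```python
import openai  # openai<1.0 interface

def rewrite_in_voice(text: str, persona: str) -> str:
    """Ask the model to restate the given text in a named persona's voice."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text in the voice of {persona}."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# e.g. rewrite_in_voice(gpt4_advice, "Albert Einstein")
```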

So I generated three further versions...

Using the voice of Albert Einstein:

Intelligence is not a competition between humans and machines, but rather a collaboration. Instead of fearing the advances of artificial intelligence, we should focus on developing complementary skills that require human judgment, empathy, and creativity. One must always remain curious and keep learning, for knowledge is the key to unlocking new opportunities. Embrace change and be adaptable, for the job market is constantly evolving. Finally, approach challenges with a growth mindset and see them as opportunities for personal development.

Using the voice of Arthur C Clarke:

As I've always maintained, the only way to discover the limits of what is possible is to venture past them into the impossible. In the age of artificial intelligence, we should not fear the rise of the machines but embrace it, for it can offer us a new path to knowledge and discovery. Instead of competing with AI, we should focus on developing skills that are complementary to it, such as our unique human abilities of creativity, intuition, and empathy. Learning must be a lifelong pursuit, and we must keep up with the latest advancements to stay relevant in the changing landscape of the job market. The ability to adapt and evolve is paramount in this new era, and we should see challenges as opportunities for growth and transformation.

And, finally, in the voice of Mahatma Gandhi

Artificial intelligence is a new force in our world, and while it may bring challenges, we must approach it with a peaceful and positive mindset. Rather than viewing AI as a threat, we should strive to develop skills that complement it, such as our unique human capacities for creativity, empathy, and compassion. Learning should be a lifelong pursuit, and we must stay curious and open-minded to the possibilities of AI. We should collaborate with AI, not compete with it, for only by working together can we achieve true progress. Let us embrace change and be adaptable, for the world is constantly evolving, and challenges are opportunities for growth. With a growth mindset and a spirit of cooperation, we can create a future where AI and humanity coexist in harmony.

Whether perfectly true to the originals or not, these variations were produced in a few minutes, far faster than a human could perform the same task. 

With AI the door is open to producing content in varied voices and perspectives at great speed.


Tuesday, April 04, 2023

2nd Australian Responsible AI Index launched - calls for government to regulate sooner rather than later

Today marked the unveiling of the 2nd Australian Responsible AI Index, accompanied by urgent appeals for the government to intervene and curb the potential misuse of artificial intelligence (AI). 

The Australian Financial Review provided comprehensive coverage of this critical topic, revealing that a mere 3% of Australian companies are managing the adoption and continuous use of AI in a responsible manner.

As AI permeates almost every facet of business operations, it is crucial that its management and regulation extend beyond vendors and IT teams, ensuring responsible policies are in place for both the business and society as a whole.

The Index report disclosed several key findings:

  • The average Responsible AI Index score for Australian organisations has remained stagnant at 62 out of 100 since 2021.
  • While a significant 82% of respondents believe they are adopting best-practice approaches to AI, a closer look reveals that only 24% are taking conscious steps to guarantee the responsible development of their AI systems.
  • There has been a growth in organisations with an enterprise-wide AI strategy linked to their broader business strategy, with the figure rising from 51% in 2021 to 60%.
  • Among those organisations, only 34% have a CEO who is personally committed to spearheading the AI strategy.
  • Organisations with CEO-led AI strategies boast a higher RAI Index score of 66, compared to a score of 61 for those without direct CEO involvement.
  • A total of 61% of organisations now recognise that the advantages of adopting a responsible AI approach outweigh the associated costs.

The Responsible AI Index serves as a timely reminder for the Australian government to act swiftly in the face of these findings, reinforcing the need for a more responsible approach towards AI implementation across the board.


Italy bans ChatGPT (over privacy concerns)

In the first major action by a nation to limit the spread and use of generative AI, Italy’s data-protection regulator has formally banned ChatGPT use, not only by government employees but by all Italians.

As reported by the BBC, "the Italian data-protection authority said there were privacy concerns relating to the model, which was created by US start-up OpenAI and is backed by Microsoft. The regulator said it would ban and investigate OpenAI 'with immediate effect'."

While I believe this concern is rooted in a misunderstanding of how ChatGPT operates (it is a pre-trained AI that doesn't integrate or learn from the prompts and content entered into it), OpenAI does broadly review submitted data to improve the AI's responses, and that is enough of a concern for a regulator to want to explore further.

Certainly I would not advise entering private, confidential or classified content into ChatGPT but, except in very specific cases, there's little to no privacy risk of your data being reused or repurposed in nefarious ways.

In contrast, the Singaporean government has built a tool using ChatGPT's API to give 90,000 public servants a 'Pair' in Microsoft Word and other applications, which they can use to accelerate writing tasks. The government has a formal agreement with OpenAI that prompt data will not be used in future AI training.
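
For a sense of what such an integration involves, here is a minimal sketch of a drafting-assistant call against the same API (again using the pre-1.0 openai package; the prompt, model choice and function are illustrative, not Singapore's actual implementation):

```python
import openai  # openai<1.0 interface

def improve_draft(draft: str) -> str:
    """Suggest a tightened rewrite of a work draft via the ChatGPT API."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.3,  # keep suggestions conservative for workplace text
        messages=[
            {"role": "system",
             "content": "You are a writing assistant for public servants. "
                        "Tighten the user's draft without changing its meaning."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content
```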


What Italy's decision does herald is that nations should begin considering where their line is for AIs. While most of the current generation of large language models are pre-trained, meaning prompts from humans don't become part of their knowledge base, the next generation may include more capability for continuous finetuning, where information can continually be ingested by these AIs to keep improving their performance.

Specific finetuning is available now for certain AIs, such as OpenAI's GPT-3 and AI21's Jurassic, allowing an organisation to finetune the AI to 'weight it' towards delivering better results for its knowledge set or specific goals.
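
As a rough illustration of how that finetuning works mechanically, here is a sketch using the pre-1.0 openai package's finetuning endpoints as they stood at the time; the file name and its contents are hypothetical:

```python
import openai  # openai<1.0 interface

# Upload a JSONL file of prompt/completion pairs drawn from the organisation's corpus
upload = openai.File.create(
    file=open("legislation_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a finetuning job against a GPT-3 base model
job = openai.FineTune.create(
    training_file=upload.id,
    model="davinci",
)
print(job.id)  # poll this job until the finetuned model is ready
```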

In government terms, this could mean training an AI on all of Australia's legislation to make it better able to review and write new laws, or on all the public/owned research on a given topic to support policy development processes.

It makes sense for governments to proactively understand the current and projected trajectory of AI (particularly generative AI) and set some policy lines in advance to guide the response if those developments occur.

This would help industry develop within a safe envelope rather than exploring avenues which governments believe would create problems for society.


The ease at which bias creeps into AI

Removing bias is a critical component in the design and training of AI models; however, despite the care taken, it can be incredibly easy for bias to creep in.

This is because we often don't see our own biases and, even at a macro level as a species, we may hold biases we are not aware of that fundamentally impact how our AIs perform.

A great example I came across the other week was in image-generation AIs asked to create a selfie of a group of people: AI models tend to portray a smiling group, whatever the group's era or cultural background.

This reveals a bias: many groups throughout history traditionally did not smile in photos. However, the Western norm has largely become smiling, and so the training sets these AIs use are biased towards smiling, regardless of the culture or time period of the people represented.

[AI-generated image: a 'selfie' of French WW1 soldiers]

