The Gap Between Intentions and Outcomes

Analysing Australia’s Social Media Ban for Children


In late 2024, the Australian Parliament introduced the Online Safety Amendment (Social Media Minimum Age) Bill 2024, aimed at restricting underage users’ access to social media platforms. This bill (now an Act) adopts a structurally coherent policy while failing to address the root of the issue: children’s prolonged exposure to violent and unsuitable content, and the extensive time they spend on these platforms. The Act’s effectiveness in executing its intentions is questionable. Two facets of the policy stand out in particular: the sole burden of criminality falls on the platform rather than the user, and the distinction between logged-in and logged-out access.

Here’s another perspective by Anwesha Sen and Pranay Kotasthane

Decriminalisation and the Absence of Behavioural Friction

At first glance, the Act comes off as a half measure. Crucially, the government itself has stated that the Act was not passed with the intent of guaranteeing full compliance. The intent, rather, is twofold: setting social norms and establishing a precedent. Judged on these terms, the policy seems coherent and defensible. Criminalising children for accessing social media would be both impractical and disproportionate, and a large proportion of the risks posed by social media are associated with the platform itself, not the user. However, even policies that do not aim for absolute compliance must still influence consumer behaviour and demonstrably diminish harm.

Unfortunately, the Act appears to be focused more on ethicality than on efficacy. This is evident in the clause that exempts users from any liability, which means some users will circumvent the law with no real consequences, blunting the Act’s intended effect. The government itself acknowledges that enforcement lapses will allow some individuals to “slip through the cracks”; yet this contradicts its stance towards companies, which are expected to manage non-compliance in full.

This assumes that companies can eliminate circumvention even when users face no deterrence; in practice, enforcement is constrained less by a company’s capacity than by user incentives. The policy’s design is therefore internally inconsistent. Decriminalisation also gives rise to a second problem: the lack of friction. Because users face no real consequences for bypassing the law, the implementation creates very little behavioural friction. Users can evade the restriction in a variety of ways, from faking their age to using VPNs, which are now commonplace.

Lack of Clarity in Definitions and Platform Exclusions

Furthermore, the law excludes a number of online platforms and apps that could function as social media under its own definition. The law defines an age-restricted social media platform as:

“An electronic service that satisfies the following conditions: (i) the sole purpose, or a significant purpose, of the service is to enable online social interaction between 2 or more end-users; (ii) the service allows end-users to link to, or interact with, some or all of the other end-users; (iii) the service allows end-users to post material on the service.”

However, the government has chosen to exclude messaging apps and online gaming services that fall within the statutory definition of a restricted social media platform; Discord, for example, has a huge user base built around gaming communities. An alternative to complete criminalisation or decriminalisation could be non-punitive penalties: apps and platforms could introduce warnings, parental notifications, or temporary restrictions. Admittedly, parental notifications assume active oversight that many households lack and could raise privacy concerns, and temporary bans are easily bypassed. But non-punitive penalties would at least acknowledge the user’s side of the problem. Complete decriminalisation thus leaves the policy both reading and operating as a half measure, unable to carry out its intention effectively.

“Reasonable Steps” and the Issue of Enforcement

Finally, shifting the entire burden of criminal liability onto companies poses another challenge when coupled with the policy’s vague definition of “reasonable steps”. The Act states:

“The Act introduces an obligation on providers of an age-restricted social media platform to take reasonable steps to prevent age-restricted users from having an account with the platform.”

The Act does not specify what it considers “reasonable”, though reasonableness is a common test in Australian law. In practice, this can mean that platforms are judged on the effort they put into their procedures and systems rather than on the outcomes those systems produce. In a regime where users face no consequences and routinely circumvent checks, platforms may be incentivised to do the bare minimum rather than build robust systems to detect underage users, weakening the policy’s capacity to keep underage children off these platforms.

Logged-In vs Logged-Out Access: A Nuanced but Limited Safeguard

Interestingly, another of the Act’s major focuses, the regulation of account ownership, reflects a more nuanced understanding of the risks associated with social media. The government made the intriguing choice of allowing logged-out users to view content without verifying their age. Most of the features of popular apps such as Instagram or TikTok that are associated with negative consequences, such as personalised algorithmic feeds, notifications, and validation through engagement, are only present in the logged-in state. The government states:

“This will help to mitigate the risks arising from harmful features that are largely associated with user accounts, or the ‘logged-in’ state, such as persistent notifications and alerts which have been found to have a negative impact on sleep, stress levels, and attention.”

This choice aligns with the government’s plan of setting social norms, since it primarily targets the platform rather than the user. Logged-out access prevents users from interacting with others or with the content they consume, which tracks the statutory definition of an age-restricted social media platform and limits algorithmic tracking. It also avoids the uncomfortable possibility of infringing on freedoms, while still signalling that certain forms of interaction on these platforms are not age-appropriate.

In this post, Pranay suggests that the ban’s ‘biggest value might be just this: a signal to parents, platforms, and other governments that social media use requires moderation.’

While all this is true, the choice to allow logged-out access significantly curtails the Act’s ability to reduce exposure to disturbing content. Even when logged out, a user is still exposed to a wide array of content, including influencer culture and platform-curated material. That exposure may lack personalisation, but it would be short-sighted to assume it avoids the disadvantages of the logged-in experience: it can still sustain attention and maintain social relevance, hampering the Act’s ability to reduce harm. The interplay between the two factors, decriminalisation and logged-out access, means underage users are undeterred from consuming content while logged out and can easily circumvent the ban itself, since they face no penalty for doing so.

Relative to the status quo, the Act represents a small shift toward a safer future. However, it may reduce account ownership among young people without actually changing how much exposure and social participation they experience. Although the legislation is too recent for its long-term effects to be clear, the policy can already be faulted for failing to create enough incentive and friction for young users to stop using these platforms.

International Applicability and the Case of India

Australia’s law is the first of its kind to set a minimum age for social media use, and many other countries are considering similar bans, following its example. However, the applicability of such laws elsewhere is far from straightforward. Australia’s policy rests on three assumptions: that platforms are solely responsible for enforcement, that access can be regulated without invasive surveillance, and that partial compliance and norm-setting can still meaningfully change children’s behaviour and reduce their exposure to harm.

Spain is also considering a social media ban on under-16s, following the Australian example

However, I believe these assumptions become significantly weaker when transplanted into the larger, more diverse markets of developing countries. India, for example, has a large youth population with lower levels of media literacy, millions of users from varied socio-economic backgrounds, and uneven device access; in low-income households in particular, account sharing is far more prevalent.

The ban may prove more effective for the urban middle and upper classes; however, they represent a smaller proportion of the population (36.87%), and the majority of states have a larger rural population than urban. There is also the question of how reliable age assurance would be across India’s population. In addition, India’s social media market is highly competitive and even more mobile-centric than Australia’s.

Furthermore, platforms have stronger incentives to under-enforce in India: it is a much larger market than Australia, so platforms risk correspondingly larger losses in user retention. On the other hand, given that most major platforms are American and India is a strategically important market, the Indian government may hold more leverage than the Australian one. Despite potentially higher implementation and engagement costs, platforms may therefore be inclined to comply with this kind of regulation, even against their short-term commercial interests.

If they choose not to, and instead adopt minimal measures that merely satisfy legal requirements, they may retain more users. This is the same problem that could occur in Australia, only exacerbated at India’s scale.

Finally, India’s regulatory framework prioritises controlling behaviour and content after the fact rather than preventing harmful content from appearing on platforms in the first place; existing laws emphasise intermediary liability and post-hoc moderation over pre-emptive measures. This matters because an Australian-style age restriction in India would require a change in both the philosophy and the structure of how online behaviour is regulated. The approach Australia has taken is therefore unlikely to transfer to developing countries with heterogeneous populations: it would be difficult to implement, and technology is used and accessed very differently in a developing country than in a developed one.

In conclusion, Australia’s Online Safety Amendment Act is one of the first policies to recognise the risks that unrestricted access to social media poses to young users, though its practical effectiveness remains uncertain. The Australian government believes the Act will set a precedent and establish clear parameters that allow for the right outcomes.

However, even under this revised intention, the Act may still fail to influence user behaviour or demonstrably reduce harm. This stems from the Act’s insistence on placing the burden of responsibility, criminality, and punishment solely on platforms, and from its loose definition of what counts as a restricted social media platform. These choices, coupled with the lack of consequences for users, create very little behavioural friction, leaving the policy vulnerable not just to individual acts of circumvention but to systemic ones.

The Act’s international applicability is also questionable. In developing countries like India, its reliance on partial compliance and norm-setting is even less likely to hold, given the population’s heterogeneity. Thus, although the Act does set a precedent, its real-world impact rests on assumptions that may hold only under ideal conditions and may break down in diverse markets.

Agastya is a Grade 11 student at The International School Bangalore with a strong academic interest in economics and public policy.