Denmark has passed a new law restricting social media access for children under 15, making it among the most aggressive measures in Europe on youth online safety. The restriction will be accompanied by a government-run age verification tool, along with an option for parents to grant their 13- and 14-year-olds access after an assessment, according to reporting from the Associated Press.
What the Danish social media ban covers
The measure covers popular social platforms such as TikTok, Snapchat, Instagram and Reddit, raising the minimum age in Denmark above the industry standard of 13. Today, most services rely on self-declared birthdates or flimsy checks that are easy to get around. By raising the floor to 15 and making parental authorization a formal process for 13- and 14-year-olds, lawmakers are shifting more of the responsibility for age verification onto companies and, crucially, onto a national verification layer.
Details are still being ironed out, including when enforcement would begin and who will audit compliance. The government has indicated that it will develop its own app for age checks, a move that would likely build on Denmark's established digital identity system, where adoption of the national eID is widespread.
Enforcement and age checks for Denmark’s new rules
Age verification is the fulcrum. Denmark's plan puts the state, rather than the social networks themselves, behind a verification system intended to let people prove their age without revealing more personal information than necessary. Privacy advocates will be watching closely for protections against data retention and function creep, particularly given European rules such as the GDPR and the Digital Services Act, which already restrict targeted advertising to minors and require platforms to assess risks of harm to young users.
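To make that privacy property concrete, here is a minimal, purely illustrative sketch of a selective-disclosure age check, assuming a trusted issuer standing in for a national eID service and a toy shared-secret signature: the issuer signs a claim that states only whether the holder meets the age threshold, and the platform verifies the signature without ever seeing a birthdate. Nothing here reflects Denmark's actual design; the names and the HMAC scheme are assumptions for illustration, and a real deployment would use public-key credentials and audited infrastructure.

```python
import hashlib
import hmac
import json
from datetime import date

# Toy shared secret standing in for a national eID signing key (illustrative only;
# a real system would use asymmetric signatures and audited infrastructure).
ISSUER_SECRET = b"demo-secret"

def issue_age_claim(birthdate: date, threshold: int = 15) -> dict:
    """Issuer side: derive only a boolean claim from the birthdate, then sign it.
    The birthdate itself never leaves the issuer."""
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    claim = {"over_threshold": age >= threshold, "threshold": threshold}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def platform_accepts(token: dict) -> bool:
    """Platform side: verify the signature and read only the boolean claim."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False
    return token["claim"]["over_threshold"]

# Example: the platform learns only a yes/no answer, never the birthdate.
token = issue_age_claim(date(2012, 6, 1))
print(platform_accepts(token))  # False: under the 15-year threshold
```

The point of such a design is that the platform receives only a signed yes/no assertion tied to a threshold, which is precisely the data-minimization property privacy advocates will be looking for in the government's app.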
Big Tech has also experimented with its own tools. Meta has tested age-estimation technology on Instagram using facial analysis from Yoti, and other platforms combine signals such as engagement patterns and friend networks. Such systems have improved but still face questions about accuracy and fairness. A government-established standard could ease uncertainty for platforms, while also potentially shifting liability and inviting legal scrutiny if false positives lock out older teens or genuinely underage users still slip through.
A Growing Global Push To Restrict Youth Social Media
Denmark’s move comes amid a global rethink of young people’s access to social media. Australia is implementing a nationwide ban for users under 16, taking effect in December, with fines of up to AUD 50m for platforms that do not comply. There is no federal standard in the United States, but states such as Utah and Arkansas have enacted laws mandating parental consent and tighter age verification; many are now bogged down in court over First Amendment and privacy issues. The United Kingdom’s Online Safety Act requires greater child protections and pushes platforms toward stronger age assurance.
Regulators say they are responding to what is already a de facto usage reality. According to Pew Research Center, 95% of U.S. teens use YouTube, followed by TikTok (67%), Instagram (62%) and Snapchat (59%), and 46% say they are online “almost constantly.” Ofcom reports similarly high social media take-up among 12–15-year-olds in the U.K., along with a significant number of younger children circumventing age gates. Policymakers say voluntary measures have failed to keep up with the risks.
What The Research Tells Us — And What It Doesn’t
Evidence on harm is nuanced. In 2023, the U.S. Surgeon General issued an advisory warning that heavy social media use can be associated with symptoms of depression and anxiety, disrupted sleep and body image concerns, especially among adolescent girls. The same year, the American Psychological Association recommended age-appropriate design, parental oversight and in-app controls to limit exposure to bullying and problematic content.
Meanwhile, large-scale studies by researchers at the Oxford Internet Institute and elsewhere indicate that average effects on well-being are modest and vary significantly across individuals, contexts and platforms. That mixed picture is feeding a policy divide: some governments are stressing precautionary boundaries for younger users, while others are pressing platforms to tailor experiences and empower parents rather than impose firm age lines.
What It Means for Platforms, Parents and Regulators
If Denmark’s national verifier is combined with enforceable duties on platforms, it could become a model for other EU countries, particularly as the European Commission increases scrutiny under the Digital Services Act. A unified approach would also help stem fragmentation for businesses now contending with a patchwork of state and national rules.
For parents, the Danish model formalizes consent for 13–14-year-olds and gives families a clearer role in assessments. Schools and pediatric associations will play a crucial role in shaping those assessments, from digital literacy to mental health screening. For teens, the most visible changes may appear in account creation flows and identity checks at sign-up, and perhaps in how app store gatekeepers handle age restrictions.
The next test will be execution: how robust the verification system proves, how platforms adjust onboarding and whether enforcement can curb underage use without introducing new privacy risks. With children’s mental health at the heart of the debate, Denmark is wagering that stricter age limits, underpinned by verifiable identity, will shift digital environments toward safety. The upcoming rollout will show whether that bet scales without unintended consequences.