Bluesky has published its first comprehensive transparency report, laying out how the decentralized social network handled safety, enforcement, and compliance across its fast-growing service. The document details a sharp rise in user reports and a more than fivefold jump in legal demands, alongside a strategy that leans heavily on labeling content rather than outright removals.
What the report’s numbers show about growth and safety
Bluesky says its network expanded from 25.9 million to 41.2 million accounts during the reporting period, counting both users on the company’s own infrastructure and those hosted elsewhere via the AT Protocol. Activity surged, with 1.41 billion posts in the period, representing 61% of all posts ever made on the service; 235 million included media, accounting for 62% of all media posts to date.

User reporting scaled with growth. Total reports rose from 6.48 million to 9.97 million, a 54% increase that the company says tracks user growth. About 3% of users — roughly 1.24 million people — submitted reports. Top categories were “misleading” at 43.73% (including 2.49 million reports for spam), “harassment” at 19.93%, and sexual content at 13.54%. An “other” bucket captured 22.14% of reports that did not fit predefined categories like violence, child safety, or self-harm.
Within harassment, the largest defined slice was hate speech with about 55,400 reports, followed by targeted harassment at roughly 42,520, trolling at about 29,500, and doxxing at around 3,170. Most harassment reports, however, sat in a gray zone of antisocial behavior that is uncivil but not explicitly policy-breaking — a familiar challenge in trust-and-safety practice.
Sexual content reports were dominated by mislabeling, with 1.52 million cases where adult posts lacked proper metadata. Smaller but sensitive segments included nonconsensual intimate imagery (about 7,520), sexual abuse content (around 6,120), and deepfakes (over 2,000). Violence-related reports totaled 24,670, including roughly 10,170 for threats or incitement, 6,630 for glorification of violence, and 3,230 for extremist content.
Automated systems also played a sizable role, flagging 2.54 million potential violations. Bluesky reports that after introducing friction for toxic replies — hiding them behind an additional click, an approach similar to what’s used on other major platforms — daily reports of antisocial behavior fell 79%. Overall reporting per 1,000 monthly active users declined 50.9% across the period.
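That per-1,000-MAU figure is a straightforward normalization of report volume by audience size. As a minimal sketch, using invented monthly figures (the report does not publish the underlying MAU denominators used here), the Python below shows how the raw count of reports can keep growing while the normalized rate falls:

```python
def reports_per_1k_mau(report_count: int, monthly_active_users: int) -> float:
    """Normalize raw report volume by audience size."""
    return report_count / monthly_active_users * 1_000

# Hypothetical figures for illustration only; Bluesky's report gives the
# percentage change, not these per-month inputs.
start = reports_per_1k_mau(report_count=530_000, monthly_active_users=12_000_000)
end = reports_per_1k_mau(report_count=640_000, monthly_active_users=30_000_000)

change = (end - start) / start * 100
print(f"start: {start:.1f}  end: {end:.1f}  change: {change:.1f}%")
# Raw reports rose, but the per-1,000-MAU rate dropped by roughly half.
```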
Labels over bans signal a moderation philosophy
The report highlights a clear emphasis on labeling content to give users control over their feeds. Bluesky applied 16.49 million labels, up 200% year over year, primarily for adult or suggestive content and nudity. Account takedowns also increased, but at a slower clip, rising 104% from 1.02 million to 2.08 million.

Enforcement still included tough measures where needed: 3,192 temporary suspensions and 14,659 permanent removals for ban evasion, with much of the focus on inauthentic behavior, spam networks, and impersonation. In the spirit of the Santa Clara Principles on Transparency and Accountability in Content Moderation, a labeling-first approach can balance speech and safety by preserving context while limiting reach. It also meshes with Bluesky’s decentralized design, where user-driven filters and composable moderation are core features.
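In practice, composable moderation treats labels as data that clients interpret according to each user’s settings, rather than as deletions. The sketch below is a simplified illustration, not Bluesky’s actual code: the Label fields are loosely modeled on the idea that a label records who issued it, what it points at, and a value, and the "hide"/"warn"/"show" preference names are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class Label:
    # Hypothetical, simplified stand-in for a moderation label:
    # who issued it, what it applies to, and the label value.
    src: str  # labeler identifier
    uri: str  # subject of the label (e.g. a post)
    val: str  # label value, e.g. "adult", "spam"

# Per-user preferences decide how each label value is handled.
# These setting names are illustrative, not official.
preferences = {
    "adult": "warn",
    "spam": "hide",
}

def visibility(post_uri: str, labels: list[Label]) -> str:
    """Return the strictest treatment any applicable label requires."""
    order = {"show": 0, "warn": 1, "hide": 2}
    decision = "show"
    for label in labels:
        if label.uri != post_uri:
            continue
        treatment = preferences.get(label.val, "show")
        if order[treatment] > order[decision]:
            decision = treatment
    return decision

labels = [Label(src="did:example:labeler", uri="at://alice/post/1", val="adult")]
print(visibility("at://alice/post/1", labels))  # "warn": the post stays up, behind a warning
```

The point of the pattern is that the content itself remains available; each client, and ultimately each user, decides how aggressively to act on the labels it sees.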
Legal demands climb as compliance obligations loom
Government and legal scrutiny intensified. Bluesky received 1,470 requests from law enforcement, regulators, and legal representatives, up from 238 in the prior period — a more than fivefold jump. For a network its size, that is a notable escalation and reflects a broader regulatory environment in which platforms are expected to respond quickly and document outcomes.
Although the report summarizes volumes rather than detailed outcomes, the direction aligns with global expectations. The European Union’s Digital Services Act formalizes transparency reporting norms, while civil-society groups such as the Electronic Frontier Foundation continue to press for clarity on how platforms handle government requests, emergency disclosures, and user notice. As Bluesky grows, pressure to break out denials, appeals, and jurisdictional details will increase.
The decentralization test for trust and safety
Bluesky’s AT Protocol allows accounts to be hosted across multiple providers, complicating moderation because enforcement and labeling must work across different services. The company says it removed 3,619 accounts tied to suspected influence operations, which it characterizes as largely Russia-linked. That figure underscores the need for coordinated defenses against cross-network manipulation campaigns — a problem already well documented by independent researchers and election-integrity groups.
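One way to picture why enforcement can still span providers: decisions are keyed to stable identifiers for accounts and records rather than to whichever server hosts them. The sketch below is purely illustrative, with invented DIDs and host names, and is not drawn from Bluesky’s implementation.

```python
# Hypothetical illustration: because accounts carry stable DIDs, a single
# enforcement list can be applied to content no matter which provider serves it.
takedowns = {"did:example:spam-network-01", "did:example:impersonator-77"}

posts = [
    {"author": "did:example:alice", "host": "pds.bluesky.example", "text": "hello"},
    {"author": "did:example:spam-network-01", "host": "selfhosted.example", "text": "buy now"},
]

# Filtering by author DID works identically for first-party and third-party hosts.
visible = [p for p in posts if p["author"] not in takedowns]
for p in visible:
    print(p["author"], "via", p["host"])
```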
The company also compares current outcomes with earlier moderation snapshots, which listed 66,308 account takedowns by human moderators and 35,842 by automated tools, plus a smaller number of record-level removals. The trend now suggests more proactive automation paired with user-facing labels and friction. This is consistent with the trust-and-safety field’s “reduce reach, add context, then remove if necessary” playbook advocated by practitioners in the Trust and Safety Professional Association.
As a first full transparency baseline, the report offers uncommon visibility for a decentralized service: growth metrics, report volumes, enforcement choices, and the trade-offs behind them. The next test is consistency — continued disclosure on appeals, error rates, and government request outcomes — the kind of granular accounting that turns a one-off report into long-term accountability.
