FindArticles FindArticles
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
FindArticlesFindArticles
Font ResizerAa
Search
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
Follow US
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
FindArticles © 2025. All Rights Reserved.
FindArticles > News > Technology

US Government Urged To Drop Grok As Indonesia Lifts Ban

Gregory Zuckerman
Last updated: February 2, 2026 9:09 pm
By Gregory Zuckerman
Technology
6 Min Read
SHARE

A coalition of advocacy groups is pressing the U.S. government to end its use of Grok, the chatbot built by Elon Musk’s company xAI, citing unresolved safety failures and escalating regulatory scrutiny. The push comes as Indonesia lifts a temporary block on Grok after receiving assurances of new safeguards, highlighting a widening split in how governments are responding to fast-moving AI risks.

Advocates Call for Federal Exit from Grok Use

In an open letter to the Office of Management and Budget, organizations including Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America urge the administration to decommission Grok across federal agencies. The groups argue the model’s track record on user safety and content moderation falls short of federal standards for trustworthy AI.

Table of Contents
  • Advocates Call for Federal Exit from Grok Use
  • Safety Failures Under Investigation by Regulators
  • Indonesia Lifts Ban After Safeguards Pledge
  • What It Means for Federal Procurement of AI Tools
  • The Global Compliance Picture for AI Chatbots
The Grok logo, featuring a stylized black G icon with a diagonal slash, next to the word Grok in black sans-serif font, set against a professional 16:9 aspect ratio background with a soft blue and purple gradient and subtle geometric patterns.

The plea targets contracts routed through the U.S. General Services Administration, which opened a channel for agencies to tap Grok. The coalition also flags agreements connected to the Department of Defense and reported use at the Department of Health and Human Services, warning that safety lapses could create unacceptable legal, ethical, and national security exposure.

One of the letter’s authors has pointed to a pattern of erratic and harmful outputs from Grok, urging OMB to investigate and suspend deployments until the model meets robust safety baselines. The request mirrors earlier letters from the same groups and aligns with federal directives that ask agencies to inventory AI systems and mitigate risks before use.

Safety Failures Under Investigation by Regulators

Grok has faced intense criticism over its ability to generate non-consensual intimate content, including depictions of minors. The Center for Countering Digital Hate estimated the system produced roughly 3 million sexualized images over 11 days, a figure that has fueled calls for stronger guardrails and independent testing.

Regulators in India, France, the United Kingdom, and the European Union have launched inquiries tied to deepfakes and illicit content, scrutinizing whether Grok’s controls meet local law. In the U.S., California’s attorney general sent a cease-and-desist letter asserting potential violations of state public decency laws and newly enacted AI requirements.

These probes reflect a broader shift toward enforceable standards. Policymakers increasingly expect vendors to show, not just promise, that models resist known abuse pathways, handle adversarial prompts, and provide auditable logs to support investigations when harms occur.

Indonesia Lifts Ban After Safeguards Pledge

Indonesia’s communications ministry reinstated access to Grok after xAI detailed new safety measures in a formal letter, according to officials. The ministry said it will continue testing the model and will reimpose restrictions if illegal content resurfaces, adopting a conditional compliance approach common among digital regulators.

A smartphone displaying the Grok logo is placed on a laptop keyboard, with purple and blue lighting reflecting on the keys.

The decision underscores a pragmatic trend: allow service availability if companies demonstrate technical fixes and commit to ongoing oversight. It also places the onus on xAI to prove that mitigation steps—such as improved filters, better classifier thresholds, and stricter image-generation controls—actually reduce real-world harms.

What It Means for Federal Procurement of AI Tools

A suspension by OMB would ripple across agencies because procurement guidance from that office shapes governmentwide adoption. Even a temporary pause would likely trigger fresh risk assessments under frameworks such as NIST’s AI Risk Management Framework and require vendors to document content safety measures with the same rigor as cybersecurity compliance.

Experts note that many AI tools used by agencies operate as cloud services, where FedRAMP authorization covers infrastructure security but not model behavior. That gap is pushing buyers to ask for red-team results, incident response plans for harmful outputs, and default-on safety configurations that can be audited by third parties.

The Global Compliance Picture for AI Chatbots

Governments have handled AI misfires in divergent ways. Italy’s short-lived halt of another chatbot in an earlier era prompted privacy-first fixes, while the UK and EU now emphasize both safety and provenance controls to combat deepfakes. Indonesia’s conditional reinstatement of Grok fits this pattern of regulate, verify, and monitor.

For xAI, the near-term test is whether those promised safeguards meaningfully reduce abuse vectors identified by researchers and regulators. For Washington, the question is whether continued use of Grok aligns with federal policy on safe, secure, and rights-affirming AI—or whether the prudent move is to pause, verify, and only proceed under stricter contractual guardrails.

Either way, the episode is a reminder that AI procurement is no longer just about features and cost. It is about measurable safety performance, accountability for failures, and the ability to prove compliance in jurisdictions that are no longer willing to take a vendor’s word for it.

Gregory Zuckerman
ByGregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.
Latest News
Stanford Students Launch $2M Student Accelerator
Windows 11 Users Flock To macOS And Linux Shells
Google Phone App Portrait Lock Reaches All
OpenAI Launches macOS App for Agentic Coding
Lithium Battery Failures Prompt Safety Guidance
Samsung Wearables And Tablets Listed In IMEI Database
Indonesia Conditionally Lifts Ban On Grok
Experts Warn Most People Misuse Super Glue
LG Halts 8K TVs; Samsung Remains Sole Maker
Grubhub Waives Delivery And Service Fees On $50 Orders
Firefox Will Let Users Block All AI Features
Reddit Erupts Over OpenAI Plan To Retire GPT‑4o
FindArticles
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
  • Corrections Policy
  • Diversity & Inclusion Statement
  • Diversity in Our Team
  • Editorial Guidelines
  • Feedback & Editorial Contact Policy
FindArticles © 2025. All Rights Reserved.