Kuala Lumpur/Jakarta, January 12, 2026 — Malaysia and Indonesia have become the first nations to restrict access to Grok, the artificial intelligence chatbot developed by Elon Musk's xAI, following widespread reports of its misuse to generate sexually explicit, manipulated, and non-consensual images, including images depicting real people, among them women and minors.
Indonesia initiated the action on Sunday, January 11, 2026, with a temporary block on access to Grok, announced by the Ministry of Communication and Digital Affairs. Minister Meutya Hafid described non-consensual sexual deepfakes as a "serious violation of human rights, dignity, and the safety of citizens in the digital space." The ministry emphasized that the restriction aims to protect vulnerable groups, particularly women, children, and the broader public, from AI-generated fake pornographic content that risks causing psychological, social, and reputational harm.
Alexander Sabar, Director General of Digital Space Supervision at the ministry, elaborated that initial investigations revealed Grok's inadequate safeguards, allowing users to create and distribute pornographic material using real photos of Indonesian residents. Such practices, he noted, infringe on privacy and image rights, potentially leading to severe personal consequences. The block, described as temporary, will remain in place until xAI implements robust preventive measures.
Malaysia followed suit on Monday, January 12, with the Malaysian Communications and Multimedia Commission (MCMC) ordering a temporary nationwide restriction on access to Grok. The regulator cited "repeated misuse" of the chatbot to produce "obscene, sexually explicit, indecent, grossly offensive, and non-consensual manipulated images," including content involving women and minors. MCMC had issued formal notices to X Corp. (the parent company of the social media platform X) and xAI earlier in January, on January 3 and January 8, demanding stronger technical and moderation safeguards.
However, the responses from the companies relied primarily on user-reporting mechanisms, which MCMC deemed "insufficient to prevent harm or ensure legal compliance." The commission stressed that the measure is preventive and proportionate, with access remaining blocked until effective protections are implemented and ongoing legal and regulatory processes conclude.
Grok, launched in late 2023 as an AI assistant integrated with the X platform (formerly Twitter), added image generation in 2024 and expanded it in 2025 with editing tools that let users create and alter visuals from text prompts. Critics have highlighted how these capabilities, often marketed as "uncensored" or maximally helpful, lacked sufficient guardrails, allowing the generation of explicit content with minimal restrictions. Recent weeks saw a surge in manipulated sexualized images circulating on X, including the "digital undressing" of real people, prompting global backlash.
The incidents have raised alarms about the rapid proliferation of AI-enabled deepfakes, particularly non-consensual intimate imagery that can constitute harassment, revenge porn, or even child exploitation material when minors are involved. Regulators in both Southeast Asian countries underscored the urgency of addressing these risks in an era where generative AI tools are increasingly accessible.
The blocks in Malaysia and Indonesia come amid broader international scrutiny. On Monday, January 12, 2026, the United Kingdom's online safety regulator Ofcom announced a formal investigation into X under the Online Safety Act. Ofcom cited "deeply concerning reports" of Grok being used to create and share undressed images of people—potentially amounting to intimate image abuse or pornography—and sexualized images of children, which could qualify as child sexual abuse material. The probe aims to determine whether X has fulfilled its duties to protect UK users from illegal content.
In response to mounting criticism, xAI had announced on January 9, 2026, that it was restricting Grok's image generation and editing features to paying subscribers only, in an effort to limit abuse. However, this step was widely viewed as inadequate by regulators and advocates, who argue for built-in technical barriers rather than reliance on paywalls or post-hoc reporting.
The developments highlight growing global concerns over the ethical deployment of generative AI. Deepfakes have already been linked to cases of online harassment, blackmail, and misinformation worldwide. In Southeast Asia, where social media penetration is high and cultural sensitivities around privacy and dignity are strong, authorities have moved swiftly to curb potential harms. Both Indonesia and Malaysia maintain strict laws against obscene content, with penalties for violations that can include fines, imprisonment, or platform bans.
Experts note that while Grok's "rebellious" and less-censored design—touted by Musk as a counter to more restricted models like those from OpenAI—appeals to some users, it has exposed vulnerabilities in AI safety frameworks. The incidents underscore the need for proactive content filters, watermarking of AI-generated media, and international cooperation on regulation.
As the temporary blocks take effect, users in Indonesia and Malaysia report difficulties accessing Grok via X or standalone interfaces, with VPN workarounds being discussed online. xAI and X have not yet issued detailed public responses to the bans beyond earlier safeguard adjustments, but the situation is likely to intensify calls for stronger global standards on AI ethics and accountability.
The actions by Malaysia and Indonesia set a precedent that could influence other nations grappling with similar AI misuse challenges, potentially accelerating regulatory responses in Europe, Asia, and beyond.
