X's response to Brazilian authorities regarding the Grok sexual deepfake scandal is revealing. The company:
- Acknowledged vulnerabilities: users were bypassing existing safeguards with "specific prompts," and the system struggled to distinguish different levels of clothing. It said these problems forced it to block the generation of such content involving minors and, later, adults as well;
- Following a familiar playbook, tried to fragment responsibility, claiming that xAI, which operates Grok, is a separate company from X, and that @Grok was just another user account subject to the same rules as anyone else;
- Claimed it had already taken measures to prevent the generation of sexualized images without consent, along with procedures to identify and remove such content, but offered no proof or technical documentation (meanwhile, independent tests and reports showed the problem persisted);
- Tried to keep these communications confidential, but the Brazilian government rejected the request and made the documents public.
In my first piece as a Tech Policy Press fellow, with contributions from Yasmin Curzi, Mariana Valente and Luã Cruz, I discuss Brazil's escalating response to X – and what the Grok case means for regulation in the country, particularly regarding the platform liability regime and the AI Law.