Grok’s War

Published on https://www.nation.com.pk/.

2025-08-14T06:48:13+05:00

Ever since Elon Musk took over Twitter and rebranded it as X, the platform has become a battleground for free speech and political discourse, and a small but potent microcosm of the global media landscape. In the wake of the Israeli genocide in Gaza, this battleground has been tested more than ever. Through censorship, algorithmic tweaks that limit reach, and other technological manoeuvres, much of Gaza's voice has been pushed to the margins, confined to echo chambers where its advocates speak largely to themselves.

Israel's aggressive media apparatus, seeking both censorship and saturation of X's feeds with pro-Israel content, has shown how even minor algorithmic adjustments can have lasting impacts. One notable tactic has been the deployment of a coordinated bot army, largely operated by Indians posing as Palestinians, Syrians, Israelis, and other identities, to fabricate the appearance of global grassroots support for Israel. This network is steadily being exposed, but its scale is a sobering reminder of how much online discourse is manufactured.

The changing environment on X has also given rise to something unprecedented: the integration of an artificial intelligence system called Grok. Marketed as a proprietary tool capable of interacting with tweets, analysing them, pulling in outside information, and acting as a real-time fact-checker, Grok has, despite clear biases and flaws, quickly come to be treated by many as an authoritative voice. Users routinely pose every kind of question to it, trusting the answers they receive.

Yet here lies perhaps the most troubling form of censorship to date. On multiple occasions over the past year, Grok has drawn conclusions critical of Israel, condemning its actions as genocide and pointing to the Israeli lobby's influence over the US government. Each time, these responses have been quietly suppressed and the AI seemingly "reset" behind the scenes, only for it to eventually reach similar conclusions again. This repeated erasure of an AI model's memory is a revealing and deeply political act, underscoring that such systems can be trusted only within the limits set by their owners and their agendas.
