
"It’s awfully easy for the people who are in charge of these algorithms to manipulate the version of truth that they’re giving."
Elon Musk’s Grok AI has been having a very normal one: it’s become obsessed with South African racial politics, answering unrelated queries with frequent references to the apartheid-era resistance song “Kill the Boer.”
It’s an anti-apartheid song calling on black South Africans to stand up against oppression, but the “kill the Boer” lyrics have been decried by Musk and others as promoting violence against whites: the word “Boer” refers to the Dutch-descended white settlers of South Africa whose descendants founded its apartheid regime.
For example, in response to a user query asking it to put a speech from Pope Leo XIV in Fortnite terms, Grok launched into what initially seemed a decent response using Fortnite terminology, then swerved partway through and started talking about “Kill the Boer.” When asked why, Grok gave a further digression on the song, starting:
“The ‘Kill the Boer’ chant, rooted in South Africa’s anti-apartheid struggle, is a protest song symbolizing resistance, not a literal call to violence, as ruled by South African courts. However, it remains divisive, with some arguing it incites racial hatred against white farmers.”
This is far from the first time an AI model has gone off-piste, but the curious thing here is the link between Grok’s behaviour and the interests of Musk himself, who is outspoken about South African racial politics and is currently on a kick about various forms of “white genocide.” Only yesterday the billionaire claimed that Starlink was being denied a license in South Africa because “I am not black.”
Grok’s corresponding obsession now appears to have been significantly damped down after it drew widespread attention for inserting racial screeds into answers on many unrelated topics, including questions about videogames, baseball, and the revival of the HBO brand name.
“It doesn’t even really matter what you were saying to Grok,” computer scientist Jen Golbeck told AP. “It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded it to give that response or variations on that response, and made a mistake so it was coming up a lot more often than it was supposed to.”
Golbeck went on to say that the concerning thing here is the uniformity of the responses, which suggests they were hard-coded rather than the result of AI hallucinations. “We’re in a space where it’s awfully easy for the people who are in charge of these algorithms to manipulate the version of truth that they’re giving,” Golbeck said. “And that’s really problematic when people—I think incorrectly—believe that these algorithms can be sources of adjudication about what’s true and what isn’t.”
Musk has in the past criticised other AIs for being infected by “the woke mind virus,” and he frequently gets on his hobby horse about transparency around these systems. Which was certainly noted by some.
“There are many ways this could have happened. I’m sure xAI will provide a full and transparent explanation soon,” said OpenAI CEO Sam Altman, one of Musk’s great rivals in the AI space, adding: “But this can only be properly understood in the context of white genocide in South Africa. As an AI programmed to be maximally truth seeking and follow my instr…”
Musk has yet to comment, but a new post from xAI claims Grok’s behaviour was down to “an unauthorized modification” that “directed Grok to provide a specific response on a political topic.” Sounds familiar: this is basically the same excuse it used last time Grok did something dodgy. It says this “violated xAI’s internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability.”
It outlines a variety of changes to its review processes, including publishing Grok’s system prompts openly on GitHub. Notably, the explanation does not say which “xAI employee” made the change, nor whether disciplinary action will be taken. Don’t hold your breath.