CHRIS ROPER: Does social media create killers?

Picture: 123RF/135076467

In an interesting development last week, families of victims of one of those awful US mass shootings sued some of the big social media companies, which they basically accuse of providing the platform that turned the shooter into a racist murderer.

The shooting happened last May in the (now poignantly) named Tops Friendly Market in Buffalo, New York. An 18-year-old, Payton Gendron, killed 10 people and injured three. He livestreamed the attack on Twitch, which is owned by Amazon.

Twitch mostly exists to livestream video games, and in 2022 (according to the Business of Apps site) had an average of 2.58-million concurrent viewers, who consumed 22.4-billion hours of content. About 7.6-million users stream on the platform once a month, generating $2.8bn in revenue. In a certain sick way, livestreaming your murderous rampage is analogous to streaming those video games.

Twitch is only one of the companies or entities being sued. News site The Hill reports that the list includes “Meta, Facebook’s parent company; Snap, which runs Snapchat; Discord; Amazon, which owns Twitch; and Reddit. The lawsuit also names Google, the dark web messaging site 4chan, the body armour company RMA Armament, the gun company Vintage Firearms and the shooter’s parents.”

I’ll need to remind you of the details of the shooting, as the Gun Violence Archive has already counted more than 200 mass shootings in the US this year. Last year it counted 647. You would be forgiven if some of the details blur.

For a phenomenon that is so prevalent, it’s concerning that there appears to be no consensus on how to define a mass shooting. According to The New York Times, “different groups define it differently, depending on circumstances including the number of victims, whether the victims are killed or wounded, and whether the shooting occurs in a public place. The Gun Violence Archive, a nonprofit research group that tracks gun violence using police reports, news coverage and other public sources, defines a mass shooting as one in which at least four people were killed or injured.”

Wikipedia’s description of the Buffalo mass shooting, culled from news reports, is stark. “Gendron arrived at the Tops supermarket on Jefferson Avenue, in a predominantly black neighbourhood in Buffalo, New York. He was armed with a Bushmaster XM-15 AR-15-style rifle, modified to accept high-capacity magazines, and multiple 30-round ammunition magazines. In his car, he had a Savage Arms Axis XP hunting rifle and a Mossberg 500 shotgun. He was wearing body armour, a military helmet, and a head-mounted camera, through which he livestreamed the attack on the online service Twitch. As he approached the scene, he was recorded on his livestream saying ‘just got to go for it’.

“Gendron shot four people in the parking lot, killing three. He then entered the store, shooting eight more people and killing six. According to a law enforcement source, he yelled racial slurs during the incident … At another point, he aimed his gun at a white person behind a checkout counter but apologised and did not shoot.”

Gendron, who also killed a security guard who fired at him, was eventually persuaded to surrender. “After his arrest,” we are told, “he made disturbing statements regarding his motive and state of mind.”


Leaving aside an obvious question — what possible statements could he have made about his motive that wouldn’t be disturbing? — we’re left with the salient one. Why did he do it?

I’m flighting this because of the way it informs the idea of holding platforms responsible, not as a ghoulish attempt to revisit the tragedy and indulge in the schadenfreude, disguised as fake empathy, that the rest of the world sometimes shows towards America’s gun violence problem. That always struck me as a version of the awful “thoughts and prayers” nonsense spouted by people such as Texas senator Ted Cruz, who sparked outrage with his comment that he was praying for families of the eight victims of a recent shooting at a shopping mall. A comment, of course, made in lieu of actually making any meaningful contribution to the debate around gun control. (You’ll be glad to know it’s not just South Africans who use humour to mitigate the stresses of our existence. The Guardian reports one person’s comment: “Have you tried turning the prayer machine off and back on again?”)

In February, Gendron was sentenced to life in prison after pleading guilty to charges of murder, murder as a hate crime and hate-motivated domestic terrorism; he admitted specifically targeting black people.

The suing of the platforms includes a complaint that “Gendron was not raised in a racist family, did not live in a radically polarised community and did not have a history of negative interactions with black people, but he was motivated to commit the hate crime because of his exposure to racist, antisemitic and white supremacist ‘propaganda’ on social media platforms”.

The shooter (such an American coinage, that) apparently claimed that the stuff he saw on social media “caused him to become radicalised, motivated him to commit racially motivated violence and helped him have the training and equipment to carry out the attack”.

The kicker is the complainants’ contention that “Gendron’s radicalisation on social media was neither a coincidence nor an accident; it was the foreseeable consequence of the defendant social media companies’ conscious decision to design, program, and operate platforms and tools that maximise user engagement (and corresponding advertising revenue) at the expense of public safety”.

It’s an argument worth considering, even if we think the conclusion is self-evident. Is it possible that some of the “shooters” the US has to deal with would never have brought themselves to the point of killing if they hadn’t been prompted to by their interactions with content on social media? It’s not an abstract question for us as South Africans. More and more we are seeing hate speech and xenophobia being pushed in our social media, and it’s only a matter of time before we have to confront the possible results of that.

But can we claim to be able to determine who is affected, and how much? Do you have to be receptive to being groomed as a racist or a misogynist, for example, and how much of that receptivity is the fault of society at large, rather than your internet bubble?

In 1984, the parents of a 19-year-old man who committed suicide sued Ozzy Osbourne, one-time lead singer for metal group Black Sabbath and ersatz “lord of darkness”, alleging that the lyrics to his song Suicide Solution caused their son to shoot and kill himself.

Of his son, the father said — according to Ultimate Classic Rock, which is not a publication I thought I’d ever be quoting in the FM — “he’s a perfectly normal kid there, who really doesn’t show any signs of any depression at all, and happy, and all of a sudden, six hours later, he’s dead. No one [could] explain it, the only thing we know is he was listening to this music.”

The case was dismissed, the court declaring that the suicide was not a foreseeable result of listening to Osbourne’s song. The singer himself said: “The boy must have been pretty messed up before he ever heard an Ozzy record. I mean, I can’t help that, you know? I feel very sad for the boy and I felt terribly sad for the parents. As a parent myself, I’d be pretty devastated if something like that happened. And I have thought about this, if the boot was on the other foot, I couldn’t blame the artist.”

While that decision seems right, in terms of cause and effect you’d have to say that consuming certain types of cultural artefacts must have an impact on how people think about the world. But if you’ve made a choice to consume a particular piece of art, you’re still in charge of your own destiny.

The difference with social media, though, seems to be that in many cases you haven’t actually made the choice. One of the allegations in the Buffalo shooting case was that “the platforms use algorithms that are ‘designed to addict young users’ who are more vulnerable to social media addiction and racist and antisemitic views and conspiracy theories online”.

How the case plays out will have ramifications for how we can hold social media companies accountable. It should also give us cause to consider our local situation. Have we created a society that is vulnerable to algorithms of hate, and are we going to see increasing violence based on social media indoctrination? And the most difficult question of all: is there anything we can do about it?
