Important Things At Twitter Keep Breaking, And Making The Site More Dangerous

It turns out that if you fire basically all of the competent trust & safety people at your website, you end up with a site that is neither trustworthy nor safe. We’ve spent months covering ways in which you cannot trust anything from Twitter or Elon Musk, and there have been some indications of real safety problems on the site as well. But it’s been getting worse lately, with two somewhat terrifying stories that show just how unsafe the site has become, and how risky it is to rely on Twitter for anything.
First, Yoel Roth, Twitter’s former head of trust & safety, said a few weeks after he quit that “if protected tweets stop working, run.” Basically, when core security features break down, it’s time to get the hell out of there.
Protected tweets do still seem to kinda be working, but a related feature, Twitter’s “circles,” which lets you tweet to just a smaller audience, broke. Back in February, some people noticed that it was “glitching” in concerning ways, including a few reports that content supposedly posted only to a circle was viewable publicly, though there weren’t many details. In early April, however, such reports became widespread, including reports that nude imagery people thought they were sharing privately with a smaller group was publicly available.
Twitter said nothing for a while before finally admitting earlier this month that there was a “security incident” that may have exposed some of those supposed-to-be-private tweets, though it appears to have sent that admission only to some users via email, rather than commenting on it publicly.
The second incident is perhaps a lot more concerning. Last week, some users discovered that Twitter’s search autocomplete was recommending… um… absolutely horrific stuff, including potential child sexual abuse material and animal torture videos. As an NBC report by Ben Collins notes, Twitter used to have tools that stopped search from recommending such awful things, but it looks like someone at Twitter 2.0 just turned off that feature, leaving anyone open to having animal torture recommended to them.

Yoel Roth, Twitter’s former head of trust and safety, told NBC News that he believes the company likely dismantled a series of safeguards meant to stop these kinds of autocomplete problems.
Roth explained that autocompleted search results on Twitter were internally known as “type-ahead search” and that the company had built a system to prevent illegal, illicit and dangerous content from appearing as autocompleting suggestions.
“There is an extensive, well-built and maintained list of things that filtered type-ahead search, and a lot of it was constructed with wildcards and regular expressions,” Roth said.
Roth said there was a several-step process to prevent gore and death videos from appearing in autocompleted search suggestions. The process was a combination of automatic and human moderation, which flagged animal cruelty and violent videos before they began to appear automatically in search results.
“Type-ahead search was really not easy to break. These are longstanding systems with multiple layers of redundancy,” said Roth. “If it just stops working, it almost defies probability.”

In other words, this isn’t something that just “breaks.” It’s something someone had to go in and actively turn off, through multiple steps.
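
To make Roth’s description a little more concrete, here is a minimal, purely hypothetical sketch in Python of what a regex-and-wildcard blocklist for type-ahead suggestions could look like. The pattern names, terms, and filtering logic here are illustrative assumptions, not anything from Twitter’s real system, which Roth describes as far more elaborate, with multiple layers of redundancy and human review.

import re

# Purely hypothetical illustration -- not Twitter's actual code. The idea is a
# blocklist of literal terms, wildcard-style patterns, and regular expressions
# that type-ahead suggestions are checked against before being shown.
BLOCKLIST_PATTERNS = [
    r"\bgore\b",               # literal term (placeholder)
    r"\banimal\s+tortur\w*",   # suffix wildcard via \w* (placeholder)
]
BLOCKLIST = [re.compile(p, re.IGNORECASE) for p in BLOCKLIST_PATTERNS]

def filter_suggestions(suggestions):
    """Drop any autocomplete suggestion that matches a blocklist pattern."""
    return [s for s in suggestions
            if not any(p.search(s) for p in BLOCKLIST)]

# Only the harmless query survives filtering.
print(filter_suggestions(["cute animal videos", "animal torture videos"]))
# -> ['cute animal videos']

The point of even a toy version like this is that nothing about it degrades on its own: the filter keeps working until someone removes it from the pipeline or empties the list.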
After news of this started to get attention, Twitter responded by… turning off autocomplete entirely. Which, I guess, is better than leaving up the other version.
But, still, this is why you have a trust & safety team that works through this stuff to keep your site safe. It’s not just content moderation; there’s a lot more to it than that. But Twitter 2.0 seems to have burned a ton of institutional knowledge to the ground and is just winging it. If that means recommending CSAM and animal torture videos, well, I guess that’s just the kind of site Twitter wants to be.

https://www.techdirt.com/2023/05/17/important-things-at-twitter-keep-breaking-and-making-the-site-more-dangerous/
