Direct messages are the mostly unseen side of social media. They’re private, individualized and personal — and a new study found they’re also often filled with hate and abuse that goes unchecked.
The Center for Countering Digital Hate (CCDH) combed through thousands of direct messages sent on Instagram to women with high-profile accounts and found that Instagram not only made it hard in some cases to report the abuse, but also failed to act on 90% of researchers’ incident reports.
"This report has one of the worst-ever failure rates of our reports," CCDH CEO and founder Imran Ahmed said, adding that the nonprofit has conducted several other Failure to Act reports over several years.
On Instagram, all users have the ability to report a DM, which can include text, photos, videos and/or audio messages.
The report must include a reason, which can range from lesser offenses such as spam or "I just don’t like it" to more serious, possibly even criminal, ones such as hate speech or sexual activity. Instagram then reviews the report.
For its study, CCDH examined the DMs of five women Instagrammers with a combined total of nearly 5 million followers, including those of actor Amber Heard.
After looking through nearly 9,000 DMs, CCDH found that one in 15 broke Instagram’s rules on abuse and harassment.
"For women in the public eye, receiving a constant stream of rude, inappropriate and even abusive messages to your DMs is unfortunately inevitable, and the fact that this happens away from public view makes it all the more intrusive," said Rachel Riley, a broadcaster and CCDH ambassador whose DMs were part of the research.
"It worries me that younger and more vulnerable women and girls can be exposed to huge amounts of abuse without anyone knowing," she continued.
The difficulty of policing abuse in Instagram DMs is something the company has acknowledged, saying its technology can’t proactively detect hateful or abusive content since the conversations are private. Because of that, Instagram announced several changes last year to "tackle abuse on Instagram."
In February, stricter penalties for people who send abusive DMs were announced, and in April a "hidden words" feature that can filter out abusive messages was added. Instagram also said it made it harder for someone with an already-blocked account to contact a user again through a new account.
Under the stricter penalty, Instagram disables accounts that send abusive DMs; previously, it only barred such accounts from sending messages for a set period of time.
CCDH said its study found Instagram allowed 9 in 10 users who sent violent threats to remain active after the DMs were reported, going against Instagram’s commitment to disabling abusive accounts.
"Meta, in its continued negligence and disregard for the people using its platforms whilst churning record profits, has created an environment where abuse and harmful content is allowed to thrive," Ahmed wrote.
Instagram’s parent company Meta, formerly known as Facebook, has been under fire this past year after former employee and whistleblower Frances Haugen testified before Congress that the company's systems amplify online hate and extremism and fail to protect young users from harmful content.
Her testimony, alongside thousands of pages of internal documents, condemned the tech company as one that continues to put profits over safety — a sentiment that Ahmed and CCDH agree with.
"There is an epidemic of misogynist abuse taking place in women’s DMs. Meta and Instagram must put the rights of women before profit," Ahmed said.
Both Haugen and CCDH are also calling for lawmakers to "act now" on implementing stricter social media laws.
CCDH made recommendations after finding several "systematic problems" within Instagram’s DM function that made it hard in some cases to report the problematic DMs in the first place.
The problems noted by the CCDH include:
- Users being unable to report any abusive voice notes that accounts have sent via DM
- Users being forced to view messages sent in "vanish mode" in order to report them
- Instagram’s "hidden words" feature is ineffective at hiding abuse from users
- Users can face difficulties downloading evidence of abusive messages
The recommendations CCDH made to address these issues cover both specific technical aspects of the platform and its broader processes and safety features.
"Given the current unresponsiveness to the problems raised and failure to self-regulate, public / political pressure and the use of statutory powers by regulators will be needed to ensure that these are implemented," CCDH wrote.
The CCDH is an international nonprofit that "seeks to disrupt the architecture of online hate and misinformation." It has offices in London and Washington, D.C.
This story was reported from Detroit.