If Twitter’s latest financial reports are to be believed, the company’s prognosis is good.
The latest reports show $909 million in revenue and a $225 million net profit, a surge in profitability attributed in part to a 16% decrease in abuse reports. This measure is essential to gauging Twitter’s “platform health” metric. The company has pledged to focus on this measure in the new year, cracking down on abusive accounts, promising to “reduce the burden on victims of abuse” to report their experiences, and “taking action before abuse is reported.”
On that last goal, however, a University of Iowa study has identified major deficits in Twitter’s ability to take effective action in a timely manner. Computer science professor Zubair Shafiq, along with graduate student Shehroze Farooqi, built a tool to automatically identify the sources of problematic tweets, Engadget reported. Although many of Twitter’s public efforts to improve platform etiquette and public discourse have centered on removing accounts created by bad-faith actors, Shafiq and Farooqi found vulnerabilities in a different avenue of account abuse: scamming and spamming carried out by third parties using the platform’s API.
How big of a problem did they uncover? By the pair’s count, 167,000 apps accessing the API have been used to “spread disinformation, spam, and malware.”
In one of the study’s most startling findings, Shafiq and Farooqi were able to identify an account’s potential for large-scale abuse from its first seven tweets alone. By comparison, Twitter’s standard protocol looks into an account’s patterns of abuse only after it has tweeted 100 times. “[A]ll sorts of nefarious activity remain undetected by Twitter’s fraud-detection algorithms, sometimes for months, and they do a lot of damage before Twitter eventually finds and removes them,” Shafiq said of the abuse being amplified by spam accounts.
As you might expect, Twitter takes issue with the findings. “Research based solely on publicly available information about accounts and tweets on Twitter often cannot paint an accurate or complete picture of the steps we take to enforce our developer policies,” a spokesperson told WIRED when confronted with the findings. And while the company has looked deeper into the apps with access to its API and is purging those using it with malicious intent, the University of Iowa pair insists this isn’t enough: the process takes too long, is too shortsighted, and leaves victims of abuse with little protection.
For many of these victims, abandoning their accounts may be a more attractive option than waiting for Twitter to help defuse the situation. That departure may be why Twitter often shares massaged stats like monetizable daily active users (mDAU), which reportedly rose from 124 million to 127 million from Q3 to Q4 of 2018. By comparison, monthly active users, a more representative measure of who’s using Twitter regularly, dropped from 336 million to 321 million over the same time frame.
The platform gives itself a clean bill of health; an independent observer says it’s sicker than we might expect. Who’s correct? While the platform has come a long way from its “bottom” in 2016, when advertisers were steering far away from Twitter’s reported toxicity, it still has some healing to do. With any luck, this latest wake-up call will galvanize the platform to action in continued pursuit of a “healthier” online space.