A student project may help settle the legal mess between Twitter and Elon Musk

A student project has become central to the lawsuit between Twitter and Elon Musk over his abandoned purchase of the social network.

Elon Musk accused Twitter of misrepresenting the number of fake accounts on the platform and, to make that case, relied on ‘Botometer’, bot-detection software maintained by doctoral student Kaicheng Yang.

The bots, at the center of the dispute

According to legal documents, Botometer, a free tool that claims to estimate the probability that a Twitter account is a bot, has been instrumental in the Musk team's effort to show that fake accounts make up more than 5% of the platform.

“Contrary to Twitter's representations that its business was minimally affected by fake or spam accounts, the Musk parties' preliminary estimates show otherwise,” the countersuit says.

But differentiating between humans and bots is harder than it seems, and researchers have accused Botometer of “pseudoscience” for making it sound easy. Twitter was quick to point out that Musk used a tool with a history of making mistakes.

In its legal filings, the platform reminded the court that Botometer had classified Musk himself as likely a bot earlier this year.

As a result, when the case goes to trial in October, it won't just be Musk and Twitter on trial, but also the science behind bot detection.

How does Botometer work?

The software has been running for eight years, and its original creators have since moved on: Yang inherited it from them at university.

Botometer is a supervised machine learning tool, meaning it has been trained to separate bots from humans. Yang tells Wired that Botometer distinguishes bots from humans by looking at more than 1,000 details associated with a single Twitter account, such as its name, profile picture, followers, and tweet-to-retweet ratio, before giving it a score of zero to five.
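The pipeline described above can be sketched in miniature. This is a hypothetical illustration, not Botometer's real model: the features, weights, and logistic scoring below are invented stand-ins for a trained classifier that maps account features to a bot probability, rescaled to the 0–5 range the tool reports.

```python
# Hypothetical sketch of a Botometer-style score (illustrative only).
# A supervised classifier learns weights from labeled accounts; here the
# weights are hard-coded toy values standing in for a trained model.
from dataclasses import dataclass
import math

@dataclass
class Account:
    # A handful of illustrative features; the real tool reportedly uses 1,000+.
    followers: int
    tweets: int
    retweet_ratio: float      # share of activity that is retweets
    has_default_avatar: bool

# Toy parameters standing in for learned model weights.
WEIGHTS = {"retweet_ratio": 3.0, "default_avatar": 1.5, "low_followers": 1.0}
BIAS = -2.5

def bot_probability(acct: Account) -> float:
    """Logistic score in (0, 1): higher means more bot-like."""
    z = BIAS
    z += WEIGHTS["retweet_ratio"] * acct.retweet_ratio
    z += WEIGHTS["default_avatar"] * (1.0 if acct.has_default_avatar else 0.0)
    z += WEIGHTS["low_followers"] * (1.0 if acct.followers < 10 else 0.0)
    return 1.0 / (1.0 + math.exp(-z))

def botometer_style_score(acct: Account) -> float:
    """Rescale the probability to the 0-5 range the tool reports."""
    return round(5.0 * bot_probability(acct), 1)
```

An account with a near-total retweet ratio, a default avatar, and almost no followers lands near the top of the scale, while a typical human profile scores low.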

Crucially, however, Botometer does not give users a threshold, a definitive cutoff that would label every account above a certain score as a bot. Yang says the tool should not be used to decide whether individual accounts or groups of accounts are bots at all. He prefers that it be used comparatively, to understand whether one topic of conversation is more bot-polluted than another.
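The comparative use Yang describes can be sketched as follows. The scores are made-up 0–5 values for accounts tweeting about two hypothetical topics; the point is that the aggregate, not any single account's label, is what gets compared.

```python
# Hypothetical sketch: compare how bot-polluted two conversations are by
# averaging per-account scores rather than labeling individual accounts.
def mean_score(scores: list[float]) -> float:
    """Average of a list of 0-5 bot scores."""
    return sum(scores) / len(scores)

topic_a = [0.4, 0.9, 4.1, 0.7, 1.2]   # invented scores for topic A's accounts
topic_b = [3.8, 4.2, 2.9, 4.5, 3.1]   # invented scores for topic B's accounts

# Topic B's conversation looks more bot-polluted on average,
# even though no single account is declared "a bot".
more_polluted = "B" if mean_score(topic_b) > mean_score(topic_a) else "A"
```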

