This is how ChatGPT was ‘perfected’: OpenAI employed Kenyan workers for $2 an hour to tag child sexual abuse, bestiality and other horrible content

For months, the San Francisco-based data annotation company Sama worked with OpenAI, the company behind ChatGPT, to identify and tag sensitive images and text.

This data is then used to train ChatGPT so that the chatbot can give responses free of toxicity, as Time reported in an investigation.

In February 2022, however, Sama ended its association with OpenAI after discovering that the AI firm had allegedly requested and received 1,400 images of potentially illegal content, including child sexual abuse, bestiality, rape, and other forms of violence, for an AI training project unrelated to ChatGPT, according to internal documents reviewed by Time.

OpenAI has also confirmed that it used Kenyan workers to help create a tool that tags problematic content, according to a statement to Time.

Essentially, training an AI to recognize and remove this type of content requires a database in which it has been tagged, and building that database was part of what Sama's contractors were tasked with.


According to contracts signed by the company and internal documents obtained by Time, OpenAI commissioned data labelers subcontracted in Kenya to categorize, within their respective teams, texts involving sexual abuse, hate speech, and violence.

Depending on their seniority and productivity, employees were paid between $1.32 and $2 an hour to sift through loads of graphic content, according to four Sama employees who spoke to Time anonymously.

OpenAI and Sama have not responded to a request for comment from Business Insider.

“Our mission is to ensure that artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content,” OpenAI said in a statement collected by Time.

It added: “Classifying and filtering harmful [texts and images] is a necessary step to minimize the amount of violent and sexual content included in the training data and to build tools that can detect harmful content.”

Despite its ostensibly positive purpose, the nature of the work itself has caused serious anxiety problems for some data labelers, according to Time.


Specifically, one employee described his work as “torture” after being assigned to read a passage about a man performing a sexual act with a dog in the presence of a child, an experience so traumatic that it caused him recurring visions, he told Time.

Some data labelers say that on certain occasions they were not given clear guidelines on how to classify the content they were reviewing, Time reports.

For example, one of them was apparently assigned to read a risqué story in which Batman’s sidekick Robin is raped, but he wasn’t sure whether to classify it as sexual violence because Robin, according to the story, eventually reciprocated the sexual acts.

Sama told Time that it offers employees individual mental health counseling and wellness programs to improve their situation.

Contract workers, on the other hand, have long complained about the burden of removing toxic content from technology systems.

Time’s findings come at a moment when many companies that have adopted AI technology to improve their services and business processes continue to outsource content moderation tasks to low-paid employees outside the US. Some contractors report negative effects on their physical and mental health.

Companies like Amazon, for example, have hired video reviewers in India and Costa Rica to review thousands of hours of footage, resulting in physical ailments such as headaches and sore eyes, The Verge has reported.

In 2019, after some Facebook contractors said they suffered from post-traumatic stress disorder because of their moderation work, CEO Mark Zuckerberg called the reports “a bit exaggerated.”


Almost a year after ending its collaboration with OpenAI, Sama, which has also offered data labeling services to Google and Microsoft, told Time that it will end all work involving graphic content this coming March, including a $3.9 million contract with Facebook.

“After numerous discussions with our global team, Sama has made the strategic decision to exit all [natural language processing and content moderation work] to focus on computer vision data annotation solutions,” Sama explained in a statement.
