In November, the venerable tech outlet CNET began publishing articles generated by artificial intelligence on topics such as personal finance; the stories proved to be riddled with errors. Today the human members of its editorial staff unionized, calling on their bosses to provide better working conditions and more transparency and accountability around the use of AI.
“In this time of instability, our diverse content teams need industry-standard job protections, fair compensation, editorial independence, and a voice in the decisionmaking process, especially as automated technology threatens our jobs and reputations,” reads the mission statement of the CNET Media Workers Union, whose more than 100 members include writers, editors, video producers, and other content creators.
While the organizing effort started before CNET management began its AI rollout, the new union could become one of the first to force an employer to set guardrails around the use of content produced by generative AI services like ChatGPT. Any agreement struck with CNET’s parent company, Red Ventures, could help set a precedent for how companies approach the technology. Multiple digital media outlets have recently slashed staff, with some, like BuzzFeed and Sports Illustrated, embracing AI-generated content at the same time. Red Ventures did not immediately respond to a request for comment.
In Hollywood, AI-generated writing has prompted a worker uprising. Striking screenwriters want studios to agree to prohibit AI authorship and to never ask writers to adapt AI-generated scripts. The Alliance of Motion Picture and Television Producers rejected that proposal, instead offering to hold annual meetings to discuss technological advancements. The screenwriters and CNET’s staff are both represented by the Writers Guild of America.
While CNET bills itself as “your guide to a better future,” the 30-year-old publication stumbled late last year into the new world of generative AI, which can create text or images. In January, the science and tech website Futurism revealed that in November, CNET had quietly started publishing AI-authored explainers such as “What Is Zelle and How Does It Work?” The stories ran under the byline “CNET Money Staff,” and readers had to hover their cursor over it to learn that the articles had been written “using automation technology.”
A torrent of embarrassing disclosures followed. The Verge reported that more than half of the AI-generated stories contained factual errors, leading CNET to issue sometimes lengthy corrections on 41 out of its 77 bot-written articles. The tool that editors used also appeared to have plagiarized work from competing news outlets, as generative AI is wont to do.
Then-editor-in-chief Connie Guglielmo later wrote that a plagiarism-detection tool had been misused or had failed and that the site was developing additional checks. One former staffer demanded that her byline be excised from the site, concerned that AI would be used to update her stories in an effort to lure more traffic from Google search results.
In response to the negative attention surrounding CNET’s AI project, Guglielmo published an article saying that the outlet had been testing an “internally designed AI engine” and noting that “AI engines, like humans, make mistakes.” Nonetheless, she vowed to forge ahead with the site’s experiment in robot authorship while making some changes to its disclosure and citation policies. In March, she stepped down as editor in chief and now heads up the outlet’s AI edit strategy.