Six months after the launch of the test version of ChatGPT, WAN-IFRA (the World Association of News Publishers) found that almost half of newsrooms around the world are already working with generative artificial intelligence.
In a survey carried out by the consultancy Schickler, 101 editors, reporters and other professionals were interviewed, and 49% answered "yes" to the question "Is your newsroom actively working with generative AI tools such as ChatGPT?". On the other hand, only 20% of newsrooms have adopted guidelines for their use.
Respondents were most concerned about inaccurate information and poor content quality (85%) as negative effects of AI adoption in journalism. Plagiarism or copyright infringement came next (65%). The threat to jobs was cited by only 38%.
The main use so far has been text creation (54%), though in a limited way, without generating original content; it is followed by streamlined search (44%) and by text correction and improved workflows (43%). Content creation and translation come a little lower (32%).
Research and content creation, appearing at relatively high percentages, were the survey results that most worried professionals after WAN-IFRA released the findings, given reports in recent months that ChatGPT sometimes invents information.
The vast majority of respondents approved of the adoption, with 70% saying they expect the tools to be useful to professionals. In the assessment of Dean Roper, director of WAN-IFRA, "the results show a picture of optimism, some skepticism, and experimentation in motion".
The organization will address AI at the World News Media Congress to be held in Taipei in three weeks, with editors from newspapers such as The New York Times and Hong Kong's South China Morning Post. The event is organized by Lianhe Bao (United Daily News), the newspaper of the largest private media group in Taiwan.
Another global journalism group, INMA (the International News Media Association), which brings together around 900 news outlets, held its annual congress last week in New York. There, too, the first experiments with generative AI dominated the discussions.
NYT publisher AG Sulzberger was among the most critical, saying that artificial intelligence “will almost certainly start an unprecedented torrent of crap — to use the scientific word — into the information ecosystem, poisoning it and leaving people completely confused”.
He cited "deepfake" videos already circulating in the US presidential pre-campaign to emphasize that "the ecosystem is about to get much, much worse", which could even favor traditional journalism, thanks to the trust it offers by adopting "processes that distinguish whether things are real".
Sulzberger and Fred Ryan of the Washington Post said they had set up committees to consider what to do. Executives from various quarters joined the discussions, including Catherine So of the SCMP and Praveen Someshwar of the Hindustan Times, the latter celebrating that AI "unlocks the capabilities of our journalists".
At the same time, editors of leading publications have made their plans for AI public. Roula Khalaf of the Financial Times, in the most widely echoed of these statements, wrote that the technology will increase productivity and "free up reporters and editors to focus on generating and reporting original content".
At Reuters, editor-in-chief Alessandra Galloni and ethics editor Alix Freedman set out a series of "pillars", the first being that the agency will embrace the new technology. The second: "reporters and editors will be responsible for greenlighting any content that relies on AI."
In contrast to the attention ChatGPT has gained in newsrooms, the Tow Center, at Columbia, the leading US journalism school, produced a study quantifying the "hype" that those same newsrooms have fueled since November, repeating what was seen with cryptocurrencies, virtual reality and other waves.
The study advocated more nuance and context and fewer "hyperbolic headlines". It urged newsrooms to adopt coverage guidelines, specifically suggesting they stop anthropomorphizing AI, a habit that has spread through statements in which ChatGPT and its competitors supposedly say, for example, "I want to live".