Investigative journalists and AI: true friends?🕵🏻
New year, new format!
Starting with this issue, MyType newsletter will feature interviews with experts in Artificial Intelligence related to journalism and the world of communication.
The subtitle of this newsletter is: AI and journalism, opportunity or threat? To answer this question, we decided to ask experts in the field. So, what is the first answer you would give to this question?
Well, I think both are right. If you’re familiar with Noam Chomsky, you may know he said that ‘technology is usually fairly neutral. It’s like a hammer’. A hammer can be used to drive a nail into the wall, but it can also be used to murder someone. Technology is a tool in that regard: it can be used to your benefit, but also to your disadvantage.
On the opportunity side, there are a lot of quick wins. For example, there are many specific processes, like transcription and summarization, where AI can be helpful. So, it is mostly about efficiency.
However, in some cases it can also be seen as a threat.
I specifically see three challenges for journalism. One is the lack of AI literacy. Journalists sometimes don’t know what specific technologies can do, but also what they cannot do; knowing both helps to demystify these technologies.
The second is authenticity. It is a challenge in the sense that, with very few resources, you can now generate newspaper articles, or you can generate images. With Google Gemini, for example, which will be released next year, you can even generate a text and then a video in just a matter of clicks. I think that is a real challenge for journalism because we will no longer have the monopoly on the “true facts”. And it is also increasingly hard, I think, to verify whether an image is real or not.
Lastly, the third one is the content paradox. This goes back to what I said: it is now easy to write an article without having a lot of knowledge. As a news organization, you can reach the point of generating a lot of articles and personalizing them heavily, but that doesn’t necessarily mean there is value in them, or that the audiences you are now reaching are interested in them. Overall, I would say I see more challenges than opportunities. But there are, of course, also a lot of opportunities.
What are some best practices that newsrooms and technology companies should adopt for a conscious use of AI in newsrooms?
It’s quite a hard question. It goes back to what I said about the lack of AI literacy: many newsrooms are not organizing workshops, or even having discussions, on these technologies.
I gave a training to journalists, and you would think they would already have tried or experimented with, for example, ChatGPT, but they had not, and they still needed to create an account. They were also a bit fearful of the technology and these kinds of things. It helps that OpenAI and other tech companies are sometimes quite vocal about the limitations of these technologies; that sparks a discussion not only in newsrooms but outside them as well.
What evolution do you envision in the relationship between AI and journalism in the short, medium, and long term?
I think that looking into the future is very hard. But of course, there are some things I see on the horizon that reflect the future path and impact of AI, more in the long term, specifically for journalists.
I think what will become more important is what I’ve called in my PhD ‘shared decision making’. Technology will become more autonomous in helping people, while people will still provide oversight, so there will be more automation than there is right now. These technologies are increasingly in charge of daily decisions that used to be ours.
Another important aspect of possible future impact, and of how it will affect journalism, is regulation. The AI Act, which is still being negotiated, says that if you are using AI or generative AI, you should be explicit about it to your audience. That will also influence the potential impact of AI on journalism. It also depends on how the technology evolves. Now you see GPT-4 is implemented. It took almost two years to train GPT-3, then almost two years to train GPT-4. I’m not sure when GPT-5 will arrive; that could be in two years, or maybe sooner, or way later. Therefore, I think these kinds of aspects will also determine how AI further impacts journalism. Of course, right now you see the European Union being really keen on imposing guardrails, but the specificity of those guardrails also has a huge impact.
I know one example that actually happened in Belgium: a journalist used ChatGPT in a newspaper article, and there were more than five factual errors in that article. It was an ‘in memoriam’ article about someone who had died. The children of the deceased contacted the newspaper to say there were a lot of factual errors in it. A correction was published in the physical newspaper the next day, but there was no mention of the fact that ChatGPT had been used. In that sense, I believe the whole framework of how ChatGPT, or generative AI tools more broadly, are implemented in newsrooms is very important. Some of the guidelines that news organizations have put forward are very explicit; others less so.
Hence, I think the impact of AI will also depend on whether its use flourishes in newsrooms or not, especially given these guardrails.
In your experience, what are the main case studies you are working on, or have studied, with news organizations?
First, some days ago I wrapped up research at the BBC. They are working on personalization: not so much on generative AI, but on news recommender systems, using AI to personalize content for specific users. The BBC has a platform, the iPlayer, a sort of Netflix for the public broadcaster, and they want to move all their content there and combine it with the news component. The news component is very specific in the sense that it needs to be diverse. So there is a very specific way of personalizing content in the iPlayer, and I am looking at how editorial values fit into the algorithm, since the algorithm is being trained by engineers who do not ‘speak journalism’. Then the journalists step in for a second check, to verify whether the content is diverse enough and in line with BBC values. It is a two-step process where editorial tries to influence decisions that then affect the recommender system, and vice versa. This is done through observations and interviews, and I am now looking at how they are negotiating that recommender system.
Another case study I did was with a Danish news organization. That one was more about the perception of AI, because that specific news organization was not really working with AI, and still isn’t. It is a rather “conservative” news organization in Copenhagen, as well as one of the oldest newspapers in the world. They were very interested in looking into where they should invest resources to implement AI. So that was more about investigating perceptions, and maybe also demystifying the technology. Those are the two case studies I have been working on.
We open this cycle with an interview with Hannes Cools, Assistant Professor in