When Can GenAI Be Ethically Used in News?

Dr David Dunkley Gyimah
4 min read · Oct 14, 2024


The audio you’re about to hear is real. It’s from a night-time drive with a peacekeeping force in South Africa before its first all-race democratic elections in 1994. The location was Katlehong, a township rife with fatal political clashes.

We were made to sign ‘death warrants’ absolving the peacekeeping force of any wrongdoing, should things go south. Today, Katlehong has transformed.

Before that audio, some context on GenAI.

Firstly, the inevitable. GenAI will find its way into news production, even if only in hybrid use. From speaking to senior execs, I know it’s being pushed hard by tech companies lobbying broadcasters. Thus far, guidelines unveiled by the recently formed Archival Producers Alliance indicate how GenAI could be adopted by documentary makers. News broadcasters could be next. ITV, too, is looking for a GenAI specialist to help with its programme-making.

Tech adoption trends over the years provide some evidence of direction. There is always initial resistance until a technology’s ethical use can be ascertained. Back in 1994 I was one of a group: the first official NUJ-recognised videojournalists in the UK.

Broadcasters felt it should not be touched. The very idea that one person controlled the production flow, instead of three or four people, was rejected. CNN, AP and ITV adopted it before, in 2001, the BBC followed, labelling videojournalism under another name.

As usual, taking a cue from Carolyn Marvin’s ‘When Old Technologies Were New’, a small group of people will seek to shape a technology’s use.

In GenAI’s case, some caution. It’s indisputable now that news, a social construct, relies on a diversity of thoughts and views for its objectivity and impartiality. That same diversity is critical to how GenAI is deployed; otherwise it risks harming particular groups.

Remember, GenAI produces patterned, automated output derived from the data it scrapes and is trained on. The less diverse the sources and training, the more harmful its output can be to different groups.
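A minimal, hypothetical sketch in Python makes the point (a toy frequency ‘model’, nothing like a production LLM): the system can only reproduce associations present in its training data, so a skewed corpus yields skewed output.

```python
from collections import Counter

# Toy corpus standing in for scraped training data.
# Note the skew: "nurse" only ever co-occurs with "she".
corpus = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
]

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    counts = Counter(nxt for w, nxt in corpus if w == word)
    return counts.most_common(1)[0][0]

print(predict_next("nurse"))     # "she": the only pattern it has ever seen
print(predict_next("engineer"))  # "he":  the majority pattern wins
```

The model isn’t ‘deciding’ anything; it is echoing the balance of its sources, which is exactly why diversity of training data matters.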

Note, too: what happens if a producer can’t create in-camera footage, or access archive because it doesn’t exist? This was the impetus for me to create ‘Empowered’, shown at the first AI-filmmakers festival in the US. It’s a story highly relevant to US politics, made before the turn of events.

And ‘Chairman, the Ghanaian’, which I had the opportunity to present to the British Screen Forum and Channel 4 Board of Directors.

I’m currently writing about AI-filmmaking, and there’s a social pattern emerging in its use amongst Black and Brown creatives.

Secondly, its use should be safeguarded by social indexes and a high degree of transparency. If you can’t verify the producer almost instantaneously via social proofing, trust erodes.

The next generation of online content should thus be indexed to actual birth/adopted names and backgrounds, and include readily available verification, including of the relationship between the producer and the content. I saw a semblance of this being created when I was one of a small team of reviewers for Google’s EU News initiative fund.
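To illustrate only (the scheme and names below are my assumptions, not a description of Google’s system), producer-to-content verification could be as simple as a signed provenance record: hash the content, bind it to a verified identity, and let anyone check the pairing. A real deployment would use public-key signatures, in the spirit of provenance standards such as C2PA, so that checking doesn’t require the signing secret.

```python
import hashlib
import hmac

# Hypothetical identifiers; stand-ins for a real identity registry.
PRODUCER_ID = "david.dunkley.gyimah"
REGISTRY_KEY = b"secret-held-by-an-identity-registry"

def sign_content(content: bytes) -> str:
    """Bind a verified producer identity to a specific piece of content."""
    digest = hashlib.sha256(content).hexdigest()
    return hmac.new(REGISTRY_KEY, f"{PRODUCER_ID}:{digest}".encode(), "sha256").hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Check that the producer-content pairing is intact."""
    return hmac.compare_digest(sign_content(content), signature)

footage = b"reconstructed scene, clearly labelled as GenAI"
tag = sign_content(footage)
print(verify(footage, tag))                # True: producer-content link holds
print(verify(footage + b"(edit)", tag))    # False: any tampering breaks the link
```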

Thirdly, a recent caveat for anyone interested in AI. The most recent work from AI researchers is reported by Gary Marcus, one of the critical voices on AI, under the headline “LLMs don’t do formal reasoning — and that is a HUGE problem”.

This quote, pulled from the study’s senior author Mehrdad Farajtabar at @Apple, previously of @DeepMind and @GeorgiaTech, should alarm many:

“we found no evidence of formal reasoning in language models …. Their behavior is better explained by sophisticated pattern matching — so fragile, in fact, that changing names can alter results by ~10%!”

This has consequences beyond GenAI, across all fields of AI, for the errors that can be introduced. So even where its use is sanctioned, we should be cautious.
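That fragility is straightforward to probe. Here is a sketch of the idea, with query_llm as a placeholder rather than any real API: pose the same arithmetic problem with only the names changed, and check whether the answers hold.

```python
# Sketch of a GSM-Symbolic-style fragility probe. `query_llm` is a
# placeholder, not a real API; wire it to whichever model you are testing.
TEMPLATE = ("{name} picks {a} apples, then picks {b} more. "
            "How many apples does {name} have? Answer with a number only.")

def query_llm(prompt: str) -> str:
    raise NotImplementedError("connect this to the model under test")

def probe(names=("Sophie", "Kwame", "Mehrdad"), a=17, b=25):
    expected = str(a + b)  # the ground truth is invariant to the name used
    for name in names:
        answer = query_llm(TEMPLATE.format(name=name, a=a, b=b)).strip()
        print(f"{name}: {answer} ({'ok' if answer == expected else 'DRIFT'})")

# A formal reasoner would print "ok" for every name; the Apple study
# reports accuracy shifting when only such surface details change.
```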

Fourthly, to the question of GenAI itself. Clearly tagged and labelled, its use may well first find a place in archival sources where archive is absent, and in reconstructions and memorisation.
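What ‘clearly tagged and labelled’ means in practice is still being worked out. As a purely hypothetical sketch (the field names are mine, not the Archival Producers Alliance’s), a disclosure record attached to a clip’s metadata might carry the essentials:

```python
# Hypothetical disclosure label for a GenAI-assisted reconstruction.
# Field names are illustrative, not drawn from any published standard.
disclosure = {
    "clip_id": "reconstruction_01",
    "generative_ai_used": True,
    "purpose": "reconstruction",        # e.g. no archive exists for this scene
    "source_basis": "documented research and first-person testimony",
    "producer_verified": True,          # ties back to the social index above
    "label_shown_on_screen": True,      # viewers see the tag, not just metadata
}
```

The point is less the format than that the disclosure travels with the clip and is surfaced to viewers.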

In BBC Two’s feature on Salman Rushdie, ‘Through a Glass Darkly’ (2024), Rushdie recounts the shocking attack on him at a literary event. In trying to understand his assailant’s motives, he constructs a scene, a dialogue between himself and his attacker, using GenAI.

It’s fictional, but drawn from research into the man’s past. It emerges through memorisation, delivered as artistic representation. It works because you trust the author is seeking to be transparent and truthful. For those unaware of the author’s past, his social index becomes apparent through the length of the story told.

Trust therefore becomes a real issue, and as AI-films proliferate, the Trust quotients mapped out earlier will become important. Here is one such Trust quotient I made looking back on my career.

The audio below is real, broadcast on the BBC World Service; the images come from my memory.

Crucially, in its adoption, ethics, transparency and trust in determining its use should be front and centre.

End note

I’m currently reading Taming Silicon Valley by Gary Marcus, having finished Code Dependent by Madhumita Murgia. GenAI is not all plain sailing.
