Experts Discuss Ethical Use of AI in Media

How can artificial intelligence (AI) be ethically developed and deployed in the media while safeguarding public values? This question was at the heart of a recent gathering of four leading AI-media research labs. The event, titled "Media and AI: Current Challenges and Future Perspectives," was hosted by Research Centre Digital Business and Media.

The integration of AI into media brings forth various ethical concerns. Autonomous AI systems can lead to a loss of human control over content and distribution. If trained on non-representative datasets, these systems may perpetuate stereotypes or even discriminate against certain groups. Public trust in media could diminish if AI systems are perceived as unreliable or manipulative.

"Because there is still so little regulation, risk framing currently dominates the debate on AI", stated Natali Helberger, Distinguished University Professor of Law & Digital Technology at the University of Amsterdam. "But AI is not an autonomous force. The discussion about AI should therefore also be about power. Who has the power to impose risks on the majority for the benefit of a minority?"

Harnessing AI for Good

Helberger emphasised that the power to define and investigate AI risks doesn't lie solely with tech companies. Media organisations and researchers also hold significant influence. "Media organisations must use their power to define what acceptable risks are. We cannot leave risk assessment solely to the major American tech companies. We must not only ask: where are we going, but also: where do we want to go? AI is not just about risks. Ultimately, the question should be: how can we use generative AI to consciously do good?"

Building Public Trust

For a positive approach to AI, public trust is essential. However, trust in media is declining, as noted by the European Trust Alliance (THETA). People increasingly trust social media over traditional journalism. In some cases, such as gas extraction issues, informed citizens have become experts themselves. To rebuild the necessary trust in media, knowledge institutions, the market, government, and citizens must collaborate more effectively, according to THETA.

Exploring New Research Models

AI can also offer opportunities for deeper understanding. Renée van der Nat, senior researcher at HU's lectorate Quality Journalism in Digital Transition, demonstrated how a Large Language Model (LLM) guided by scientific prompts enables respondents to give open-ended answers. The LLM-driven chatbot adapts its follow-up questions based on previous responses. While processing the results takes more effort, the answers offer far more depth than a pre-set, fixed questionnaire can achieve, according to Van der Nat.
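To make the idea concrete, the sketch below shows one minimal way such an adaptive, LLM-driven questionnaire could be wired up. It is a hypothetical illustration, not the tool Van der Nat presented: the system prompt, the model name, and the use of the openai Python client are assumptions chosen for brevity.

```python
# Illustrative sketch of an LLM-guided open-ended survey interview.
# Hypothetical example; not the implementation presented by Van der Nat's team.
# Assumes the openai Python client (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# A "scientific prompt" constraining the interviewer's behaviour.
SYSTEM_PROMPT = (
    "You are a neutral research interviewer studying trust in news media. "
    "Ask one open-ended question at a time, never suggest answers, "
    "and base each follow-up question on what the respondent just said."
)


def run_interview(opening_question: str, max_turns: int = 5) -> list[dict]:
    """Run a short adaptive interview in the terminal and return the transcript."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "assistant", "content": opening_question},
    ]
    print(f"Interviewer: {opening_question}")

    for _ in range(max_turns):
        answer = input("Respondent: ")
        messages.append({"role": "user", "content": answer})

        # The model sees the full interview so far and adapts its next question.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; any chat model would do
            messages=messages,
        )
        follow_up = response.choices[0].message.content
        messages.append({"role": "assistant", "content": follow_up})
        print(f"Interviewer: {follow_up}")

    return messages


if __name__ == "__main__":
    transcript = run_interview("Which news sources do you trust most, and why?")
```

Because the full transcript is passed back to the model at every turn, each follow-up question can build on what the respondent has already said; the resulting free-text answers then need qualitative coding afterwards, which is the extra processing effort Van der Nat refers to.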

A Collaborative Effort

The event underscored the importance of keeping the societal impact of AI high on the agenda of media companies. Media have the responsibility to apply AI in a transparent, socially responsible manner while safeguarding public values. They must also educate users on how to assess news critically and recognise misinformation. This could involve developing platforms similar to Bellingcat or Wikipedia. Additionally, media organisations need to develop alternatives for users to reduce dependence on a single tech company.

"The results of the meeting make us hopeful", said Frank Visser, programme manager at the Research Centre Digital Business and Media. "In the developments for the use of AI within Dutch media, there is intensive consideration of European values such as privacy and security. A serious group of people, including those from HU, is working on this. The meeting has made it clear that collaboration between knowledge institutions, media organisations, and the government is essential to integrate AI responsibly into the media."

This article is based on the original Dutch publication by Mariek Hilhorst on HU.nl/nieuws.
