AFE
12 May 2025

10: Regulation of Artificial Intelligence

The regulation of Artificial Intelligence is a highly contested political area with competing interests. There are the interests of tech companies, which want minimal or no regulation. Some governments are intent on minimising harmful effects on the safety of citizens or on the country’s economic competitiveness; others are focused on exploiting AI for geopolitical dominance and/or for surveillance and information operations at home and abroad. And in between, there are citizens and activists who push for the protection of human rights and for an end to data theft without consent or compensation.

There is also a kind of global power struggle – the European Union focussing on risk management, safety and rights, the US on a free and unbridled market (in which its companies dominate), and China using AI for state control and economic growth.

EUROPE'S AI ACT

The European Union was the first to come up with a full set of legal provisions: the Artificial Intelligence Act, which entered into force in 2024, with its provisions applying in stages from 2025. The Act uses a risk-based approach to regulation and defines four levels of risk – with regulatory measures graded accordingly:

At the top of the pyramid are tools that carry an unacceptable risk and are banned. Among them are systems which classify the trustworthiness of persons based on their social behaviour and personality characteristics. One example is China’s social credit system, which awards points to individuals or businesses for ‘good’ and ‘acceptable’ behaviour. Another banned AI service is real-time remote biometric identification in publicly accessible places for facial recognition (also used in China, and with unregulated take-up in parts of Africa). Under the AI Act, the only exceptions allowed are in cases of a terrorist threat, or the search for dangerous criminals or missing children.

The general ban on AI facial recognition systems is a sensible decision because such systems have repeatedly shown significant bias and reduced accuracy when identifying individuals with darker skin tones, especially Black women. The reason is that many such models are trained on datasets disproportionately made up of lighter-skinned faces, especially those of white men. This can lead to the misidentification of people with darker skin and to wrongful arrests, as has already happened in the U.S. and elsewhere.
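To make the mechanism concrete, here is a deliberately simplified sketch in Python – entirely synthetic data and made-up group labels, not a real face-recognition pipeline – showing how a model trained mostly on one group performs markedly worse on an under-represented one:

```python
# Hypothetical sketch: representation bias from an imbalanced training set.
# All data is synthetic; 'group A/B' are stand-ins for demographic groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'face features' whose decision boundary differs per group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=2.0, size=n) > shift * 5).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(250, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group: accuracy on
# the under-represented group drops to near chance level.
for name, shift in [("group A (well represented)", 0.0),
                    ("group B (under-represented)", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(name, round(accuracy_score(y_test, model.predict(X_test)), 3))
```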

AI systems classified as high-risk are those that may have a significant harmful impact on the health, safety and fundamental rights of persons. They include AI technology used in critical infrastructure, like transport or public utilities, and applications like AI-assisted surgery that could put the life and health of patients at risk. Other cases of high risk are the scoring of exams or the verification of the authenticity of travel documents. 

CV-sorting software for recruitment purposes is also identified as high-risk, because AI systems learn from data about the past and treat its patterns as correct for the future. For example, Amazon developed an AI tool for staff recruitment that assessed job applications. It turned out that applications from men routinely received better automated appraisals than those from women, simply because the tool had learnt that more men had been employed in the past, and so the algorithm assumed that men were to be preferred.
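The Amazon episode illustrates a general mechanism, which this small hypothetical sketch (synthetic data, not Amazon’s actual system) makes explicit: when historical hiring labels are biased against women, a model latches onto any CV feature correlated with gender and turns it into a negative signal, even if gender itself is never an input:

```python
# Hypothetical sketch: a CV-screening model inherits historical hiring bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
skill = rng.normal(size=n)               # true job-relevant signal
is_woman = rng.integers(0, 2, size=n)    # 0 = man, 1 = woman

# Historical labels: past hiring depended on skill AND on gender bias.
hired = (skill + (1 - is_woman) + rng.normal(size=n) > 1.0).astype(int)

# CV text leaks gender through proxy features (e.g. certain words or
# affiliations), even though 'gender' is not an explicit input.
proxy = is_woman + rng.normal(scale=0.3, size=n)

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print("weight on skill:", round(model.coef_[0][0], 2))   # positive
print("weight on proxy:", round(model.coef_[0][1], 2))   # negative: penalises women
```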

Chatbots are an example of limited-risk systems, while video games and spam filters are categorized as minimal-risk AI applications.

Regulation of chatbots

With regard to chatbots, the EU Act focusses on the training of the systems, as well as on their compliance with EU copyright and transparency rules.

Companies that run chatbots within the EU’s jurisdiction have to draw up, and keep up to date, the technical documentation of their system, including its training process. They must document how the data was obtained and selected, and what measures they have in place to detect unsuitable data sources and identifiable biases. A sufficiently detailed summary of the content used for training must be publicly available.
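The Act does not prescribe a format for this documentation. As one purely illustrative example (the field names are hypothetical, not taken from the Act), a provider might keep a machine-readable record per training data source:

```python
# Hypothetical sketch of a per-source training data record; the fields
# mirror the duties described above but are not mandated by the Act.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingDataRecord:
    source_name: str                # e.g. a web corpus or licensed archive
    acquisition_method: str         # how the data was obtained
    selection_criteria: str         # why it was included
    copyright_status: str           # licensing and opt-out handling
    bias_checks: list[str] = field(default_factory=list)
    unsuitable_content_filters: list[str] = field(default_factory=list)
    last_reviewed: date = date(2025, 5, 1)

record = TrainingDataRecord(
    source_name="example-news-corpus",            # hypothetical source
    acquisition_method="licensed bulk download",
    selection_criteria="editorially reviewed news text, 2010-2024",
    copyright_status="licensed; publisher opt-outs honoured",
    bias_checks=["language coverage audit", "demographic term frequency scan"],
)
print(record)
```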

According to the Act, users of AI chatbots should be made aware that they are interacting with a machine, so that they can make an informed decision on how to handle the information they are being given. In practice this means that all AI-generated content, such as text, audio or video, must carry a ‘watermark’ or other signal. An example is when an AI system such as Midjourney or DALL-E is used to generate or manipulate image, audio or video content to resemble existing persons, objects, places or events in a way that may falsely appear authentic or truthful – so-called ‘deepfakes’. Here, there is a legal obligation for companies to disclose that such content has been generated through AI tools.
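As a minimal illustration of such a disclosure signal (assuming the Pillow imaging library; the label names are hypothetical), the sketch below attaches a machine-readable label to an image. Plain metadata like this is trivially stripped, which is why serious schemes rely on robust watermarks or provenance standards such as C2PA, but it shows the basic idea:

```python
# Hypothetical sketch: attaching a machine-readable 'AI-generated' label
# to an image's metadata using Pillow. Not a robust watermark.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (512, 512))               # stand-in for generated content

meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-image-model")  # hypothetical model name
image.save("output.png", pnginfo=meta)

# A platform or verifier could read the label back:
print(Image.open("output.png").text.get("ai_generated"))  # -> 'true'
```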

The stance of the US

All these provisions are far-reaching, in particular those on reporting the content of training data and its copyright status. They have also elicited push-back. In February 2025, US Vice President J.D. Vance told Europeans that their “massive” regulations on artificial intelligence would strangle the technology (and, by unspoken implication, hinder the forward march of the US’s dominant AI companies).

US tech giants and the administration of President Donald Trump are now working together to weaken efforts by other countries to set guardrails, arguing that the industry needs complete freedom to develop. “Innovation” is presented as the new god to worship, as an end in itself – no matter if it violates human rights. In the US, hundreds of staff of the AI Safety Institute (AISI) were summarily fired. The institute had been created to enable government to evaluate and ensure the safety of the most advanced AI models, and to draw up guidelines for government agencies to improve risk management around buying AI systems.

In March 2025 the AISI’s parent agency, the National Institute of Standards and Technology (NIST), issued new instructions to scientists who partner with the AISI. They are expected to eliminate any mention of the terms “AI safety”, “responsible AI” and “AI Fairness” in their work, and to prioritize “reducing ideological bias to enable human flourishing and economic competitiveness”.

AFRICA'S RESPONSE

The Continental AI Strategy adopted by the African Union’s Executive Council in July 2024 states that the development of AI and the societal and economic changes it will bring are “just beginning”. The AU Commission is to “develop a 5 Year Implementation Plan of the Continental AI Strategy that considers the variations and disparities between AU Member States in key capabilities that underpin AI development as well as different levels of development and digital readiness.” To this end the Commission will “conduct an African-led research to assess the short-, medium-, and long-term risks of AI to African people, societies, economies, labour market, value systems, and their futures”. And it will “engage multistakeholder and multidisciplinary policy dialogues on diverse issues of AI in Africa”.

Among the strategy’s good pointers is its reference to the role of the media in AI, something missing from many national AI strategies. (As more and more content comes from ‘synthetic’ AI outputs, those outputs will feed back into the systems’ training, leaving quality journalism as a bastion of original content.)

The AI strategy is surely well-intentioned. The question is: what impact will AI have had by the time any effective action is taken at country level?

JOURNALISM IN THE AI AGE

In the news media field, already under considerable economic pressure, perhaps the biggest worry is the potent temptation that AI offers to practitioners. Will journalists now use machine-generated texts as the basis for their output – or pass them off as their own work right away? Perhaps they will just ask the system for variations in length, style or detail? Or simply copy and paste material without adding journalistic value by fact-checking or seeking their own angle on a story through on-the-ground research? Will media owners encourage them to do so because it cuts costs?

One thing is clear: news media enterprises need internal policies on how they use and disclose AI services, and there should be a push for such policies where they don’t exist. 

Media professionals themselves will have a crucial and indispensable role to play in how societies deal with the challenge of information generated by AI. With text, audio and image forgery having become so easy, journalists and reporters as eyewitnesses with first-hand knowledge of events and historical background could position themselves to be the most credible source. They have to go out and collect the evidence, ask the hard questions, find the story behind the story, and reveal who propagates what kind of info and for what possible purpose. This is what true journalists have learnt to do and what media users – with appreciation for the press freedom that enables this – have learnt to value. It is what civil society should demand of the news media – as well as of influencers and bloggers trafficking in news and political commentary in the age of AI.

IN SUMMARY

Artificial Intelligence is here to stay, even though some of the AI giants will not be around when the over-investment bubble bursts. Our future depends on how we deal with the technology and its providers, and on how they use and monetise our data – and the media’s. AI is an extremely useful tool in communications, as in other fields, and at the same time potentially extremely dangerous. It must not be left in the hands of tech-freaks or subject to the free play of dominant market forces. Nor should it be put under authoritarian state control. A risk-based approach, tailored to African possibilities and opportunities, can draw from the EU’s position – keeping in mind that regulating foreign digital services and their local use will be very challenging on the continent.

A first call is for AI in Africa to be shaped through multi-stakeholder regulation – to make sure that humans keep control over the technology. In this way, we can work for its development and deployment to serve our collective advantage as members of democratic societies whose human rights need to be respected and supported. Critical and agenda-setting reporting by journalists, and activism by civil society, will be key factors in whether this happens.

This INFO BITE is selected from the online course on Media
and Digital Policy in Africa, offered by Stellenbosch University
in association with Namibia Media Trust.

There are free and paid options available for the full course.

Explore more BITES on a number of related topics