
08: Regulation of the Internet
For decades, regulating the Internet was treated as an absolute no-go. It was hoped that the net would govern itself, and users were expected to flag and expose abuse such as the spreading of hate speech or disinformation. With the over-commercialisation of the online space and the consolidation of monopoly platforms, this approach has become unrealistic.
Regulation of online communication is still largely a work in progress, but some models are emerging.
WHO SHOULD REGULATE THE INTERNET?
At present, the Internet is mainly an arena of solo-regulation by individual companies. Judging by the amount of online rubbish, as well as material that harms human rights, this is obviously not working. On the other hand, replacing solo-regulation by companies with government regulation is not the way to go either.
First, there is the danger this would pose to freedom of expression, given the risk of governments trying to curb critical speech. Should governmental regulation be limited to combatting monopolies, promoting transparency and enforcing data rights? What about clearly harmful content, such as scams and child sexual abuse material? Second, there is the issue of capacity: will rules for platforms actually be enforceable by individual governments in Africa? Can full national regulation, even by democratic states in Africa, really work against the international platform behemoths? These are highly relevant questions, not least in the light of the weakness and lack of independence of many state regulators in Africa.
What is quite clear is that regulating platforms cannot be left to governmental bodies only. A wider range of groups needs to be involved – bringing the combined weight of civil society, government and the private sector to bear on the owners of digital services. Social media platforms – along with the Generative Artificial Intelligence companies now in the online content-production game – need to be involved too, of course, but their power needs to be diluted.
The African Union’s Declaration of Principles on Freedom of Expression and Access to Information in Africa says:
“A multistakeholder model of regulation shall be encouraged to develop shared principles, rules, decision-making procedures and programmes to shape the use and evolution of the internet.”
UNESCO GUIDELINES
UNESCO advocates for such a multistakeholder approach in its “Guidelines on Regulating Digital Platforms”, released at the end of 2023. The document suggests that every stakeholder engaged with digital platforms (and also with Artificial Intelligence services) as a “user, policymaker, watchdog, or by any other means” should have a say in making the rules that the big digital companies must comply with. These rules should be decided through public participation, and they should also require that stakeholders be institutionally represented in the oversight and review of how the companies deal with content.
Curation
One of the key principles outlined by the UNESCO Guidelines deals with the system that platform companies use to organise and recommend content. Currently, the architecture and algorithms of these systems are designed to promote content which will engage and keep users’ attention for as long as possible – so that more data can be extracted and more adverts shown. This is the technical basis of the ‘echo chamber’ or ‘filter bubble’ where users are caught in an increasingly stifling web of one-sided information and opinion.
This kind of curation also helps sensational, inciting and hateful messages go viral, because such content garners user attention most easily and quickly. Disinformation actors know and exploit this. The present practice of using curation as the driving engine to prioritise and amplify content on the basis of clicks definitely needs to be reined in by regulation.
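To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of the logic described above: a toy feed that ranks posts by a predicted-engagement score, contrasted with a simple chronological ordering. The class, function names and numbers are hypothetical and do not represent any real platform's system.

```python
# Illustrative sketch only: a toy feed ranker that orders posts by a
# predicted-engagement score, the basic logic this section describes.
# All names and numbers are hypothetical, not any platform's real system.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_engagement: float  # a model's guess at clicks or watch time
    timestamp: int               # seconds since epoch

def rank_by_engagement(posts: list[Post]) -> list[Post]:
    """Engagement-optimised curation: the most 'gripping' items come first,
    regardless of accuracy or tone - which is how sensational content wins."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def rank_chronologically(posts: list[Post]) -> list[Post]:
    """A neutral alternative ordering: newest first, engagement ignored."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("newsroom", "Budget report published", 0.2, 1_700_000_300),
        Post("anonymous", "Outrageous (false) rumour!", 0.9, 1_700_000_100),
    ]
    # Under engagement ranking the rumour is shown first; chronologically it is not.
    print([p.author for p in rank_by_engagement(feed)])
    print([p.author for p in rank_chronologically(feed)])
```

The point of the contrast is simply that the ordering rule, not the content itself, decides what users see first.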
Moderation
The content on platforms also needs to be moderated. This means reviewing (or pre-screening out) specific items of content that could harm human rights, and applying post-hoc restrictive measures where warranted. Such measures include removing the item or items, limiting their circulation, or applying a warning label to them. To this end, platforms use a combination of algorithmic tools, reports by users, and review by human moderators. In Africa there is consistent criticism of companies' failure to moderate content in local languages, and of their lack of responsiveness to user complaints.
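As a rough illustration of how these three ingredients can combine, the Python sketch below models a simplified moderation decision: an algorithmic score and user reports trigger escalation to human review, which then results in one of the restrictive measures mentioned above. The thresholds, labels and function names are invented for illustration only.

```python
# Purely illustrative sketch of the moderation flow described above:
# algorithmic screening plus user reports feed a human review step, which
# then applies one of the restrictive measures (remove, limit, label).
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    LABEL = "apply warning label"
    LIMIT = "limit circulation"
    REMOVE = "remove"

def algorithmic_score(text: str) -> float:
    """Stand-in for a classifier estimating how likely the item is harmful."""
    return 0.8 if "scam" in text.lower() else 0.1

def moderate(text: str, user_reports: int, human_review: bool = False) -> Action:
    score = algorithmic_score(text)
    if score > 0.7 or user_reports >= 5:
        # Escalate: only confirmed human review leads to the irreversible action.
        return Action.REMOVE if human_review else Action.LIMIT
    if score > 0.4:
        return Action.LABEL
    return Action.ALLOW

print(moderate("Send money now to claim your prize!", user_reports=7, human_review=True))
```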
Any method of moderation carries risks to free expression. When content takedowns are done clumsily, or prompted by forces hostile to truth, equality and cultural diversity, they can filter out legitimate voices and perspectives. For instance, content showing atrocities may be deleted summarily, even though the documentation of police brutality or war crimes may merit publication. When moderators or algorithms take down “obscene” content, the definition of what constitutes obscenity will differ from country to country. Platforms headquartered in the USA regularly remove as ‘obscene’ any depiction of nudity, including in works of art, and then apply these standards elsewhere. Hate speech, such as attacks on the rights of LGBTI people, is increasingly permitted by some of the USA-based services operating in Africa, even though this content is a dangerous violation of individual human rights.
Regulatory authorities
Regulatory authorities can oversee the compliance of platforms with standards for curation and moderation, implementing the parameters set by local legislation. Such authorities, according to the African Commission on Human and Peoples’ Rights, must be “independent and free from economic, political, or other pressures”.
Therefore, appointments to such bodies should be made through a participatory and independent merit-based process. This could be done by a selection panel comprising representatives of media and civil society. Another method is for the public to nominate candidates, for a parliamentary committee to interview and select nominees, and for the resulting list of chosen candidates to go to the president for approval.
Different regulators may have different remits that affect content associated with digital services. An election management body may have rules that apply in its own sphere. A consumer rights protection authority can take on poor responsiveness by companies. Associations dealing with advertising standards and public relations practice can intervene with their constituencies in favour of information integrity. Law enforcement deals with child protection and financial scams. These efforts and institutions need co-ordination, but the alternative of a single overarching regulator is expensive in African conditions. It can also lead to mandate confusion and, more importantly, to centralisation and governmental capture of power that then illegitimately limits free expression – e.g. by pressuring service providers to de-platform critical voices.
Where fully independent authorities, acting in the public interest, seek to hold social media platforms accountable (e.g. for carrying hate speech, non-consensual sexual imagery, scams, etc.), national sovereignty will often need to be asserted. In 2025, South Africa faced a case in which the companies concerned claimed that the country's Information Regulator had no jurisdiction over them. Yet the same companies offer services and make money in the country. An African continental standard could be useful in supporting national efforts to place obligations on platforms.
EUROPE'S DIGITAL SERVICES ACT
After years of preparation, the European Union adopted its Digital Services Act (DSA) in 2022, and it became fully applicable in 2024. The law is the first of its kind in the world, and its main focus is on protecting the interests of users and the integrity of content on digital media. The EU hopes that this legislation could be a trend-setter for the rest of the world.
The DSA has special provisions for “very large online platforms”, meaning those whose monthly users number more than 10% of the population of the European Union (45 million or more). These include Facebook, Instagram, TikTok, X, Wikipedia and YouTube. “Very large online search engines”, like Google, are also included in the remit. Each EU member country has a “digital services co-ordinator” responsible for harmonising the work of different national regulatory bodies and for liaising with EU officials in Brussels.
Specific responsibilities of the big platforms
In the EU, the big players have to provide clear information on their recommendation algorithms, which are part of their content curation systems. According to the DSA, individual users must be able to easily select and modify such systems, so that they can determine the order in which information is presented to them. To combat the de facto lock-in known as “network effects” – where people cannot easily leave a digital service because their histories and friends are tied up there – the EU is also looking at data portability and data interoperability.
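The data-portability idea can be pictured with a small, hypothetical Python sketch: a user's posts and contacts exported as open, machine-readable JSON that another service could in principle import. The schema and field names here are assumptions for illustration, not an actual standard or any platform's export format.

```python
# Illustrative sketch of data portability: a user's history and contacts
# exported in an open, machine-readable format so another service could
# import them. Field names and the schema label are hypothetical.
import json

def export_user_data(user_id: str, posts: list[dict], contacts: list[str]) -> str:
    """Bundle a user's content and social graph into portable JSON."""
    bundle = {
        "schema": "example-portability/1.0",  # hypothetical open schema name
        "user_id": user_id,
        "posts": posts,
        "contacts": contacts,
    }
    return json.dumps(bundle, indent=2)

print(export_user_data(
    "amina",
    posts=[{"text": "Hello world", "timestamp": 1_700_000_000}],
    contacts=["thabo", "naledi"],
))
```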
Under the DSA, the big players also have specific responsibilities when it comes to systemic risks such as disinformation, cyber violence against women, harm to minors online, hoaxes and manipulation during pandemics. They have to perform an annual risk assessment and take corresponding risk mitigation measures, and report thereon for audit purposes. They must also give regulatory bodies, either nationally or at EU level, access to data. In addition, they have to allow data access for researchers to scrutinise how platforms work and how systemic online risks evolve. For the first time, platforms are thus obliged to disclose the systems they use.
The DSA also provides for ‘trusted flaggers’. This status is awarded by the national digital services co-ordinators to entities that have particular expertise and competence in the field and are independent from any online platform. Whenever such a recognised flagger gives notice to a platform about questionable content, the notice must be “processed and decided upon with priority and without delay”.
The platform then has to consider whether the content is ‘illegal’. The DSA itself does not contain provisions on specific content but leaves the definition of ‘illegal content’ mainly to member states. Specific EU rules do contain binding definitions of serious offences such as terrorist content, child sexual abuse material and illegal hate speech.
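One simple way to picture the “priority and without delay” requirement is a notice queue in which trusted-flagger reports always jump ahead of ordinary user reports. The Python sketch below is a hypothetical illustration of that idea, not the DSA's actual machinery.

```python
# Illustrative sketch: notices from recognised trusted flaggers are always
# dequeued for review before ordinary user notices. Classes and tiers are
# invented for illustration only.
import heapq
import itertools

class NoticeQueue:
    TRUSTED, ORDINARY = 0, 1  # lower number = handled first

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # preserves arrival order within a tier

    def submit(self, content_id: str, from_trusted_flagger: bool) -> None:
        tier = self.TRUSTED if from_trusted_flagger else self.ORDINARY
        heapq.heappush(self._heap, (tier, next(self._counter), content_id))

    def next_for_review(self) -> str:
        _, _, content_id = heapq.heappop(self._heap)
        return content_id

q = NoticeQueue()
q.submit("post-17", from_trusted_flagger=False)
q.submit("post-42", from_trusted_flagger=True)
print(q.next_for_review())  # "post-42": the trusted-flagger notice jumps the queue
```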
Regarding very large operators, the Commission has investigatory powers and can impose fines of up to six percent of annual worldwide turnover if a provider does not comply with an obligation in the law, for example by not adapting its recommendation system or not rectifying ‘systemic risks’. This is serious money. Meta, for example, had a turnover of 164.5 billion US$ in 2024 – so non-compliance could result in a maximum fine of nearly ten billion US$.
WHAT ABOUT THE AFRICAN UNION?
The African Union has no such guidelines or regulations yet. It is obviously difficult to develop a common approach on a continent as diverse as ours. This has been the experience with the African Union Convention on Cyber Security and Personal Data Protection, drafted in 2011 and adopted in 2014. Known as the Malabo Convention, by mid-2023 it had been ratified by just 15 of the 55 AU members, the minimum number needed for it to come into force. And there is no mechanism to ensure that all AU member states will act on the Convention.
The extent to which individual African countries will take up and adapt standards like the DSA is likely to remain in flux for many years. The risk is high that governments will “cherry-pick” aspects of a DSA-like system, such as legitimising the involvement of state bodies in implementation.
This is why the community of users, civil society and freedom of expression activists needs to be involved right from the start, to steer continental and national processes in the right direction. Only with the backing of broad societal alliances will states bring sufficient pressure to bear on platform owners to comply with commonly agreed rules while, at the same time, avoiding over-regulation that allows states to violate online freedom of expression.
IN SUMMARY
When embarking on regulation of the Internet, multistakeholder arrangements are the way to go, so that the purpose of online regulation is not lost sight of. The aim is not to stifle free speech, nor to allow states to co-opt platforms into becoming instruments of government control. Quite the opposite: the aim is to allow the fullest possible scope for legitimate expression on the net, in line with basic human rights principles.
This INFO BITE is selected from the online course on Media and Digital Policy in Africa, offered by Stellenbosch University in association with Namibia Media Trust.
There are free and paid options available for the full course.
Explore more BITES on a number of related topics
