The author argues that while South Africa has made significant strides toward AI sovereignty, these achievements will remain incomplete without a dedicated legal framework that addresses the full AI lifecycle and reflects the country’s unique social and cultural context.

As we stand in February 2026, there is no industry or area of our daily lives that has not been infiltrated, touched, or impacted by Artificial Intelligence (AI)—whether we are aware of it or not. This is not just true globally but even here in South Africa. If it’s not being used by your search engine to improve your searches online, it is on your favourite social media ensuring that you receive content that you like or prefer, or it is your customer service when you are booking a flight. If the government or bank is not using it to improve the efficiency of service delivery, then it is the doctor using it for diagnostic purposes or your teacher using it to prepare class notes, and if you think you have escaped it, look at your digital gadgets and there you will find it. It is everywhere, literally, and is being touted as the greatest thing since sliced bread because it promises efficiency, accuracy, and convenience.

For African states there is additional pressure to join the bandwagon of AI enthusiasts, and the story goes that if the continent does not, it may be doomed to perish. Now whether this is true or not is a story for another day. However, what is evident is that when it comes to African states, AI is being touted as a magical wand that will miraculously cure the continent’s economic and development woes, and fast-forward it to attain everything from our Sustainable Development Goals (SDGs) to perhaps even developed-country status and other glorious possibilities. Never mind that AI in its current form is largely developed without sufficient consideration for African contexts, priorities, or socio-cultural realities, and this alone has the potential to cause a range of significant and disproportionate risks across the continent.
On an elemental level, this exclusion of African contexts and realities in the development of AI not only exacerbates existing social inequalities, but also promotes technological solutions that may not always capture the experiences and priorities of African people on the ground. These intersecting dynamics do not create a level playing field on which AI in its current form can be mutually beneficial for African states. Indeed, the current situation—whether we are aware of it or not—has the potential to undermine digital sovereignty and maybe even replicate global structural inequalities which were birthed during and in the aftermath of colonisation. Hence, if African states choose to throw caution to the wind and blindly jump onto this AI bandwagon in the name of not being left behind, the consequences could be detrimental and leave African states even more disadvantaged and vulnerable than in the pre-AI era.

So, what is the solution, if any? Word around academic and scientific circles is that AI sovereignty is the answer. It is proclaimed as the path that all states—and not just African ones—should take in order not only to benefit from AI’s many advantages, but also to simultaneously counteract the risks it poses. Nonetheless, although AI sovereignty has become the new buzzword or catchphrase for anyone in the know, or anyone who wants to be perceived as such, there is no universally accepted definition. Notably, though, according to the Coalition on Data and Artificial Intelligence Governance (DAIG), a multistakeholder group established under the auspices of the United Nations Internet Governance Forum (UN IGF), the term AI sovereignty was first coined by Luca Belli in 2023. He defines it as “a given country’s capacity to understand, muster and develop AI systems, while retaining control, agency, and ultimately, self-determination over such systems.” This basically means that AI sovereignty is about having control over the use of AI within a specific jurisdiction—in this case, South Africa. It involves having control over how personal data (information), which is AI’s raw or foundational material, is collected and processed, and further how AI is developed using that data and eventually deployed. It also includes controlling the algorithms, models, and infrastructure. For African countries like South Africa, attaining AI sovereignty would also mean not merely having an invitation or a seat at the table where important global decisions on AI are being made, but also having the power to influence those decisions. This will be accomplished by becoming an active participant in AI innovation rather than a passive consumer, a position in which most African states unfortunately find themselves at present.

In the last two years or so, South Africa has taken key steps on the path to attaining AI sovereignty. A key component of AI sovereignty is data sovereignty. At its core, data sovereignty is about ensuring that when personal data is collected, it is governed by the laws of the country where it is collected, and this is facilitated by having data storage facilities within a state’s jurisdiction. According to the Data Centre Map website, South Africa has 61 data centres—the highest number of data centres of any African state.

Further, in order to have infrastructure that works locally, South Africa also needs to invest in AI literacy, technical skills, and institutional resources. Although a lot still needs to be done in terms of investing in AI skills development, some progress has been made in the right direction. For example, in 2025 South African government partnerships and training programmes provided a variety of AI literacy opportunities. The Department of Higher Education and Training worked with private companies to offer AI and digital skills courses, including AI engineering and leadership training at TVET colleges. Also, the Department of Communications and Digital Technologies is supporting initiatives like the National Artificial Intelligence Stakeholder Forum, which was officially launched on 7 August 2025 and brought together key role-players from across the AI value chain—including researchers, engineers, innovators, ethicists, public servants, academics, and private sector leaders—in order to co-create a shared vision for South Africa’s AI future.

Altron, a South African technology group, announced on 27 October 2025 the successful deployment of the country’s first operational AI factory, powered by NVIDIA AI infrastructure including NVIDIA accelerated computing and NVIDIA AI Enterprise software. According to the NVIDIA glossary, an AI factory is a specialised computing infrastructure designed to create value from data by managing the entire AI lifecycle, from data collection to training, fine-tuning, and high-volume AI inference. According to Harvard Business School, an AI factory has four key components: a data pipeline (sources of data from both public and private platforms); algorithm development to transform data into actionable insights; software infrastructure; and an experimentation platform, where AI can be tested, refined, and optimised. Thus, having an AI factory or factories not only positions South Africa as an AI system innovator, but can also ensure that AI systems that are developed capture the South African local context, and that the whole AI lifecycle is protected by domestic law.

However, while all the above progress is commendable, South Africa will not achieve AI sovereignty or even come close to it unless it also adopts laws that specifically regulate AI. There has been progress, but not enough to keep up with the development of the many data centres or the current AI factory. This is because law forms the foundation or the core of AI sovereignty: it ensures that all the parts work efficiently and within prescribed boundaries that protect autonomy, accountability, human rights, national interests, values, and societal priorities.

While South Africa does not have a comprehensive legal framework for AI, it does have a draft national AI policy framework, which was developed by the Department of Communications and Digital Technologies (DCDT) and which completed its consultation phase in 2025. The AI policy framework in its current form is in alignment with international ethical and safety standards such as those found in the UNESCO Recommendation on the Ethics of Artificial Intelligence, the OECD AI Principles, and the Council of Europe’s AI Treaty, which have been identified as the gold standards of AI regulation. The policy focuses on fairness, transparency, accountability, privacy, safety, human oversight, and cultural values, promoting responsible design, strong data-protection rules, cybersecurity, explainable AI, bias mitigation, and professional conduct. It also places great emphasis on public awareness and inclusive datasets tailored to South Africa’s diverse context. However, the policy also goes a step further than these international standards: its most distinctive contribution is its emphasis on the inclusion and recognition of South Africa’s cultural and societal values as a standard for AI regulation. Nonetheless, it does not go into much depth or clarify which specific cultural and societal values it is referring to, and hence at this point it is not possible to predict which values will eventually be embedded in the resulting AI law.

At present, AI in South Africa is mainly regulated by existing data privacy legislation, the Protection of Personal Information Act 4 of 2013 (POPIA). POPIA provides some guardrails, but only to a limited extent, as it does not cover the whole AI lifecycle or regulate all the various risks posed by AI. Hence, although POPIA can at an elemental level regulate how identifiable personal data is collected and used, and to some extent how specific AI systems are used, it is limited because it was never specifically designed to regulate AI.

When it comes to AI regulation, there is no one-size-fits-all approach: it is becoming increasingly evident that copy-pasting laws will not work, and that African states in particular will need to tailor-make their AI legal frameworks to fit each country’s specific context. However, there is no harm in learning from what other countries have already done. South Africa can learn from AI laws in the EU, South Korea, Italy, and California, all of which currently have specific legislation regulating AI. The EU, for example, has historically taken a precautionary approach to regulating technology and innovation, so it is no surprise that the EU AI Act adopts a risk-based approach, regulating AI according to the category of risk posed to users and placing a strong emphasis on safe AI. Consequently, AI that poses greater risks faces greater restrictions than AI that does not. Further, the Act provides strict oversight with a strong focus on accountability and transparency. A key example is the requirement for watermarks, metadata, or other machine-readable marking methods for deployers, and in some cases creators, of deepfakes, so that artificially generated or manipulated outputs are identifiable through legally required markings.
Italy’s AI Act in a similar vein adopts a human-centric approach that prioritises transparency and safe AI, with hefty penalties for offenders; notably, it criminalises the unauthorised dissemination of deepfakes and strengthens copyright protection for creatives’ intellectual works. California passed Senate Bill 53, an AI law, in September 2025. Ironically, the state that is home to 32 of the 50 top AI companies globally now provides one of the most rigorous transparency regimes, delivering a strong template for a law that imposes transparency requirements across the whole AI lifecycle, from the collection and analysis of personal data, to development, to deployment. Nonetheless, on 11 December 2025, President Donald Trump signed an executive order aimed at halting laws limiting AI and blocking states from regulating the rapidly emerging technology, thereby prioritising America’s acceleration in AI innovation and global dominance over rigorous regulation. Whether this will have an impact on California’s or other states’ AI laws remains to be seen. Either way, all the above regulatory approaches provide food for thought for any state, like South Africa, that is drafting, or about to begin drafting, its AI legal framework.

Lastly, whatever approach South Africa takes when it comes to adopting AI law, it will also have to consider the management and regulation of the environmental impact of data centres and AI factories. According to the International Energy Agency (IEA), in 2024 data centre electricity use was about 415 TWh, around 1.5% of global electricity consumption, and the IEA predicts that data centres will drive more than 20 per cent of the growth in electricity demand between 2025 and 2030. Additionally, the United Nations Environment Programme (UNEP) estimates that a data centre with 1 megawatt capacity may use up to 25.5 million litres of water per year for cooling, comparable to the daily water use of around 300,000 people. This water and electricity dependence of data centres has the potential to pose serious environmental challenges not just in South Africa but in other African states which have also set up data centres, like Kenya, Nigeria, Tanzania, and Angola. These are significant environmental impacts which will require proper regulation and management.
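The UNEP comparison above is internally consistent if one assumes an average daily water use of roughly 85 litres per person; that per-person figure is an assumption made here for illustration, not a number taken from the UNEP estimate itself. A minimal sanity check:

```python
# Sanity-check of the UNEP comparison: a 1 MW data centre may use up to
# 25.5 million litres of water per year for cooling, said to be comparable
# to the daily water use of around 300,000 people.
annual_dc_water_litres = 25_500_000  # litres per year (UNEP figure cited above)
per_person_daily_litres = 85         # assumed average daily use per person (illustrative)

people_equivalent = annual_dc_water_litres / per_person_daily_litres
print(f"{people_equivalent:,.0f} people")  # prints "300,000 people"
```

In other words, one year of cooling water for a single 1 MW facility matches one day of household water use for a city-sized population, which is why the cited comparison is framed in those terms.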

Hence, although South Africa has made great progress in the race to attain AI sovereignty—with everything from investment in AI literacy, to the creation of data centres, to launching Africa’s first AI factory—without the proper laws to regulate all the above AI developments, AI sovereignty will remain unattainable.

Dr. Shirley Genga is a Postdoctoral Fellow at the Free State Centre for Human Rights in South Africa. She conducts research on the intersection of artificial intelligence, human rights, and the law. 

Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST’s editors, staff, donors or the University of Pittsburgh.
