(CoE) Tech regulation and innovation should go hand in hand
Date of article: 12/05/2025
Daily News of: 14/05/2025
Country: EUROPE
Author: Commissioner for Human Rights - Council of Europe
Article language: en
The Commissioner participated in the European Dialogue on Internet Governance, which had the overarching theme: “Safeguarding human rights by balancing regulation and innovation”. Below is the published version of his introductory remarks.
“Dear Ministers, dear Secretary General of EuroDIG, dear friends,
Last Saturday, we watched as the newly elected Pope explained why he had chosen the name Leo. He made reference to a predecessor of his, Leo XIII, and explained that that Pope had carried out his tasks in the context of the first great industrial revolution, and that he, Leo XIV, must now respond to another great industrial revolution: the revolution of artificial intelligence (AI).
In other words, he must engage the challenges of, and I quote, “human dignity, justice and labour”. Now, Pope Leo has his faith-based tools to engage these great issues of society and we also have our tools. Above all, we have the toolbox of human rights, the toolbox of the great laws and institutions which we have so carefully crafted since the Second World War.
A starting assumption when I make that statement is that we already have a lot of guidance on the operation of the internet and of artificial intelligence. We have multiple treaties negotiated over years, all of which are binding on states. We have at the national level many instruments and bodies already in place to provide guidance in these contexts.
We have privacy laws. We have the operation of privacy oversight bodies. In the EU, there is the GDPR.
And even in the private sector, we have considerable existing human rights guidance for how business should do its work in every sector. I think, above all else, of the United Nations Guiding Principles on Business and Human Rights. So, we are not operating in some kind of legal terra nullius.
But of course, we have long recognised that we do need dedicated instruments to regulate the specific context of the internet and artificial intelligence. That is the frame in which we have seen at least three very important initiatives.
The first is the negotiation in this house of the AI Framework Convention.
Then, in the EU setting, there are the EU AI Act and the Digital Services Act.
But before I go any further in praising such instruments, I have to engage the challenge behind the title for our conference, as previous speakers have done as well.
There is increasingly loud rhetoric out there in society that somehow regulation gets in the way of innovation and that the time has come to talk less about regulation and more about innovation. The context for all of this is the suggestion that Europe somehow lags behind the rest of the world, and that if it were not so besotted with regulation, it would be so much more successful.
I take the opportunity this afternoon to refute that assertion. Let me give you four reasons.
The first is that our states have a duty to keep us safe. It is as simple as that. Wherever there is risk, our states have an obligation, be it under international human rights law or be it under any other body of law, to protect us.
They must protect us in the context of the areas we are discussing today, as with any other.
Second, it is not just about protection. It is also about my conviction that safe technology is more trustworthy technology, and that more trustworthy technology will ultimately win out, including commercially. I am confident of this, perhaps not immediately, perhaps not even in the medium term, but in the longer term. The safer the technology, the greater its uptake, application and use across the world.
Third, the assumption that we somehow lag behind in Europe because of regulation is proclaimed most loudly by those who clearly pay no attention to the content of the regulation, because the principal European instruments are subtle, nimble, well attuned and full of the nuance necessary to avoid the risk of stifling innovation.
Take the Council of Europe AI Framework Convention. The Framework Convention contains very powerful, important, essential principles, but then leaves a wide margin to states in how to actually deliver them, how to implement them, how to convert them into national regulation. That is not a stifling of innovation. That is a promoting of innovation.
Look also at the AI Act of the European Union with its so-called risk pyramid. The risk pyramid is a very deliberately, carefully and smartly designed method whereby most AI will not fall under strict external oversight. Rather, its safety will be determined by self-regulation.
The fourth and final of my reasons to refute these claims is that I simply do not buy into the zero-sum game, the idea that more regulation in Europe stifles European innovation and so forth. And I am very glad that recent academic research supports me in this regard.
I am particularly impressed by an important article published by Professor Anu Bradford of Columbia University just last year. She gave five reasons why Europe lags behind in innovation.
First, she mentioned the absence in the EU of a digital single market.
Second, she pointed to the European reality that we have shallow and fragmented capital markets. You cannot get the money to do the research.
Third, she mentioned how in Europe many countries have punitive bankruptcy laws which make industry reluctant to take on risk.
Fourth, she spoke to a more general cultural risk aversion on this continent, quite at odds with the culture of, let us say, the United States.
Fifth, she referred to how we limit immigration into our countries and how that impedes access to the global talent market and leads to a skills deficit.
Again, her assertion is that these five factors, not regulation, are the basis for Europe lagging behind.
So, my friends, as I wrap up these remarks, what I would call on us to do is not to waste time on a regulation-versus-innovation debate. Let us get rid of the zero-sum approach and focus instead on getting the best possible regulation.
I will name just briefly six things we can do now.
One is to get the Framework Convention ratified: get a sufficient number of ratifications in place so that it can come into force.
Secondly, let's make sure that the EU does not lose its nerve. Let's make sure that it insists on full enforcement of the Digital Services Act.
Third, as we move along the pathway to the coming into force of the AI Act in the EU, let's make sure that it is set up, both at the EU level and at the member state level, in a way that will genuinely protect all of our human rights.
Fourth, we need to support the private sector in doing its own self-regulation within the regulatory framework. One obvious need is to fill the space with codes of practice. That is already happening, but more is needed.
Fifth, we need smart, clever human rights assessment tools to be used for both external regulation and self-regulation. I would like to join the Secretary General and other speakers in giving a shout-out to the HUDERIA tool, which I believe is groundbreaking and will be of great importance.
The sixth and final of my observations about regulation is that the work is not finished. We now have to confront artificial general intelligence and artificial superintelligence.
I suspect that our current regulatory models will need to be further supplemented. So, the examination of where we need to go next is no less important than how we deliver what we have now.
As we engage these issues, let us again, if I may paraphrase Professor Bradford, recalibrate the debate.
Let's avoid the false choice between tech regulation and tech innovation. Let us show how we can and must have both.
Thank you very much."