Transcript of the keynote address delivered by the Council of Europe Commissioner for Human Rights, Michael O’Flaherty, at the Parliamentary Conference on Artificial Intelligence, organised by the Parliamentary Assembly of the Council of Europe and the Parliament of the United Kingdom
House of Commons, Palace of Westminster, 15 December 2025
President of the Parliamentary Assembly of the Council of Europe, Deputy Secretary-General of the Council of Europe, honourable members of this and other Parliaments, dear friends.
Earlier this year, the Catholic Church elected a new leader. He chose the name Leo, and he made clear at the time that he had a very specific reason: he wanted to associate himself with a former Leo, who had been Pope during the last great industrial revolution. He considered that we are living through another great industrial revolution, that of artificial intelligence. Leo, in other words, reminded us very forcefully this year of the extent to which AI is of epochal significance: significance for good, for the potential to transform our lives for the better, and to do so with unprecedented efficiency.
But of course, great benefit comes with great risk. What is fascinating about the risks associated with AI is that one can observe a cumulative understanding of risk over the last 15 to 20 years.
It began with risks to privacy. Inevitably, a data-driven technology was going to trigger such issues first.
But very quickly, it became also about discrimination. This risk, and this reality, did not diminish; it got far worse over time, particularly with the arrival of LLMs, the large language models. Discrimination in the AI space is a very great problem. It is massively exacerbated by what we used to call dirty data and what we now call slop.
Very closely associated is the phenomenon of disinformation, or rather the capacity of AI to vastly multiply the reach and the impact of disinformation. And I include here the deepfakes that have already been referred to this morning.
More recent dangers, long predicted but only now being seen, surround autonomous decision-making, which has the capacity for great harm in everything from drones to chatbots.
Reference has been made to the impact on the environment. I read this morning that, right now, 25% of the Irish energy grid goes to data centres, and it is predicted to get even worse. It is an issue of both electricity and water.
Mr Speaker spoke at the outset this morning about the challenges to the labour market.
And we are all becoming aware, without quite being able to put our finger on it, of the extent to which AI is challenging and engaging such issues as human identity and social relations.
In other words, friends, AI has a profound impact on human dignity and human well-being, and therefore on human rights.
That, in turn, triggers a duty on our states to protect us. Here, let me express my appreciation to the United Kingdom for the very important 2023 Bletchley Declaration, which draws strong attention to the protective duty of the state, as have the subsequent AI summits.
As we look at how we should be protected, we have to, as a starting point, recognise that we are not in some kind of legal terra nullius.
We do have the European Convention on Human Rights. We do have the other international human rights instruments, and we have their national counterparts. We have privacy protections in our jurisdictions, including the GDPR in the EU setting. And across multiple sectors there is law, as well as consumer-protection law, all of which plays its role.
But clearly, this was not going to be enough. We need targeted filling of the gaps and the achievement of regulatory coherence.
That is the context for the Council of Europe Framework Convention on AI, of which the Deputy Secretary-General spoke just now. I would say one thing about this Framework Convention beyond agreeing on its groundbreaking significance: we need our states to ratify it. It comes to nothing until it is ratified and becomes an operable legal instrument.
That is also the context in which the EU has done, to my mind, very good work in developing its Digital Services Act (DSA) and its AI Act. These two are very important regional precedents for how to regulate. I will come back to this in just a few moments, but I would very much encourage the EU to stay firm in defending, upholding, and applying the DSA and the AI Act.
There have been numerous national initiatives that should be mentioned.
I am well aware that here in this Parliament there is important work ongoing on the development of an AI regulation act. I am also aware that in this Parliament, and it is a good precedent to be copied elsewhere, there is an ongoing review by the Joint Committee on Human Rights of the human rights impact of artificial intelligence. I also appreciate the early leadership shown by the German Bundestag with its Committee of Enquiry on AI, and the very interesting experiments within parliaments in such countries as the Netherlands and Denmark.
Of course, I acknowledge with respect the work of the Parliamentary Assembly of the Council of Europe, including the recent recommendation on AI and migration.
Drawing from all this experience and activity at the regional and at the national levels, we can at this point in 2025 draw some conclusions around what is needed to deliver effective oversight and regulation in the interest of human dignity, in the interest of human rights. I will very briefly mention seven elements.
The first is that our regulation has to get the scope right. We need a wide definition of AI in our oversight; we have more or less achieved that. We need our regulation to embrace oversight of both the public and the private sectors; we are not so good at that. And we need to include security and military contexts within the embrace of oversight; we are bad at that.
Second, our oversight needs to engage the full breadth of risk to human well-being. Our record on this is patchy: sometimes we achieve it, sometimes not. The key to achieving it is to use the roadmap of the human rights guarantees, the human rights instruments such as the European Convention.
Third, to have effective oversight, we need lifecycle compliance testing to assess the extent to which the technology respects human rights. We are finding that very difficult. Everybody, from the state to industry, is saying: this is tough; we need more help. That is the context in which I so very much welcome the HUDERIA toolbox developed by the Council of Europe, which deserves to be better known and needs to be used.
Fourth of the seven, effective oversight needs strong regulatory bodies. And here we have issues: the delivery of regulatory bodies across Europe, to take this continent as an example, is very uneven. We need strong mandates. We need adequate resources. And we need the necessary expertise. This is where oversight bodies are most challenged: how do you acquire the expertise to assess risk across the whole range of the dimensions of human dignity and human well-being?
The key here is an easy one, as has been tried in a number of countries: use your national human rights institutions. Embed them in the oversight machinery for artificial intelligence. I applaud the fact that three Council of Europe countries have done this so far: Ireland, the Netherlands, and Denmark.
Fifth of the seven, where rights holders have their rights violated, we need remedies. No right without a remedy, as we so often say. Again, progress here is patchy. Among the biggest issues confronting us at the national level are building awareness among rights holders, consumers, call them what you will, of the possibility of remedy, and then ensuring meaningful access to engage that remedy.
Sixth of the seven, it is essential, as AI grows ever more sophisticated and ever more autonomous, that we never lose sight of, and never lose respect for, the principle of human control. We must always have humans in charge. There is no circumstance in which the machine can be let loose to do its own work without oversight.
I very much applaud that the UN has reminded us of this point with the Pact for the Future and the Global Digital Compact. But what the UN is learning, and what we are learning now, particularly in the last year, is that not any old human oversight will do. We need smart, empowered, and enabled human oversight that does not fall into the trap of anthropomorphism, where we see the machine as a friend rather than as a piece of kit to be controlled.
And seventh, and finally, and this is absolutely critical to every dimension of oversight, we need to insist on algorithmic transparency. The persistent myth, the persistent magic-box talk from industry, is unacceptable. We can gain enough access to the algorithms to ensure oversight and control, and we must persist in resisting the smoke-and-mirrors approach of large parts of the industry.
There is one final contemporary concern I would like to mention before drawing to a close: the ever-louder resistance to any regulation at all, to any oversight at all. Before I give you some specific points, let me draw an analogy.
Can you imagine if we had been sitting in this building, which was already standing then, in the year 1900, and this Parliament was debating whether to put in place rules for the road, and Mr Henry Ford came along, appeared before a committee like this, and said: “No, no, no, we don't need rules of the road; each car will self-regulate”? Preposterous.
Well, why do we need regulation?
Well, in the first place, there is, as I mentioned, a formal, explicit, legal, protective duty of our states to take care of us.
Second, there is the obvious link between smart oversight and trust. I was struck by the research published here in the UK on 4 December by the Ada Lovelace Institute, which I believe is represented in the room this morning, confirming, at least for this country, that the public is insisting on, indeed demanding, safe and well-regulated artificial intelligence.
I believe this should be some reassurance to us in terms of staying firm.
But most centrally of all, we have to push back against the myth that regulation stifles innovation. It is simply not true. Look at China: it has the most regulated AI industry on Earth, and yet it is one of the most creative.
So how can one explain that? Europe, for sure, lags behind the United States when it comes to innovation. There is no doubt about that. But let us not blame regulation.
There is fascinating research emerging that supports this assertion of mine. Professor Anu Bradford of Columbia University recently published an article in which she acknowledges that Europe lags behind, but argues that this is for a number of reasons that have nothing to do with regulation: shallow, fragmented capital markets; punitive bankruptcy laws; a risk-averse culture on this continent; immigration policies that limit access to global skills pools; and, in the specific context of the European Union, the absence of a digital single market.
So, dear friends, let me wrap up by simply recalling, as all the other speakers have done this morning, that we live in a moment of extraordinary consequence for human well-being and for human rights. Those of you here in the room who are parliamentarians carry a most heavy responsibility to guide us forward. We can support you in your role, but we must look to you for the necessary evidence-driven leadership.
Thank you.