(CoE) Speech: Protecting human rights in the digitalisation of social welfare systems

Date of article: 18/03/2026

Daily News of: 20/03/2026

Country:  EUROPE

Author: (CoE) Commissioner for Human Rights

Article language: en

Speech delivered on the occasion of the side event "Digitalisation of social protection systems in Europe - The promise of efficiency versus the reality of exclusion" of the High-level Conference on Social Rights, Chișinău, Republic of Moldova

This week, we are going to focus attention on the Charter of Social Rights. We are going to focus on social issues as human rights and therefore binding obligations on states.

That is obviously very welcome, but it is pretty empty if we do not pay equal attention to the delivery of those rights. How do we move from the fine principle on paper to a change in the lived experience of the human being? It is in that context that I so very much welcome this discussion, because social welfare systems are among the most important deliverers of the formal human rights duty.

In what sense? Well, obviously, they deliver the services they offer. We have human rights to the highest attainable levels of healthcare. We have human rights to benefits when unemployed. We have human rights to all manner of things in the social context. Welfare systems deliver on those, but just as importantly they enable everything else. They empower the rights holder to enjoy every aspect of their human rights. And, by the way, not just social rights, but their civil rights, every human right.

In that context I welcome the digitalisation of social welfare systems.

We have already seen how AI can strengthen that delivery. We see it already, at least in some places, in terms of improved client support, automated back office support and fraud detection. What is more, the OECD, in a very interesting study last year, identified further spaces for the digitalisation of social welfare, such as predictive analysis, forecasting demand and shocks, predictive analytics to improve client identification, enhanced outreach and reduction of the non-take-up of social welfare services.

But, of course, side by side with all of the advantages that digitalisation brings to social welfare systems, it is also a very hazardous undertaking. This was very strongly signalled to us, at least to those of you in the EU, by the manner in which the EU AI Act characterises welfare algorithms as "high risk".

The high risk of welfare algorithms and of the digitalisation of social welfare was well illustrated in the famous case in the Netherlands regarding child care benefits. A scandal so great that, as you will recall, it brought down a government.

More recently, we see, again across Europe, instance after instance of problems generated by the digitalisation of welfare. We see, for example, allegations with regard to social welfare systems in Serbia, France, Denmark and elsewhere.

Learning from such situations and from recent empirical research, I suggest that the risks of the digitalisation of social welfare can be broadly grouped into five categories.

The first has to do with why we digitalise at all. It is clear from empirical research, when you ask the users and appliers of the technology, that the primary driver is not the quality of the service; it is the speed and efficiency of the service. There is nothing wrong with speed and efficiency, but when they are preferred over quality, then obviously you can see the danger.

Secondly, and drawing from the examples I gave, we have seen the manner in which technology can produce discriminatory outcomes. And it has become much more evident in recent years how, through the operation of feedback loops, the discriminatory outcome can get worse and worse over time.

Third of the five, and this is quite recent: very interesting psychological research in the last couple of years has identified something called "automation bias". This is the situation where the human overseer of the technology trusts that the technology will do a better job than the human. And so, when there is a clash between the human assessment and the machine assessment, the human will opt for the machine assessment.

Fourth, there are the challenges of access to digitised services and the extent to which we experience digital illiteracy in our society. The Fundamental Rights Agency in 2023 identified, through one of its large scale surveys, that only one in four people over the age of 65 has minimum digital literacy skills. Only a quarter of people over the age of 65. This obviously is a red flag in terms of requiring people to access their social welfare services digitally. And by the way, I have given an example of older people, but you can think of so many marginalised people on the edges of our societies and the extent to which a digitised service becomes a remote and inaccessible one. I think, for example, of Roma in irregular settlements where they do not even have electricity, never mind access to digital services.

The fifth and final of the concerns regarding the digitalisation of social welfare is that we implement it opaquely. Most people do not know the extent to which their social welfare entitlements have moved online, or some element of the assessment exercise has moved online. Governments have done a poor job of alerting their populations to the extent to which these essential services have been automated. And then in turn, of course, this raises serious issues around access to remedies when something goes wrong. How can you access a remedy when you do not quite know how and where the error occurred?

What can we do to address these five categories and make sure that we have a digitised social welfare future that is really at the service of our peoples? Again, I would like to suggest just a few things.

The first concerns those of you from the EU, and in particular those of you from EU governments.

I encourage you to defend the fundamental European legislation that governs the digital space. I am referring here to the AI Act and the Digital Services Act. These are not perfect instruments, but they are probably the world's best models for the oversight of the rollout of artificial intelligence and all related aspects of digitalisation, including in the social welfare context. And so, those of you here from EU governments, please transmit the message. There is an ongoing so-called simplification exercise which ultimately, if all of its proposals were to be adopted, would in effect weaken these two essential pieces of legislation.

Secondly, we need our governments to take the necessary steps to sign and to ratify the Council of Europe Framework Convention on Artificial Intelligence. It has been neglected. We have nothing like enough signatures and ratifications yet. It is only once it is in effect that we will have the normative tool whereby we can work with member states of the Council of Europe to put in place effective human rights-based national oversight systems, including in the social welfare space.

Third, we need our states to engage with and adopt the tools whereby they can do human rights testing and assessment of algorithms for the delivery of social welfare.

I would like to commend here the excellent Council of Europe tool, the so-called HUDERIA Human Rights Assessment Tool, as very fit for engagement and use. I think we could apply it in our own specific national contexts.

A couple of other points in terms of what we need to do. We need to make sure that, on the one hand, humans remain in charge, that we never cede decision-making to the machine, but then of course that we deal with "automation bias": that we train those who oversee the technical tools to recognise that they are probably smarter than the tool, than the digital application, and that they need to watch it with great vigilance.

Then of course it goes without saying that there is the need to invest in digital literacy and the improvement of effective access to the digital space, particularly for older people and for those on the edges of societies.

Let me wrap up, by mentioning two roles that I consider must be included in our engagement as we go forward.

I refer to the importance of giving a central position in our work to our national human rights institutions and civil society organisations.

Take national human rights institutions first. They have moved very fast across Europe in recent years in embracing their responsibility in the context of artificial intelligence. And they match that with their profound human rights experience. And we need to make sure they are integrally consulted, woven, as I said, into the work.

Secondly, civil society. How often is it civil society that is alerting us to how tech can go wrong? And then how often has it been that, having alerted us to how tech can go wrong, it is civil society, not governments, that finds the fix? We have to have a profound partnership.

And my very last point, dear friends, has to do with trust. Delivery of safe and effective social welfare is one of the most sensitive and important dimensions of governance in any of our countries. It will only work if it is trusted. We have seen how easily things can go wrong, and do go wrong. And I would like to invite you to reflect on how you can build, enhance and strengthen the trust between social welfare systems and citizens through that engagement with civil society and with national human rights institutions, and by proceeding forward in as transparent and consultative a manner as is possible.

Thank you for your attention.