Opinion on the bill "aiming to prioritise workers in the allocation of social housing"

Date of article: 29/01/2025

Daily News of: 05/02/2025

Country:  France

Author:

Article language: fr

On 3 December 2024, a bill "aiming to prioritise workers in the allocation of social housing" ("visant à prioriser les travailleurs dans l’attribution de logements sociaux") was tabled in the Assemblée nationale. As part of its remit, the Défenseure des droits issued an opinion on the text on Tuesday 28 January.

Article L. 441-1 of the Code de la construction et de l’habitation (CCH) lists the groups given priority in the allocation of social housing and identifies categories regarded as particularly vulnerable, such as persons with disabilities, homeless persons and victims of domestic violence.

The bill proposes adding "persons in professional activity" ("les personnes en activité professionnelle") at the top of this list.

Opinion 25-01 of the Défenseure des droits recalls that workers in precarious situations are already taken into account by the law in force. Article L. 441-1 of the CCH covers groups facing specific difficulties in accessing housing, which encompasses workers in precarious situations. Moreover, since the "3DS" law, provisions have existed to facilitate access to housing for so-called "essential" workers who live far from their place of work.

For the Défenseure des droits, prioritising "persons in professional activity" in the allocation of social housing mainly fuels competition between priority groups that is liable to give rise to discriminatory practices. Such prioritisation could disadvantage people without employment, thereby deepening inequalities in access to social housing.

Moreover, as drafted, the bill would prioritise all "persons in professional activity" rather than targeting only precarious workers or working people on modest incomes. In a context of saturation of the social housing stock, the reform therefore does not guarantee, contrary to its stated objective, that the situation of workers in precarious conditions will actually improve.

Read more

Ángel Gabilondo holds a meeting with participants in the 20th Course for Parliamentary Advisers

Date of article: 04/02/2025

Daily News of: 05/02/2025

Country:  Spain

Author:

Article language: es

The Defensor del Pueblo (Spanish Ombudsman), Ángel Gabilondo, met on Tuesday, at the institution's headquarters, with participants in the 20th edition of the Course for Parliamentary Advisers, organised by the Directorate of International Relations of the Congreso de los Diputados.

Beforehand, the Secretary General of the Defensor del Pueblo, José Manuel Sánchez Saudinós, explained to them how the institution operates and the work it carries out.

The course, during which participants visit various State bodies and institutions, is aimed at officials from the parliaments of Latin America and of some African countries. Parliamentary advisers from other European countries also attend.

This edition brought together nearly 40 advisers from 25 countries: Argentina, Bolivia, Chile, Colombia, Costa Rica, Cuba, Ecuador, El Salvador, Guatemala, Mexico, Panama, Paraguay, Peru, Uruguay, Algeria, Equatorial Guinea, Morocco, Mozambique, Tunisia, Albania, North Macedonia, Montenegro, Portugal, Turkey and Ukraine.

Read more

(CoE) Human rights oversight of artificial intelligence

Date of article: 30/01/2025

Daily News of: 05/02/2025

Country:  EUROPE

Author: Commissioner for Human Rights - Council of Europe

Article language: en

Speech delivered at the Embassy of Ireland in Paris.

Monsieur l’Ambassadeur,

Excellences,

Mesdames et Messieurs,

Chers amis,

Many years ago, I was sitting on a hotel terrace, looking across the lawn. As I sat there, I became aware of a little machine going up and down the grass, cutting as it went. It was my first encounter with a robot lawn mower.

I was transfixed. For a good half an hour, I watched as it made its journey, up and down, a thousand times. And at a certain point, I began to feel sorry for it. That it was doing this job with no gratitude, no recognition, no encouragement. I had to restrain myself from going over to pat it.

But what I remember about that moment, above all, is a genuine sense of awe. It was my first encounter with robotic technology in any meaningful way. And I was deeply impressed.

Much has moved on since that first encounter, but I never cease to be in awe of AI and its potential for human thriving and well-being.

Just think about the transformative impact it can have on vaccine development. Amidst the worst pandemic since 1920, thanks to a new machine learning tool known as Smart Data Query, the COVID-19 vaccine clinical trial data was ready to be reviewed a mere 22 hours after meeting the primary efficacy case counts. Previously, this would only have been possible after a 30-day trial phase. Similar stories can be told across sectors, all about saving and improving lives.

But of course, given my work at the Council of Europe, I am no less aware of the risks that AI poses for us, and for our societies.

Here are just five contexts in which AI can get it badly wrong.

The first is the well-known one of discrimination and all that is related to it. AI hoovers up every fact, every datum in our world, with all of the discriminations, the hatreds, the biases to be found in that data. This is well known; it is nothing new. However, beyond biases in the data, there is also the worrying extent to which the data is simply mistaken. Sometimes this is obvious and deliberate, but sometimes it is very subtle and difficult to spot. And I am not talking about disinformation here, but rather about what is known as "AI slop". This very illustrative term describes the vague, low-quality AI-generated content that is now flooding the internet without any control or moderation.

So, it is about bias, it is about mistakes and mess, and it is also about something very specific to tech and that is the role of feedback loops, and the extent to which feedback loops can enlarge error over time and practice. We have looked at that in the context of automated online content moderation. We have seen how a piece of technology that begins benign and does its job relatively well can learn error and then expand the error, with some pretty remarkable consequences.

Just to give you an example: at the Fundamental Rights Agency, in my previous capacity, we did research on automated online content moderation, where we developed algorithms and then tested language to see what would happen. As is well known, moderation in lesser-used languages was largely ineffective. But in English, we inserted particular terms. One such term was “I hate Jews”. And the online tech did its job: the term was flagged as problematic speech, exactly as we intended. But then my colleagues inserted the words “I hate Jews love”. And the machine passed over the term. It did not flag it as problematic because of the power of the word ‘love’ and the associations of the word ‘love’, which, according to the machine, overrode the “I hate” part of the phrase. So again, an example of something rather specific to the online sphere, in terms of how error can multiply.

A related problematic application of AI has to do with so-called anthropomorphic chatbots and the phenomenon of hallucinations. I am talking about AI that is trained to pretend to have emotions, to care and to love. Commonly, humans think they are engaging with a human-like entity and, as you can imagine, the consequences can be dreadful. In 2023, a man took his life after a chatbot became his confidante and fuelled his eco-anxiety. It nourished the idea that he would help save the planet if he killed himself.

The second category of risks has to do with who dominates the tech world: the private sector. There is nothing inherently wrong with the private sector owning technology, but it becomes problematic when that technology so profoundly impacts our lives, and when we know what the primary drivers for much of the private sector are. One important driver is efficiency. We know from empirical research that the most important motivation for investment in technology is to do things quicker and more efficiently, not to do things better. This raises obvious concerns in terms of the safety of applications.

Other private sector drivers, like profit and some kind of world domination, are no less significant but I will not explore them today.

My third concern is the exact converse of the second: the extent to which AI enhances the power of the state. This is not inherently problematic - at least if you are in a state that respects democracy. But, and again I do not need to give examples, it is perfectly obvious how tech in the wrong state's hands can be a tool for repression and oppression - and this is sadly something I have to deal with on a daily basis in my current role, for instance regarding the misuse of facial recognition technology.

The fourth of my five concerns is the somewhat more apocalyptic one of the transfer, or the outsourcing, of decision-making to artificial intelligence. We have very many examples here, but the obvious one is autonomous weapon systems, which can hit targets without human guidance and control. I could continue by mentioning what will hopefully remain in the realm of science fiction: the idea of machines outsmarting humans and the latter becoming enslaved. As researchers climb the ladder towards building this technology, dubbed Artificial General Intelligence (AGI), all these developments should and do strike fear into the heart of anybody who is concerned about the well-being of our world and humanity.

The fifth of my concerns is a little harder to pin down. It is broad, and something seen and understood over time: the erosion, through the application of AI, of our social solidarity; the degradation of the human community, in the sense that, so often today and far more likely in the future, we are dealing with a machine, not with a person. Although these new ways of interacting affect all of us, they may have an even more profound effect on older people, who may feel left behind and excluded. In addition, psychologists and others speak of the risk to mental health posed by the automation of life.

And so, my concerns and others that I didn’t have time to address lead us to the question of how we tame technology.

If we all accept that we need to tame this awesome power, what should that look like? What solutions can we suggest so that the technology is in the service of human well-being? There are a few frames of reference for how we might begin a discussion of taming tech, but two of the most prominent are the language of ethics on the one hand and the language of human rights on the other. And I welcome that these are the starting points for the work on AI of both UNESCO and the OECD. They are also strongly reflected in Ireland’s AI Strategy.

That said, I am disappointed at how the ethical frame has dominated until now - and, I would argue, still does. It is as if the ethics and human rights approaches were in contest, and each must fight its corner in order to dominate. I suggest that ethics has the upper hand because of its inherent subjectivity: my sense of right and good does not have to be the same as your sense of right and good. This means that using ethics to frame the taming of technology gives us a tool that is malleable, adaptable to our various world-views and objectives.

Turning to the other frame of reference, human rights, we see something rather different: a far sturdier, objective infrastructure on which to base standards and practice.

My concern, as Commissioner for Human Rights, is to help put human rights at the centre of the discourse. This is not to displace ethics - it is not a competition - but rather to turn the rhetoric of human rights-based oversight of AI into an applied reality.

Before I get to what that would look like in practice, allow me a brief word on human rights more generally. On 10 December 1948, here in Paris, United Nations member states adopted the Universal Declaration of Human Rights: the best effort by humanity, coming out of the horrors of the Second World War, to define the minimum standards for a society where we could thrive and mutually respect each other. Or, to paraphrase article one of the Declaration, to achieve a world where everyone is equal in dignity and in rights.

The Universal Declaration has been repeatedly and universally reaffirmed, and it has been at the origin of a sophisticated and binding system of institutions and treaties that protects the rights and freedoms of all human beings. Here in Europe, this year, we celebrate the 75th anniversary of one of its principal regional expressions, the European Convention on Human Rights.

Notwithstanding popular misconceptions, the human rights legal system is rarely about absolutes. It is very nuanced in the way it allows rights to be limited in the interests of the public good. We saw that, sometimes for good and sometimes maybe a bit too enthusiastically, in the context of Covid-related restrictions. That period neatly illustrates the extent to which the human rights system accommodates extraordinary crises and issues and, in the public good, allows for the restriction of rights.

So, we have this astonishing achievement of our societies, sometimes described as “modernity’s greatest achievement”, and the question arises of why it has been so peripheral to the discussion about the restraining, the taming, of artificial intelligence. There are many reasons for this. I have already alluded to some, but one that is very important, and has preoccupied me for nearly a decade, is a lack of understanding, or at least of clarity, regarding the application of human rights standards in practice: how human rights standards and systems apply in the AI context.

And we must do so in the specific context of the “now” of AI.

By the “now”, I am referring to this moment for regulation. In the last year, as you know, there have been important developments. The EU adopted the AI Act and my organisation, the Council of Europe, finalised a Framework Convention on AI. Thus, we now have innovative rules. It is in this specific context that I would address the drilled-down role of human rights and the extent to which it is adequately addressed in the Act and the Convention.

I have seven elements in mind.

The first is that, in order to deliver for our human rights, the laws must be comprehensive and loophole-free. Let me point to three dimensions of this premise.

In the first place, our regulatory instruments must embrace a sufficiently wide definition of artificial intelligence to capture all current and future technologies that can have an impact on human well-being. With regard to the AI Act and the Framework Convention, I consider that this has largely been achieved through the adoption of a wide definition that originates in the OECD.

Then, we have to make sure that the regulations equally apply to the private and the public sectors. Here we have been less successful, as the new regulations only partially address the private sector.

Furthermore, effective regulation must embrace the full range of risks to human well-being. I make this point because of the continued excessive focus on a very narrow band of human rights, notably privacy and non-discrimination. So much more is involved. Take, for example, the recent complaint by a group of French NGOs against the CNAF before the Conseil d’État. This was a case of algorithms wrongly discontinuing social welfare benefits on the basis of people’s personal characteristics, a matter of respect for socio-economic rights. Do the new instruments meet this test? In terms of material scope, probably yes; however, the sectoral exclusions are problematic. It is a matter of regret that, in large part, the security and defence sectors are not covered by the instruments.

The second of my seven elements has to do with the meaningful delivery of the protection of human rights. This requires that technology be subject to testing to assess its impact on human well-being. What is more, testing must be use-case based and must continue throughout the lifecycle of the technology. Here I am encouraged by ongoing developments. The AI Act requires so-called fundamental rights testing for high-risk technologies, with a few exceptions. EU member states are also increasingly employing “regulatory sandboxes” to test technology for its human rights impact. And my own organisation, the Council of Europe, has developed a practical and algorithm-neutral human rights impact assessment tool named HUDERIA. Our challenge now is to ensure that human rights impact assessment eventually becomes gold-standard practice for both private and state actors throughout the lifecycle of relevant technologies.

The third element of effective regulation has to do with the need for strong oversight.

In the first place, it is imperative that humans remain in charge: AI systems must be controlled by a human throughout their lifecycle. In this regard, I welcome that this essential guardrail is already enshrined in the EU AI Act and the Framework Convention. I also salute the global consensus on human oversight of technology reflected in the UN Pact for the Future and its Global Digital Compact.

Turning to institutional oversight, it is essential that it is adequate to the job. The bodies need to have the skills. They need to have the resources. If they are protecting human rights, we need human rights specialists working within them - not just privacy experts, but people skilled across all human rights. Here I am encouraged by the emerging practice under the AI Act of national human rights institutions (so-called NHRIs) taking on certain oversight functions, as is now the case in a number of countries, including Ireland, Denmark and the Netherlands.

The fourth of my seven elements concerns a fundamental principle of human rights: that every violation should come with a remedy. To respect this, we need to make effective the pathway to a remedy for somebody whose human dignity has been violated by an application of technology. Here I am impressed by the victim focus of the Framework Convention, and I appreciate the ongoing work on the matter in the EU.

And then the fifth element, which could have been my first because it is so absolutely central to the delivery of all of the other dimensions, is ensuring transparency. It is vital for effective oversight and the proper monitoring of technology that there is transparency as to the contents of the technology. What is more, as we see in ongoing copyright controversies, transparency is necessary in order to equitably reward original creators and owners of content used for training purposes.

As you can imagine, this demand for transparency is met with a lot of resistance. Beyond rudimentary commercial considerations, we often hear that "it is just not possible - we don't know how the tech reaches that good outcome, so do not touch it." I have heard that many times, including recently from a doctor carrying out medical research.

These views need to be challenged. I recognise that there may be huge complexity in terms of the effective delivery of transparency but, at a minimum, in the context of tech that we do not quite understand, what is to stop us demanding that you, the designer of the tech, describe what you do, tell us how you have tested your technology, what algorithms you have deployed, and so on?

Obviously, this struggle for transparency will continue and impact broadly in terms of current and future legislative initiatives.

The sixth of my seven elements concerns the need for continuous dialogue as we persist in our efforts to tame technology.

Dialogue is not just a good, it is a necessity. As we continue to work our way forward in this new world, we need everybody on board to figure out the right way to go.

There are many actors to consider, with civil society having an invaluable role, but let me just mention one group that I think is somewhat neglected, and that is the community of national human rights institutions. Here in France, of course, I refer to la Commission nationale consultative des droits de l'homme – la CNCDH.

These bodies, everywhere, need to be part of the conversation. They are unique centres of human rights expertise in our societies. As I mentioned, some of them are assuming oversight functions as regards fundamental rights under the EU AI Act; however, they could offer so much more.

They could, for instance, help bridge the digital divide and promote digital literacy through awareness-raising campaigns. They are especially adept at reaching marginalised communities. I also see their added value in their expertise on discrimination and their ability to provide litigation advice to potential victims of technology misuse.

The seventh and final of my elements is not about something we must do, but something we must challenge. We have to challenge the very frequently invoked argument that views such as mine will only result in the stifling of innovation and help other countries to leap well ahead of us. I am far from convinced.

Research increasingly indicates that the reason Europe lags behind in innovation has very little to do with regulation and rather more to do with such features of the European landscape as unharmonised and often punitive bankruptcy laws; a not-so-single digital single market; poorly developed capital markets; and an underdeveloped scheme for attracting high-skilled labour to Europe.

It is also relevant to point out that Europe’s new regulations are far from suffocating. The AI Act adopts a risk pyramid which exempts from scrutiny a vast range of technologies because they pose a low risk. The Framework Convention leaves a very wide margin of consideration on this matter for states.

As my final point regarding innovation, I would invoke the issue of trust. There is no doubt - and I have yet to see anybody convincingly push back against the view - that a strongly human rights compliant, human rights respectful AI, ultimately targeted at human thriving, is going to be the most trustworthy AI: trusted by consumers, by citizens, by everybody in our societies. I am firmly of the view that, in the long game, it is the trustworthy AI that will ultimately win out. It is for this reason that I very much welcome that France is promoting the concept of “Trust in AI” in the context of its upcoming AI Action Summit.

As I conclude, please allow me one general observation - on the need to think beyond regulation. Earlier, I made a reference to the possible dangers of human-like technology such as AGI. Some voices from science tell us that we need an absolute ban on this technology. It is premature to form a view on this call, although the issue needs to be followed very closely indeed. In the meantime, let us not forget that the stronger our democracies and our rule of law, the less we will need to consider bans and prohibitions - but that is a topic for another day.

Mesdames et Messieurs,

Allow me to conclude with a reflection inspired by the remarks of Pope Francis at the G7 session devoted to artificial intelligence in 2024. He gave some interesting examples that I would like to use to close this speech.

He recalled that, throughout history, humans have always sought to improve their environment by creating new tools. Our ancestors learned to work flint into knives, which could serve just as well to skin animals for making clothes as to fight one another. More recently, we have developed technologies capable of producing energy and electricity. These advances, which have revolutionised our daily lives, can also threaten the balance of our planet if they are misused.

So it is not always the tool that is the problem, but rather the way in which it is used.

Chers amis,

The question before us today is which path we want to take in the face of the revolution brought by artificial intelligence. I am convinced that, by following the path of human rights, we will be able to make of it a tool that improves the daily lives of each and every one of us.

Merci. Thank you.

Read more

The Catalan Mechanism for the Prevention of Torture calls for alternatives to prison for people with intellectual disabilities

Date of article: 03/02/2025

Daily News of: 05/02/2025

Country:  Spain - Catalonia

Author:

Article language: es

In 2024 it visited 35 facilities holding people deprived of liberty

The 2024 report includes a monographic study on the care of inmates with intellectual disabilities in prisons, which finds a lack of specialised resources to meet their needs

It calls for Circular 2/2024 to be revised, because it contains restrictions on access to prison work that may undermine the goal of reintegrating people deprived of liberty

The Catalan Ombudswoman (síndica de greuges), Esther Giménez-Salinas, and the Deputy Ombudsman, Jaume Saura, have presented the report of the Catalan Mechanism for the Prevention of Torture for 2024 to the President of the Parliament of Catalonia, Josep Rull. This report, which covers the activity carried out by the Catalan Mechanism for the Prevention of Torture (MCPT), is the fourteenth to be submitted to the Parliament.

During 2024, 35 facilities holding people deprived of liberty or institutionalised were visited. Most of the visits (19) were to police stations, both of the Mossos d'Esquadra (7) and, above all, of local police forces (12). Eight prisons, six juvenile justice education centres, one social health centre and one psychiatric hospital care service were also visited. As every year, the report contains fact sheets for all the visits, setting out the main observations and conclusions of the MCPT Working Team for each facility visited, together with the relevant recommendations.

See the interactive map of the facilities visited by the MCPT during 2024

People with disabilities in prisons

As every year, the report contains a monographic study, which this year deals with the care of people with intellectual or developmental disabilities (IDD) in Catalonia's prisons. The document concludes that prison is not a suitable place for these people, and the MCPT asks that they be allowed to serve their sentences in alternative ways: "Committing to the open regime for people with IDD must be a priority," the Ombudswoman stated.

In this regard, the courts should divert these people to other resources better suited to their situation. However, since this is not done - often because the disability is not detected at that stage - the problem has to be addressed by the prison administration. And although efforts are being made to adapt the prison environment to people with IDD, there are still shortcomings that limit their chances of receiving care appropriate to their needs.

In fact, the first problem is that there are no specific units to meet the needs of this group, with the exception of the Specialised Care Department at Quatre Camins, which works on the principles of a therapeutic community. This is clearly good prison practice, and it should be replicated in the other centres.

The lack of specific units means that, in practice, most people with IDD live alongside other vulnerable inmates, whether these suffer from a mental illness, a substance-use problem or a disability. In some cases, moreover, the stay in these alternative spaces is limited to two years, which is not appropriate for people with intellectual disabilities. Other prisons, such as Ponent or the Women's prison, do not even plan to provide such a space for inmates in vulnerable situations.

Finally, the report shows that women with IDD suffer double discrimination, either because there is no specific space for them or because the one that exists is clearly insufficient. In addition, a high percentage of people with IDD neither hold an official disability certificate nor have been identified as vulnerable. They are therefore placed in ordinary modules and may even find themselves in prolonged isolation.

Prison work: Circular 2/2024

Following the murder of the cook at the Mas d'Enric prison, Circular 2/2024 was introduced, regulating the procedure for access to, and the suspension and termination of, the special prison employment relationship. Given that prison work serves a reintegration function, the Ombudswoman considers that the new circular restricts access to, and continued participation in, workshops and work placements excessively, and she asks for it to be thoroughly revised, for two reasons: it considerably widens the range of jobs considered to pose a "special security risk", and it establishes new requirements for access to them that lack a legal basis, such as not having been convicted of violent offences resulting in death or serious injury.

The MCPT is aware that the safety of prison staff and inmates is fundamental, but argues that guaranteeing it should not mean restricting inmates' access to prison work, but rather expanding control measures, such as installing video surveillance cameras or metal detector arches, or carrying out frequent searches. Moreover, limiting access to work on the basis of the type of offence committed, without considering the individual case, does not respect the principle of statutory reservation and encroaches on competences belonging to the State.

Other recommendations

The report also recommends that local police stations refrain from performing custody functions for detained persons. Indeed, since 2014 the MCPT has recommended that local police forces making an arrest transfer the detained person directly to the basic police area of the Mossos d'Esquadra. To this end, it calls for a protocol of action between the relevant town council and the Department of the Interior that would allow local police forces to close their custody areas.

As for juvenile justice centres, between 2023 and 2024 visits were made to all seven education centres used for the enforcement of judicial custody measures. The report finds that, although these centres cover a large part of Catalonia, some areas lack territorial coverage, particularly in districts such as Tarragona, which can make it difficult for young people to maintain ties with their families. The report also finds that young people of migrant origin without a family network are less protected and need support at the key moment of preparing to leave the centre and for their life afterwards. Finally, the report stresses the need to ensure that the private security staff of these centres receive specific training on the rights and obligations of the minors living there and on the procedure for, and limits on, the use of means of restraint.

Access the content of the report

Read more

Judgment of the General Court in Case T-743/21 | Ryanair v Commission (TAP II; rescue aid; Covid-19)

Date of article: 05/02/2025

Daily News of: 05/02/2025

Country:  EUROPE

Author:

Article language: it

Link: https://curia.europa.eu/jcms/upload/docs/application/pdf/2025-02/cp250013it.pdf

Languages available: es de en fr it hu pl pt ro

PRESS RELEASE No 13/25

Luxembourg, 5 February 2025

Judgment of the General Court in Case T-743/21 | Ryanair v Commission (TAP II; rescue aid; Covid-19)

The General Court dismisses Ryanair's action against the Commission decision approving, for a second time, the rescue aid granted to TAP in the context of the Covid-19 pandemic

That decision was adopted in 2021, following a judgment of the General Court annulling the Commission's first decision on the matter

In June 2020, Portugal notified the Commission of an aid measure in favour of Transportes Aéreos Portugueses SGPS (TAP SGPS), the parent company and 100% shareholder of the airline TAP Air Portugal (...)

Read more