Specific Challenge: The advantages of AI are numerous. However, the lack of transparency of AI technologies and tools complicates their acceptance by users and citizens. Ethical and secure-by-design algorithms are necessary to build trust in this technology, but a broader engagement of civil society on the values to be embedded in AI and the directions for future development is crucial. This holds true in general, and it becomes especially important in the security domain. Social engagement has to be part of the overall effort to strengthen our resilience across institutions, civil society and industry, and at all levels - local, national and European. Ways need to be found to build a human-centred and socially driven AI by, amongst others, fostering the engagement of citizens and improving their perception of security. Possible side effects of AI technological solutions in the domain of security need to be considered carefully, both from the point of view of citizens and from that of Law Enforcement: e.g., their concerns regarding a strong dependence on machines, the risks involved, how AI will affect their jobs and their organisation, or how AI will affect their decisions. Many open questions exist that can be a source both of concern and of opportunity, and they should be addressed in a comprehensive and thorough manner. Finally, the legal dimension should be tackled as well, e.g. how the use of data to train algorithms is dealt with, what is allowed and under which circumstances, and what is forbidden and when.
Scope: Proposals under this topic should provide an exhaustive analysis of the human, social and organisational aspects related to the use of AI tools, including gender-related aspects, in support of Law Enforcement, both for cybersecurity and in the fight against crime, including cybercrime, and terrorism. The points of view and concerns of citizens as well as of Law Enforcement should be addressed. Based on this analysis, proposals should suggest approaches needed to overcome these concerns and to stimulate the acceptance of AI tools by civil society and by Law Enforcement. Proposals should lead to solutions developed in compliance with European societal values, fundamental rights and applicable legislation, including in the area of privacy, protection of personal data and free movement of persons. The societal dimension should be at the core of the proposed activities. Proposals should be submitted by consortia involving relevant security practitioners, civil society organisations as well as Social Sciences and Humanities experts.
As indicated in the Introduction of this call, proposals should foresee resources for clustering activities with other projects funded under this call to identify synergies and best practices.
The Commission considers that proposals requesting a contribution from the EU of around EUR 1.5 million would allow this specific challenge to be addressed appropriately. Nonetheless, this does not preclude submission and selection of proposals requesting other amounts.
Expected Impact: Proposals should lead to:
Short term:
Effective contribution to the overall actions of this call.
Medium term:
Improved and consolidated knowledge among EU Law Enforcement Agency (LEA) officers on the issues addressed in this topic;
Exchange of experiences among EU LEAs about human, social and organisational aspects of the use of AI in their work;
Raised awareness of civil society about the benefits of AI technologies in the security domain and the opportunities they bring.
Longer term:
A European common approach for assessing the risks/threats involved in using AI in the security domain, and for identifying and deploying relevant security measures that take into account legal and ethical rules of operation and fundamental rights such as the rights to privacy, to protection of personal data and to free movement of persons;
Advances towards the implementation of AI tools and technologies in support of Law Enforcement, in the areas of cybersecurity and the fight against crime, including cybercrime, and terrorism, thereby strengthening civil society's perception of the EU as an area of freedom, justice and security.
Cross-cutting Priorities: Socio-economic science and humanities; Gender