Innovating Works
SU-AI02-2020
SU-AI02-2020: Secure and resilient Artificial Intelligence technologies, tools and solutions in support of Law Enforcement and citizen protection, cybersecurity operations and prevention and protection aga
Specific Challenge: The increasing complexity of security challenges, as well as the increasingly frequent use of AI in multiple security domains, such as the fight against crime (including cybercrime and terrorism), cybersecurity (re-)actions, and the protection of public spaces and critical infrastructure, makes the security dimension of AI a matter of priority. Research is needed to assess how best to benefit from AI-based technologies in enhancing the EU’s resilience against newly emerging security threats (both “classical” and new, AI-supported ones) and in reinforcing the capacity of Law Enforcement Agencies (LEAs) at national and EU level to identify and successfully counter those threats. In addition, in security research, data quality, integrity, quantity, availability, origin, storage and other related challenges are critical, especially in the EU-wide context. To this end, a complex set of coordinated developments is required, by different actors, at the legislative, technological and Law Enforcement levels. For AI made in Europe, three key principles apply: "interoperability", “security by design” and “ethics by design”. Therefore, potential ethical and legal implications have to be adequately addressed so that the AI systems developed are trustworthy, accountable, responsible and transparent, in accordance with existing ethical frameworks and guidelines compatible with EU principles and regulations.[1]
Non-repayable grant only: €17M
European
This call is closed. This line is already closed, so you can no longer apply. It closed on 27-08-2020.
A new call for this funding is expected; the exact opening date is not yet clear.
Fortunately, we have obtained the list of funded projects!
Submission: Consortium. This funding is designed to be applied for as a consortium.
Minimum number of participants.
This funding finances Projects. Project objective: see the Specific Challenge above.



Scope: Proposals under this topic should aim at exploring the use of AI in the security dimension at and beyond the state of the art, and at exploiting its potential to support LEAs in their effective operational cooperation and in the investigation of traditional forms of crime where digital content plays a key role, as well as of cyber-dependent and cyber-enabled crimes. On the one hand, as indicated in “Artificial Intelligence – A European Perspective”, AI systems are being and will increasingly be used by cyber criminals, so research into their capabilities and weaknesses will play a crucial part in defending against such malicious usage. On the other hand, Law Enforcement will increasingly make active use of AI systems to reinforce investigative capabilities, to strengthen digital evidence presented in court and to cooperate effectively with relevant LEAs. Consequently, proposals should:

develop AI tools and solutions in support of LEAs' daily work. This should include combined hardware and software solutions, such as robotics or Natural Language Processing, that help LEAs to better prevent, detect and investigate criminal activities and terrorism and to monitor borders, i.e. the opportunities and benefits of AI tools and solutions in support of the work of Law Enforcement and of strengthening their operational cooperation. Building on existing best practices such as those obtained through the ASGARD project [2], proposals should establish a platform of easy-to-integrate and interoperable AI tools and an associated process with short research and testing cycles, which will serve in the short term as a basis for identifying specific gaps that would require further reflection and development. This platform should, in the end, result in a sustainable AI community for LEAs, researchers and industry, as well as a specific environment where relevant AI tools would be tailored to the specific needs of the security sector, including the requirements of LEAs. Those AI tools would be developed in a timely manner using an iterative approach to define, develop and assess the most pertinent digital tools, with constant participation of end-users throughout the project. By the end of the project, the platform should also enable direct access for Law Enforcement to an initial set of tools. Specific consideration should be given to setting up an appropriate mechanism to enable proper access to the relevant data necessary to develop and train AI-based systems for security.
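The call leaves the design of this platform of easy-to-integrate, interoperable AI tools entirely open. Purely as an illustration of the kind of integration the text implies, the hedged Python sketch below shows one way tools could register themselves against a minimal common interface; the ToolResult type, the registry and the language_detect placeholder are assumptions made for this example, not part of the call.

```python
# Illustrative only: a minimal sketch of what "easy-to-integrate, interoperable
# AI tools" on a shared platform could look like. The interface and registry
# below are assumptions made for this example, not a design from the call.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class ToolResult:
    tool_name: str
    payload: Dict[str, Any]

# Each tool is just a named callable, so new tools can be registered
# without changing the platform core.
_REGISTRY: Dict[str, Callable[[Any], ToolResult]] = {}

def register_tool(name: str):
    def wrap(fn: Callable[[Any], ToolResult]):
        _REGISTRY[name] = fn
        return fn
    return wrap

@register_tool("language_detect")
def language_detect(text: str) -> ToolResult:
    # Placeholder heuristic standing in for a real NLP component.
    lang = "es" if " el " in f" {text.lower()} " else "en"
    return ToolResult("language_detect", {"language": lang})

def run(tool_name: str, data: Any) -> ToolResult:
    return _REGISTRY[tool_name](data)

if __name__ == "__main__":
    print(run("language_detect", "El sospechoso salió a las nueve."))
```

In such a scheme, interoperability comes from the shared result type and registry rather than from each tool's internals, which is one way the "easy-to-integrate" requirement could be read.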

Proposals should also:

develop cybersecurity tools and solutions for the protection of AI-based technologies in use or to be used by LEAs, including those developed under this project, against manipulation, cyber threats and attacks; and
exploit AI technologies for the cybersecurity operations of Law Enforcement infrastructures, including the prevention and detection of, and the response to, cybersecurity incidents through advanced threat intelligence and predictive analytics technologies and tools, targeting Cybercrime units of LEAs, Computer Security Incident Response Teams (CSIRTs) of LEAs, Police and Customs Cooperation Centers (PCCCs) and Joint Investigation Teams.

Finally, in order to have the full picture of all AI-related issues in the domain of work of Law Enforcement and citizen protection, proposals should:

tackle the fundamental dual nature of AI tools, techniques and systems, i.e.: resilience against adversarial AI, and prevention and protection against malicious use of AI (including malicious use of the LEA AI tools developed under this project) for criminal activities or terrorism. The improvement of research results, application and uptake should be taken into consideration.
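The call repeatedly asks for resilience against adversarial AI but does not prescribe how it should be measured. As a purely illustrative sketch of what a basic robustness check could look like, the Python example below compares clean accuracy with accuracy under small bounded input perturbations; the toy nearest-centroid classifier, the synthetic data and the epsilon budget are assumptions for this example, and a real evaluation would use proper adversarial attacks (e.g. FGSM or PGD) against the actual models.

```python
# Illustrative only: a toy robustness probe, not part of the call text.
# It measures how often a simple classifier's decisions flip when inputs are
# perturbed within a small L-infinity budget (epsilon), which is one basic way
# to quantify "resilience against adversarial AI".
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data standing in for any feature-based classifier task.
X0 = rng.normal(loc=-1.0, scale=1.0, size=(500, 10))
X1 = rng.normal(loc=+1.0, scale=1.0, size=(500, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 500 + [1] * 500)

# Nearest-centroid "model": classify by the closer class mean.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(samples: np.ndarray) -> np.ndarray:
    dists = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

clean_acc = (predict(X) == y).mean()

# Worst-of-k random perturbations within an epsilon ball: a weak but
# dependency-free stand-in for a proper adversarial attack.
def robust_accuracy(samples, labels, epsilon=0.5, tries=20):
    still_correct = np.ones(len(samples), dtype=bool)
    for _ in range(tries):
        noise = rng.uniform(-epsilon, epsilon, size=samples.shape)
        still_correct &= predict(samples + noise) == labels
    return still_correct.mean()

print(f"clean accuracy:  {clean_acc:.3f}")
print(f"robust accuracy: {robust_accuracy(X, y):.3f}  (epsilon=0.5, random probes)")
```

The gap between the two numbers is the kind of quantity a project could track over time to show that resilience against adversarial manipulation is actually improving.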

The functionality of existing EU LEAs' tools and systems needs to be analysed, since they must support the prevention and detection of, and the reaction to, cyber threats and security incidents.

Furthermore, the accuracy of AI tools depends on the quantity and quality of the training and testing data, including the quality of their structure and labelling, and on how well these data represent the problem to be tackled. In the security domain, this issue is further emphasized by the sensitivity of the data, which complicates access to real multilingual datasets and the creation of representative datasets. The huge amount of up-to-date, high-quality data needed to develop reliable AI tools in support of Law Enforcement, in the areas of cybersecurity and the fight against crime, including cybercrime and terrorism, calls for the development of training/testing datasets at the European level. This requires close cooperation between different national Law Enforcement and judiciary systems. Namely, training and testing data sets considered legal and used in one country have to be shared and accepted in another, while simultaneously observing fundamental rights and substantive or procedural safeguards. The lack of legislation at the national and international level makes this particularly difficult. The availability of such datasets to the scientific community would ensure future advances in the field.

Thus, in order to address the problem of securing European up-to-date high-quality training and testing data sets in the domain of AI in support of Law Enforcement, proposals under this topic should, from a multidisciplinary point of view, identify, assess and articulate the whole set of actions that should be carried out in a coherent framework:

A comparative analysis of existing legal provisions throughout Europe that apply in these cases and their impact, including obstacles for the research community to access datasets used by LEAs and means of overcoming these obstacles;
The identification and definition of legislative changes that could be promoted both at the European and Member State level;
Ethical and operational implications for LEAs;
The identification of the technical developments that should be carried out to sustain all these aspects;
Determination of legal and ethical means at the European level that allow for the creation of European up-to-date, representative and large enough high-quality training and testing data sets for AI, in support of Law Enforcement and available to the scientific community working with LEAs.

Proposals should have a clear dissemination plan, ensuring the uptake of project results by LEAs in their daily work.

Taking into account the European dimension of the topic, the role of EU agencies supporting Law Enforcement should be exploited regarding:

effective channels established between industry and LEAs, closing the gap between public investment and uptake of project results by relevant end-users in their daily work;
increased exchange of experiences, best practices and lessons learnt throughout Europe leading to EU common approaches for opportunity/risk assessment of AI;
better understanding and readiness of policy makers on future trends in AI;
enhanced cooperative operations and synergies between EU LEAs.

Proposals should take into account the existing EU and national projects in this field, as well as build on existing research and articulate a legal, ethical and practical framework to take the best out of the AI based technologies, systems and solutions in the security dimension. Whenever appropriate, the work should complement, build on available resources and contribute to common efforts such as (but not limited to) ASGARD, SIRIUS[3], EPE[4], networks of practitioners [5], AI4EU[6], or activities carried out in the LEIT programme, namely in Robotics[7], Big Data[8], and IoT[9]. As proposals will leverage existing technologies (open source or not), they should show sufficient triage of these technologies to ensure no internalisation of Intellectual Property Rights or security risks as well as demonstrate that such technologies come with adequate license and freedom to operate.

As far as the societal dimension is concerned, proposed AI applications should respond to the needs of individuals and of society as a whole by building and retaining trust. Proposals should analyse the societal implications of AI and its impacts on democracy. Therefore, the values guiding AI and the responsible design practices that encode these values into AI systems should also be critically assessed. It should also be shown that the testing of the tools adequately represents reality. In addition, AI tools should be unbiased (gender, racial, etc.) and designed in such a way that the transparency and explainability of the corresponding decision processes are ensured, which would, amongst other things, reinforce the admissibility of any resulting evidence in court.
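The requirement that AI tools be unbiased and explainable is stated above without a concrete procedure. As one hedged illustration of what a simple bias audit could involve, the Python sketch below computes two common group-fairness gaps (positive-decision rate and false-positive rate) on hypothetical audit data; the data, the protected attribute and the 0.05 tolerance mentioned in the comment are assumptions for this example, not requirements from the call.

```python
# Illustrative only: one simple way to surface group bias in a tool's outputs.
# The metrics and threshold below are assumptions for the sketch, not
# requirements taken from the call text.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical audit data: model decisions, ground truth, and a protected
# attribute (e.g. gender) for each case.
n = 2000
group = rng.integers(0, 2, size=n)                       # two groups: 0 / 1
truth = rng.integers(0, 2, size=n)                       # ground-truth label
decision = (truth ^ (rng.random(n) < 0.1)).astype(int)   # noisy model output

def rate(mask, values):
    # Mean of `values` restricted to `mask`, or NaN if the subgroup is empty.
    return values[mask].mean() if mask.any() else float("nan")

# Demographic parity: difference in positive-decision rates between groups.
dp_gap = abs(rate(group == 0, decision) - rate(group == 1, decision))

# False-positive-rate gap: disparity among truly-negative cases.
neg = truth == 0
fpr_gap = abs(rate((group == 0) & neg, decision) - rate((group == 1) & neg, decision))

print(f"demographic parity gap:  {dp_gap:.3f}")
print(f"false positive rate gap: {fpr_gap:.3f}")
# An audit might flag gaps above an agreed tolerance (e.g. 0.05) for review.
```

Reporting such gaps alongside accuracy is one concrete way a project could evidence the "unbiased" and "transparent" properties the call asks for, without prescribing any particular fairness framework.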

Proposals’ consortia should comprise, besides industrial and research participants, relevant security practitioners, civil society organisations, experts on criminal procedure from a variety of European Member States and Associated Countries, as well as LEAs. Proposals should ensure a multidisciplinary approach and have an appropriate balance of IT specialists and Social Sciences and Humanities experts.

As indicated in the Introduction of this call, proposals should foresee resources for clustering activities with other projects funded under this call to identify synergies and best practices.

The Commission considers that proposals requesting a contribution from the EU of around EUR 17 million would allow this specific challenge to be addressed appropriately. Nonetheless, this does not preclude submission and selection of proposals requesting other amounts.


Expected Impact: Proposals should lead to:

Short term:

Effective contribution to the overall actions of this call;
Development of a European representative and large enough high-quality multilingual and multimodal training and testing dataset available to the scientific community that is developing AI tools in support of Law Enforcement;
EU common approach to AI in support of LEAs, centralized efforts as well as solutions on, e.g., the issue of the huge amount of data needed for AI.

Medium term:

Improved capabilities for LEAs to conduct investigations and analysis using AI, such as a specific environment/platform where relevant AI tools would be tailored to specific needs of the security sector including the requirements of LEAs;
Ameliorated protection and robustness of AI based technologies against cyber threats and attacks;
Raised awareness and understanding of all relevant issues at the European as well as national level, related to the cooperation of the scientific community and Law Enforcement in the domain of cybersecurity and the fight against crime, including cybercrime and terrorism, regarding the availability of the representative data needed to develop accurate AI tools;
Raised awareness of the EU political stakeholders in order to help them to shape a proper legal environment for such activities at EU level and to demonstrate the added value of common practices and standards;
Increased resilience to adversarial AI.

Longer term:

Improved capabilities for trans-border LEA data exchange and collaboration;
Modernisation of work of LEAs in Europe and improvement of their cooperation with other modern LEAs worldwide;
A European, common tactical and human-centric approach to AI tools, techniques and systems for fighting crime and improving cybersecurity in support of Law Enforcement, in full compliance with applicable legislation and ethical considerations;
Fostering of the possible future establishment of a European AI hub in support of Law Enforcement, taking into account the activities of the AI-on-demand platform;
Making a significant contribution to the establishment of a strong supply industry in this sector in Europe and thus enhancing the EU’s strategic autonomy in the field of AI applications for Law Enforcement;
Creation of a unified European legal and ethical environment for the sustainability of the up-to-date, representative and high-quality training and testing datasets needed for AI in support of Law Enforcement, as well as for the availability of these datasets to the scientific community working on these tools;
Development of EU standards in this domain.

The outcome of the proposal is expected to lead to development from Technology Readiness Levels (TRL) 7-8; please see part G of the General Annexes.


Cross-cutting Priorities: Gender; Socio-economic science and humanities


[1]Special focus should be put on verifying the compatibility with: (1) Guidelines of the European Group on Ethics in Science and New Technologies (regulatory framework to be ready in March 2019), (2) General Data Protection Regulation (GDPR).

[2]ASGARD project - (http://www.asgard-project.eu/) aims to contribute to LEA Technological Autonomy, by building a sustainable, long-lasting community for LEAs and the R&D industry. This community will develop, maintain and evolve a best-of-class tool set for the extraction, fusion, exchange and analysis of Big Data, including cyber-offense data for forensic investigation. ASGARD helps LEAs significantly increase their analytical capabilities.

[3]SIRIUS, launched by Europol in October 2017, is a secure web platform for law enforcement professionals in internet-facilitated crime investigations, with a special focus on counter-terrorism.

[4]EPE (Europol Platform for Experts) is a secure, collaborative web platform for specialists in a variety of law enforcement areas.

[5]Such as ILEAnet (https://www.ileanet.eu/) and I-LEAD (i-lead.eu/)

[6]AI4EU is developing the AI-on-demand platform, the central access point to AI resources and tools: http://ai4eu.org/.

[7]For instance exploiting technology developed in H2020 robotics projects in Search and Rescue, support to civil protection, or inspection and maintenance - https://eu-robotics.net/sparc/

[8]http://www.bdva.eu/ppp-projects - such as AEGIS, Lynx or FANDANGO.

[9]MONICA, SecureIoT.


Mandatory project topics: Main topic:

Consortium characteristics

European scope: This funding is European in scope; any company that is part of the European Community may apply to this line.
Type and size of organisations: The consortium design required to process this funding needs:

Project characteristics

Design requirements: Duration:
Technical requirements: see the Specific Challenge above.
Want examples? You can consult here the latest known projects funded under this line, their technologies, budgets and companies.
Eligible cost categories: The eligible cost categories for this line are:
Personnel costs.
Eligible personnel costs cover the hours actually worked by persons directly engaged in carrying out the action. Owners of small and medium-sized enterprises who do not receive a salary, and other natural persons who do not receive a salary, may charge personnel costs on the basis of a scale of unit costs.
Purchase costs.
Other direct costs are divided into the following headings: travel, depreciation, equipment, and other goods and services. Depreciation of equipment is funded, and depreciation of equipment purchased before the project may be included if it is recorded during the project's implementation. The "other goods and services" heading covers the various goods and services purchased by the beneficiaries from external suppliers in order to carry out their tasks.
Subcontracting costs.
In European funding, subcontracting must not concern the core R&D activities of the project. The contractor must be selected by the beneficiary in accordance with the principle of best value for money, under conditions of transparency and equal treatment (in no case requesting fewer than 3 offers). For public entities, subcontracting must follow the laws in force in the contracting party's country.
Depreciation.
Assets.
Other expenses.
Technological maturity: Processing this funding requires a minimum technology level of TRL 5 in the project: the basic elements of the innovation are integrated in such a way that the final configuration is similar to its final application, i.e. it is ready to be used in the simulation of a real environment. Both the technical and economic models of the initial design are improved, and safety aspects, environmental limitations and/or regulatory constraints, among others, have additionally been identified. + info.
Expected TRL:

Funding characteristics

Funding intensity: Non-repayable grant only + info
Non-repayable grant:
1. Eligible countries: described in Annex A of the Work Programme.
A number of non-EU/non-Associated Countries that are not automatically eligible for funding have made specific provisions for making funding available for their participants in Horizon 2020 projects. See the information in the Online Manual.
 
2. Eligibility and admissibility conditions: described in Annex B and Annex C of the Work Programme.  
 
This topic requires the active involvement of at least 5 Law Enforcement Agencies (LEAs) from at least 5 different EU or Associated countries.
The duration of the proposed activity must not exceed 60 months.
Proposal page limits and layout: please refer to Part B of the proposal template in the submission system below.
 
3. Evaluation:
Evaluation criteria, scoring and thresholds are described in Annex H of the Work Programme. 
Submission and evaluation processes are described in the Online Manual.
 
4. Indicative time for evaluation and grant agreements:
Information on the outcome of evaluation (single-stage call): maximum 5 months from the deadline for submission.
Signature of grant agreements: maximum 8 months from the deadline for submission.
 
5. Proposal templates, evaluation forms and model grant agreements (MGA):
Innovation Action:
Specific provisions and funding rates
Standard proposal template
Standard evaluation form
General MGA - Multi-Beneficiary
Annotated Grant Agreement
 
6. Additional provisions:
Horizon 2020 budget flexibility
Classified information
Technology readiness levels (TRL) – where a topic description refers to TRL, these definitions apply
For grants awarded under this topic the beneficiaries must grant access rights for EU institutions, bodies, offices or agencies and Member States under the special conditions for the Specific Objective ‘Secure societies - Protecting freedom and security of Europe and its citizens’. The respective option of Article 31.5 of the Model Grant Agreement will be applied.
Members of consortium are required to conclude a consortium agreement, in principle prior to the signature of the grant agreement.
Grants awarded under the Artificial Intelligence call will be implemented through the use of complementary grants.
The respective options of Article 2, Article 31.6 and Article 41.4 of the Model Grant Agreement will be applied.
For grants awarded under these topics the Commission or Agency may object to a transfer of ownership or the exclusive licensing of results to a third party established in a third country not associated to Horizon 2020. The respective option of Article 30.3 of the Model Grant Agreement will be applied.
 
7. Open access must be granted to all scientific publications resulting from Horizon 2020 actions.
Where relevant, proposals should also provide information on how the participants will manage the research data generated and/or collected during the project, such as details on what types of data the project will generate, whether and how this data will be exploited or made accessible for verification and re-use, and how it will be curated and preserved.
Open access to research data
The Open Research Data Pilot has been extended to cover all Horizon 2020 topics for which the submission is opened on 26 July 2016 or later. Projects funded under this topic will therefore by default provide open access to the research data they generate, except if they decide to opt-out under the conditions described in Annex L of the Work Programme. Projects can opt-out at any stage, that is both before and after the grant signature.
Note that in the evaluation phase, proposals will not be evaluated more favourably because they plan to open or share their data, and will not be penalised for opting out.
Open research data sharing applies to the data needed to validate the results presented in scientific publications. Additionally, projects can choose to make other data available open access and need to describe their approach in a Data Management Plan.
Projects need to create a Data Management Plan (DMP), except if they opt-out of making their research data open access. A first version of the DMP must be provided as an early deliverable within six months of the project and should be updated during the project as appropriate. The Commission already provides guidance documents, including a template for DMPs. See the Online Manual.
Eligibility of costs: costs related to data management and data sharing are eligible for reimbursement during the project duration.
The legal requirements for projects participating in this pilot are in the article 29.3 of the Model Grant Agreement.
 
8. Additional documents:
1. Introduction WP 2018-20
14. Secure societies – protecting freedom and security of Europe and its citizens WP 2018-20
General annexes to the Work Programme 2018-2020
 
Legal basis: Horizon 2020 Regulation of Establishment
Legal basis: Horizon 2020 Rules for Participation
Legal basis: Horizon 2020 Specific Programme
Guarantees:
No guarantees required.
There are no financial conditions for the beneficiary.

Additional call information

Incentive effect: This funding has an incentive effect, so the project may not have started before the funding application was submitted. + info.
Body response time: It is estimated that the approximate response time of the body, once the funding has been processed, is:
Months to response:
Highly competitive:
The total budget of the call amounts to
Total call budget.
De minimis: This funding line is NOT considered "de minimis aid". You can consult the regulations here.

Other advantages

SME seal: Successfully processing this funding makes it possible to obtain the "innovative SME" quality seal, which grants certain tax advantages.
H2020-SU-AI-2020 Secure and resilient Artificial Intelligence technologies, tools and solutions in support of Law Enforcement and citizen protection, cybersecurity operations and prevention and protection aga Specific Challenge:The increasing complexity of security challenges, as well as more and more frequent use of AI in multiple security domain...
No info.
SU-AI02-2020 Secure and resilient Artificial Intelligence technologies, tools and solutions in support of Law Enforcement and citizen protection, cybersecurity operations and prevention and protection aga
as a consortium: Specific Challenge: The increasing complexity of security challenges, as well as more and more frequent use of AI in multiple security domain...
Closed 4 years ago | Next call expected in the month of
SU-AI01-2020 Developing a research roadmap regarding Artificial Intelligence in support of Law Enforcement
as a consortium: Specific Challenge: As indicated in the Coordinated Plan on Artificial Intelligence and in the Cybersecurity Joint Communication [1], there i...
Closed 4 years ago | Next call expected in the month of
SU-AI03-2020 Human factors, and ethical, societal, legal and organisational aspects of using Artificial Intelligence in support of Law Enforcement
as a consortium: Specific Challenge: Advantages of AI are numerous. However, the lack of transparency of AI technologies and tools complicates their acceptanc...
Closed 4 years ago | Next call expected in the month of