HORIZON-CL4-2021-HUMAN-01-24: Tackling gender, race and other biases in AI (RIA)
Non-repayable funding only: €0
European
This call is closed, so you can no longer apply to it.
An upcoming call for this aid is expected, but the exact opening date is not yet known.
Presentation: Consortium. This aid is designed to be applied for as a consortium.
Minimum number of participants.
This aid finances: Projects.


Expected Outcome: Proposal results are expected to contribute to the following expected outcomes:

Increased availability and deployment of unbiased and bias-preventing AI solutions across a wide range of industrial and digital sectors
AI-based solutions for enhancing digital equality and social inclusion for women and girls, and other groups at risk of discrimination, such as ethnic minorities and the LGBTIQ community
Increased involvement of underrepresented persons in the design, development, training and deployment of AI.
Increased awareness, knowledge and skills about trustworthy, bias-free and socially responsible AI in the tech industry and scientific community

Scope: Research demonstrates how bias exacerbates existing inequalities and reinforces gender, racial and other stereotypes in, for instance, the labour market, education, online advertising systems, social media, taxation and the justice system.

Bias in AI can occur in three dimensions: training data, bias in algorithms, and bias in the interpretation of the results. This topic investigates preventing and mitigating bias in AI, focusing on (1) recommender and personalisation systems, (2) algorithmic decision-making, and (3) surveillance software, including facial recognition. Proposals may focus on more than one of these AI-based systems and should clearly identify the expected use-case/s in society.

Testing and assessment of AI systems with real-life data is needed to detect and reduce bias and improve accuracy, in line with the General Data Protection Regulation. Assessing the fairness and social benefit[1] of AI-based systems and gaining more scientific understanding about their transparency and interpretation will be necessary to improve existing methods, and develop new ones in employment, advertising, access to health care, fraud detection, combatting online hate speech, and in general addressing bias affecting people’s ability to participate in the economy and society. This becomes particularly relevant in light of the pandemic and ongoing social justice movements, such as #MeToo and Black Lives Matter.
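As an illustration only (not part of the official topic text), one widely used bias measure of the kind referred to here is the demographic parity difference, i.e. the gap in favourable-outcome rates between demographic groups. The minimal Python sketch below computes it on made-up predictions and group labels; the data and group names are purely hypothetical.

```python
# Minimal sketch of one common group-fairness check: the demographic parity
# difference, i.e. the gap in positive-prediction rates between groups.
# All data and group labels below are illustrative placeholders.
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Return the max difference in positive-prediction rate across groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = favourable decision) and group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, groups))  # ~0.2 for this toy data
```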

In line with the European Commission's priority to strive for a ‘Union of Equality’, the European Pillar of Social Rights,[2] the Gender Equality Strategy 2020-2025,[3] the EU Anti-racism Action Plan 2020-2025,[4] and the LGBTIQ Equality Strategy 2020-2025,[5] proposals are expected to:

Develop technologies and algorithms to evaluate and address bias in AI-based systems. These underlying methods will help address gender, racial and age bias, as well as bias against persons with disabilities, people from socially disadvantaged backgrounds, and the LGBTIQ community in AI-based systems, and support the deployment of such bias-free AI-based solutions.

Develop standardized processes to assess and quantify the trustworthiness of the developed AI systems, in particular assessment of bias, diversity, non-discrimination and intersectionality,[6] based on different types of bias measures. This might include a methodology for considering diversity and representativeness of data, ensuring the reliability, traceability and explainability of the AI systems, testing models on various subgroups (see the illustrative sketch after this list) and enabling appropriate stakeholder participation.[7] It could also include mechanisms to flag and remove risks of biases and discrimination.

Develop recommender and algorithmic decision-making systems which reduce bias in the selected use case.

Conduct training and awareness raising on preventing gender and intersectional bias for AI researchers, students and practitioners in line with the Digital Education Action Plan 2021-2027.[8] Training should also target practitioners of AI as a whole, to avoid the topic being limited to those with an already existing interest in socially responsible AI. These activities should be carried out in cooperation with the Public-Private Partnership on AI, Data and Robotics[9] and other relevant initiatives and projects (such as the AI-on-demand platform).

Cooperate with the Public-Private Partnership on AI, Data and Robotics[10] and other relevant partnerships across a wide range of industrial and digital sectors, including representatives of international digital professional associations (e.g. IEEE), the computing industry, hi-tech start-ups / SMEs etc., to further promote the use and uptake of the developed tools.

Proposals should focus on the development of tools and processes for design, testing and validation, including software engineering methodologies. The proposed approaches should also build tools to support deployment and uptake, auditing and certification (where appropriate). The inclusion of underrepresented and marginalised groups in the design, development and training of the AI systems, and a transdisciplinary approach involving multidisciplinary and intersectoral partners in the consortium, will be essential.
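Purely as an illustration (not part of the official topic text), the sketch below shows one simple way "testing models on various subgroups" can be operationalised: computing a per-group accuracy and flagging groups that fall noticeably below the overall level. The data, group names and tolerance threshold are hypothetical.

```python
# Illustrative sketch of subgroup testing: compute per-group accuracy and flag
# groups whose performance falls noticeably below the overall level.
# All data, group names and the tolerance threshold are hypothetical.
import numpy as np

def subgroup_report(y_true, y_pred, groups, tolerance=0.10):
    overall_acc = (y_true == y_pred).mean()
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        acc = (y_true[mask] == y_pred[mask]).mean()
        report[g] = {"accuracy": acc, "flagged": acc < overall_acc - tolerance}
    return overall_acc, report

# Toy labels, predictions and group membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1])
groups = np.array(["A"] * 6 + ["B"] * 6)

overall, per_group = subgroup_report(y_true, y_pred, groups)
print(overall, per_group)
```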

All proposals are expected to embed mechanisms to assess and demonstrate progress towards their objectives of meeting the key requirements for removing bias (with qualitative and quantitative KPIs, demonstrators, benchmarking and progress monitoring), and share results with the European R&D community, through the AI-on-demand platform, as well as the GEAR[11] tool to maximise re-use of results and efficiency of funding. It is essential to ensure that the publicly available results from relevant EU funded research projects (e.g. SHERPA, SIENNA, Panelfit, TechEthos) are taken into account.

Activities are expected to achieve at least TRL 5-6 by the end of the project.

The consortia should exchange information and build synergies with the relevant projects funded under Horizon Europe, Work Programme 2021-2022, WIDENING PARTICIPATION AND STRENGTHENING THE EUROPEAN RESEARCH AREA[12].

All proposals are expected to allocate tasks to cohesion activities with the PPP on AI, Data and Robotics and funded actions related to this partnership, including the CSA HORIZON-CL4-2021-HUMAN-01-02.


Specific Topic Conditions: Activities are expected to start at TRL 3-4 and achieve TRL 5-6 by the end of the project (see General Annex B).




Cross-cutting Priorities: Social sciences and humanities; Artificial Intelligence; Digital Agenda


[1]The EC-funded Expert Group on “Gendered Innovations” recommends a rigorous social benefit review: http://genderedinnovations.stanford.edu/case-studies/machinelearning.html#tabs-2. See also the policy review on ‘Gendered Innovations 2: How Inclusive Analysis contributes to Research and Innovation’ (European Commission, DG Research and Innovation, 2020) and methodologies and case studies therein dedicated to AI, addressing gender and intersectional analysis in machine learning and robotics.

[2]The European Pillar of Social Rights: https://ec.europa.eu/commission/sites/beta-political/files/social-summit-european-pillar-social-rights-booklet_en.pdf

[3]Gender Equality Strategy 2020-2025: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0152&from=EN

[4]EU Anti-racism Action Plan 2020-2025: https://ec.europa.eu/info/sites/info/files/a_union_of_equality_eu_action_plan_against_racism_2020_-2025_en.pdf

[5]LGBTIQ Equality Strategy 2020-2025: https://ec.europa.eu/info/files/lgbtiq-equality-strategy-2020-2025_en

[6]Intersectionality considers how different social or political identities, such as gender, race, sexual orientation, ability, ethnicity, socio-economic background, age and religion, intersect and can result in different forms of discrimination or privilege.

[7]See ALTAI - The Assessment List on Trustworthy Artificial Intelligence: https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-assessment-list-trustworthy-artificial-intelligence

[8]Digital Education Action Plan 2021-2027, p. 12: “The [Ethics] Guidelines [for Trustworthy Artificial Intelligence] will be accompanied by a training programme for researchers and students on the ethical aspects of AI and include a target of 45% of female participation in the training activities” https://ec.europa.eu/education/sites/education/files/document-library-docs/deap-communication-sept2020_en.pdf

[9]https://ai-data-robotics-partnership.eu/

[10]https://ai-data-robotics-partnership.eu/

[11]https://eige.europa.eu/gender-mainstreaming/toolkits/gear

[12]In particular: HORIZON-WIDERA-2021-ERA-01-91. ENSURING RELIABILITY AND TRUST IN QUALITY OF RESEARCH ETHICS EXPERTISE IN THE CONTEXT OF NEW/EMERGING TECHNOLOGIES.


Mandatory project themes. Main theme: Computer sciences, information science and bioinformatics; Artificial intelligence, intelligent systems, mult…; Social sciences and humanities; Gender in computer sciences; Artificial Intelligence; Digital Agenda; Machine learning, statistical data processing and…

Consortium characteristics

Scope: European. The aid is European; any company that is part of the European Community can apply to this line.
Type and size of organisations: The consortium design required to process this aid needs:

Characteristics of the project

Design requirements per participant: Duration:
Technical requirements: Proposal results are expected to contribute to the expected outcomes listed above.
Do you want examples? You can consult here the latest known projects funded under this line, their technologies, their budgets and their companies.
Financial chapters: The expense categories financed under this line are:
Personnel costs.
Expenses for personnel working directly on the project, calculated from actual hours worked, actual company costs, and fixed rates for certain employees, such as the company's owners.
Subcontracting costs.
Payments to external third parties to perform specific tasks that cannot be performed by the project beneficiaries.
Purchase costs.
They include the acquisition of equipment, amortization, materials, licenses or other goods and services necessary for the execution of the project.
Other cost categories.
Miscellaneous expenses such as financial costs, audit certificates or participation in events not covered by other categories
Indirect costs.
Overhead costs not directly assignable to the project (such as electricity, rent, or office space), calculated as a fixed 25% of eligible direct costs (excluding subcontracting).
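For example (figures purely illustrative): a project with €200,000 of eligible direct costs, €20,000 of which is subcontracting, would receive a flat-rate indirect cost contribution of 25% × €180,000 = €45,000.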
Technological maturity: This aid requires a minimum technology readiness level of TRL 4: the components of the innovation project have been identified, and the aim is to establish whether these individual components have the capabilities to operate in an integrated manner, working together as a system.
Expected TRL:

Characteristics of financing

Aid intensity: Non-repayable funding only.
Non-repayable funding:
For the eligible budget, the intensity of the aid in the form of non-repayable funding can reach 100%.
The funding rate for RIA projects is 100% of the eligible costs for all types of organisations.
Guarantees:
No guarantees are required.
There are no financial conditions for the beneficiary.

Additional information about the call

Incentive effect: This aid has no incentive effect.
Response from the awarding body: the approximate response time once the aid has been processed is estimated at:
Months to respond:
Competitiveness: Very competitive.
We do not know the total budget of the line.
De minimis: This funding line is NOT considered “de minimis aid”. You can consult the regulations here.

Other advantages

SME seal: Successfully securing this aid makes it possible to obtain the “Innovative SME” quality seal, which provides certain tax advantages.