Expected Outcome: Proposal results are expected to contribute to the following expected outcomes:

- Increased availability and deployment of unbiased and bias-preventing AI solutions across a wide range of industrial and digital sectors
- AI-based solutions for enhancing digital equality and social inclusion for women and girls, and other groups at risk of discrimination, such as ethnic minorities and the LGBTIQ community
- Increased involvement of underrepresented persons in the design, development, training and deployment of AI
- Increased awareness, knowledge and skills about trustworthy, bias-free and socially responsible AI in the tech industry and scientific community
Scope: Research demonstrates how bias exacerbates existing inequalities and reinforces gender, racial and other stereotypes in, for instance, the labour market, education, online advertising systems, social media, taxation and the justice system.
Bias in AI can arise in three dimensions: in the training data, in the algorithms, and in the interpretation of the results. This topic investigates preventing and mitigating bias in AI, focusing on (1) recommender and personalisation systems, (2) algorithmic decision-making, and (3) surveillance software, including facial recognition. Proposals may focus on more than one of these AI-based systems and should clearly identify the expected use-case(s) in society.
Testing and assessment of AI systems with real-life data is needed to detect and reduce bias and improve accuracy, in line with the General Data Protection Regulation. Assessing the fairness and social benefit[1] of AI-based systems, and gaining more scientific understanding of their transparency and interpretation, will be necessary to improve existing methods and develop new ones in employment, advertising, access to health care, fraud detection, combatting online hate speech and, in general, addressing bias that affects people's ability to participate in the economy and society. This becomes particularly relevant in light of the pandemic and ongoing social justice movements, such as #MeToo and Black Lives Matter.
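The kind of subgroup testing referred to above can be sketched very minimally: evaluate a model's accuracy separately for each demographic subgroup and compare the results. The sketch below is illustrative only and is not prescribed by the call; all names and data are hypothetical.

```python
# Illustrative sketch: per-subgroup accuracy evaluation, a basic step in
# detecting bias in an AI system's outputs. Data and group labels below
# are hypothetical.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def accuracy_by_subgroup(predictions, labels, groups):
    """Accuracy computed per subgroup; a large gap can indicate bias."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        per_group[g] = accuracy([predictions[i] for i in idx],
                                [labels[i] for i in idx])
    return per_group

# Hypothetical binary predictions, ground-truth labels and subgroup tags:
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
labels = [1, 0, 0, 1, 1, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(accuracy_by_subgroup(preds, labels, groups))
```

Here subgroup "a" scores 0.75 and subgroup "b" only 0.5: the gap, not either number alone, is what a bias assessment would flag for investigation.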
In line with the European Commission's priority to strive for a 'Union of Equality', the European Pillar of Social Rights,[2] the Gender Equality Strategy 2020-2025,[3] the EU Anti-racism Action Plan 2020-2025,[4] and the LGBTIQ Equality Strategy 2020-2025,[5] proposals are expected to:
- Develop technologies and algorithms to evaluate and address bias in AI-based systems. These underlying methods will help address gender, racial and age bias, as well as bias against persons with disabilities, people from socially disadvantaged backgrounds, and the LGBTIQ community in AI-based systems, and support the deployment of such bias-free AI-based solutions.
- Develop standardized processes to assess and quantify the trustworthiness of the developed AI systems, in particular assessment of bias, diversity, non-discrimination and intersectionality[6] – based on different types of bias measures. This might include a methodology for considering the diversity and representativeness of data, ensuring the reliability, traceability and explainability of the AI systems, testing models on various subgroups, and enabling appropriate stakeholder participation.[7] It could also include mechanisms to flag and remove risks of bias and discrimination.
- Develop recommender and algorithmic decision-making systems which reduce bias in the selected use-case(s).
- Conduct trainings and awareness-raising on preventing gender and intersectional bias for AI researchers, students and practitioners, in line with the Digital Education Action Plan 2021-2027.[8] Trainings should also target AI practitioners as a whole, so that the topic is not limited to those with a pre-existing interest in socially responsible AI. These activities should be carried out in cooperation with the Public-Private Partnership on AI, Data and Robotics[9] and other relevant initiatives and projects (such as the AI-on-demand platform).
- Cooperate with the Public-Private Partnership on AI, Data and Robotics[10] and other relevant partnerships across a wide range of industrial and digital sectors, including representatives of international digital professional associations (e.g. IEEE), the computing industry, hi-tech start-ups/SMEs, etc., to further promote the use and uptake of the developed tools.
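The "different types of bias measures" mentioned above can be illustrated with one of the simplest: the demographic-parity difference, i.e. the gap in positive-outcome rates between subgroups. This is a minimal sketch under hypothetical names and data, not a measure mandated by the call.

```python
# Illustrative sketch: demographic-parity difference, one common bias
# measure -- the gap between the highest and lowest positive-outcome
# rates across subgroups. Group names and decisions are hypothetical.

def positive_rate(outcomes):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Max gap in positive-outcome rate across subgroups.

    outcomes_by_group maps each subgroup label to that subgroup's list
    of binary model decisions. A value of 0.0 means all subgroups
    receive positive outcomes at the same rate.
    """
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical hiring-recommendation decisions for two subgroups:
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 1],  # 6/8 = 0.75 positive rate
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 = 0.25 positive rate
}
print(demographic_parity_difference(decisions))  # 0.5
```

A standardized assessment process of the kind the call asks for would combine several such measures (and their intersectional variants), since no single metric captures all forms of bias.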
Proposals should focus on the development of tools and processes for design, testing and validation, including software engineering methodologies. The proposed approaches should also include tools to support deployment, uptake, auditing and, where appropriate, certification. The inclusion of underrepresented and marginalised groups in the design, development and training of the AI systems, and a transdisciplinary approach involving multidisciplinary and intersectoral partners in the consortium, will be essential.
All proposals are expected to embed mechanisms to assess and demonstrate progress towards their objectives of meeting the key requirements for removing bias (with qualitative and quantitative KPIs, demonstrators, benchmarking and progress monitoring), and share results with the European R&D community, through the AI-on-demand platform, as well as the GEAR[11] tool to maximise re-use of results and efficiency of funding. It is essential to ensure that the publicly available results from relevant EU funded research projects (e.g. SHERPA, SIENNA, Panelfit, TechEthos) are taken into account.
Activities are expected to achieve at least TRL 5-6 by the end of the project.
The consortia should exchange information and build synergies with the relevant projects funded under the Horizon Europe Work Programme 2021-2022 'Widening participation and strengthening the European Research Area'[12].
All proposals are expected to allocate tasks to cohesion activities with the PPP on AI, Data and Robotics and funded actions related to this partnership, including the CSA HORIZON-CL4-2021-HUMAN-01-02.
Specific Topic Conditions: Activities are expected to start at TRL 3-4 and achieve TRL 5-6 by the end of the project – see General Annex B.
Cross-cutting Priorities: Social sciences and humanities; Artificial Intelligence; Digital Agenda
[1] The EC-funded Expert Group on "Gendered Innovations" recommends a rigorous social benefit review: http://genderedinnovations.stanford.edu/case-studies/machinelearning.html#tabs-2. See also the policy review 'Gendered Innovations 2: How Inclusive Analysis Contributes to Research and Innovation' (European Commission, DG Research and Innovation, 2020) and the methodologies and case studies therein dedicated to AI, addressing gender and intersectional analysis in machine learning and robotics.
[2] The European Pillar of Social Rights: https://ec.europa.eu/commission/sites/beta-political/files/social-summit-european-pillar-social-rights-booklet_en.pdf
[3] Gender Equality Strategy 2020-2025: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0152&from=EN
[4] EU Anti-racism Action Plan 2020-2025: https://ec.europa.eu/info/sites/info/files/a_union_of_equality_eu_action_plan_against_racism_2020_-2025_en.pdf
[5] https://ec.europa.eu/info/files/lgbtiq-equality-strategy-2020-2025_en
[6] Intersectionality considers how different social or political identities, such as gender, race, sexual orientation, ability, ethnicity, socio-economic background, age and religion, intersect and can result in different forms of discrimination or privilege.
[7] See ALTAI, the Assessment List on Trustworthy Artificial Intelligence: https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-assessment-list-trustworthy-artificial-intelligence
[8] Digital Education Action Plan 2021-2027, p. 12: "The [Ethics] Guidelines [for Trustworthy Artificial Intelligence] will be accompanied by a training programme for researchers and students on the ethical aspects of AI and include a target of 45% of female participation in the training activities" https://ec.europa.eu/education/sites/education/files/document-library-docs/deap-communication-sept2020_en.pdf
[9] https://ai-data-robotics-partnership.eu/
[10] https://ai-data-robotics-partnership.eu/
[11] https://eige.europa.eu/gender-mainstreaming/toolkits/gear
[12] In particular: HORIZON-WIDERA-2021-ERA-01-91, 'Ensuring reliability and trust in quality of research ethics expertise in the context of new/emerging technologies'.