Project description
Digital technologies have become pervasive in our daily lives and have fundamentally changed the ways we communicate and interact with information and with each other. We now have access to digital content anytime, anywhere and in a wide palette of formats, ranging from images, videos, text and speech to virtual and augmented reality experiences. This has made smartphones and similar mobile devices an invaluable source of information and a means of connection to the world and to others. Yet, by relying almost exclusively on visual and auditory feedback, these devices pose evident accessibility issues.
Current smartphones and tablets provide accessibility features, but mostly in the form of text-to-speech, which can be impractical and affect literacy, or through connectivity to refreshable Braille displays, which require an additional peripheral that is often bulky and expensive, or too limited when small. These devices still deny visually impaired users access to graphical content and complex notations. One solution would be to develop a tablet specifically for the visually impaired, but few such devices exist, and those available do not tackle all of these challenges.
ABILITY will address these challenges by proposing a novel, cost-effective actuation mechanism for a multiline Braille display, relying on fewer, remote actuators able to control the Braille cells independently, together with a tablet providing innovative multitouch, localised vibrotactile feedback. The device will offer multisensory interactions and feedback, leveraging AI algorithms for image analysis, for predictive writing, and for adapting the device to the users' needs and behaviour. The goal is to deliver a multisensory device covering the wide range of visual disabilities and needs of the visually impaired population, through combinations of tactile, visual and auditory feedback. To this end, ABILITY will adopt a user-centred design approach throughout the project, involving users iteratively in the different design and evaluation stages.
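The proposal does not specify how the independently controlled Braille cells would be addressed; as a purely illustrative sketch, the state of each cell can be captured in a single byte, following the standard dot-to-bit mapping used by the Unicode Braille Patterns block (dot n sets bit n−1, starting at U+2800). A multiline display is then just a small frame buffer of such bytes:

```python
def braille_char(dots):
    """Map a set of raised dots (1-8) to its Unicode Braille character.

    Unicode Braille Patterns start at U+2800; raised dot n sets bit n-1.
    """
    code = 0x2800
    for d in dots:
        if not 1 <= d <= 8:
            raise ValueError(f"invalid dot number: {d}")
        code |= 1 << (d - 1)
    return chr(code)

# One byte per cell suffices: e.g. a hypothetical 4-line x 20-cell
# display is an 80-byte frame buffer sent to the remote actuators.
print(braille_char({1}))     # dot 1 -> '⠁' (letter 'a' in English Braille)
print(braille_char({1, 2}))  # dots 1 and 2 -> '⠃' (letter 'b')
```

The cell geometry and transport to the actuators are assumptions for illustration only; the actual ABILITY actuation mechanism is described elsewhere in the proposal.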