
GenAI for AAC symbol development: recognising bias

Generative AI, like tools based on models such as DALL-E, Stable Diffusion and Flux, can create images from text descriptions. For AAC, this means typing in a prompt such as "a happy child playing football in a park" and hoping the result matches your expectations and fits the setting or situation you feel a potential user will recognise. But what may make this harder than we anticipate is an issue known as 'bias'.
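
To make the prompt-to-image step concrete, here is a minimal sketch using the open-source Hugging Face diffusers library with a Stable Diffusion checkpoint. The model name, prompt and file name are illustrative assumptions, not part of any particular AAC product.

```python
# Minimal text-to-image sketch using the Hugging Face "diffusers" library.
# The checkpoint name and output path are examples, not a recommendation.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion pipeline (weights download on first use).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # move to a GPU if one is available

# The prompt is the main lever a symbol creator has, so its wording
# strongly shapes who and what ends up in the generated image.
prompt = "a happy child playing football in a park, simple flat AAC symbol style"
image = pipe(prompt).images[0]

image.save("football_symbol_candidate.png")
```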

Biases in GenAI

GenAI isn't perfect; it's trained on massive datasets from the internet, which are full of human biases. When creating AAC symbols, these biases can creep in, leading to symbols that don't represent everyone fairly. Bias in AI means the system favours certain groups over others, often based on race, gender, culture, or ability. In AAC, this can mean symbols that assume a "default" person is white, male, or able-bodied, leaving others out. For example, here are some common sources of bias, each with a real-life example in AAC symbols:

  1. Training Data Bias: "Doctor" is almost always generated as a white man in a lab coat.
  2. Model Design Bias: A company removes many images of disabled people "to avoid offence", so very few wheelchair symbols are generated.
  3. Prompt and Interface Bias: The app only suggests "mother + father + children" when you type "family".
  4. Safety Filters: Darker skin tones sometimes get lightened or blocked by filters.

So it is important to look out for the following (see the short prompt-variation sketch after this list):

  • Gender and Racial Stereotypes: Tools may amplify existing biases. For instance, generating images for "CEO" might mostly show white men, while "nurse" is mostly shown as a woman, and this extends to AAC symbols where professions or actions are depicted unfairly.
  • Cultural Oversights: AI might struggle with diverse languages or dialects, making symbols less relevant for non-Western cultures. In AAC, a symbol for "family" could default to a nuclear family structure, ignoring extended families common in many societies.
  • Errors in AAC-Specific Tools: In photo-based AAC vocab generators, algorithms have shown biases in recognising objects or people from diverse backgrounds, leading to incomplete or stereotypical symbols.
  • Humour and Prejudice: Even in fun elements like AI-generated humour for AAC, models can reinforce stereotypes, as seen in studies on generative AI outputs.
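
One practical way to spot these defaults is to generate several candidates for the same concept, each with an explicitly varied description, and put them side by side in front of human reviewers. The sketch below assumes the same Stable Diffusion setup as above; the concept, descriptions and file names are made up for illustration and are not part of Symbol Creator AI.

```python
# Sketch of a simple prompt-variation check: generate several candidates
# for one concept with explicitly varied descriptions, then save them
# for review by AAC users, therapists, and other experts.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

concept = "doctor"
# Explicit variations so the model cannot fall back on a single "default" person.
descriptions = [
    "a Black female doctor in a clinic",
    "an older male doctor who uses a wheelchair",
    "a young South Asian doctor talking with a child",
]

for i, description in enumerate(descriptions):
    prompt = f"{description}, simple flat AAC symbol style, plain background"
    image = pipe(prompt).images[0]
    # Each candidate is kept for side-by-side human review, not auto-selected.
    image.save(f"{concept}_candidate_{i}.png")
```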

The Symbol Creator AI team aim to put further training in place after the beta testing period to ensure:

  • Diverse Training Data: Using balanced datasets that include global perspectives.
  • Human Oversight: Always involving AAC users, therapists, and experts in reviewing AI outputs.
  • Ethical Guidelines: Building AI that empowers users without overriding their intent.

GenAI has the potential to democratise AAC symbol creation, but like any tool, it's only as good as how we use it. By recognising and addressing bias, and staying vigilant as these tools evolve, we can ensure they provide another route to reflecting personal preferences in symbol style selection.

© 2025 Global Symbols CIC