Why the “Human in the Loop” Is Critical When Using AI to Develop Symbols for AAC Users

David Banes

Introduction

Artificial intelligence is beginning to change the way we create, adapt and personalise communication supports. For people who use Augmentative and Alternative Communication, or AAC, this creates exciting possibilities. AI can help generate symbols more quickly, support culturally relevant imagery, respond to local vocabulary needs, and reduce the long delays that often exist between identifying a communication need and having a usable symbol available.

But when we talk about AI-generated symbols for AAC, one principle must remain central: there must always be a human in the loop.

This is not simply a technical safeguard. It is an ethical, linguistic, cultural and communication requirement. AAC symbols are not just pictures. They are part of a person's voice. They help people express needs, preferences, feelings, questions, opinions, humour, identity and relationships. When AI is used to create those symbols, human judgement is essential to ensure that the result is meaningful, respectful, understandable and safe for the person who will rely on it.

Symbols are communication, not decoration

At first glance, generating symbols may seem like a visual design task. A person enters a word or phrase, the AI creates an image, and the image is placed on a communication board or device. But AAC symbols are not illustrations in the ordinary sense. They are communication tools.

A symbol must do more than look attractive. It must help a person understand and express a concept. It must be clear enough to support recognition, consistent enough to fit within a wider symbol system, and flexible enough to work across different communication contexts. A symbol for 'help', for example, is not just an image of one person assisting another. It may need to show urgency, choice, dependence, independence, social interaction or a general request for support. The intended meaning matters.

AI may generate an image that looks visually impressive but fails as a communication symbol. It might include too much detail, show the wrong action, rely on cultural assumptions, or introduce visual distractions. A human reviewer, especially someone with knowledge of AAC and the individual user, can judge whether the symbol actually supports communication.

Image: a symbol for 'communication'.

AAC users are not a single group

AAC users include people with a wide range of ages, disabilities, languages, cultures, life experiences and communication needs. Some people use symbols to support emerging language. Others use them to access complex academic, professional or social vocabulary. Some need highly concrete images; others can understand more abstract representations. Some communicate through direct touch, while others use eye gaze, switches, scanning, partner-assisted access or other methods.

AI tools do not automatically understand these differences. Without human guidance, they may produce symbols that are too childish for an adult, too abstract for a young learner, too visually complex for someone with cortical visual impairment, or too culturally specific for a user in another country.

The 'human in the loop' helps match the symbol to the user. This may involve a speech and language therapist, teacher, family member, support worker, symbol designer, AAC specialist, or, most importantly, the AAC user themselves. Their role is to ask: does this symbol make sense for this person, in this setting, for this purpose?

Image: a therapist and AAC user involved in symbol acceptance.

Meaning is culturally shaped

Symbols are never culturally neutral. The way people dress, eat, greet each other, show emotions, use tools, travel, pray, learn, work and socialise varies across communities. A symbol that is obvious in one culture may be confusing or inappropriate in another.

AI systems are often trained on large collections of images from the internet. These datasets may overrepresent certain cultures, countries, languages, body types, environments and lifestyles. As a result, the AI may default to Western assumptions about homes, schools, clothing, family structures, technology, food or public services. It may generate images that do not reflect the lived reality of the AAC user.

This matters because AAC should support identity and belonging. A child in Kenya, India, Brazil or Jordan should not have to communicate using symbols that only represent North American or European environments. A person should be able to see familiar objects, local foods, appropriate clothing, relevant religious or cultural practices, and people who look like those in their community.

Human review is therefore essential. People with local knowledge can identify when a symbol feels wrong, when it reinforces stereotypes, or when it needs to be adapted. They can also help guide AI prompts so that symbols are generated in ways that better reflect local cultures and languages.

Image: culturally specific symbols, including 'Bulgarianness', 'April in Bulgaria' and 'prayer'.

AI can reproduce bias and stereotypes

AI systems can produce biased outputs because they learn from biased data. In symbol generation, this can show up in subtle but important ways. A prompt for 'doctor' may generate a man. A prompt for 'nurse' may generate a woman. A prompt for 'teacher' may produce a particular age, race or gender. A prompt for 'disabled person' may focus on a wheelchair, even though disability is far more diverse. A prompt for 'family' may assume a narrow family structure.

For AAC users, these patterns can shape how the world is represented. If a communication system repeatedly presents men as leaders, women as carers, disabled people as passive, or certain cultures as 'other', it sends powerful messages about who belongs and who has agency.

Human oversight helps identify and challenge these biases. Reviewers can ask whether symbols show diversity in age, gender, ethnicity, disability, body shape and social role. They can ensure that people with disabilities are represented as active participants, not simply as recipients of care. They can also check whether sensitive concepts are presented respectfully and accurately.

Image: a set of scales weighted down on one side to show bias.

Some concepts require careful judgement

Many AAC symbols relate to everyday activities: eat, drink, play, sleep, go, stop, more, finished. But AAC users also need access to more complex and sensitive language. They may need symbols for pain, consent, bullying, abuse, medication, privacy, relationships, mental health, religion, identity, grief, politics or personal safety.

These concepts cannot be left to AI without careful human review. A poorly generated symbol for a sensitive topic may be misleading, frightening, infantilising or unsafe. A symbol for 'pain' needs to help the person communicate where and how they hurt. A symbol for 'private' must be clear without being inappropriate. A symbol for 'no' or 'stop' may be vital for safeguarding and personal autonomy.

The human in the loop ensures that sensitive symbols are handled with care. This includes checking not only the image itself, but also the label, the intended use, the age appropriateness, and how the symbol fits within a wider communication strategy.

Image: symbols including 'pain', 'private' and 'feelings'.

Consistency matters in symbol systems

AAC users often learn symbols over time. They build familiarity with a visual language. Consistency helps users recognise patterns, develop vocabulary, and move from single words to more complex communication.

AI-generated images can vary widely from one prompt to another. The same character may look different across symbols. The style may change. The level of detail may shift. A symbol may use a realistic image in one case and a cartoon-like image in another. For some users, this inconsistency can make learning harder.

Human review helps maintain coherence. A person can check whether the symbol matches the style, structure and design principles of the wider symbol set. They can decide whether a symbol should use a person, an object, an action scene, a facial expression, an arrow, or another visual convention. They can simplify or reject images that do not fit.

This is particularly important when AI is used to expand existing open symbol sets or create bilingual and multilingual resources. The generated symbol should not feel like a random image dropped into a communication board. It should belong within the user's communication system.

Image: a communication board using Tawasol symbols.

The AAC user should be part of the loop

When we say 'human in the loop', we should not only mean professionals. The AAC user should be involved wherever possible. They are the person most affected by the symbol. Their interpretation, preference and lived experience matter.

This involvement may look different for different users. Some may give direct feedback: 'I like this one', 'that does not mean school', 'make it look like my bus', or 'that person should look older'. Others may show through use whether a symbol is understood, selected, avoided or confused with another symbol. Observation, partner feedback and structured trials can all help.

Including AAC users in the process respects their agency. It also improves quality. A symbol that makes sense to a designer may not make sense to the person using it. A symbol that appears technically correct may not feel right. Human-in-the-loop design should therefore be participatory, not merely supervisory.

Image: a group of wheelchair users and others seated in chairs.

AI is useful, but it should not be the final authority

AI can be a powerful assistant in symbol development. It can help generate first drafts, explore visual options, localise content, create culturally specific examples, and speed up the production of vocabulary that has been missing from traditional symbol sets. This is especially valuable in low-resource languages or communities where AAC materials are scarce.

However, AI should not be treated as the final authority on meaning. It does not know the user. It does not understand communication intent in the way people do. It does not carry responsibility for the consequences of misunderstanding. Humans do.

A strong workflow might involve using AI to generate candidate symbols, followed by review against AAC design criteria, cultural and linguistic checks, accessibility assessment, user testing, and final approval. This process does not remove the efficiency benefits of AI. It makes those benefits safer and more useful.
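
The sketch below is a minimal, hypothetical illustration of that kind of workflow, written in Python. The stage names, data fields and function names are assumptions made for the example only; they do not describe any particular AAC tool or the Global Symbols platform, and in practice each decision would be made by people, not code. The point it illustrates is simply that an AI-generated candidate cannot reach the symbol set without an explicit human approval at every stage.

```python
# Hypothetical sketch of a human-in-the-loop review pipeline for
# AI-generated symbol candidates. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class CandidateSymbol:
    label: str                      # the word or phrase the symbol represents
    image_ref: str                  # a reference to the AI-generated image
    notes: list[str] = field(default_factory=list)


# Review stages, in the order described in the paragraph above. Each stage is
# a human judgement recorded as an explicit pass/fail decision.
REVIEW_STAGES = [
    "AAC design criteria",
    "Cultural and linguistic check",
    "Accessibility assessment",
    "User testing",
    "Final approval",
]


def review(symbol: CandidateSymbol, decisions: dict[str, bool]) -> bool:
    """Return True only if every stage has been explicitly approved.

    A missing stage counts as not approved, so an AI-generated candidate
    can never skip human review.
    """
    for stage in REVIEW_STAGES:
        if not decisions.get(stage, False):
            symbol.notes.append(f"Blocked at: {stage}")
            return False
    return True


if __name__ == "__main__":
    candidate = CandidateSymbol(label="help", image_ref="help-draft-001.png")
    verdicts = {
        "AAC design criteria": True,
        "Cultural and linguistic check": True,
        "Accessibility assessment": True,
        "User testing": False,  # the AAC user did not recognise the image
    }
    if review(candidate, verdicts):
        print(f"'{candidate.label}' approved for the symbol set")
    else:
        print(f"'{candidate.label}' returned for rework: {candidate.notes}")
```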

Image: symbol voting criteria.

Human oversight protects trust

Trust is central to AAC. Users, families, educators and clinicians need confidence that a symbol means what it is intended to mean. They need to know that symbols have been checked, that harmful or confusing images have been removed, and that the system is designed around the user's communication rights.

If AI-generated symbols are introduced without review, mistakes can quickly undermine confidence. A single inappropriate or confusing symbol may make families or professionals reluctant to use AI-supported tools. Worse, it may limit the AAC user's ability to communicate clearly.

Human oversight helps build trust. It shows that AI is being used responsibly, with respect for the person's language, identity and autonomy.

Image: symbols including 'AI' shown inside a chip, 'justice' and 'me'.

The future is collaborative

The future of AI in AAC should not be framed as humans versus machines. The best model is collaborative. AI can increase speed, variety and access. Humans provide meaning, context, ethics, lived experience and accountability.

For AAC users, this balance is critical. Communication is a human right. Symbols are one pathway to exercising that right. When AI helps create those symbols, we must ensure that technology serves the communicator, rather than forcing the communicator to adapt to whatever the technology produces.

The human in the loop is not a barrier to innovation. It is what makes innovation responsible. It ensures that AI-generated symbols are not only visually appealing, but communicatively useful, culturally relevant, accessible, respectful and grounded in the real lives of AAC users.

In the end, AAC symbol development is not about producing images at scale. It is about supporting people to say what they want to say, when they want to say it, in ways that others can understand. AI can help us do that faster and more creatively. But only human judgement, and especially the involvement of AAC users themselves, can ensure that we do it well.

Image: a diagram of the human in the loop for AI, with boxes around the edge for each of the article's headings.

© 2026 Global Symbols CIC