3 key perspectives when designing for accessibility
By Christie Wong, November 2023
Insightful, practical takeaways that I learned from my time working with a client with disabilities.
It is encouraging to see that accessibility — the practice of making information available and usable for people with a wide range of functional abilities — is gradually moving to the forefront for businesses in the technology sector. Many tech professionals are also seeking training in WCAG, the Web Content Accessibility Guidelines, as well as learning about methodologies that encourage inclusivity.
Yet a part of me wonders: are we merely throwing these buzzwords into our conversations without any substantial understanding of the subject? As a user experience (UX) designer who creates websites, apps, and platforms, I had this underlying, gnawing feeling that there was a gap between following accessibility standards like WCAG and actually designing a product that would genuinely benefit those with various accessibility needs.
So, as part of my training, I decided to volunteer in the Distance Computer Comfort program at the Neil Squire Society in order to gain first-hand experience working with someone who has a disability. The Neil Squire Society is an organization that empowers Canadians with disabilities through accessible technology devices and programs; in the Distance Computer Comfort program, I would help a client with disabilities become more comfortable using a computer.
I contacted Neil Squire Society about my interest, and the program coordinator matched me with a client whose needs and goals aligned with the skills I could teach. To keep the client anonymous, I will use the name “Hunter” and the gender pronouns “they/them” to refer to the client throughout this blog post.
A bit of background on Hunter’s disability: they had a childhood stroke that led to secondary dystonia (involuntary muscle contractions that can be painful) affecting one side of their body. Over the years, they had subsequent health issues and two more, smaller strokes. These caused some hard-to-define deficits, a higher degree of stress, a minor movement disorder on one side of the body, and frequent migraines. It can be difficult for Hunter to learn new things, as they find it hard to retain information.
Hunter’s goal with this program was to become more comfortable using technology, especially the features of Google Docs and Microsoft Word. The program was structured so that I met with Hunter one-on-one, remotely, once a week for 12 sessions. Each session was approximately 1.5 to 2 hours long and was conducted over Zoom without turning on our video cameras.
From a UX designer’s perspective, I was often surprised at how a feature I had initially thought was well designed could unintentionally have a negative impact on the experience of those who were not comfortable using technology. It saddened me to witness how Hunter would blame themselves for finding it difficult to perform a task, when perhaps it was simply something that hadn’t been considered in the product design process.
As it was Hunter’s goal in this program to be more confident using Google Docs and Microsoft Word, I will be sharing examples of their experiences in these two applications. It is also critical to note that Hunter’s experience is not representative of everyone who has accessibility needs; yet I believe we may all relate to certain aspects of Hunter’s experience to some degree.
Another thing to mention is that I was not made aware of the assistive technologies or accessibility features that Hunter may have used during our sessions together. Therefore, I am unable to comment on how using these technologies (or not) may have impacted Hunter’s experience.
With that said, we’re now ready to dive into the takeaways from my sessions with Hunter, organized into three key perspectives.
For the purposes of sharing Hunter’s experience to illustrate what I learned from them, I am going to make the assumption that you, the reader, have had some exposure to using a text editor tool like Google Docs or Microsoft Word.
Here’s a friendly little pop quiz for you: have you ever tried creating a table in Google Docs? If so, without looking at the interface, do you know the name of the menu that allows you to create a table in Google Docs? What about changing the colour of the table cells?
I’ll give you a little hint: those two actions aren’t in the same place. You can create a table by going to the ‘Insert’ menu at the top, but to change the colour of the table cells, you’ll have to open ‘Table Properties’ by right-clicking on the table, or use ‘Background Fill’ in the toolbar.
Perhaps that wasn’t too difficult for those of you who consider yourselves to be tech savvy. But did you know that the ‘Background Fill’ action disappears when the screen size decreases? It collapses into a ‘More Actions’ menu as there isn’t enough space to display it on a smaller screen.
Can you imagine how all these details of where actions are located (details that can also change based on circumstances) could be so discouraging to learn for someone uncomfortable with technology, let alone for someone who finds it hard to remember information?
Take a moment to imagine what it’s like for Hunter:
- Knowing you’ve used a program multiple times before, but not remembering how to perform a specific task in it
- Recognizing an interface, yet finding that it still feels unfamiliar
- Trying to do a simple task, but having to take a lot of time to understand what you’re seeing on the screen
I don’t know about you, but I think I’d be constantly discouraged, frustrated, and overwhelmed.
To be clear, I’m not saying that hiding actions behind a ‘More Actions’ menu is necessarily bad design. This UX pattern is great for keeping actions available on small screens and for keeping the interface uncluttered. There is no right or wrong in design, no perfect solution, only pros and cons to weigh in the context of your product. However, we need to be careful not to use that as a reason to dismiss anyone’s experience, especially those who find it difficult to use technology.
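To make the pattern concrete, here is a minimal sketch in TypeScript of how a toolbar might collapse overflowing actions into a ‘More Actions’ menu. This is my own illustration under assumed markup (the element IDs and the ‘.action’ class are hypothetical), not how Google Docs actually implements it.

```typescript
// A minimal responsive-toolbar sketch: actions that no longer fit are
// moved into a hidden "More Actions" menu. Assumes the toolbar row does
// not wrap (e.g. white-space: nowrap), so overflow is measurable.
const toolbar = document.getElementById("toolbar")!;        // visible row of actions
const moreMenu = document.getElementById("more-menu")!;     // overflow container
const moreButton = document.getElementById("more-button")!; // toggles the overflow menu

function reflowToolbar(): void {
  // Start each pass with every action back in the toolbar, in original order.
  while (moreMenu.firstChild) {
    toolbar.insertBefore(moreMenu.firstChild, moreButton);
  }
  // While the row overflows, demote the last visible action to the menu.
  let actions = toolbar.querySelectorAll<HTMLElement>(".action");
  while (toolbar.scrollWidth > toolbar.clientWidth && actions.length > 0) {
    moreMenu.prepend(actions[actions.length - 1]); // prepending preserves order
    actions = toolbar.querySelectorAll<HTMLElement>(".action");
  }
  // Only show "More Actions" when something actually overflowed.
  moreButton.hidden = moreMenu.childElementCount === 0;
}

// Re-run the layout pass whenever the toolbar changes size.
new ResizeObserver(reflowToolbar).observe(toolbar);
```

Notice how the same action now lives in a different place depending on the window size. Whichever variant of this pattern a product uses, the demoted actions must stay keyboard-reachable and clearly labelled, or users like Hunter pay the cost of the space savings.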
Hunter’s experience reminded me of the foundational principles found in Don Norman’s The Design of Everyday Things, and prompted me to think more deeply about them from the perspective of someone who finds it challenging to retain information:
- Intuitive: Provide good discoverability; help users easily find what they want to do.
- Affordance: Make it as clear as possible what users can do after they find the action they’re looking for.
- Understand, not Memorize: Help users understand what they can do and how; do not force them to memorize by rote the steps for executing an action.
For those of us who consider ourselves non-disabled, we generally don’t think twice about using a keyboard or mouse to navigate various applications on the computer. So when a prompt appears stating that you need to press Control (or Command for Mac users) on the keyboard and then click with your mouse to open a link, we probably wouldn’t think much about it.
Hunter, on the other hand, was a little worried upon seeing this prompt. Imagine what it’s like for someone who has mobility issues like Hunter:
- Using only one hand to type on the keyboard and click with the mouse
- Having involuntary muscle spasms as you’re typing or clicking, which can cause unwanted double clicks or other outputs
- Wondering why you keep typing V when you meant to paste some text, and then finally realizing Sticky Keys wasn’t turned on (Sticky Keys is an accessibility feature that keeps modifier keys active even after they have been pressed and released, so keyboard shortcuts can be performed one key at a time instead of by holding several keys at once)
Once again, I’m not saying that this Control-and-click shortcut to open a link is an outright terrible design decision. There are always reasons, things behind the scenes, that we don’t fully understand. My guess is that the designers of this feature wanted to prevent users from accidentally opening a link when they actually meant to click into the text to edit it.
Thankfully, Hunter was able to find another way to open the link: by right-clicking and selecting “Open Hyperlink” in the context menu. I was very glad to see that Microsoft Word provided alternative ways to perform the same action. Hunter’s experience reminded me of the necessity of simplicity and variety, and of considering them from the physical perspective of using various devices:
- Minimize complex interactions: Simplify and reduce multi-step interactions. Avoid designing precise and small targets.
- Design multiple ways: There may be a dominant way to execute an action, but provide multiple ways for a user to complete the same task when possible (see the sketch after this list).
- Consider design atoms: When possible, choose design atoms and components whose physical interactions are less demanding.
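As a rough illustration of designing multiple ways, here is a hedged TypeScript sketch of an editor link that can be opened by modifier-click, by a context-menu command, or by the keyboard alone. The wiring and function names are my own assumptions for illustration, not Microsoft Word’s actual implementation.

```typescript
// Sketch: give a link inside an editable document three equivalent
// open paths, so no one is forced into a two-handed modifier-click.
function openLink(url: string): void {
  window.open(url, "_blank", "noopener"); // open in a new tab
}

function wireUpEditorLink(link: HTMLAnchorElement): void {
  // 1. Modifier-click: a plain click places the cursor for editing,
  //    while Ctrl+click (or Cmd+click on a Mac) opens the link.
  link.addEventListener("click", (event) => {
    event.preventDefault();
    if (event.ctrlKey || event.metaKey) {
      openLink(link.href);
    }
  });

  // 2. Context menu: a one-handed, mouse-only alternative. A real
  //    editor would show a menu with an "Open Hyperlink" item; we open
  //    directly here for brevity.
  link.addEventListener("contextmenu", (event) => {
    event.preventDefault();
    openLink(link.href);
  });

  // 3. Keyboard: Enter on the focused link, with no modifier required.
  link.addEventListener("keydown", (event) => {
    if (event.key === "Enter") {
      openLink(link.href);
    }
  });
}
```

Every path ends at the same outcome, so someone typing with one hand, or dealing with involuntary muscle movements, can pick whichever interaction costs them the least.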
I confess that as a UX designer, I tend to think about accessibility from a visual perspective, with colour contrast being the easiest thing to address. However, there were other visual elements on the interface that I didn’t realize could cause confusion, even when their intent was to empower the user.
For example, helpful tips can appear on the interface to educate the user that there are more advanced options. When you’re in the middle of a task and you see one of those pop-ups — what do you do? Depending on the relevance of the tip, you may find it helpful and appreciate learning about a new feature that you can apply in the future. Or you may just skim the content quickly and dismiss it as fast as possible, in order to resume your task at hand. Either way, it’s usually a decision we make on the spot without thinking about it.
But that’s not what it’s like for Hunter. Imagine:
- Having your second migraine of the day, but wanting to finish what you’re doing on the computer
- Doing an action in the program, but the result isn’t what you expected (because you weren’t in the right mode)
- Trying to focus on the task that you’re doing, but being interrupted and jarred by a pop-up — and then wondering if the pop-up is supposed to help you with your current task or not
I was surprised (though I shouldn’t have been) at how some of these tips would stay open and get in the way even when Hunter was doing something else on the interface. Of course, there are valid reasons for this from a product design perspective. But Hunter’s experience reminded me of how many of the accessibility webinars I’ve attended recommended minimizing the use of dialog boxes, because they interrupt the user’s flow and pose navigational challenges for keyboard-only users, and of the need to think about visual accessibility beyond size and contrast:
- Don’t overwhelm the user: Ensure that the elements that are visible or active aren’t overwhelming.
- Explore other patterns: Don’t default to using what most people are used to. Just because it’s a common UX pattern doesn’t mean there aren’t other ways that work better (a sketch of one alternative follows this list).
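To make that concrete, here is a minimal sketch of one alternative: a non-modal tip that announces itself politely through a live region instead of a focus-stealing dialog. The class name, markup, and timing are my own assumptions.

```typescript
// Sketch: a polite, non-modal tip. It never steals keyboard focus, and
// screen readers announce it when convenient rather than immediately.
// The live region is created up front, since regions added at
// announcement time are not reliably read by all screen readers.
const tipRegion = document.createElement("div");
tipRegion.setAttribute("role", "status"); // implies aria-live="polite"
tipRegion.className = "tip";              // hypothetical styling hook
tipRegion.hidden = true;
document.body.appendChild(tipRegion);

function showTip(message: string): void {
  tipRegion.hidden = false;
  tipRegion.textContent = ""; // clear any previous tip

  const text = document.createElement("span");
  text.textContent = message;

  const dismiss = document.createElement("button");
  dismiss.textContent = "Dismiss";
  dismiss.addEventListener("click", hideTip);

  tipRegion.append(text, dismiss);

  // A tip that lingers gets in the way; hide it after a while even if
  // the user never dismisses it.
  setTimeout(hideTip, 10_000);
}

function hideTip(): void {
  tipRegion.hidden = true;
  tipRegion.textContent = "";
}

showTip("Tip: you can change a table cell's colour from Table Properties.");
```

Unlike a modal dialog, this keeps focus on whatever the user was doing: the tip is announced, never interrupts, and disappears on its own.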