Code editor screen reader accessibility survey

This is a collection of questions for people who code using a screen reader, asked to improve our understanding of this workflow and hopefully prevent us from screwing it up in the next iteration of CodeMirror. I will try to summarize answers below the questions. I’m putting this on the public forum so that other people might also benefit from it.

Which screen reader do you usually use?

Do you use different screen reader settings for code and for other content? If so, what is different? Is this configured per app? Do you know of a way a piece of content can tell the screen reader that it should use a code profile, or is switching always manual?

Do you use indentation information to navigate code? Do you have your reader read whitespace to you, or do you access indentation information some other way?

How do you prefer tabs to be read? Not at all, or with a description of "tab"? Or maybe "\t" is read meaningfully on your reader?

What are common screen-reader interactions while editing code (beyond reading the current line and the typed input)?

How do you navigate long lines?

CodeMirror will often redraw the text that is being edited (for example to update syntax highlighting). Are you aware of any problems this might cause?

Is there any way to make syntax highlighting helpful to you? I’m guessing reading the token type before every token would be way too much—but maybe there are more subtle ways to communicate such info?

In other editors, do you use any addon or tool that helps you navigate the code by structure? How does this work?

Do you use a braille reader for coding? If so, what advantages does it provide and how does navigating code with a braille display differ from voice screen reading?

What does the ideal screen reader workflow for autocomplete look like?

In a code editor (inside a website), would you expect tab to insert a tab or to move to the next control? If tab is overridden, how easy is it for you to escape a focusable field?

Do you agree with the principle that accessibility should not be a separate mode, but the default view should be accessible? Or are you okay with apps providing an explicitly enabled accessible mode (which for example changes the behavior of tab)?

Do any things that code editors often get wrong come to mind? How could they do better?

Hi,
You sent me a snapshot of the editor on July 27th and that seems to be working great. I haven’t actually looked at the code that powers it, but as far as functionality goes, it works just like a standard text field for me. The only thing I’ll say is that the line numbers are hard to match up with their corresponding lines, and I don’t know an easy fix for that. A hypothetical “accessible” mode could replace those numbers with one text field that shows the current line number and allows the user to enter a new one, or you could include that in the main interface if you thought it would be useful to other users. Alternatively (or additionally), each line number could be clickable so the user could just jump their cursor to a given line.
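
For illustration, here is a minimal sketch of the clickable line number idea, using plain DOM calls rather than any actual CodeMirror API (the selectors and markup are hypothetical):

    // Sketch: make each gutter line number clickable so it moves the
    // cursor to the start of the corresponding line. Assumes an editor
    // where each line is its own child element (hypothetical markup).
    const editor = document.querySelector(".editor-content");
    const gutter = document.querySelector(".editor-gutter");

    Array.from(gutter.children).forEach((numberEl, i) => {
      numberEl.setAttribute("role", "button");
      numberEl.setAttribute("aria-label", `Go to line ${i + 1}`);
      numberEl.addEventListener("click", () => {
        const line = editor.children[i];
        if (!line) return;
        editor.focus();
        // Collapse the selection to the start of the clicked line.
        window.getSelection().collapse(line, 0);
      });
    });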

Here are the questions; hopefully they make some sort of sense. You can always DM me if you have questions and I’ll try and edit this to clarify.

Which screen reader do you usually use?

Non-visual Desktop Access (NVDA), but VoiceOver for Mac and iOS, JAWS, and Narrator are also installed.

Do you use different screen reader settings for code and for other content? If so, what is different? Is this configured per app? Do you know of a way a piece of content can tell the screen reader that it should use a code profile, or is switching always manual?

Switching is unfortunately very manual, but I can easily change the punctuation, indentation reporting, and other settings to have NVDA read as much or as little as necessary. I can’t configure this per website (although JAWS can do that), but this might come soon and I can configure it per application for now. It’s not a big deal to me right now, and I use notepad for coding for the most part.

Do you use indentation information to navigate code? Do you have your reader read whitespace to you, or do you access indentation information some other way?

NVDA can be configured to report indentation information verbally, at the beginning of the first line where it changes. It can also indicate this with audio tones. I’m not so sure that JAWS has a way of handling this. If the website were configured to indicate this somehow, I wouldn’t recommend having it as an “always on” setting because some people will be using Braille displays to skip-read the code and look for indentations, and others will be using NVDA’s indent reporting system. Having a way to jump to the next or previous indentation change would be quite useful though.

How do you prefer tabs to be read? Not at all, or with a description of "tab"? Or maybe "\t" is read meaningfully on your reader?

Same as above. It might be good to have that as a setting if it’s possible, but I wouldn’t want it to always be on. NVDA can already report this.

What are common screen-reader interactions while editing code (beyond reading the current line and the typed input)?

Nothing big beyond the normal text-field navigation commands. I use a slightly enhanced notepad for a lot of code editing.

How do you navigate long lines?

I just use ctrl+left/right arrows to go by word, which catches most punctuation separation as well. This is something a code editor could probably improve on. Having a keyboard command to go to the next or previous major event (e.g. the end of a function or a closing quote) would help a lot. It would have to report the next major piece of text to the screen reader though.
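
As a rough illustration of what such a command might scan for, here is a sketch (the character set below is only a guess at what counts as a “major event”):

    // Sketch: find the next "major event" (closing bracket, closing
    // quote, or end of statement) after a given position in the text.
    function nextMajorEvent(text, from) {
      const major = /[)\]}"'`;]/g;
      major.lastIndex = from;
      const m = major.exec(text);
      return m ? m.index : -1;
    }

    // The editor would move the cursor there and hand the screen
    // reader the surrounding text to announce.
    const code = 'console.log("very long line", compute(a, b));';
    console.log(nextMajorEvent(code, 13)); // 27, the closing quote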

CodeMirror will often redraw the text that is being edited (for example to update syntax highlighting). Are you aware of any problems this might cause?

I could see it lagging out NVDA a little, but if the cursor doesn’t change position I think it will be fine. Depends on how it does it. It might be a good idea to add an option to turn that off just in case.

Is there any way to make syntax highlighting helpful to you? I’m guessing reading the token type before every token would be way too much—but maybe there are more subtle ways to communicate such info?

There is an option in most screen-readers to manually or automatically report formatting info such as font and color, so you can always just let it work normally and have the user decide if they want those changes to be reported. Just like indentation, they’ll only be reported when they change, not once per line, but there’s a keyboard command to check the current cursor position for formatting info.

In other editors, do you use any addon or tool that helps you navigate the code by structure? How does this work?

I don’t. I am a fairly uncomplicated person who just uses notepad. This is not necessarily the most efficient way to edit code though, and I’m sure other more experienced coders have other methods.

Do you use a braille reader for coding? If so, what advantages does it provide and how does navigating code with a braille display differ from voice screen reading?

It certainly makes it easier to take in punctuation and indentation quickly, but I don’t personally use one (though I have one to test with). In general they’ll just read whatever NVDA would normally read out loud and allow navigation within the available text, so no changes will need to be made specifically to adapt for one.

What does the ideal screen reader workflow for autocomplete look like?

Within a text editor, the typical way seems to be to just highlight the autocomplete suggestion, whereupon NVDA will read it automatically as it would a normal piece of selected text. However, this is up for some discussion. The default behavior for a screen reader is to interrupt whatever is being spoken with every press of a letter. So if you make an autocomplete suggestion announce itself once, it’s easy to miss. Some developers solve this with repeated announcements, but as someone who turns off that interrupt function, I can say that it’s very annoying to hear the same thing continuously spoken if it’s not necessary. Depending on the type of alert, it can also interrupt whatever else we might be doing to announce itself, which is basically equivalent to the entire screen going dark except for the alert in the middle. The best approach might be to have a single announcement when the suggestion is available, a keystroke to repeat it, and another to insert it. This, again, depends on how the current implementation works (I’m doing this before I do my testing). If there are multiple results available, it would be alright to present a menu of them as long as it could be navigated with arrow keys and would drop my cursor right back where it was afterward.
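
That “announce once, repeat on demand” flow could be approximated with a polite ARIA live region; a hedged sketch follows (the repeat keystroke and CSS class are arbitrary choices, not anything CodeMirror currently does):

    // Sketch: announce an autocomplete suggestion once via a polite
    // live region; a polite region does not interrupt typing echo.
    const live = document.createElement("div");
    live.setAttribute("aria-live", "polite");
    live.className = "sr-only"; // assumed visually-hidden CSS class
    document.body.appendChild(live);

    let currentSuggestion = null;

    function suggest(text) {
      currentSuggestion = text;
      live.textContent = `Suggestion: ${text}`;
    }

    // Alt+R (arbitrary) repeats the last suggestion on demand.
    document.addEventListener("keydown", (e) => {
      if (e.altKey && e.key === "r" && currentSuggestion) {
        // Clearing and re-setting the text makes most screen readers
        // announce it again.
        live.textContent = "";
        setTimeout(() => {
          live.textContent = `Suggestion: ${currentSuggestion}`;
        }, 50);
      }
    });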

In a code editor (inside a website), would you expect tab to insert a tab or to move to the next control? If tab is overridden, how easy is it for you to escape a focusable field?

If I’m going to be tabbing around a lot to get to other controls on the website, I’d prefer tab not to insert a tab. I’m also completely fine with using spaces instead of tabs, so I might be in the minority here. However, I realize tab is the default behavior for a lot of these editors and am fine with it (maybe there could be an option to turn it off). All screen readers have a way of escaping most focusable fields and getting back to the rest of the page.

Do you agree with the principle that accessibility should not be a separate mode, but the default view should be accessible? Or are you okay with apps providing an explicitly enabled accessible mode (which for example changes the behavior of tab)?

I agree with the principle, but I realize it’s not always realistic. However, in this case I don’t think you would need to change the interface very much to make it better for us, so having a few checkboxes or maybe a master “accessible mode switch” is not going to cause the “accessible” version to become unmaintained or something. That principle comes from cases where an entire second copy of a website is built and maintained just for blind people, and I don’t think it applies here.

Do any things that code editors often get wrong come to mind? How could they do better?

I don’t think I’m qualified to answer this question because I haven’t used many of them at all. However, the one thing I’ll say is that there are a ton of websites which overuse alerts and announcements instead of making the actual interface accessible (for instance, making a custom toggle button for a specific mode or setting, and announcing its state to the screen reader every time it’s pressed, rather than making it behave like a standard checkbox that the screen-reader can see as being “checked” or “not checked”). This mentality causes a lot of extra headache for developers because they have to code an exception for every single thing, and can break very easily. Often there’s a way to take an interface element that sighted people can interact with, and make it fully accessible to a screen reader without having to code the website to “talk” to the screen-reader.

Which screen reader do you usually use?

On the desktop I use NVDA.

Do you use different screen reader settings for code and for other content? If so, what is different? Is this configured per app? Do you know of a way a piece of content can tell the screen reader that it should use a code profile, or is switching always manual?

I use a different configuration profile when coding. The only setting that’s different to my normal configuration profile (that I can think of) is the reporting of indented lines. NVDA can automatically enable a configuration profile based on the currently running process but unfortunately there is no mechanism for websites to do the same.

Do you use indentation information to navigate code? Do you have your reader read whitespace to you, or do you access indentation information some other way?

I do. I have set NVDA to tell me the number of spaces in the beginning of each line as long as it’s different to that of the previous line. So, for example:

    foo
    bar
        baz

would be read as:

4 space foo
bar
8 space baz

How do you prefer tabs to be read? Not at all, or with a description of "tab"? Or maybe "\t" is read meaningfully on your reader?

I would prefer CodeMirror preserve the tabs exactly as they are in the code. This lets the screen reader read the indentation as the user sees fit as well as see it on a braille display.

What are common screen-reader interactions while editing code (beyond reading the current line and the typed input)?

Personally I use very few interactions that are specific to a screen reader. Arrows, page up/down, home/end are basically all I use on the keyboard as well as some editor features depending on what I have available.

How do you navigate long lines?

I tend to use ctrl+left/right to scroll through the line quicker. Also, sometimes I might start reading the line from the end if I know the information is located near that point.

CodeMirror will often redraw the text that is being edited (for example to update syntax highlighting). Are you aware of any problems this might cause?

Not anything that you haven’t addressed already. Keyboard focus needs to stay exactly at the same position for every user anyway. If you end up creating any content for screen reader / AT users only then you’d need to refresh that as well.

Is there any way to make syntax highlighting helpful to you? I’m guessing reading the token type before every token would be way too much—but maybe there are more subtle ways to communicate such info?

Earcons, i.e. extremely short sound clips, would be the only feasible way (at your disposal) that I can think of to implement syntax highlighting. Any added speech would be just a distraction IMO.

In other editors, do you use any addon or tool that helps you navigate the code by structure? How does this work?

I have used code outlining features in some editors/IDEs. Those show the structure of the code as a tree view.
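
Such an outline maps naturally onto the ARIA tree pattern; a minimal static sketch (a real outline would be generated from the syntax tree, and would also need the tree pattern’s keyboard handling):

    // Sketch: expose a code outline as an ARIA tree.
    const tree = document.createElement("ul");
    tree.setAttribute("role", "tree");
    tree.setAttribute("aria-label", "Code outline");

    const cls = document.createElement("li");
    cls.setAttribute("role", "treeitem");
    cls.setAttribute("aria-expanded", "true");
    cls.textContent = "class Parser";

    const group = document.createElement("ul");
    group.setAttribute("role", "group");
    for (const name of ["parse()", "peek()"]) {
      const item = document.createElement("li");
      item.setAttribute("role", "treeitem");
      item.textContent = name;
      group.appendChild(item);
    }

    cls.appendChild(group);
    tree.appendChild(cls);
    document.body.appendChild(tree);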

Do you use a braille reader for coding? If so, what advantages does it provide and how does navigating code with a braille display differ from voice screen reading?

I use one sometimes. It lets me glance at the code much quicker than what would be possible with speech. With a speech synthesizer I’m constrained to reading the line as a whole or in chunks (characters or words). With a braille display I can just read and focus on the part of the line that’s most important to me. It makes it much easier to find and correct typos for example.

What does the ideal screen reader workflow for autocomplete look like?

Personally I would prefer something like the following:

  1. Play a short earcon when an autocomplete suggestion is available.
  2. Activate a floating list or such for selecting the suggestion with a keystroke (something like ctrl+space). This lets the user ignore the suggestion and continue using the arrow keys as normal if they wanted to.
  3. Select the suggestion by pressing enter, insert the suggestion and move the cursor to the end of the inserted text.

This approach isn’t entirely workable, though. For example, it entirely ignores users who are hearing impaired. Someone with more experience of working only with a braille display should weigh in on this.
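
The earcon in step 1 could be a few milliseconds of oscillator output; a minimal Web Audio sketch (pitch, volume, and duration are arbitrary, and browsers may require a prior user gesture before the audio context will run):

    // Sketch: play a very short tone ("earcon") when a suggestion
    // becomes available.
    const audioCtx = new AudioContext();

    function playEarcon() {
      const osc = audioCtx.createOscillator();
      const gain = audioCtx.createGain();
      osc.frequency.value = 880; // arbitrary pitch
      gain.gain.value = 0.2;     // keep it quiet relative to speech
      osc.connect(gain).connect(audioCtx.destination);
      osc.start();
      osc.stop(audioCtx.currentTime + 0.06); // ~60 ms
    }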

In a code editor (inside a website), would you expect tab to insert a tab or to move to the next control? If tab is overridden, how easy is it for you to escape a focusable field?

I would expect tab to insert whitespace. F6 is a common shortcut for moving between focusable panes of an application, and I think it could be used just fine for moving the focus out of the code editor if CodeMirror itself provides any other focusable elements. Personally I would be fine with trapping the tab key inside CodeMirror.
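
If tab is trapped for indentation, the editor still needs a reliable exit; here is a sketch of the kind of handler involved, using a plain textarea and Escape as the exit key (both are assumptions for this sketch, not necessarily CodeMirror’s actual key bindings):

    // Sketch: trap Tab to insert indentation, but let Escape move
    // focus onward so there is no hard keyboard trap.
    const area = document.querySelector("textarea.editor"); // hypothetical

    area.addEventListener("keydown", (e) => {
      if (e.key === "Tab") {
        e.preventDefault();
        const { selectionStart: start, selectionEnd: end, value } = area;
        area.value = value.slice(0, start) + "\t" + value.slice(end);
        area.selectionStart = area.selectionEnd = start + 1;
      } else if (e.key === "Escape") {
        area.blur(); // or move focus to a known next element
      }
    });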

Do you agree with the principle that accessibility should not be a separate mode, but the default view should be accessible? Or are you okay with apps providing an explicitly enabled accessible mode (which for example changes the behavior of tab)?

I would rather see accessibility features integrated into the default mode as much as possible.

Do any things that code editors often get wrong come to mind? How could they do better?

I think earcons should be used a lot more than they are being used now. They would make reading code with speech a lot faster. But then it could be argued that providing these earcons should really be the job of the screen reader.

(Responses from Dickson Tan.)

Which screen reader do you usually use?

NVDA

Do you use different screen reader settings for code and for other content? If so, what is different? Is this configured per app? Do you know of a way a piece of content can tell the screen reader that it should use a code profile, or is switching always manual?

Yes, typically configured per app, e.g. for VS Code. When programming, I have more punctuation read and indentation notifications turned on. Profiles can be activated automatically when you move to an application, or switched to manually.

Do you use indentation information to navigate code? Do you have your reader read whitespace to you, or do you access indentation information some other way?

Yes. Indentation has traditionally been indicated verbally (“4 spaces”, for example), but that is verbose. NVDA lets you use the pitch of tones to indicate how indented a line is relative to the previous one, so this is much closer to the experience a sighted programmer has: it’s easy to tell whether a line has the same indentation as the previous one, or whether it’s more or less indented.

How do you prefer tabs to be read? Not at all, or with a description of “tab”? Or maybe “\t” is read meaningfully on your reader?

Screen readers already handle “\t” themselves.

What are common screen-reader interactions while editing code (beyond reading the current line and the typed input)?

Looking for specific text (e.g. ctrl+f), accessing the list of errors, etc. This is pretty similar to what anyone else would do, though.

How do you navigate long lines?

Typically navigation by word, but I don’t encounter long lines often, since they are not good coding style.

CodeMirror will often redraw the text that is being edited (for example to update syntax highlighting). Are you aware of any problems this might cause?

This could be browser dependent, and I’ll have to test it. Not any that comes to mind though.

Is there any way to make syntax highlighting helpful to you? I’m guessing reading the token type before every token would be way too much—but maybe there are more subtle ways to communicate such info?

This is something I’ve been thinking about myself. The ideal would be for the screen reader to either read a highlighted token in a different voice (pitch/rate etc.) or play a sound when it encounters that token. My primary screen reader, NVDA, needs to be refactored before it can support this. Another screen reader, JAWS for Windows, can provide this sort of experience in Microsoft Word to indicate different formatting attributes, but that requires the information to be exposed to it in some way. I know Word does this with UIA, but I’m unsure whether there is an equivalent way of doing it through ARIA beyond the basic HTML5 attributes.

In other editors, do you use any addon or tool that helps you navigate the code by structure? How does this work?

The usual tools for viewing class hierarchies and listing the methods in a file are very useful for quickly getting a sense of how the file is structured and moving to a specific place.

Do you use a braille reader for coding? If so, what advantages does it provide and how does navigating code with a braille display differ from voice screen reading?

No, I don’t know enough to really comment on this.

What does the ideal screen reader workflow for autocomplete look like?

As the autocomplete menu appears, it should announce the first item, e.g. when you type “pr”, it says “print, suggestion, 1 of 5”. As you arrow up/down to select a different one, it should read the suggestion currently focused. When you press enter/tab to accept it, it would also say “inserted print”. This is implemented in VS Code, but has a few bugs.
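
The “1 of 5” phrasing typically falls out of standard listbox semantics rather than manual announcements; a hedged sketch of that markup pattern (the ids and function name are made up):

    // Sketch: render suggestions as an ARIA listbox tied to the input;
    // screen readers can announce roughly "print, 1 of 5" from the
    // role and option semantics alone.
    function renderSuggestions(input, items) {
      const list = document.createElement("ul");
      list.setAttribute("role", "listbox");
      list.id = "suggestions";
      items.forEach((text, i) => {
        const opt = document.createElement("li");
        opt.setAttribute("role", "option");
        opt.id = `suggestion-${i}`;
        opt.setAttribute("aria-selected", i === 0 ? "true" : "false");
        opt.textContent = text;
        list.appendChild(opt);
      });
      // Focus stays in the editor; aria-activedescendant points at
      // the currently highlighted option as the user arrows around.
      input.setAttribute("role", "combobox");
      input.setAttribute("aria-expanded", "true");
      input.setAttribute("aria-controls", "suggestions");
      input.setAttribute("aria-activedescendant", "suggestion-0");
      document.body.appendChild(list);
    }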

In a code editor (inside a website), would you expect tab to insert a tab or to move to the next control? If tab is overridden, how easy is it for you to escape a focusable field?

VS Code does this well, I think. It uses ctrl+m to toggle whether tab is trapped and inserts the tab character, or lets you escape the editor. However, in a website, tab should not be trapped by default. If it’s trapped, it’s still possible to move to the next field, though it might take a bit more time.

Do you agree with the principle that accessibility should not be a separate mode, but the default view should be accessible? Or are you okay with apps providing an explicitly enabled accessible mode (which for example changes the behavior of tab)?

The default being as accessible as possible would be best, just to reduce the maintenance burden. Though in VS Code, performance concerns mean that some features screen readers use are only available when a certain mode is activated. I’m OK with that as well, since this is usually beyond the editor developer’s control and can’t be helped.

Do any things that code editors often get wrong come to mind? How could they do better?

Using nonstandard elements instead of e.g. a textarea or a contenteditable would be the biggest thing, since a lot of accessibility support comes for free when using those standard elements. Besides that, inappropriate element labelling, or improper/no ARIA usage. If I’m talking about desktop editors like Visual Studio itself, the accessibility implementation for that is problematic.

NVDA on Windows. I also use VoiceOver on iOS (iphone), but never code on that platform.

Configuration profiles per app are supported in NVDA, not per website or specific type of control though. I have a profile for VSCode that uses tones to identify indentation (a standard NVDA feature) and I can activate the profile or this setting manually as well.

I usually work with a braille display and speech output. On the braille display it’s easy to see if there is indentation, because white space is represented with empty braille cells. I use the tone indentation feature as well, see above.

I would prefer them to be included in the content that’s accessible to the screen reader, so it can read them based on the users’ preferences and display them properly in braille.

The usual text interactions should be spoken correctly, navigating by word for example. That basically means that cursor movement should be picked up by the screen reader in time. I know VSCode had issues with this, because the repainting of the editor contents took longer than NVDA’s cursor timeout.

I usually use the so-called routing keys on my braille display to move the cursor to the position where I want to edit the line. A routing key is a small button above a braille cell that allows moving the cursor to the character displayed in that cell. This should just work if the editor listens to events that set the cursor to a specific position.
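
In a browser, routing-key presses arrive through the accessibility layer as ordinary caret moves, so listening for selection changes should cover them; a sketch (updateEditorCursor is a hypothetical hook into the editor’s own data model):

    // Sketch: keep the editor's internal cursor in sync with any
    // outside change to the DOM selection, including braille routing
    // keys acting through the accessibility layer.
    function updateEditorCursor(node, offset) {
      console.log("cursor moved to", node, offset); // hypothetical hook
    }

    document.addEventListener("selectionchange", () => {
      const sel = window.getSelection();
      if (!sel.rangeCount) return;
      const range = sel.getRangeAt(0);
      updateEditorCursor(range.startContainer, range.startOffset);
    });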

I don’t see specific issues here, perhaps some performance issues. See the cursoring issue I described above.

As long as the formatting change is communicated to the screen reader, the reader should be able to present it in a useful way to the user. NVDA currently lacks a bit here, since the only option is to speak formatting changes, where short sounds (also called earcons) would be more helpful. I think this is doable if you use a default contenteditable. From what I know, ARIA and related techniques have no attributes to indicate this kind of thing. For example, VSCode uses a textarea element as its interface to the screen reader, and that lacks formatting info.

In VSCode I use the outline to quickly jump to functions, classes, variables etc. In other editors I always got by with the search function, but an outline is more efficient.

The advantage is that I can read one or more lines of code and I can see character for character what’s there. With speech I would have to set the symbol level very high and get everything spoken, or navigate word by word and spell to see exactly what’s there. It really helps when debugging a syntax error or reading lisp-like languages with lots of parentheses. Things announced to speech users via techniques like ARIA alerts or live regions are not brailled (at least not in NVDA), so that’s something to be aware of.

Good question. Autocompletes should be indicated in some way (e.g. speech or sound).
The completions should be readable in braille as well, but the announcement in braille seems less useful to me, because you can’t read braille and have your fingers on the keyboard for typing at the same time. Especially on larger braille displays, you would like to see your code and the completion in context, but this is not implemented correctly anywhere right now. I think we still miss a good interface to communicate that to a screen reader. If the suggestions are marked up as a list, a screen reader should be able to report the suggestion and the number of suggestions (e.g. “Foo, 1 of 5”).

I would expect it to move from element to element by default, but that it’s possible to set it to insert a tab. If it will not move focus by default, this might cause trouble for keyboard (but non-screen reader) users, who might not discover the option to change the behavior. This would also violate WCAG success criterion 2.1.2 (No Keyboard Trap).

Yes, it should be the default if at all possible. Screen reader detection is currently not possible on the web, so if there is a separate mode that has better accessibility, it should be discoverable by the user and it should be easy to turn it on/off. Also, consider making most accessibility related options (such as tab movement) configurable separately from a special screen reader mode, since they might benefit other user groups without screen readers as well.

A lot of code editors are just inaccessible, so being as accessible as possible out of the box is a great start. Autocompletion is often gotten wrong, because it is hard to implement accessibly.

(Response from Venkatesh Potluri)

Which screen reader do you usually use?

I primarily use VoiceOver on OS X. However, to the best of my knowledge, there is also a good number of blind developers on Windows who rely on NVDA and JAWS.

Do you use different screen reader settings for code and for other content? If so, what is different?

Yes, I use different screen reader settings for code and for other content. The main difference is the punctuation verbosity setting. While writing code, I set the punctuation verbosity to “all” so that I can hear all symbols as part of the syntax. While reading other forms of content, I prefer to have my punctuation verbosity set to “some”. Having the screen reader’s Text To Speech (TTS) use intonation and other speech variations for punctuation is better than listening to every single punctuation mark when I am not reading code.

Is this configured per app? Do you know of a way a piece of content can tell the screen reader that it should use a code profile, or is switching always manual?

Most screen readers let you make configuration changes per app, and this is what I primarily use. To the best of my knowledge, there is no way to detect whether a particular element or block of text contains code.

Do you use indentation information to navigate code? Do you have your reader read whitespace to you, or do you access indentation information some other way?

Yes, I do use indentation information to navigate and write code, especially in languages like Python where incorrect indentation can change the semantic meaning of the program. I prefer using tabs in such scenarios, as the screen reader then announces the number of tabs at the beginning of each line. However, there is a limitation here: any number of tabs >= 3 is announced simply as “tab tab tab” by VoiceOver. I am not sure under what exact conditions the screen reader announces the number of white spaces, so I prefer tabs even though the experience is limiting and at times misleading. I very strongly believe that there should be a smarter, less verbose way of denoting indentation information, and I would be happy to brainstorm more on that if you’d like to.

How do you prefer tabs to be read? Not at all, or with a description of “tab”? Or maybe “\t” is read meaningfully on your reader?

I’d prefer to have tab information read aloud as “tab” followed by the text for a single tab, “tab tab” followed by the text for 2 consecutive tabs, “3 tabs” followed by the text for 3 consecutive tabs, and so on. I am not sure if having the screen reader announce “\t” would be a good option. However, that’s just my opinion.

What are common screen-reader interactions while editing code (beyond reading the current line and the typed input)?

Common screen reader interactions while writing or editing code include:

  • moving to different lines of code in the edit window.

  • moving to the beginning or end of a code block with a single keystroke.

  • navigating to a particular class or function definition using a single keystroke (similar to the F12 key in Visual Studio).

  • moving by word or keyword/token in a line of code.

  • getting information about errors that I may have caused while writing code.

This is not an exhaustive list of actions that I perform. I will keep adding in case I recall any more.

How do you navigate long lines?

I have the screen reader read code to me by line. If I feel a particular line is long, I go through the line by word. In the context of programming, it would help if I were able to navigate by keyword or token instead of general English words.

CodeMirror will often redraw the text that is being edited (for example to update syntax highlighting). Are you aware of any problems this might cause?

I am not aware of any. However, I think we should be mindful of the screen reader making repeat announcements or changing focus. Can I check the hosted instance (point 2 in your email) and get back to you on this?

Is there any way to make syntax highlighting helpful to you? I’m guessing reading the token type before every token would be way too much—but maybe there are more subtle ways to communicate such info?

Actually, reporting the token type may not be a bad idea; it’s just that this information should be conveyed only after the user leaves the cursor idle on a particular part of the code for a few seconds.
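
That idle-cursor idea could be a simple debounced timer; a sketch (tokenTypeAt and announce are hypothetical stubs, and the one-second delay is arbitrary):

    // Sketch: report the token type only after the cursor has rested
    // on it for a moment, to avoid constant chatter.
    function tokenTypeAt(pos) { return "keyword"; } // hypothetical lookup
    function announce(text) { console.log(text); }  // e.g. via a live region

    let idleTimer = null;
    function onCursorMove(pos) {
      clearTimeout(idleTimer);
      idleTimer = setTimeout(() => announce(tokenTypeAt(pos)), 1000);
    }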

In other editors, do you use any addon or tool that helps you navigate the code by structure? How does this work?

Yes. My broad research theme for the past year has been accessible programming environments for developers with visual impairments. Our team at Microsoft Research India has explored the challenges faced by developers with visual impairments, and we’ve developed an extension, CodeTalk, that runs on Visual Studio. It has been a great productivity boost for me personally. We have also open-sourced the plugin, so please feel free to take a look at its GitHub repository.

With regards to how it works, our paper could give you some details. I would be happy to talk to you about this in greater detail, and give you a demo if you’d like. Also, it would be of great value to me to know your thoughts on this effort given your experience with IDEs.

What does the ideal screen reader workflow for autocomplete look like?

The ideal autocomplete experience to me would have a workflow similar to this:

As the user types code, the screen reader should announce the best suggestion first, followed by “1 of <NumberOfSuggestions>”. The user should then be able to navigate between these suggestions using the up and down arrow keys, and pressing tab or enter should insert the selected suggestion. The frequency of announcing these suggestions should be kept in mind though: the announcements should happen immediately after the user stops typing and should not interrupt the screen reader’s speech while the user is typing. Maybe ARIA live regions could come in handy to implement this? When the user is calling a function, after entering the “(”, the screen reader should announce the documentation comment from the function definition corresponding to the parameter that needs to be passed.
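
Here is a sketch of the parameter-documentation part, triggered on typing “(” (signatureHelpAt and announce are hypothetical stubs; the announcement could go through an ARIA live region as suggested above):

    // Sketch: when the user types "(", announce the doc comment for
    // the parameter the user is about to fill in.
    const editorEl = document.querySelector("textarea.editor"); // hypothetical
    function signatureHelpAt(pos) {                             // hypothetical
      return { paramName: "url", paramDoc: "address to fetch" };
    }
    function announce(text) { console.log(text); } // e.g. via a live region

    editorEl.addEventListener("input", (e) => {
      if (e.data === "(") {
        const help = signatureHelpAt(editorEl.selectionStart);
        if (help) announce(`${help.paramName}: ${help.paramDoc}`);
      }
    });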

In a code editor (inside a website), would you expect tab to insert a tab or to move to the next control? If tab is overridden, how easy is it for you to escape a focusable field?

In my opinion a tab in a code editor should insert a tab or an autocomplete suggestion. It is very easy to move focus away from the field if required.

Do you agree with the principle that accessibility should not be a separate mode, but the default view should be accessible? Or are you okay with apps providing an explicitly enabled accessible mode (which for example changes the behavior of tab)?

I agree with this principle for the most part. However, I also believe it is OK for applications to behave differently when screen readers are enabled if that gives a better user experience. That said, maximum care should be taken when introducing these different behaviours. I believe a simple way to think about this is to ask whether a blind and a sighted developer could still work together (pair-program, in this case) if you introduce an inconsistent behaviour.

Do any things that code editors often get wrong come to mind? How could they do better?

Yes. I’ve tried my best to explain all the things that can go wrong with the screen reader experience of IDEs in our CodeTalk paper. Happy to chat more if you find this information insufficient or unclear.