Neat.
How accessible are these sorts of immediate-mode UIs to those who rely on screen readers to use computers, i.e. the blind?
I've yet to see one that implements screen reader support, especially x-plat. Accessibility varies wildly between platforms, so this is hard.
Another area where many OSS frameworks have issues is internationalization and localization. Many are missing support for non-Western text.
I can't speak generally for all immediate-mode libraries, but I think they primarily optimize the ease of developing "debug GUIs" alongside a larger app like a game or a game engine. For example, something that you might quickly plug into your game to profile, edit and configure assets, etc. For big-audience GUI apps, it's probably better to go for more traditional GUI toolkits.
I think Dear ImGui does support non-Western text pretty well nowadays, so those sorts of things have improved a fair bit.
AFAIK the problem historically was that screen readers relied on operating system hooks, and those hooks were often deeply integrated with the system's UI framework. So if you're rendering through a lower-level 3D API you're pretty much locked out, unless you maintain some sort of hidden "shadow DOM" of invisible system UI components (but then the question is whether screen readers even recognize hidden system UI components).
There is now a portable accessibility toolkit which provides a common wrapper API over platform-specific accessibility APIs (GitHub - AccessKit/accesskit: Accessibility infrastructure for UI toolkits), but it is also missing adapters (for instance, I don't see one for web applications, and as far as I'm aware there is no web accessibility API which could be the base for an AccessKit adapter).
In the end it's a chicken-and-egg situation: not enough UI framework users care about accessibility to justify putting work into it, and they instead demand that other missing features be implemented first.
PS: I would also expect image-recognition-based screen readers to help solve the accessibility problem from the other end, so maybe the whole effort to integrate the traditional accessibility APIs isn't worth the hassle if some fancy AI-based solution can serve as a "mediator" between the UI and the user.
PPS: I think a good plan for custom UI frameworks would be "bring your own accessibility implementation": an interface which exposes accessibility-friendly data from the internal UI representation, so that user-side code can glue that to either a platform-specific accessibility API or a cross-platform API like AccessKit. Dear ImGui, for instance, already delegates rendering and the entire window system glue to the API user; "bring your own accessibility glue" would just be an extension of that philosophy.
I've yet to see one that implements screen reader support, especially x-plat.
There are some projects in the Rust ecosystem which are integrated with AccessKit, most notably egui, see the list of projects here:
PS: I would also expect image-recognition-based screen readers to help solve the accessibility problem from the other end
I can't say I know this space well at all, but I also feel like this could be a very successful way to implement screen reading for a wide range of apps.
OCRing a screen doesn't give the best results, but it's better than nothing.
Accessibility APIs provide navigation, visibility into element relationships, knowledge of which element has focus, and the ability to read out information that doesn't exist as text on the screen.
Except for the ability to read text that's not visible on screen, I would expect a good UI system to provide all those hints to an OCR-based screen reader (e.g. the UI element with focus should be visibly different from other UI elements, the element hierarchy should be defined by visual containment, etc.).
After all, people without disabilities are also limited to the visual cues the UI system provides and don't have access to the internal data representation.
This is an area where all the AI researchers could do something actually useful: build a universal system which can describe a UI to a blind person and in turn control the UI by voice commands, and which is flexible enough not to be hardwired to the visuals of a specific UI (of course this won't happen, because there are no billions of VC money to rake in by fixing accessibility).
Designing for accessibility is human labor. OCR/ML/technology can help, but will not replace that labor.
For example, I've seen a lot of people attempt to make apps that will translate sign language into text. This can help, but it does not actually create a connection with the Deaf community and will not help you understand their needs, history, and experiences in the same way taking a sign language class will. Eventually, it begins to feel less like an earnest attempt to increase accessibility (and I do believe people want to help) and more like an attempt to abstract away differences between people.
Accessibility is a very big and important topic, and it extends far beyond screen readers:
Etc.
I think we understand that developers working on passion projects with little time and few resources don't necessarily have an obligation to consider all possible accessibility concerns, but at least considering some from the beginning and being intentional is a better start for most of us, especially when designing a framework or library. This is something the web did right on many accounts, and a lesson we need to learn.
I think we can assume (as is demonstrated by this thread) that most of us care about this on a human level, are willing to engage with accessibility, and are interested in finding good technical solutions. So this isn't a criticism; I just think it is important to remember that accessibility is social more than it is technological.
I don't believe it is possible to make a universally accessible application, but we can make choices that cater well to many needs (consider your audience!) with less effort than one might think.
That really wouldn't be a problem if the different platform owners (Microsoft, Apple, Google+Mozilla, and whoever feels responsible for the Linux desktop) provided the necessary hooks in a somewhat standardized way, without having to opt in to a specific UI framework (e.g. something like OpenGL, but for accessibility instead of 3D rendering).
Such an accessibility standard would also need to nudge programmers in the right direction (which accessibility features are must-haves and which are nice-to-haves), and the code talking to this hypothetical accessibility standard library would need to be bloat-free and low-maintenance (otherwise there's no way hobby coders will commit to it).
In other words, I see the ball mainly in the court of the operating system / platform owners. If adding accessibility across all platforms is just a couple of hours of work for a hobby coder, then nobody will say no. If it is considerably more work, then there's little hope of it happening.
Agreed on all counts.
Is there some way in which this is different from an app which translates, let's say, Thai into English? I'm certain to learn a lot more about the Thai people if I learn Thai, but I'm also not going to do that.
Yet it does seem like it would create a better connection with some specific Thai person, if I were able to roughly understand him or her, rather than not at all (or only in English, which is practically speaking not a symmetrical situation but, formally, it is).
Is there something distinctive about ASL here which makes it different?
My concern with this kind of thing is the impression it creates: that a partial solution is worse than no solution at all. I disagree with that; I think a partial solution is better than no solution at all.
The crux being this:
If my alternatives are the following: put in extra effort to write accessible code, thereby drawing the attention of people who think this way, whom I then have to deal with, or alternately, disengage entirely, guess which I'm going to do.
Please remember that free software is to a large extent written by volunteers, and is both created and provided without fee or remuneration. It also comes with NO WARRANTY, as the licenses tend to style it, including without limitation that of fitness to purpose.
It's never done, there's always more to do, and there's never enough time to do all of it. The saving grace is, if you really need it to do something it doesn't, well, you can do that yourself.
Maybe not as a language per se, but I can understand a big difference in the needs of those living in a society:
In the latter case, I've even heard of people picking up English when playing games.
That was the opposite of what I intended, I am sorry.
A further difference being that the vast majority of the Deaf in the United States are fluent in English, but for obvious reasons this concentrates on the written form. It's sometimes possible to learn to speak it; in fact, Helen Keller rather famously did so, but it's never easy, and many Deaf don't try, nor should they be expected to.
The prevalence of smartphones was already a help here; the Hiptop was very popular with the Deaf because its keyboard enabled quick writing. Machine translation of sign language, and in the other direction speech-to-text, are enabling technologies, remarkable ones at that.
@glfmm I know your intentions were good, and in retrospect I should have moved the thread first and then replied. I've seen some remarkably bad behavior in issue boards for FOSS around this general subject, which flavored my response somewhat, and I apologize if I came off as hostile; that was not my intention.
I think most, if not all, of us would like our work to be useful to anyone and everyone, and especially not to shut out people with disabilities, who already have to deal with a lot of that. There's also a place for minimalist immediate-mode GUI projects, and almost all mature software starts with someone scratching their personal itches, in a way which inevitably reflects their own fluencies and abilities.
I've started work on integrating AccessKit into dvui (Integrate AccessKit · Issue #151 · david-vanderson/dvui · GitHub). So, hopefully we'll have a good option very soon.
I'd certainly appreciate help with testing from people who understand the topic better than me, once I've got something to show.
Almost all widgets are supported now. Outside of a few issues where we need some upstream fixes (scrollbars and the splitter pane), everything else should support both reading and actions.
I'd welcome any help with testing from, well, anyone, but especially people who regularly use assistive technologies. DVUI's demo application contains every widget, so it is a good application to test with.
See dvui/readme-accessibility.md for more info on what is working for each widget type, and please raise any issues, suggestions, etc. on the DVUI GitHub rather than here.
As far as I can tell, this will be the first non-Rust IM GUI that supports accessibility.
I'm pretty sure that would be Go's Gio UI library, albeit only on Android (which I think might make it the first IM GUI, period, with any accessibility, given that it had the feature while AccessKit was still not usable).
Pretty much everything Gio does is based on opcodes internally, and for accessibility they have semantic ops: a widget writes ops describing itself to the buffer, and at the end of the frame that information is passed to the app's run loop (but, as noted, it doesn't look like any platform aside from Android uses that semantic information).
AccessKit has a C API and makes it quite possible to build accessible user interfaces. One of their C examples goes as far as using SDL2 and creating a virtual UI that technically does not exist anywhere but in the accessibility tree: an accessibility client will tell you that controls exist, but they actually do not.
The downside to AccessKit is that, on Windows at least, you need to use the Windows subclassing feature to force-subclass the window handle to allow AccessKit to do what it needs to do. I do not believe this is a problem on any other platform, however. The other issue is that AccessKit is still in its early development, so it may not (and does not) implement full support for some widgets (like tree views).
If you want to make any immediate-mode UI accessible, AccessKit is where you'd start, as it attempts to unify accessibility stacks behind a common interface. Unfortunately, immediate-mode UIs such as Dear ImGui (that is, ones that work the way it does) can't be made accessible all that easily, because Dear ImGui has no concept of keyboard input, widget state, or really much else, so it becomes extremely hard to know what to pass to AccessKit. On the other hand, toolkits such as TGUI are easier because they have state and are similar to retained-mode toolkits, so making them work with AccessKit is a much easier venture.
You only need to use a subclassing adapter if you don't have direct control over the message loop on either Windows or macOS. On DVUI's DirectX 11 backend and on Linux, subclassing is not used. But I've not seen any practical difference between the two, so I'm not sure why that is such a concern.
Yes, there are widgets that aren't supported, but I didn't have any problems with basic widgets, outside of actions on scrollbars and splitter panes. Tree widgets are supported.
I don't see why Dear ImGui couldn't use AccessKit; it is not so different from DVUI. Each widget provides a full copy of its relevant state to AccessKit each frame, along with the currently focused widget. This only happens if a screen reader is active, so there is no performance impact for other users. It was limited documentation that slowed us down, rather than any architectural limitation in AccessKit's support for IM UIs.
I had a very positive experience using AccessKit, and with the support I got from the team there. The biggest downside is that it is not a pure C library that I can compile with Zig.