one feeling that has resonated with me is that accessibility, conceptually, has become a mandatory part of not only how Apple designs its products, but of the Apple ecosystem at large.
I am happy to report that a long-standing bug in Firefox Nightly has been fixed: it prevented aria-activedescendant from working properly for freshly inserted nodes.
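For those unfamiliar with the pattern: aria-activedescendant lets a container keep DOM focus while pointing the screen reader at one of its children, and the bug concerned children inserted after the container was already focused. A minimal sketch of the pattern (the ids and option text here are illustrative, not taken from the actual bug report):

```html
<!-- A listbox that keeps DOM focus on the container and moves the
     "virtual" focus by updating aria-activedescendant. -->
<ul id="langbox" role="listbox" tabindex="0" aria-activedescendant="opt-en">
  <li id="opt-en" role="option">English</li>
  <li id="opt-de" role="option">Deutsch</li>
</ul>
<script>
  // Insert a fresh option and immediately make it the active descendant.
  // This is the case that used to break: the newly created node was not
  // yet known to the accessibility tree, so nothing was announced.
  const box = document.getElementById('langbox');
  const opt = document.createElement('li');
  opt.id = 'opt-fr';
  opt.setAttribute('role', 'option');
  opt.textContent = 'Français';
  box.appendChild(opt);
  box.setAttribute('aria-activedescendant', 'opt-fr');
</script>
```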
Good morning! If you speak German, I invite you to follow my second account @MarcoZehe. Over there, as is almost tradition from Twitter, I write (and talk) in German primarily, whereas this account here is English only.
I am totally fascinated by this idea and the points he is making. Still thinking what I’ll do when Twitter clients largely stop working in August.
[Why email is the best social network | Computerworld](https://www.computerworld.com/article/3267698/email/why-email-is-the-best-social-network.html)
I just started a second micro blog on here, on a different account, with German language content. And what can I say? I immediately found an accessibility bug: the lang attribute is hard-wired to English, making German sound terrible.
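For context, screen readers pick their speech synthesizer language from the HTML lang attribute, so German content served with an English lang gets read with English pronunciation rules. The fix is a one-attribute change; the markup below is an illustrative sketch, not micro.blog's actual template:

```html
<!-- Wrong: German content declared as English. A screen reader
     will read it with an English voice, which sounds terrible. -->
<html lang="en">
  <body><p>Willkommen auf meinem Blog!</p></body>
</html>

<!-- Right: declaring the actual content language lets the screen
     reader switch to a German voice automatically. -->
<html lang="de">
  <body><p>Willkommen auf meinem Blog!</p></body>
</html>
```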
These are brilliant!
This is so true on many levels, and I speak from experience.
Was playing with German Grade 2 braille input in NVDA earlier and found that it suffers from the same problems that make German Grade 2 braille input so utterly useless on iOS. Tomorrow, I’ll test with JAWS to see if it is also affected.
[This article](accessibility.blog.gov.uk/2018/05/1…) demonstrates what I also tell all clients I work with when it comes to accessibility: do real-world user testing. They’ll always find things you missed.
It is really interesting what has come out of Microsoft in terms of accessibility over the past few years. And it is being recognized and used by other big players. This can only be good! Accenture fosters inclusive workspace to empower employees with Microsoft 365
For those of you at WWDC this year, I strongly encourage you to check out the accessibility lab Apple runs every year. Especially if you develop apps yourself, you might learn a thing or two, and you can even ask engineers directly about specific problems in your apps.
Fun fact: Learning the web site and apps of the micro.blog service, I’ve come across some accessibility problems I have not yet seen elsewhere. Both the website and iOS app have some very peculiar handling of text input, for example.
After my first day, I can say that I really like micro.blog. The philosophy definitely resonates with me, and by now, many pieces have clicked into place. I don’t know yet what exactly I’ll be using it for, since my main site has been online for ten years and is very well-known, but we’ll see. For now, it’s about getting more familiar with everything and finding a custom domain to hook this up to. :) Thanks to everybody who has been so welcoming so far!
Yes, you read correctly, I quit. I will not make the 30 days.
The reasons are different from the first time I did this experiment. Back then, there were actual show stoppers, things I needed to be able to do but couldn’t: entering accented and umlaut characters, using apps like the German railway company’s app to book tickets, and so on. The reasons were many.
This time, there is really only one deal breaker, which I will come to at the end. But even without it, there are so many things that get in the way of productivity instead of supporting it that their sum has become a chore. In German, we have a word for this kind of condition: Leidensdruck. The clinical translation to English, according to dict.cc, is “psychological strain”, but I don’t think that entirely covers it in this case. Leidensdruck describes a condition where several items add to the strain (or, if you ask Google Translate, the suffering) to a point where it becomes unbearable and one looks for an exit strategy.
Let me reiterate some of the things that finally made me give up. First and foremost, and I believe this became apparent in my audio and writing, it is the scrolling. It gets in the way so often, and the annoyance with it has built up to the point that led to the rant you probably listened to.
Another point is editing. Inconsistencies like selection mode not turning off, and other quirks, make this a really unpleasant experience. The randomly rearranging menus don’t help either and get in the way, because I have to concentrate on the technology rather than on the actual work I want to get done.
Other things add to this, too. The fact that one can silence TalkBack by setting the device to silent mode, with no fool-proof recovery, is scary, and the bells and whistles are more distracting than they are useful.
Despite some things that I really like, such as the Do Not Disturb enhancements, the choice of browsers, and some other apps that I covered in this series, the way the technology gets in the way of my productivity is just too much for me to take. Even after I posted my second post today, the one about navigation solutions, the suggestions I got were symptomatic of that: a little app for this, another one for that piece of the task, oh, and a third works great if you turn off the camera bit, and so on. Instead of walking leisurely, I would be spending my time standing around and switching apps back and forth to get at all the information I need. Sorry, but no! Or maybe I’m just too old for that, I dunno. But anyway: no!
But the real deal breaker hit me earlier when I was on said walk. I was purchasing the DotWalker Pro app to try out its enhanced features and had to enter my password. This was the first time I had to enter it in a public environment. In this case, it was a quiet street, but even then, someone could have been near me, trying to glance over my shoulder. And this was when I realized that TalkBack does not have a screen curtain. On iOS, I have this enabled at all times. If prying eyes want to look at my screen, all they see is blackness. On Android, everything lies plainly before them. And no, turning the brightness down to a minimum does not completely hide the screen from those prying eyes.
One could argue that by using Android, I’d be sharing all kinds of data with Google and any government entity that wants to look at the data anyway. But despite the despicable things the NSA, GCHQ and other agencies are doing to our civil rights and privacy, I am even more worried about someone malicious trying to snatch away my phone and spend my money just because they were successful in reading my password from my fingers working the screen. And even if I think I am holding the phone in a way others can’t see it, there is no guarantee this is the case. I had my debit card stolen from me once because the thief was able to look up the PIN I had just entered into an ATM, in a way I thought had been safe. Once burned, twice shy!
I thank you all for sticking with me for as long as you have, and commenters like Anouk, who had great feedback and suggestions on quite a number of my posts! Also, thanks for all the feedback I got through other channels! This was once again an interesting experience, but I will stick to iOS as my primary mobile OS for the foreseeable future. I will continue to work as hard on Firefox for Android accessibility as I have in the past, not because my employer expects me to, but because I am convinced that it is the best and most accessible browser on Android, and I want it to stay that way.
For my personal use, however, this experiment once again concludes with a thumbs down, albeit for different reasons than last time.
So long, and thanks for the fish!
OK, after I posted my earlier bit on editing, I had some time, and good enough weather, to go outside and play a bit with a few navigation apps. I had already tried Google Maps in combination with Yelp while in Toronto on day 6 of this experiment, so I didn’t bother repeating that.
Instead, I looked at what both DotWalker and Intersection Explorer had to offer me. Those were two apps that were recommended by members of the eyes-free list.
Let’s deal with Intersection Explorer first. It uses finger swipes to virtually walk along streets in various directions. One can drag the finger in a particular direction and is told if there is a next intersection in that direction, and how far that is. However, I found several problems with this app.
For one, it does not recognize all street names. I live on a small street that Google Maps has no problem finding, but to Intersection Explorer, this and other surrounding streets are just unnamed roads. Only the main roads and mid-size streets in the vicinity are recognized.
The second problem is that it is not sensitive to the direction the phone is actually pointing. In Europe, many cities aren’t laid out in the same symmetrical, chessboard-like fashion as many cities in North America. At least in Germany, we are also not used to giving directions with terms such as “west” or “north”. So when I am standing on the pavement, I don’t necessarily know which direction I am facing, especially if the sun isn’t shining. I turned around, but the display of Intersection Explorer remained unchanged. Even after I had it refresh its current location, it still gave me a basically static, north-south oriented view of the vicinity that bore no relation to the direction I was actually facing. So while this may be useful in some scenarios when exploring a vicinity offline, it is not useful when walking.
DotWalker is more what I would expect, but still not quite what I need. It is very point-oriented. The good thing: It has a compass that immediately tells me which direction I am facing or walking. So in theory, I could use this to find out my bearing, go back to Intersection Explorer, turn the phone in my hand approximately so that its view of the world reflects the direction I am facing, and then try to find out where I need to go to by exploring that way. So I’d then need to remember that east is not to the right, but maybe to the bottom left. This is certainly doable, but quite cumbersome.
Another thing that DotWalker does not do for me, not even in the Pro version, which I briefly purchased for this experiment, is simply accompany me while I walk without a destination. I may just want to walk leisurely around the block and explore the surroundings without actually going somewhere specific. While the compass tells me the direction, I found no way of simply letting it tell me automatically that I am on a particular street, passing an address, or approaching an intersection, and, if I turned onto another street, tell me about that. It insists on having a specific point of interest to track. Not even the Look Around feature works without points of interest.
One way I often use BlindSquare GPS is to simply walk in a particular direction and broaden my horizon by letting it tell me what street or intersection I am reaching. In fact, this is how I gradually broadened my knowledge of my new living surroundings bit by bit. Combined with its interface to FourSquare, I now even know a lot more shops in the area than I would ever have been able to find out about otherwise. I have found no way to do even remotely the same with DotWalker, and I also didn’t find any other app on Google Play that does something similar. There seems to be an app called APH Nearby Explorer, but that is not available to me; it seems to be US or North America specific. So I couldn’t try it as an alternative.
I’m sorry to say it, but current Android offerings definitely don’t give me what I want in terms of navigation and leisure walking. All the offerings I tried insist that I have a specific destination to walk to, and that definitely does not cover my needs.
Today, as was also suggested the other day, and also as a follow-up from yesterday’s post, I will talk a bit about how editing works in Android. Note, however, that your results may vary. This is fairly new, in fact these editing functions didn’t even exist yet when I did my initial experiment last year, and some of this stuff requires Android 4.3 or newer. So if you’re on an older version, even though TalkBack may be up to date, it could be that these described functions aren’t available to you.
For this example, I am using an app I use often, the Twitter app, and am composing a new tweet. But first, let me quickly back up and talk about the global and the local context menu functions TalkBack offers. These are menus that come up if you do a double-motion gesture. For the global context menu, the gesture is down, then right, for the local context menu, which we will be using, it is up, then right. These gestures require a lot of practice, and even though I’m pretty skilled with them now, they still have a tendency to fail, as do the other double-motion gestures to bring up the notifications, simulate the Back button, etc.
Once either of these menus is activated successfully, which is indicated by a slightly ascending series of popping sounds, the screen accepts a gesture that is basically a circular motion of one finger. So you place your finger roughly at the top left and move in a circle clockwise or counter-clockwise to explore the available options. In the global context menu, these are:
<ul><li>Pause feedback: This will pause TalkBack feedback and turn off explore by touch. It can be re-enabled by locking and unlocking the device, or through the notification center, although that might require sighted assistance.</li><li>Read from top: This will start reading screen items from the top left. In some applications, it may also scroll the screen if possible. In Firefox, you can read whole web pages with this.</li><li>TalkBack Settings: A quick way to access TalkBack settings.</li><li>Spell last utterance: Self-explanatory.</li><li>Read from next item: Uses the last item you touched as a reference and starts reading from the next item. You can touch the heading of an article on a web page, for example, do the down, then right gesture, and choose this item to read the article, including scrolling. Note that it doesn’t suppress the earcons while doing so; you’ll hear it happily, slowly ascending as it scrolls.</li><li>Quick navigation menu: Grabs all items currently visible on the screen, puts them in a circular menu, and allows you to jump to any of them quickly.</li><li>Cancel, which is in the center of this circle, not part of it: Exits the menu.</li></ul>To actually execute one of these menu items, lift your finger, similar to releasing a keyboard key. In the case of the Quick navigation menu, this will open another such menu, indicated by another series of ascending pops.
The Global Context Menu is pretty static. The local context menu, however, is not. There are a couple of items that appear regularly, such as:
<ul><li>Change Granularity, which effectively changes what left and right swipes move by. The options usually are Default, Character, Word, Line, Paragraph, and Page. In some situations, this menu immediately appears when nothing else is available; in other cases, it appears as one of the sub menus of the main local context menu.</li><li>Cursor control menu: This is only available in text fields, and only if something has been input. This is where all the magic below happens. It offers some common functions such as moving to the beginning and moving to the end. Other items are inserted and removed dynamically, depending on whether text is selected or not.</li></ul>The Change Granularity menu also applies in text fields, where it actually controls the cursor. If you’re familiar with how editing works on iOS, this is not dissimilar: rotor settings of character, word, and line also move the cursor when a text field is focused and up and down flicks are used.
So now let’s do a bit of editing. Going back to Twitter, I start a new tweet by focusing and double-tapping the “What’s happening?” item at the bottom of the Home timeline. I then enter something bogus like “This is a test.”
What I want to do now is change “is” into “may be”. First, I need to move my cursor to the word “is”. After I have typed my sentence, I do an up, then right gesture to bring up the local context menu and select Change Granularity. In the sub menu, I change to Words granularity.
Next, TalkBack announces that “test” is the current word. I swipe left twice until I hear the word “is”.
Next, I enter the local context menu again and select Cursor Control menu. From this sub menu, I select Start Selection Mode.
This changes how my left and right swipes work yet again. In this mode, every swipe in one direction selects text, and a swipe in the other deselects. So if I start swiping right, the selection will be extended in the forward direction, and swiping left will unselect. If I start swiping left, it will select backwards, and swiping right will unselect.
I then swipe to the right. And here’s a gotcha! The granularity we selected above is still in effect. It now selects the whole word “is”, excluding the space character following it. In this case, that is what we wanted anyway, but if you want to select by character after having moved by word to get to where you wanted, you have to change the granularity to character before starting to select. Note also that TalkBack didn’t announce that the word “is” got selected. It only spoke the word.
Now that “is” is selected, we can type the “m” of “may be”. The keyboard is still visible. TalkBack will then say something like “is replaced by m”. We can then continue to type the remaining characters of “may be”. Remember that we’re still not at character granularity, so I will switch to it now via the local context menu and the Change Granularity sub menu, to be able to proof-read what I just typed. I am told that I am on a space character, and can swipe left and right to check that “is” actually got replaced by “may be” as intended.
And here’s the next gotcha! Selection mode didn’t end when the selected text got replaced by what I typed. So my proof-reading actually caused the text I swiped over to get selected. I have so far not found a way to read only the selected text in a text field. It doesn’t get indicated when selecting or unselecting, when swiping to the field again from another control, etc. The fact that selection mode didn’t end when I inadvertently selected the text can lead to serious data loss if one doesn’t remember to explicitly turn off selection mode first. And what makes this even worse is that there is no undo functionality that I can see. So if I had, say, accidentally hit a key and replaced three paragraphs of text with a period, there would currently be no way to undo that. I tried the local context menu, of course, shook the device, and looked on the keyboard for an undo function. Nothing. Excuse me, but this is scary!
Back to some more explanations: If text is selected, the Cursor Control menu receives a few more items such as Cut and Copy, and if something is on the clipboard, also a paste item. So this menu is pretty dynamic and indeed very context sensitive.
Conclusion: Editing is now certainly possible, but there are a few gotchas that one really needs to be aware of that are not immediately obvious. In fact, if not careful, they can lead to serious data loss. For one, selection mode has to be terminated explicitly. Second, selected text is not indicated. Third, there is no undo function, or it is so well hidden that I haven’t found it. The Change Granularity sub menu, at least on my device, also had a tendency to change the location of items, so the different granularity levels didn’t always appear in the same location, or item order, as at other times. When one has to concentrate on keeping track of the selection mode, selected text and other parameters, this fact doesn’t really help much. :/
I tested this on both my Nexus 5 running KitKat and the work Nexus 7 running the Android L preview. Both show the same behavior.
My conclusion is that, while editing is certainly possible, it has so many unpolished items where I have to fight the quirks of the technology rather than getting actual work done, that it is standing in my way more than helping me be productive.
OK… As easily as the blog posts wrote themselves up until now, this one was really hard. In fact, I went all day without a good idea of what this post would be about. I had some requests from comments: one was testing some more outside navigation, another was testing Spotify for Android, and a third was the question of how connected Bluetooth keyboards work nowadays.
The second request can be fulfilled, sort of. I am not a heavy Spotify user. In fact, whenever I use the service, I end up just looking for a specific album by an artist and listening to that. I can do that on iTunes, too, and just buy the album. ;) I never actually got into this random selection thing Spotify offers. But I’ll re-download the app in the coming days and give it a spin, I think.
The first one is definitely still on my list; I was just kept from doing it until now by either bad weather or my schedule.
The third one was the one I finally decided to tackle. I had to dig the Bluetooth keyboard I have out of a shelf, where it was buried under several other items. The batteries had to be replaced, and then I was ready to go.
Pairing went without a hitch. I was asked to enter a PIN on the keyboard, and pairing was instant.
I also have TalkBack 3.5.2 Beta 2 on my device, which has enhanced keyboard support. It allows navigating by items, similar to ChromeVox or VoiceOver on the Mac or iOS. You can also activate items and do some other stuff. And of course, there is the normal arrow key navigation within apps that have set proper next or previous focus items.
When navigating through lists, I immediately saw what Andross reported in a comment to my post about scrolling yesterday: when reaching the edge of the screen and it scrolls, one of several inconsistent things can happen. Either the same item gets repeated, nothing gets spoken, or some other seemingly random item gets spoken in the meantime. It also seems to depend on the list or how it is being utilized. In Settings, it works reasonably well; in Twitter and other apps, it tends to fall over more often.
However, the real problem started when I tried to write my first test tweet. Every key I typed would be repeated over and over until I hit another key, which then would be repeated over and over. I was not holding them down, they were inserted automatically. After several minutes of experimentation, I found that the only way to get rid of this behavior was to shut down TalkBack completely, and then restart it with the keyboard still paired. This beta (don’t know about any other version) definitely doesn’t seem to take kindly to having a keyboard paired while it is running. Once TalkBack had been turned off, the keyboard started behaving normally in the edit field, which was confirmed by my wife. Also, after she restarted TalkBack for me, because the key stroke would not work, the keyboard continued to behave normally. Note, however, that this only showed itself once I started typing something in the text field. Cursor or item navigation before that were perfectly normal.
I was asked in a previous comment or tweet (sorry, don’t recall), if cursor keys would speak the letter or line being moved to. The sad answer is: Nothing gets spoken if navigating inside a text field using the arrow keys. Neither by character, nor by line. In essence, I had no control over where my cursor was once I started moving the arrows and not just type. This seems to have been an issue in TalkBack for some time, and is definitely still a problem in the current beta.
I also noticed several other oddities with the device as soon as the keyboard was paired. For one, I could no longer adjust the media playback volume through the phone’s rocker switch at the side. It would always give a bumping error sound and remain on the level that it was at. As soon as I unpaired the keyboard, things returned to normal.
Also, the screen sometimes acted funny when touched. TalkBack either didn’t react at all, or reacted with a delay and then caught up with all motions that happened in the 2 to 3 seconds before it unfroze. This also went away once the keyboard pairing was gone.
The fact that you cannot edit your text after entering it with the paired keyboard quite defeats the usefulness of having a keyboard paired. Yes, typing is faster, but that is pretty pointless if you have to go back to using the local context menu in TalkBack and switch granularity levels to character or word or whatever to control the cursor.
One more thing about the keyboard that acted up before TalkBack had been shut down and restarted: I did not observe this when I was using the braille keyboard the other day when experimenting with my Focus 14 blue. I had paired that without restarting TalkBack afterwards. So this definitely is localized to the combination of TalkBack running while a keyboard is being paired, and then using that in a ttttttttttttteeeeeeeeeeeeeeeeeeeeexxxxxxxxxxxxxxxxxxxxxxtttttttttttttttttttttt fffffffffffffffiiiiiiiiiiiiiiiiiiiiieeeeeeeeeeeeeeeeeeeellllllllllllllllllllllddddddddddddddddddddddddddddddddd………………………
OK, as already announced yesterday, today’s post is all about scrolling. And since I decided to do this post, things actually got more pronounced. This may have been because I was giving this even more attention, but the more likely explanation is that this has been creeping up on me through day to day use and finally reached a boiling point.
Android has several ways of scrolling lists or pages of information. You can slide two fingers up or down to scroll down or up, respectively. You can do a quick double-motion gesture, right then left, to go to the next page, or left then right to go to the previous one. “Page” in this case means the visible screen. In fact, this latter gesture was introduced shortly after my original “Switching to Android full-time” experiment post in the summer of last year. Also, you can swipe continuously, and when at the end of one screen, it will flip to the next, or, when swiping left, to the previous one.
And here’s already one problem: The last item you swiped to on one screen gets repeated almost every time when TalkBack flips pages. There is almost never an instance where there is a smooth transition from one screen to the next. Try it in Settings: Open Settings, and from the top most item, swipe to the right and listen to what happens when the screen is flipped to the next visible page. This is quirky at first, becomes annoying later, and completely unnerving when it hits you the 30th time in one day, because it constantly interrupts your train of thought or your enjoyment of a timeline, etc.
The next problem is with the two-finger sliding: there is no defined way to move by an exact number of items. Depending on how fast and far one slides, it might scroll more or fewer items. Especially for someone blind, and even more so for beginners, this is hard to deal with because there is no certainty here. The distance is virtually unpredictable.
Going back to the item that one reaches at the top or bottom of the currently visible screen: when double-tapping to activate it, it might happen that either nothing happens (best case) or something entirely different than what was spoken and intended gets activated (worst case). That is because the item one swiped to, and which just got read, is only partially visible on screen. Since TalkBack tries to simulate a finger tap in the center of the item, it might end up hitting a spot somewhere completely different. It often happens to me that when I want to open a tweet to reply to it, I am thrown into a new tweet composition screen instead, or open one of the Discover or Direct Messages screens, because those happened to be at the spot where TalkBack thought it needed to simulate a tap. The only solution to this problem is to try to slide the list just that tiny bit up or down so the tweet becomes fully visible, and TalkBack has a chance to actually activate it. Between reading the tweet, deciding to reply, and actually getting to the point where one can reply, 30 seconds might have passed that totally interrupt the thought, or may even cause it to be forgotten entirely over having to fight the technology.
Oh, and then there is the topic of trying to quickly get to the top or bottom of a long list. On iOS, there is a quick way of focusing the status bar and double-tapping it to scroll any list to the top instantly. Last night, while fighting jet lag and trying to find sleep, I asked on Twitter whether there is a way to do this on Android, too. There were several replies, some suggesting there is no way, others suggesting the quick double-motion gesture up then down might do the trick, until today, when I received a reply saying that flicking rapidly with two fingers in the list will cause an accelerated scroll of the list in question.
After quite some practice, I finally got it to work for the first time. And like the sliding gesture, this one has a grave problem of unpredictability. It does an accelerated scroll, oh yes! But you never know if it will actually make it to the top or bottom, or if it will strand like a fish out of water half-way. I found that repeating the gesture several times rapidly accelerates the scrolling further, sending it into a real frenzy where it might even cause new content to load, because it suddenly, miraculously, reached its destination while I was still flicking with two fingers.
These many levels of unpredictability are frustrating, to put it mildly. Technology constantly gets in the way of productive work here. The insecurity of these actions adds up to a level where I, at least, feel I can no longer really trust my device. I can only try to imagine how this must feel to a beginner who is just getting familiar with the concept of a touch screen. Members of the blind and low-vision Android community often argue that iOS rotor gestures are similarly unpredictable. I can only say: that is nothing compared to the level of unpredictability the different scrolling behaviors and quirks throw at a user. Yes, the rotor gestures are hard to get right at first, too. But there is a system of predictability to what a single-finger flick up or down actually does in every situation, even when the rotor is not set to anything.
If you would like to actually hear this set of behaviors and problems in action, for this post I actually did an accompanying audio recording that walks through the items I’ve discussed above. It’s roughly 20 minutes long. Enjoy! Or don’t. ;)