One of the features requested rather often for the Unity Accessibility Plugin is WebGL support. I’m happy to announce that it is almost ready to ship. In this post I’ll explain a bit about the problems WebGL presents – and their solutions.
Check out the UAP WebGL Demo
WebGL and screen readers
My latest update to the UAP (v1.0.4) includes experimental support for WebGL. Experimental is my way of saying that the screen reader portion of the plugin works – that is, UI navigation and interaction – but there is no speech synthesis. To use the plugin with WebGL, developers currently have to write their own text-to-speech solution or purchase a third-party plugin and connect it to the UAP.
This is because browsers themselves do not provide speech synthesis for app content. They rely on the user having a screen reader installed on their system instead. That works for text-based content in the page, but not for 2D and 3D content rendered in Unity via WebGL: Unity cannot access a user’s screen reader through the browser.
I’m currently working on the v1.0.5 release of the plugin to solve this issue.
Not all Text-to-Speech is created equal
There are of course high quality speech synthesis plugins available on the asset store, which can be made to work with the UAP with just a few lines of code. But I would like the plugin to be as self-contained as possible and to work out of the box as much as I can get it. So I want something I can include with the plugin.
I could possibly get some low-quality speech synthesis up and running based on some of the free and open source tech out there. But there’s no point in including a crappy text-to-speech engine with the plugin that produces barely intelligible audio.
Do you want your app to sound like this?
Keep reading – it gets better!
Google Cloud Text-to-Speech
Google offers a cloud-based Text-to-Speech system. Cloud-based means that the Unity game sends the text it wants spoken to Google’s servers, which send back a sound file with the generated speech.
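That round trip can be sketched as a plain HTTP call. The endpoint, request shape, and base64 response below follow the public Cloud Text-to-Speech v1 REST API; the API key and the voice name are placeholders, and this is only an illustration of the protocol, not the UAP’s actual integration code.

```typescript
// Build the JSON body for a Cloud Text-to-Speech v1 "text:synthesize" request.
function buildSynthesizeRequest(text: string) {
  return {
    input: { text },                       // the string to be spoken
    voice: { languageCode: "en-US", name: "en-US-Wavenet-D" },
    audioConfig: { audioEncoding: "MP3" }, // ask for base64-encoded MP3 back
  };
}

// Send the request; the response's audioContent field is base64-encoded
// audio that a WebGL app can decode and play back.
async function synthesize(apiKey: string, text: string): Promise<string> {
  const url =
    `https://texttospeech.googleapis.com/v1/text:synthesize?key=${apiKey}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSynthesizeRequest(text)),
  });
  const json = await res.json();
  return json.audioContent; // base64-encoded audio data
}
```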
I’m looking for a solution specifically for WebGL, meaning players will already be online. So an internet-based solution is a viable option.
And the quality is great. Here’s the same sentence from the previous example, generated from Google:
Go ahead and listen to the first one again, to compare…
I was a little worried about the speed of the system. Being cloud-based, the process is a little slower than a screen reader, which runs directly on the end user’s system: not only does the speech have to be synthesized, the resulting audio file also has to travel over the internet to the end user.
But after testing the system both in the Unity Editor and in an actual WebGL app, I found that the latency is not bad at all.
If you want to know how good or bad the latency is, here’s a demo:
UAP WebGL Demo
Some assembly required – but not much
The upcoming v1.0.5 release of the UAP plugin will include fully integrated support for the Google Text-to-Speech service. All a developer will have to do is to set their API key in the plugin settings and it will work. I will include a guide with lots of pictures on how to create the API key.
Because the service is part of the Google Cloud, it isn’t entirely free, and you need a Google account for it. You do get a free trial of $300 in credit, valid for 12 months, which is pretty awesome.
Here’s the link, if you want to test it out:
No, I’m not getting paid by Google, I just think a direct link is safe and convenient.
Other TTS Plugins
No solution is right for everyone. Developers might not want to use the Google Cloud API, or they might already own a different Text-to-Speech plugin.
With the new release, I reworked the way the UAP handles Text-to-Speech to make it very simple to connect any other TTS plugin a developer might want to use. Not that it was complicated before, but it has gotten a lot easier. I also added step-by-step instructions to the documentation.
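To give a feel for what such a hookup involves, here is a hypothetical adapter sketch. This is not the UAP’s actual API – the interface and names are mine – it only shows the shape a TTS backend typically takes: speak, stop, and a busy flag the navigation layer can poll.

```typescript
// Hypothetical TTS backend interface (illustrative only, not UAP's real API).
interface TextToSpeechBackend {
  speak(text: string, onFinished: () => void): void;
  stop(): void;
  isSpeaking(): boolean;
}

// A stub backend that "finishes" instantly – useful for wiring and testing
// before a real synthesis plugin is connected.
class SilentBackend implements TextToSpeechBackend {
  private speaking = false;
  speak(text: string, onFinished: () => void): void {
    this.speaking = true;
    // A real backend would start audio playback here and invoke
    // onFinished once the clip has finished playing.
    this.speaking = false;
    onFinished();
  }
  stop(): void {
    this.speaking = false;
  }
  isSpeaking(): boolean {
    return this.speaking;
  }
}
```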
Screen Reader Detection
I’ve written before about how important it is to detect whether your users are or (more importantly) are not using a screen reader.
Read the post here: Playing Hide-And-Seek with TalkBack
But it is nearly impossible to auto-detect whether a screen reader is installed or running on a user’s system when all you have is a WebGL app.
The reason is simple: the app could run in a myriad of different browsers and operating systems, and each system has a range of different screen readers that may or may not be installed. There is simply no way to get that kind of system information, especially not from within a browser.
In other words, on WebGL the UAP cannot automatically detect whether it should turn itself on or off. The only workaround is to pass a parameter to the game, for example via the URL. This is unfortunately still up to the developer.
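A minimal sketch of that URL workaround, assuming the page is loaded with something like `?screenreader=1` (the parameter name is my own invention; on the Unity side the same page URL is exposed through `Application.absoluteURL`):

```typescript
// Decide whether to enable the accessibility layer from the page URL.
// Returns true only when the "screenreader" query parameter is explicitly
// set to "1" or "true"; anything else keeps the plugin dormant.
function screenReaderRequested(pageUrl: string): boolean {
  const params = new URL(pageUrl).searchParams;
  const value = params.get("screenreader");
  return value === "1" || value === "true";
}
```

The check is deliberately opt-in: a missing or unknown value leaves the plugin off, mirroring how a sighted user without a screen reader would expect the game to behave.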