Haiku needs a screen reading utility
|Reported by:||richienyhus||Owned by:||mmu_man|
All modern operating systems now come bundled with a basic level of accessibility features, namely a screen reader. While these features are out of scope for Haiku R1, they should be within the scope of Haiku R2.
Screen readers are made up of three parts:
 * A backend speech synthesis system
 * An API for 1st-party & 3rd-party access to the speech synthesis library
 * An OS-bundled screen reading utility for accessibility
For instance, Orca is the GNOME screen reader that communicates with the Accessibility Toolkit via the AT-SPI API. On Android you have Google Text-to-Speech, which is used by Google TalkBack to provide speech feedback on what is being enacted, displayed or selected. On Windows, Microsoft Narrator uses the Microsoft Speech API.
BeOS did have 3rd-party applications: TalkBox, which used the Festival synthesis library, and SpeakIt, which used a database of spoken words. The author of TalkBox has offered to open-source it in the past. However, before creating or adapting a utility, we need a speech synthesis library for it to use.
||Library||Language||License||
||EkonFain for CJK||C++||GPL||
Both 'Festival' and CMU's 'Festival Lite' (Flite) use the FestVox dev tool. These two would be the best choice, with the lightweight Flite as the default and the heavyweight Festival swapped in if needed.