that attempts to identify and interpret what is being displayed on the
screen (or, more accurately, sent to standard output, whether a video
monitor is present or not). This interpretation is then re-presented to the
user with text-to-speech, sound icons, or a Braille output device.
Screen readers are a form of assistive technology (AT) potentially useful
to people who are blind, visually impaired, illiterate or learning
disabled, often in combination with other AT, such as screen magnifiers.
A person’s choice of screen reader is dictated by many factors, including
platform, cost (even to upgrade a screen reader can cost hundreds of U.S.
dollars), and the role of organizations like charities, schools, and
employers. Screen reader choice is contentious: differing priorities and
strong preferences are common.
Microsoft Windows operating systems have included Microsoft Narrator, a
light-duty screen reader, since Windows 2000.
Apple Inc.'s Mac OS X and iOS include VoiceOver, a feature-rich screen
reader.
The console-based Oralux Linux distribution ships with three console
screen-reading environments: Emacspeak, Yasr, and Speakup.
BlackBerry 10 devices such as the BlackBerry Z30 include a built-in screen
reader.[1] There is also a free screen reader application for older, less
powerful BlackBerry (BBOS7 and earlier) devices.[2]
There are also popular free and open-source screen readers, such as Orca
for Unix-like systems and NonVisual Desktop Access (NVDA) for Windows.
The most widely used screen readers[3] are separate commercial products;
prominent examples in the English-speaking market include JAWS from Freedom
Scientific, Window-Eyes from GW Micro, Dolphin Supernova by Dolphin, System
Access from Serotek, and ZoomText Magnifier/Reader from Ai Squared. The
open-source screen reader NVDA is gaining popularity.
III. Types of screen reader
1. Command-line (text) screen readers
In early operating systems, such as MS-DOS, which employed command-line
interfaces (CLIs), the screen display consisted of characters mapping
directly to a screen buffer in memory and a cursor position. Input was by
keyboard. All this information could therefore be obtained from the system
either by hooking the flow of information around the system and reading the
screen buffer, or by using a standard hardware output socket[4] and
communicating the results to the user.
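As a rough illustration of this text-mode approach, the sketch below (in
Python, purely for exposition) models the screen as a flat character buffer
plus a cursor position and reads back the line under the cursor. The 80x25
geometry, the buffer contents, and the speak helper are illustrative
assumptions, not the memory layout or interface of any particular DOS-era
screen reader.

# Minimal sketch of a text-mode "screen reader" over a character buffer.
# Real DOS-era readers hooked the video text buffer in memory; here a plain
# Python list stands in for those 80x25 character cells.
COLS, ROWS = 80, 25

def speak(text):
    # Stand-in for a speech synthesizer or Braille display driver.
    print("[speech] " + text)

def read_line_at_cursor(buffer, cursor_row):
    """Extract and voice the row of the screen buffer the cursor is on."""
    start = cursor_row * COLS
    line = "".join(buffer[start:start + COLS]).rstrip()
    speak(line if line else "blank line")

# Fake screen contents: a grid of spaces with one line of text on row 2.
screen = [" "] * (COLS * ROWS)
for i, ch in enumerate("C:\\> dir /w"):
    screen[2 * COLS + i] = ch

read_line_at_cursor(screen, cursor_row=2)   # -> [speech] C:\> dir /w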
In the 1980s, the Research Centre for the Education of the Visually
Handicapped (RCEVH) at the University of Birmingham developed Screen Reader
for the BBC Micro and NEC Portable.[5][6]
Graphical screen readers
Off-screen models
With the arrival of graphical user interfaces (GUIs), the situation became
more complicated. A GUI has characters and graphics drawn on the screen at
particular positions, and therefore there is no purely textual
representation of the graphical contents of the display. Screen readers
were therefore forced to employ new low-level techniques, gathering
messages from the operating system and using these to build up an
“off-screen model”, a representation of the display in which the required
text content is stored.[7]
For example, the operating system might send messages to draw a command
button and its caption. These messages are intercepted and used to construct
the off-screen model. The user can switch between controls (such as buttons)
available on the screen and the captions and control contents will be read
aloud and/or shown on a refreshable Braille display.
Screen readers can also communicate information on menus, controls, and
other visual constructs to permit blind users to interact with these
constructs. However, maintaining an off-screen model is a significant
technical challenge: hooking the low-level messages and maintaining an
accurate model are both difficult tasks.
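A toy version of the idea, sketched in Python under invented message names
(no real windowing system sends messages in exactly this form): intercepted
"draw text" calls are folded into a position-indexed store, which can later
be queried to find, say, the caption drawn near a control.

# Toy off-screen model: intercepted "draw" messages are folded into a
# position-indexed store so the text content of the display can be read
# back later.
from dataclasses import dataclass, field

@dataclass
class OffScreenModel:
    # Maps (x, y) screen coordinates to the text drawn there.
    items: dict = field(default_factory=dict)

    def handle_draw_text(self, x, y, text):
        """Called whenever a hooked 'draw text' message is intercepted."""
        self.items[(x, y)] = text

    def text_near(self, x, y, radius=20):
        """Return text drawn close to a point, e.g. a control's caption."""
        return [t for (tx, ty), t in self.items.items()
                if abs(tx - x) <= radius and abs(ty - y) <= radius]

model = OffScreenModel()
# Pretend the window system just reported a button and its caption being painted:
model.handle_draw_text(100, 200, "OK")
model.handle_draw_text(160, 200, "Cancel")
print(model.text_near(100, 200, radius=30))   # -> ['OK']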
Accessibility APIs
Operating system and application designers have attempted to address these
problems by providing ways for screen readers to access the display contents
without having to maintain an off-screen model. These involve the provision
of alternative and accessible representations of what is being displayed on
the screen, accessed through an API.
Existing APIs include:
* Apple Accessibility API[8]
* AT-SPI
* IAccessible2
* Microsoft Active Accessibility (MSAA)
* Microsoft UI Automation
* Java Access Bridge[9]
Screen readers can query the operating system or application for what is
currently being displayed and receive updates when the display changes. For
example, a screen reader can be told that the current focus is on a button
and given the button's caption to communicate to the user. This approach is
considerably easier for the developers of screen readers, but fails when
applications do not comply with the accessibility API: for example,
Microsoft Word does not comply
with the MSAA API, so screen readers must still maintain an off-screen model
for Word or find another way to access its contents. One approach is to use
available operating system messages and application object models to
supplement accessibility APIs: the Thunder screen reader operates without an
off-screen model in this way. (Note: the latest version of Thunder also
includes an off-screen model, but one that does not require installing a
device driver. Consequently, it can be used on a memory stick without any
files needing to be installed.)
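As a concrete example of the API route, the sketch below uses AT-SPI (one
of the APIs listed above) through its Python bindings, pyatspi, to
subscribe to focus changes and announce the focused widget's role and name.
It assumes a Linux desktop with AT-SPI enabled and the pyatspi package
installed; printing stands in for speech output, and the event name and
attribute details should be checked against the AT-SPI documentation.

# Listen for focus changes via the AT-SPI accessibility API (Linux
# desktops) and announce the newly focused object, instead of maintaining
# an off-screen model.
import pyatspi

def on_focus_changed(event):
    if not event.detail1:                    # state cleared: focus was lost
        return
    accessible = event.source                # the widget that gained focus
    role = accessible.getRoleName()          # e.g. "push button"
    name = accessible.name or "unnamed"      # e.g. the button caption
    print("[speech] {}: {}".format(role, name))

pyatspi.Registry.registerEventListener(on_focus_changed,
                                       "object:state-changed:focused")
pyatspi.Registry.start()                     # blocks, dispatching AT-SPI events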
Screen readers can be assumed to be able to access all display content that
is not intrinsically inaccessible. Web browsers, word processors, icons and
windows, and email programs are just some of the applications used
successfully by screen reader users. However, using a screen reader is,
according to some users, considerably more difficult than using a GUI and
many applications have specific problems resulting from the nature of the
application (e.g. animations in Macromedia Flash) or failure to comply with
accessibility standards for the platform (e.g. Microsoft Word and Active
Accessibility).
Self-voicing applications
Some programs speak or make other sounds so that they can be used by blind
people or people who cannot see the screen. These programs are termed
self-voicing and can be a form
of assistive technology
if they are designed to remove the need to use a screen reader.
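A minimal self-voicing program might look like the sketch below, which uses
the pyttsx3 text-to-speech library purely as an example; any speech engine
would do, and the menu text is invented.

# A self-voicing application speaks its own output rather than relying on
# an external screen reader. pyttsx3 drives the platform's default TTS voice.
import pyttsx3

def main():
    engine = pyttsx3.init()
    engine.setProperty("rate", 170)          # speaking rate, words per minute
    menu = ["1. Read today's headlines.", "2. Check the weather.", "3. Quit."]
    engine.say("Welcome. " + " ".join(menu))
    engine.runAndWait()                      # block until speech has finished

if __name__ == "__main__":
    main()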
Cloud-based screen readers
Some telephone services allow users to interact with the internet remotely.
For example, TeleTender can read web pages over the phone and does not
require special programs or devices on the user side.
Web-based screen readers
A relatively new development in the field is web-based applications like
Spoken-Web, a web portal managing content such as news updates, weather,
and science and business articles for visually impaired or blind computer
users. Other examples are ReadSpeaker and BrowseAloud, which add
text-to-speech functionality
to web content. The primary audience for such applications is those who have
difficulty reading because of learning disabilities or language barriers.
Although functionality remains limited compared to equivalent desktop
applications, the major benefit is to increase the accessibility of said
websites when viewed on public machines where users do not have permission
to install custom software, giving people greater ‘freedom to roam’.
With the development of smartphones, the ability to listen to written
documents (textual web content, PDF documents, e-mails, etc.) while driving
or during a similar activity, in the same way as one listens to music, will
benefit a much broader audience than visually impaired people. The
best-known examples are Siri for iOS, and Google Now and Iris for Android.
With the release of the Galaxy S III, Samsung also introduced a similar
intelligent personal assistant called S Voice. On the BlackBerry 10
operating system, the Z30 smartphone also offers spoken interaction similar
to that of the other mobile operating systems.[citation needed]
This revolution depends not only on the quality of the software but also on
the logical structure of the text. Use of headings, punctuation, alternative
attributes for images, and so on is crucial for good vocalization. A web
site may also look attractive thanks to appropriate two-dimensional
positioning with CSS, yet its standard linearization (for example, after
suppressing CSS and JavaScript in the browser) may not be comprehensible.
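To illustrate why that structure matters, the sketch below linearizes a
page the way a very simple reader might, using Python's standard
html.parser: it keeps headings, image alt text, and body text in document
order and nothing else. The sample markup is invented; a page that conveys
structure only through CSS positioning would lose it entirely in this pass.

# Rough sketch of "linearizing" a page: walk the HTML in document order and
# keep only what a simple reader could voice -- headings, image alt text,
# and body text. Pages that rely purely on visual CSS positioning lose that
# structure here, which is why headings and alt attributes are crucial.
from html.parser import HTMLParser

class Linearizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self._tag = None

    def handle_starttag(self, tag, attrs):
        self._tag = tag
        if tag == "img":
            alt = dict(attrs).get("alt")
            self.out.append(("image: " + alt) if alt else "unlabelled image")

    def handle_endtag(self, tag):
        self._tag = None

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self._tag in ("h1", "h2", "h3"):
            self.out.append("heading: " + text)
        else:
            self.out.append(text)

sample = "<h1>Forecast</h1><img src='sun.png' alt='Sunny'><p>High of 25 C.</p>"
parser = Linearizer()
parser.feed(sample)
print(" | ".join(parser.out))   # heading: Forecast | image: Sunny | High of 25 C.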
Screen reader customization
Not only do screen readers differ widely from each other, but most are
highly configurable. For example, most screen readers allow the user to
select whether most punctuation
is announced or silently ignored. Some screen readers can be tailored to a
particular application through
scripting. One advantage
of scripting is that it allows customizations to be shared among users,
increasing accessibility for all.
JAWS enjoys an active
script-sharing community, for example.
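Scripting conventions differ by product: JAWS has its own scripting
language, while NVDA add-ons are written in Python. The sketch below
follows the general shape of an NVDA app module as a rough illustration;
the control name and spoken message are invented, and the exact add-on
layout should be taken from NVDA's developer guide rather than from this
sketch.

# Illustrative NVDA-style app module: a per-application script that
# rewrites what the screen reader announces for one cryptically named
# control, then lets normal focus reporting continue. Treat names and
# layout as an approximation of NVDA's documented add-on conventions.
import appModuleHandler
import ui

class AppModule(appModuleHandler.AppModule):
    """Loaded when the matching application (by executable name) has focus."""

    def event_gainFocus(self, obj, nextHandler):
        if obj.name == "btnXfr42":           # hypothetical unhelpful control name
            ui.message("Transfer funds button")
        nextHandler()                        # continue NVDA's usual announcement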
Emulators
* Fangs screen reader emulator – an open-source Mozilla Firefox extension
that simulates how a web page would look in JAWS.
Verbosity
Verbosity is a feature of screen reading software that supports
vision-impaired computer users. Speech verbosity controls enable users to
choose how much speech feedback they wish to hear. Specifically, verbosity
settings allow users to construct a mental model of web pages displayed on
their computer screen. Based on verbosity settings, a screen-reading program
informs users of certain formatting changes, such as when a frame or table
begins and ends, where graphics have been inserted into the text, or when a
list appears in the document.
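As a toy illustration of the idea, the sketch below gates structural
announcements behind a chosen verbosity level; the event names and level
definitions are invented, not the settings of any particular screen reader.

# Toy verbosity model: the chosen level decides which structural events the
# reader announces while moving through a document.
VERBOSITY_EVENTS = {
    "low":    {"heading"},
    "medium": {"heading", "list-start", "graphic"},
    "high":   {"heading", "list-start", "list-end", "table-start",
               "table-end", "graphic", "frame"},
}

def announce(event, detail, level="medium"):
    """Speak a structural event only if the chosen verbosity allows it."""
    if event in VERBOSITY_EVENTS[level]:
        print("[speech] {}: {}".format(event, detail))

announce("table-start", "3 columns, 5 rows", level="high")   # spoken
announce("table-start", "3 columns, 5 rows", level="low")    # suppressed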
Language
Some screen readers can read text in more than one
language (e.g., Chinese[10]), provided
that the language of the material is encoded in its
metadata. Some screen reading
programs also include language verbosity, which automatically detects
verbosity settings related to speech output language. For example, if a user
navigated to a website based in the United Kingdom, the text would be read
with an English accent.
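A sketch of how a reader might act on that metadata: extract the declared
language (here, an HTML lang attribute) and map it to a synthesizer voice.
The voice table is invented; a real program would query its TTS engine for
the voices actually installed.

# Pick a synthesizer voice from the document's declared language metadata.
import re

VOICES = {          # language tag prefix -> hypothetical installed voice
    "en-GB": "English (United Kingdom)",
    "en":    "English (United States)",
    "zh":    "Chinese (Mandarin)",
    "fr":    "French",
}

def pick_voice(html, default="English (United States)"):
    match = re.search(r'<html[^>]*\blang="([^"]+)"', html)
    if not match:
        return default                       # no language metadata: fall back
    tag = match.group(1)
    for prefix in (tag, tag.split("-")[0]):  # try full tag, then base language
        if prefix in VOICES:
            return VOICES[prefix]
    return default

print(pick_voice('<html lang="en-GB"><body>Colour and flavour</body></html>'))
# -> English (United Kingdom)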
See also
* List of screen readers
* Screen magnifier
* Self-voicing
* Speech processing
* Speech recognition
* Speech synthesis
References can be found at the link below.
http://en.wikipedia.org/wiki/Screen_reader