This has implications for software development in general. When a program or utility is "web based", i.e. accessible through a browser, with some common-sense restrictions on the underlying html and javascript, it is easily adapted to a wide range of disabilities. Each user employs the browser that caters to his particular needs and preferences. The interface is automatically tailored to the individual - no additional programming required. The semantics of the data, and its representation on the screen, or in speech, or on a braille display, have been neatly separated. Of course this approach doesn't work for all applications (a blind user isn't going to play Flight Simulator), but it can be employed in many situations. Even a toolkit, such as Microsoft VB, might be enhanced to create interactive web pages, instead of using the screen and the mouse. If this project proves feasible, a wide variety of common VB applications will become accessible overnight. The key is the accessible client, in this case a command-line browser, combined with a suite of streaming applications that use these adapted clients as front-end programs.
The benefits of this approach are not limited to the totally blind. A color-blind individual might use his browser to change the colors of the background, text, headings, and hyperlinks to improve the contrast, while a user with low vision might increase the font size. Disabled users, and folks who simply prefer their text in a particular font, are hoping for a software revolution that sequesters functionality within the application and leaves the details of the interface unspecified, to be determined by the wants and needs of the individual.
Once a student arrived safely at the computer center with cards in hand, he might make some last-minute corrections to his stack, then feed the cards, at a rate of ten per second, through the card reader. This created a "batch job", which resulted in a printout some 20 minutes later. Wait times were highly variable, depending on load, which is why many students worked at night. Having placed the cards carefully back in his box, our weary student anxiously awaited the results of his labors. Did the program compile? Did it run? Was there an error in logic? Could he turn his printout in for a grade, or was there more work to do? If anything had to be changed, he went back to the punch machine, hammered out new cards, slid them into position, and walked back to the card reader for another run. The smallest typo represented another hour's work. It was not unusual to see bleary-eyed students stumbling back into the dorm at 2 or 3 in the morning, card box under one arm and a stack of printouts under the other.
Imagine my joy when the University installed interactive teletypes! These looked like electric typewriters, but the keystrokes were transmitted directly to the central computer. When you hit return, the computer responded, then waited for your next command. The interface had become a dialog, which clipped along at 110 bits per second, or 300 bits per second if you glommed onto one of the newer teletypes. The paper retained a written transcript of the entire session: your commands and the computer's responses. These machines are all but forgotten, except for the vestigial letters tty, an abbreviation for teletype. The software that facilitates communication between you and your computer, through the keyboard and screen, is still called a tty today. Type `tty' into any Unix or Linux computer, and it will tell you which tty driver you are using, e.g. /dev/tty1 on console 1. A large Unix machine can have hundreds of tty drivers, supporting hundreds of simultaneous users.
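For example, here is the command as run from the first virtual console, along with a typical response; the device reported will vary with your terminal.

tty
/dev/tty1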
The clatter of the teletype was an annoyance for most, but it was a blessing for me. I knew when the computer had responded, and the nature of that response. If a volunteer reader was not available, and the homework assignment was modest, I could log onto the system, type my program into the editor, compile the program, and run the executable, based solely on the clicks of the teletype. After my roommate read through the printout and verified the results, I tucked it away in my notebook and turned it in the next day for a grade. Although I now have a speech synthesizer at hand, I still miss the audio feedback that was an unintentional feature of the mechanical teletype. To this end, I modified the Linux tty driver to create similar sounds using the PC speaker. When the computer sends text to the screen, soft clicking sounds accompany the nonspace characters, while a longer swoop indicates a new line, as though the print head were swinging back to the left. These modules are available from the drivers directory in the following project.
git clone https://github.com/eklhad/acsint
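Building and loading follows the usual pattern for out-of-tree kernel modules, assuming the kernel headers are installed. The recipe below is only a sketch; the module name is illustrative, so check the drivers directory for the current file names.

cd acsint/drivers
make
sudo insmod ttyclicks.ko

Once the module is loaded, tty output is accompanied by the clicks and swoops described above.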
The chirps and clicks are subtle, and are easily ignored by those around me; yet they form an important part of my audio interface. Like the systems of yesteryear, my tty tells me when the computer has responded to my commands, and the quantity and format of that response, even before my synthesizer has spoken a word.
This project also contains a linear speech adapter, the only one of its kind. Like the paper teletype, this adapter retains a log of all tty output, and allows the user to review that log, reading the entered commands and the computer's responses. All other adapters, whether on MS-dos, Windows, Mac-OS, or Linux, read the words or icons on the screen. My adapter can read screen memory as well, but it typically runs in linear mode, which is optimal for the command-line interface. This adapter, and various applications such as edbrowse, all work together to present a new (i.e. old) paradigm: a paper teletype inside your computer.
Over the next few years, universities around the country replaced their paper teletypes with cathode ray tube terminals, also known as CRTs. Trees everywhere heaved a sigh of relief; yet the interface was still the same. A user types a command, and the computer responds on the next line. The dialog continues.
Although the combination of ed and nroff was primitive by today's standards, it was perfect for me. I used ed to create documents, inserting formatting tags where appropriate, and the resulting pages were comparable to those created by my sighted colleagues, who were forced to use the same text-based tools. Needless to say, this was not tolerated for long. Screen editors such as vi and emacs quickly appeared, followed by word processors such as Word Perfect and MS Word. For the first time, you could see what your document would look like before sending it to the printer. Once again, trees around the world were granted a reprieve, and everyone who touched a computer became more productive overnight - everyone but me. Yes, screen readers allowed me to roam about and read the text, but I was still processing the data linearly. The benefits of a two-dimensional search and scan were not available to me, and to think otherwise is to live in a state of denial. So I continued to use ed and nroff to create programs and documents for Bell Labs. I even ported ed to the IBM PC, thus giving me the same linear interface at work and at home. When the world wide web came along, html proved to be just another markup language, not so different from nroff, and I could mark up a page by hand; here is the opening of a familiar children's book.
<I>Goodnight Moon</I>
<br><font size=-1>by Margaret Wise Brown</font>
Once again I am in the minority. Most web developers use graphical design tools such as MS Front Page or Dreamweaver, which hide the technical details of html. The interface is similar to a word processor: arrange the text and pictures on the screen as you would like them to appear on your website, and the tool generates the appropriate html. This works well for others, but for me, the benefits of a two-dimensional representation are rendered academic, as my speech adapter roams around the screen, trying to make sense of the page as a whole. It's like looking at the world through a straw.
Although I could write web pages using ed and a few basic html commands, I was still unable to surf the net quickly and efficiently. My text editor allowed me to create a website from scratch, but there was no command-line browser to help me read websites that were written by others. The closest approximation was a program called lynx, which does not employ graphical icons, and can even be run without a mouse. Indeed, many blind people still use lynx today. However, lynx remains a screen-oriented application, presenting information across 25 rows and 80 columns. I was hoping for a command-line interface similar to ed.
In 2001, I began writing a program called edbrowse, a combination editor and browser, whose interface was fashioned after ed. It has all the features of ed, along with some new commands, such as `b' to browse an html file, and `g' to go to a hyperlink referenced by that web page. One can "edit" www.ibm.com as easily as one might edit a local file. Of course you cannot meaningfully change the contents of www.ibm.com, since it resides on another computer, but you can format it using the browse command, then step through the text line by line, or search for a word or phrase using the ed commands you already know. To find the next hyperlink, search for the left brace, as this indicates a link to another web page. Similarly, one can step through the fields in a fill-out form by searching for the less than sign. With practice, it is surprisingly easy to navigate through most web pages and find the information you want. Compared to other browsers, edbrowse demands more input, in the form of entered commands, and generates less output, which is precisely the paradigm for a one-dimensional channel such as speech or braille.
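A short session might look like this; the commands are illustrative, and the website is just a stand-in.

edbrowse www.ibm.com
b
/{
g

The first command pulls the page into the editor, b renders the html as text, the search jumps to the next line containing a hyperlink (marked by braces), and g follows that link. Fields in a fill-out form are found the same way, by searching for the less than sign.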
Another example of web-based system administration is samba file sharing, which is accessible through http://localhost:901 on some computers. Turning to network administration, most off-the-shelf routers can now be configured through html. I hope this is the beginning of a new trend in system administration. Accounts and passwords, networking, firewalls, disk utilities, and the task manager are just a few examples of real world applications that can and should be web-based. If the resulting web pages were relatively simple in their content and format, computers would become more accessible, almost overnight. Most people would access these functions through the default graphical browser that is shipped with the computer, and they wouldn't know the difference. At the same time, I would take advantage of edbrowse, which was written specifically for my needs.
Beyond this, web-based administration makes it easy to configure the computer remotely. If the firewall permits, I could access the printers on your box by typing http://yourbox:631 into my browser. There is no need to log in remotely and run edbrowse on your computer, which may not be practical in any case.
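Assuming yourbox runs the standard CUPS print server, which answers on port 631, one command brings its administration pages into my editor:

edbrowse http://yourbox:631

From there, the printer queues are just another website, to be browsed and searched with the ed commands described above.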
This holds true for any program where eye movements (which lie outside the purview of the screen interface) must be converted into commands and responses, creating entirely new pathways for the linear version of the same application. Think of the application as a conversation between the program and the user. When the user ignores 95% of what the program "says", and selects the relevant 5% by moving his eyes, that conversation must change in fundamental ways to be blind-friendly. Most screen programs implement this type of mega-output conversation; that is, in fact, the screen paradigm. For this reason, screen programs with high data rates must be redesigned from the ground up to run efficiently in text mode. At the same time, simpler programs can often be restructured to generate html or xml, giving the user control over his interface through specialized clients.