What the Future Holds for Braille

My Introduction to Braille

At the outset, I must confess that my very presence here today is proof that I am not always very good at predictions. For when I was first introduced to braille almost 30 years ago, I could not help thinking that such an old technology -- invented in the early nineteenth century, after all! -- was surely on its last legs, soon to be supplanted by advances that would give blind people direct access to print. I figured that about 15 years would do it, after which braille would be relegated to museums, to take its place alongside the telegraph and other outmoded means of communication.

There seemed to be, at that time, plenty of evidence to support that view. The very bulk of a braille book, which I calculated to be about 40 times greater than the corresponding print book, seemed to be the antithesis of the trend towards capturing information in ever-larger amounts on ever-smaller media. The production of such books, which of course usually requires the mechanical deformation of paper, was being carried out by machines that we might charitably call antiques, for they employed methods that had long since been supplanted for other industrial processes. Most tellingly of all, the people who were willing to learn the special skills required for the transcription of braille were growing older and not being replaced. Simple economics was at work: it was increasingly easier for people to create value and thus make a living in other industries, and it was increasingly necessary for them to do so in a competitive economy.

At the same time, alternative methods for information access by blind people were being developed and improved constantly. In particular, various ways of storing or converting information to speech were beginning to go beyond ordinary recordings. There was even research towards direct conversion of print to tactile "images" -- I recall one study that featured projection of the shapes of print letters onto stimulators mounted on one's back, for instance. One device, the Optacon, allowed the image of print characters to be sensed by the fingers, through vibrating pins. Though it could not match braille for many purposes in that form, the Optacon nevertheless found uses in its own right and the principle seemed ripe for further development, perhaps eventually to supplant braille.

Under those conditions, I saw my own work in the field of braille as participation in a kind of technological "holding action" at best. That is, I saw myself as helping to provide some way that braille could continue to be produced, pending the day when all blind people would go about with cameras strapped to their fingers and an array of stimulators taped to their backs. (Incidentally, if that image now seems amusing, it still lives on in a concrete form, though one that is not especially related to blind people. In the streets near one of our major universities, the Massachusetts Institute of Technology, one may encounter a student wearing headgear and other paraphernalia that affords him or her constant two-way communication of audio and text information anywhere in the world -- as well as some limited contact with people nearby. In laboratories at MIT and other places, it is possible to go even further in the creation and communication of tactile and other sensations, quite apart from one's actual surroundings. Such is the interesting but still very young technology of "virtual reality".)

That first work with braille was on a project called DOTSYS, carried out around 1969-70 by my then employer, MITRE Corporation, and MIT. The project was to serve the schools of the city of Atlanta, where a new experiment was underway to have blind children study in the same classrooms with sighted children. That gave rise to the need for class materials to be put into braille quickly -- overnight, if possible -- which was faster than was possible by the purely manual methods of the time. DOTSYS was designed to determine whether it was even possible to improve the production of braille through automated methods, and if so to develop those methods.

My background was in mathematics and computer programming, which up to that time I had applied mostly to engineering, especially for military needs such as the design of ships' hulls or radar networks. Consequently, my role in DOTSYS was to concentrate on the computer programming, specifically a program to automate transcription from print to braille. The project was a success, creating a pioneering braille translator, DOTSYS III, that is the forerunner of our present Duxbury Translator, among others. Although I understood the need and found the work both interesting and rewarding, I still thought of it as a "holding action" of perhaps 15 years' duration, as I have said.

I have since learned better, and at least I can claim that it took me much less than the allotted 15 years to appreciate my error. Today, we still regularly hear that braille is disappearing for one reason or another, sometimes from people who are advocating some supposed replacement, and sometimes from people who genuinely fear that a cherished resource may be declining into scarcity. I no longer agree. Rather, it seems to me that braille is not only holding its own but is poised for a strong resurgence -- not so much despite our modern technology as partly because of that technology, in conjunction with braille's own inherent characteristics.

Braille's Endurance

Before considering any influence of other technologies, it is well worth reflecting on braille itself as a technology. In particular, we should ask ourselves why it came into widespread use -- despite many rivals -- from Louis Braille's own time, and why it remains so solidly popular among its users to this day.

In this matter, as in the matter of prophecy, I must admit that you have good reason to doubt that I know what I am talking about, for I am not myself a regular user of braille. What I have learned here, as in so many other things, is from others. Yet the message about braille is so simple and universally consistent that I repeat it with confidence: Braille is reading and writing; all else is something else. If that message needs elaboration for those who use print but who think that audio alone is an adequate way for blind persons to access text, consider this: for all the wonderful ways that you may use audio in your own life, would you consider dispensing with the printed word, or for that matter your pencil and notepad? It is the same with braille.

That leads us to its most basic and arguably most important property: its simplicity. Braille is easily and rapidly read by the fingers. In modern terms, we would say that the dots allow the information to flow at a speed that is well-matched to the tactile sense -- much more so than would be the case if print letters, which after all were designed for visual scanning, were simply raised. Braille is also easily and rapidly created by simple means, the slate and stylus, which do for braille what paper and pencil do for print. Of course there are also machines that provide more rapid means of producing braille -- the braille equivalents of typewriters, laser printers, and printing presses.

Amazingly, some people who understand the importance of braille as a reading and writing medium nevertheless miss the importance of the humble slate and stylus in the hierarchy of methods for producing it. In a recent discussion on the Internet, one blind participant reported that she had been criticized for using a slate and stylus instead of other means, such as a portable braille recording device or a braille typewriter, as if there were some inherent merit in using a technologically more advanced device, or demerit in using the simpler one. The other discussion participants were unanimous, and often eloquent, in their praise for the slate and stylus -- but one still wonders how anyone could have thought otherwise in the first place.

In the dot system that Louis Braille devised, there is thus the elegance of simplicity. There are also other claims to elegance in the ways that the dot patterns are arranged and assigned meaning, which require closer study to appreciate. Specifically, we find that the 63 possible combinations of one or more dots have not been ordered arbitrarily or in purely numeric sequence, as in a modern binary computer code. Rather, they have been arranged into seven "lines", the first four of which contain all 40 cases where at least one top-row dot (dot 1 or dot 4) and at least one of the upper two left-column dots (dot 1 or dot 2) are raised. This is a prelude to assignments that tend to represent principal information, such as the alphabetic characters, in upper-cell braille while ancillary and connecting information, such as punctuation marks and indicators, is carried in cells having only lower or right-hand dots. As a result of this obviously conscious design, braille provides not only speed in the flow of information but also a discernible pattern and rhythm, which is both aesthetic and useful as an aid to understanding.
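
For readers who enjoy checking the arithmetic, the grouping just described can be verified directly. The following short sketch, written in Python purely for illustration (it is not part of any braille software), enumerates the 63 non-empty dot patterns and counts those falling in the first four "lines":

    from itertools import product

    # A cell is a set of raised dots, numbered 1-6:
    # dots 1, 2, 3 down the left column; dots 4, 5, 6 down the right.
    cells = [{d for d, up in zip(range(1, 7), bits) if up}
             for bits in product((0, 1), repeat=6)]
    nonempty = [c for c in cells if c]  # the 63 combinations

    # First four "lines": at least one top-row dot (1 or 4) and
    # at least one of the upper two left-column dots (1 or 2).
    first_four = [c for c in nonempty
                  if (1 in c or 4 in c) and (1 in c or 2 in c)]

    print(len(nonempty))    # 63
    print(len(first_four))  # 40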

Louis Braille might not have thought of himself as a design engineer, but it is clear that, in the braille system, he achieved a design that is elegant, eminently practical and deeply human. It is for that reason, far more than because of any other technology of the past or foreseeable future, that braille has stood the test of time. We could almost say that braille endures primarily because it is NOT deeply dependent on any high technology.

Technology as Challenge

Nevertheless, it remains true that advances in technology, especially at the seemingly breakneck pace of the present age, pose many challenges for braille and indeed any kind of access to information by blind people. These challenges are due to the sheer volume and speed of information flow on the one hand, and the type of information on the other.

I doubt that it is necessary to convince anyone that the flow of information -- not necessarily truly useful information, of course -- has increased enormously in recent decades. Even the rate of increase has increased, and not by a small amount! All this has taken place while we have deluded ourselves into thinking that "information management", chiefly by the use of computers, would reduce the information volume to just what is needed, when it is needed. For example, I recall the optimistic predictions of the early 1960s, when computers were still relatively new, but already it was being said that they would lead to "paperless offices". In the paperless office, no one would have to deal with thick documents or indeed any documents at all -- everything would be accomplished as minimal interactions with the computer, through screen displays and keyboards. As it has turned out, of course, that would have been a very good time to plant trees and invest in paper companies. The computer's ability to print on paper, together with communications devices such as fax machines, has led to paper printouts in volumes never before dreamed possible. Moreover, it has more recently become clear that that deluge of paper was just the beginning, now greatly overshadowed by the "electronic" information stored by the gigabyte on our computer disks and World Wide Web pages, and arriving as a constant rain of email from the Internet.

All this has overwhelmed traditional means of producing paper braille. No manual transcription process can possibly keep up.

Another, even more difficult challenge is attributable to the type of information that must now be accommodated. No longer is it the norm for print information to come in the form of simple text, for which our standard braille codes were devised. Rather, that text is now more likely to be liberally interspersed with complex technical notation, such as mathematics or fragments of computer programs, which must be transcribed in a special way, according to a separate braille code designed for the particular type of notation. Finally, we are also more likely now to encounter diagrams, pictures, icons and similar visual elements in the material that must be transcribed, and these require even more specialized and difficult attention.

In the meantime, the blind worker in an office, or student in a university, has the same need as his sighted colleague to assimilate and respond rapidly to the overall flow of information.

Technology as Ally -- Three Key Technologies

Lest we despair at such a gloomy picture, let us reflect that certain technologies, including some of the very same ones that have contributed to the problem for braille, have also provided at least part of a solution. Three technologies in particular may be regarded as natural allies of braille: computer software and hardware, communications methods such as the Internet, and braille embossing and display devices.

Computer Software and Hardware

Because computer software is the focus of my own work for braille, I naturally tend to consider it first, and I like to think that computerized transcription software in particular has made a positive difference. With a computer program doing the routine but high-volume transcription work, scarce human resources are freed to concentrate on the more complex material. The result is a great increase in the overall availability of braille.
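
To give a feel for the routine part of that work -- and only the routine part -- here is a deliberately tiny sketch, in Python, of table-driven transcription. The table and function below are hypothetical teaching aids of my own; real translators such as the Duxbury Translator handle contractions, punctuation, numbers, formatting and much else besides.

    # Toy uncontracted mapping from a few print letters to braille dot
    # numbers (letters a-j only); real translation tables are far larger.
    LETTER_DOTS = {
        "a": "1",   "b": "12",   "c": "14",  "d": "145", "e": "15",
        "f": "124", "g": "1245", "h": "125", "i": "24",  "j": "245",
    }

    def transcribe(text):
        # Return one dot pattern per character; "?" marks anything
        # outside this toy table.
        return [LETTER_DOTS.get(ch, "?") for ch in text.lower()]

    print(transcribe("bag"))  # ['12', '1', '1245']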

Software to enable access to computers, usually through speech, has also been advancing steadily. Essentially the same software can be made to drive a braille display device.

Other advances in computer software, though not directly related to braille, nevertheless benefit braille and access technology indirectly. Notable among these are the latest versions of so-called "graphical user interface" systems, such as Microsoft Windows. At last, these systems are beginning to live up to their potential to be truly accessible, despite the fact that their computer screens still seem cluttered with "icons" and other such visually-oriented images, as contrasted with the easily converted text of earlier systems. For reasons that in the first instance have little to do with access, such systems are basically large menu systems, with an underlying structure that is actually more rigidly organized and orderly -- and hence potentially more accessible -- than the older text-oriented systems. The visual effects on screen are relatively unimportant, as long as the access software can get at the underlying structure. That is increasingly possible, to the benefit of all kinds of computer access.

Computer hardware advances can also take much of the credit for an increase in the quantity and quality of braille. When the first computerized translators appeared in the 1960s and 1970s, the capacity and speed of the available hardware made it necessary to carry out the transcription process in several steps, which was time-consuming, particularly because it would often be necessary to repeat the steps, perhaps several times, in order to correct errors. With modern computer hardware, it is possible for braille translators to function as true word processors, so that for example reformatting (that is, the arrangement of the braille text on the page) takes place onscreen immediately upon entry of corrections to the formatting information.

Lastly, scanners -- combinations of hardware and software that allow automatic conversion of printed text to computer files -- have for quite obvious reasons contributed significantly to the quantity of text that is available for automatic translation to braille.

Communications

Improvements in communications, now most notably the Internet, also of course contribute to the quantity of available text. But beyond the matter of quantity, and I think ultimately more importantly, the Internet -- or rather a part of the Internet, the World Wide Web -- is setting a standard for the representation of documents in a way that is highly beneficial for the quality of conversion to braille. That is because web documents are coded according to the Standard Generalized Markup Language, or SGML. SGML is a method for representing information that has been around since the 1960s but that has until now been used only for special purposes. The particular form of SGML that is used on the web is called HyperText Markup Language, or HTML, but the confusing acronyms are beside the point. The important issue is that documents in that form are not merely finished text and images, but rather contain definitive information as to which parts of the text are headings, subheadings, footnotes, author references, and so on. In other words, the document's inherent structure, as well as its finished form, are fully represented in a way that can be put to use by computer programs that transcribe to braille. This is a very important advance, because in cases where only finished documents are available for transcription, it generally takes human judgment to discern the document's structure reliably -- for example, to decide whether an isolated line is a heading or perhaps a quoted line of poetry. With SGML coding, programs can also now make those distinctions instantly and correctly, from codes that directly indicate the author's intentions and that are attached to the affected text even though they are not visible in the finished form of the document.
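
As a concrete illustration of what such markup makes possible, the following Python sketch (an assumption of mine for teaching purposes, not the code of any actual transcription product) reads the structural tags of an HTML fragment, so that a heading can be recognized as a heading without any human judgment:

    from html.parser import HTMLParser

    class StructureReader(HTMLParser):
        # Collects (role, text) pairs in document order, taking each
        # role directly from the markup rather than from appearance.
        def __init__(self):
            super().__init__()
            self.role = None
            self.items = []

        def handle_starttag(self, tag, attrs):
            if tag in ("h1", "h2", "h3", "p", "blockquote"):
                self.role = tag

        def handle_data(self, data):
            if self.role and data.strip():
                self.items.append((self.role, data.strip()))
                self.role = None

    reader = StructureReader()
    reader.feed("<h1>Chapter One</h1><p>An isolated line of text.</p>")
    print(reader.items)
    # [('h1', 'Chapter One'), ('p', 'An isolated line of text.')]

A transcription program reading such a list can format the heading, the paragraph or the quoted poetry as each deserves in braille, because the author's intention is stated in the codes themselves.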

It is notable, and we might say characteristic of the way that the world works, that the adoption of the SGML standard for web documents was mostly motivated by matters quite apart from braille or other forms of access by persons with disabilities. Primarily, it is because "browsers" -- programs that allow web users to move easily from document to document on the web -- need to distinguish the information that is to be displayed directly on the screen from that which informs the browser itself how to find other documents, and in general need to know what "meaning" to attach to various parts of the document. These other motivations in no way diminish the great significance of SGML coding as a benefit to braille. In fact, because of them, there is very good reason to project a continuation of a trend now happily well underway -- more and more documents coded in SGML, and consequently more and more general-market programs, including major word-processing programs such as Word and WordPerfect, that are able to work directly with SGML or that have at least announced plans to do so. In short, SGML is providing a standard that at last the world is coming to, if only for its own reasons. It is also a standard long advocated by many of us interested in braille, one that is already yielding greater immediacy, quality and universality in documents converted to braille. As the standard itself is improved, and as it is even more widely observed -- areas where we must acknowledge there is still much to be done -- the potential for braille is truly unbounded.

Braille Embossers and Displays

Turning to the output mechanisms for braille, we must first acknowledge the great strides that have been made towards production in the traditional medium, namely paper. The very first computer-driven embossers appeared in the late 1960s and early 1970s -- one of them, in fact, was developed by MIT as part of the DOTSYS project. Marvels of their time, they nevertheless worked relatively slowly, were limited to the standard six-dot braille cell and a single side of the paper, and required frequent maintenance. Today, the well-engineered descendants of those early machines regularly put out paper braille at quite high speeds, embossing on both sides of the paper, with capabilities for 8-dot braille and in some cases graphic images as options. There are also machines capable of embossing plates, preparatory to press braille production.

Even more varied are the methods that have been developed for producing braille to be used on signs to be affixed to buildings, vending machines, and so on. Today it is routine to create a sign that contains not only braille but also large print text and graphic images, all of them raised, using computer software that shows the complete working image on screen while it is being created. When the designer is satisfied, the image can be sent through any of several kinds of processes -- such as routing, engraving, or photo-etching -- that will result in the finished sign. You may have seen some of them near elevators or in other public places. Our only trouble is that sometimes we fail to educate the sign-makers properly, and the braille is put on upside down! It is at such times that we are grateful for the experience and patience of braille users, who after all are accustomed to thinking of braille backwards, when using the slate and stylus. We are also most grateful for their sense of humor!

Methods for creating raised graphics on paper also deserve mention here. For example, there are now computer programs that allow blind users to compose and edit tactile images, and even combinations of special papers and pens that permit blind persons to create tactile images by direct drawing. While we may expect braille itself to remain in use for the text, there are clearly a great many potential uses for augmenting the text with tactile graphics. Explanatory diagrams and images, used for the same purposes and in much the same ways as they are in print, are one obvious example. Another would be to use specific tactile symbols, perhaps shaped like the corresponding print symbols or perhaps designed specifically for touch, to supplement the standard braille configurations. The Dots Plus project, at Oregon State University, is already exploring that concept.

But while devices for preparation of paper and other fixed forms of braille are certainly important and will remain so, I believe that there is an even more significant future for electronically-driven devices that can show arbitrary braille text, and perhaps one day even tactile images, that may vary from one moment to the next. Such devices are already in use for 6- and 8-dot braille, and are far from a new idea; once again the earliest developments date back to the 1960s. However, the goal of a braille display that allows rapid and reliable switching, and that is also inexpensive, has so far proved elusive. The engineering issues are obviously quite difficult, so that progress has been slow and perhaps that will remain the case for some time. Nevertheless, if we have the resolve to keep working on the problem, we can certainly anticipate that one day there will be portable, affordable, full-page braille displays -- braille "screens" we may call them -- that can be attached to portable computers for instant access to web and other documents in braille anytime, anyplace.

Two Consequences For Braille

And so we have explored three technologies: computer hardware and software, communications methods such as the Internet, and braille embossing and display devices. We have noted a very desirable consequence of these technologies, namely that braille is already more quickly and easily produced, from a wider selection of sources, than ever before -- and have projected that the trend can only be expected to improve. In short, braille is more available, and can be expected to become more so. With availability, it is only reasonable to expect that there will also be increased interest in learning to use braille. I cannot cite hard statistics to prove that this is already happening; perhaps it is still too soon for the effect to have shown up in such statistics, in any case. But I have sensed a lot more excitement about braille, on both the teaching and the learning sides, even though education as such is not my particular specialty. And it makes perfect sense, because surely availability of braille is ultimately the key to increased usage -- not so much the other way around, even though obviously increased usage also leads to greater availability. That is because few people are interested in learning to read material that is seldom encountered in practice; they have better things to do with their time. But if braille is everywhere, you can bet that there will be real interest in making use of it; no further attention to "motivation" will be necessary.

Besides availability, there is a second and less obvious consequence that these developments will have on braille, and that is a push towards unification of the braille codes. To understand what unification means and the reasons for it, it is again useful to look a little further back in time, to the period -- typically several decades ago -- when most of them were designed in their present form. Of course I am not talking about the original, very solid basic design of Louis Braille himself, which dates from over 150 years ago and yet still forms the core of all braille. Rather, I am speaking of modifications and additions that have been made since that time, to meet various real and perceived needs.

At the time we are speaking about, computers had not arrived on the scene and all braille was transcribed by human labor. Moreover, the material itself was likely to be fairly easily divided into categories -- ordinary literature vs. mathematics and science, for example. Finally and perhaps most significantly, the reader was assumed to be working or studying in an environment isolated from the world of print, and therefore to have little or no interest in the print representation as such. What mattered, therefore, was that the meaning rather than the form of the material was to be conveyed in the braille.

Those conditions and assumptions, and the perfectly natural decisions that flowed out of them, have profoundly influenced our present braille codes. The most obvious effect is that, even for a given natural language, there are usually several codes, according to subject matter. For example, in American English braille, we use one code for material that is "literary" in nature, and a different code for material that contains mathematics. Since the advent of the computer, we have taken this principle even further and now have a third code for representation of computer notation, such as the text of computer programs. (To make matters even worse, there are also differences between the British and American codes for the same technical subjects! But that is for other reasons, not relevant to my point here.)

Naturally, the existence of different codes means that transcribers must make a judgment as to which code should be used in a given instance. Such judgments are sometimes troublesome even for human transcribers, but let us assume for a moment that they can be made correctly. Even so, we would find that almost none of those specific codes attempt to represent the print notation precisely, but rather are designed to carry over the basic meaning, relying on conventions that are typical of the subject matter.

Many cases of such imprecision do not even involve special notation, but apply to very ordinary text encountered in everyday life. For example, until recently it was normal in English braille to put certain units, such as the abbreviations for feet and pounds and the symbol for percent, in front of the associated number in braille, even though the number normally comes first in print. Evidently some code designers felt, and for all I know their feelings may have been well founded in psychological principles, that it was important to present the unit being quantified first, and then the quantity. Or perhaps the motive was simply to make braille uniform in its treatment of all such quantity expressions, where print is not uniform (because some units, such as the dollar sign, precede the number). Even in present English braille, we see further examples of this tendency to make braille uniform where print is not -- for example, the actual punctuation marks in dates, whether they be hyphens, slashes or periods in print, are supposed to be written as hyphens in braille. Note that, among other issues, this requires the transcriber to judge whether a particular series of numbers and punctuation marks is a "date" or something else having a similar appearance.

Still other cases of imprecision are implied in the contraction systems, also called grade 2 braille, used in many languages including English. For example, there is no definitive difference in current English braille between the slash (sometimes called oblique stroke) and the contraction for the letter-group "st". The reader must determine the meaning from context, which is usually possible.

It may also be useful to consider a simple example in the context of technical notation. In classical mathematics, it usually does not matter, from the standpoint of meaning, whether the "plus sign" (+) is spaced or unspaced from surrounding expressions. Accordingly, both the American and British math codes specify how the sign is to be spaced in braille, regardless of the way it appears in print. (They each do it differently, of course.) The result still conveys all the meaning as long as the material actually is classical mathematics, but not if some other kind of notation is involved, where the spacing may be significant. Moreover, while the reader is fully informed as to the mathematical meaning, he or she is prevented from knowing the actual form of the print, just as in the preceding examples involving dates and percentages and so on.

Let me repeat that the decisions of earlier times were reasonable for the then-prevailing conditions and assumptions. My point is not to find fault with anyone, but simply to point out that the conditions of the present time are quite different, and so corresponding changes in the braille codes are inevitable. Interestingly enough, there are many ways in which, in my view, the changes will constitute something of a return to Louis Braille's original straightforward design, which was based on a direct representation of the print symbols regardless of "meaning". For as the history books tell us, Louis Braille actually set aside a complex sound-based system, that is Charles Barbier's "Sonography", in favor of the simpler spelling-based system that comes down to us today. His example remains instructive for our time.

In any event, transcription today is at least as likely to be carried out by a computer program as by a human, as we have seen, and for future benefits it is certainly desirable that the trend in that direction continue. (Incidentally, let me note here that, with increased usage of braille of all kinds, there is still plenty for professional human transcribers to do -- no need to fear unemployment!) Furthermore, the documents of today are much more difficult to put into neat categories -- special notations of all kinds can appear in any kind of document. Finally, blind people today are actively involved in every aspect of employment and education, and so frequently need to deal with the details of print representation -- sometimes because precise meaning actually involves such details, and sometimes just to enable communication in customary fashion with users of print.

Are those reasons really sufficient for the average user to worry about precision? Most braille is perfectly understandable from context, it is true. Yet one blind friend, a man who had been very well educated and who possessed an enviable command of the language, once mentioned to me that he was well into his twenties before he realized that the percent sign normally came after the number in print. Who knows how often his print correspondence, probably flawlessly punctuated in other respects, contained what print readers would consider a puzzling and distracting error? And while errors of that kind may still seem less than critical, the same may not be true of those involving technical notation. Certain kinds of computer programs can very well fail, or behave in completely unintended ways, over something as small as a missing space.

Consequently, these changes in the nature of literature, and in the conditions under which braille is used, provide good reason to change the braille code in a direction that we have come to call unification. Unification implies preciseness in representing symbols as well as universality.

By preciseness in representing symbols, we mean that braille is parallel to print, and equally as precise, in representing symbols -- such as letters, punctuation marks, etc. -- that have significance. This is not the same thing as being rigidly bound to the print, so that even obviously ornamental aspects such as fonts are always represented. The meaning behind the symbols remains the braille reader's responsibility and privilege, just as is true for the print reader. This characteristic has two beneficial effects: first, that the braille reader is better informed as to the details of the print -- all those details that normally matter, by definition -- and second, that the element of human judgment is largely made unnecessary for correct transcription. In that way computer programs, which can accurately process symbols but generally lack the human ability to know their meaning, can be more widely and effectively applied to transcription.

Universality is the ability of a braille code to represent wide subject areas without resort to separate codes. This makes it unnecessary for the transcriber, whether human or computer, to judge which code to use. This again makes it easier for computers, for the same reasons as already mentioned. But there is also a considerably more important benefit of universality: users of braille need not undertake substantial new learning, that is the acquisition of a whole new code, when venturing into new subject areas. The only new learning is the same as for a user of print, namely just to learn the new symbols and meanings involved. Knowledge already acquired remains valid, and is simply built upon.

The International Council on English Braille, which seeks to coordinate the activities of the various national Braille Authorities, is presently engaged in a project to design and evaluate a "Unified Braille Code", or UBC, for English. We do not imagine that it is practical to bring together all the braille codes for different languages. Nevertheless we have been gratified that there are efforts by designers for other languages to work towards unification within those languages, and even to cooperate with our project, with a view to keeping at least the technical portions consistent among the various languages, just as is the case with print. The UBC project has been underway since 1992, and those of us involved in it remain optimistic about its eventual adoption for all English speakers, replacing some five separate codes -- actually more, counting local variants -- that are now in use. Incidentally, UBC preserves just about all of current English literary braille, so that persons already familiar with that system will scarcely notice the changes, and existing English braille literature will continue to be useful.

This move towards unification is not without controversy, and some costs that must be acknowledged. One natural consequence of universality is that a universal code is usually less efficient, in a given subject area, than a code that is designed specifically for that area alone. That is to say, a book in UBC may be a little bit longer -- not that much, according to our studies -- than the same book in one of the traditional special-purpose codes. This effect must certainly be minimized, as well as weighed against the benefits of UBC. We also recognize that there are likely to be highly specialized situations and subject areas, such as chess notation for persons highly involved with that game, where small separate codes may be well justified. Eight-dot codes are also likely to remain useful for certain special purposes, such as for direct one-for-one representation of certain standard computer codes. But those same kinds of exceptions apply to print, and do not negate the great benefits of unification.

A Single Bright -- And Busy -- Future

We have projected a future for braille that is encouraging on many fronts, both for the system itself, and in benefits for its users. I could sum up by saying that the future is bright, but I think that a better word might be ... busy. For those of us involved with braille must remain busy with the work of carrying forward the technologies that will help to bring braille into the coming millennium, busy also with the task of unifying the braille codes so that they will work better for present and future users' needs, and of course busy with teaching, promoting and just plain making braille. At the same time, no doubt the users of braille will be equally busy, not so much with braille as a subject in itself, but in a more important way: with braille as a means for simply living one's life in an information age.


Copyright Duxbury Systems, Inc.