Unifying Braille: Louis Braille's Vision in Tomorrow's World

by Joseph E. Sullivan
President, Duxbury Systems, Inc.
Westford, Massachusetts, U.S.A.
Email: joe@duxsys.com

Chair of Committee 2 of the Unified English Braille Research Project under auspices of the International Council on English Braille

January 2009

Paper prepared for presentation at the Celebration of the Bicentenary of Louis Braille's birth at UNESCO Headquarters, Paris, organized by Institut National des Jeunes Aveugles and l'Association Valentin Haüy. The kind invitation and sponsorship of this paper by the organizers are gratefully acknowledged.

We are here to honor a man, Louis Braille, whose invention has been a vital force for the removal of barriers, and hence for unification. In this paper, I hope to make a case for the greater unification of braille itself—not only as an exercise that is possible, as in unified codes such as Unified English Braille (UEB) that have already been implemented, but also as a development that is highly desirable and consistent with Louis Braille’s original vision. Indeed, often the best way to achieve unification seems to be to go back to Louis Braille’s work and learn as much from it as possible.

On one level, we think of braille as removing the barrier created by an inability to see print—and certainly it is that. Today, braille has become the primary means of true literacy for blind people everywhere, and for virtually every written language throughout the world. And when you remove barriers to communication, you remove barriers between people. Thanks to braille, blind people have a means to access and create every kind of knowledge and are thereby able to participate fully in society. Bringing people together is therefore the essence of what braille is about; in other words, a central goal of braille is to unify.

But is braille itself unified? And as a practical matter, can it be? We need to acknowledge that inevitably there will be some differences between braille systems designed for different natural languages, mainly because of the obvious fact that, with only sixty-three distinct dot patterns (six dots, each either raised or not, give 64 combinations, one of which is the blank cell), it is not possible to represent all the letters and other components of the world's many print writing systems in one system with single-cell assignments all around. So we naturally have different basic codes for Russian and French, for example, and still another for Greek, and so on. Even languages based upon the same alphabet, such as French and English, have other aspects, such as the frequency of accents, that make consistency between codes difficult to reconcile with the need for practical efficiency in general reading. However, even allowing for these unavoidable differences, when actual braille systems in use are considered, a great deal of needless diversity is evident.

One type of diversity, and one that is particularly hard to understand, arises when people who speak the same language, but who happen to live in different countries, use different braille signs for the same print sign. An instance of this phenomenon is that the ordinary plus sign, even when used in a nontechnical context, would be transcribed into braille quite differently in England and North America.

A second kind of diversity can and usually does arise when specialty codes, designed for efficient treatment of technical subjects such as mathematics or computer programs, must be used to express notation that is commonly encountered within general literature. It is not that having specialty codes is a problem in itself—on the contrary, they are quite useful for specialists, a point to which we will return. But when the general literary code makes no provision for handling even simple mathematics or computer notation, you can have several different braille signs being used for the same print sign within the same text. Currently in North America, for instance, there are three ways to express the dollar sign, depending on which code is considered applicable to the current context.

A third kind of diversity arises when technical notation that is common throughout the print world—that is, notation that is independent of language—must nevertheless be represented in braille in a language-specific way, even when a specialist code is used. A glaring example of this is mathematics. In print, mathematical notation is international. A sighted American mathematician, even if he does not understand Russian, can read and understand the mathematical notation as written by a sighted Russian mathematician. The same would not be true for their blind colleagues, because the Russian and American braille codes for mathematics are quite different.

All three kinds of diversity raise obvious barriers to communication—barriers that Louis Braille sought to remove. Surely he would want us to do better!

These three kinds of diversity give rise to three opportunities to realize Louis Braille's vision more fully: in the unification of codes between countries speaking the same language; in the enabling of "literary" codes to express technical notation, which we could call "vertical" unification; and finally in the unification of specialist codes across languages.

My own involvement with unification began in early 1991, when I was asked to participate in the development of what has come to be called Unified English Braille, or UEB. The goal of that project, which started in North America, was vertical unification—to provide a way to express technical notation such as math and email addresses, which increasingly are found in general literature, in a way that was consistent with the general braille code and that did not require "switching" to separate specialist codes. By removing the need to learn a specialist braille code for even the most basic appreciation of such technical subjects, UEB sought to remove a barrier to learning. The general braille code for English would no longer be "in a box"—capable of expressing only "literary" text according to a narrow and outdated concept of what "literary" could be. Instead, UEB would be an extensible writing system fully parallel and equal to print, just as capable of expressing any symbol in any sequence, with clarity.

Clarity, we felt, required that the braille be unambiguous. That meant, for example, that the traditional English Braille sign for the slash, which could also stand for the contraction "st", had to be changed (unless the contraction itself was abolished, which in this case was judged to be the less desirable change). It is not that the ambiguity was very often a problem for readers in conventional literary material, where the surrounding context would usually make the intended meaning obvious. But in a world of increasingly creative commercial names and other new styles of print usage, together with increasingly technical content, misreadings would be more and more likely if the ambiguity were not removed from the braille code. A secondary reason for removing ambiguity is the increasing importance of automated braille-to-print conversion. A blind student who prepares a paper for a sighted teacher needs to know that the paper will be free of mistakes introduced by the braille note-taker when it converts the paper to print—an increasingly common scenario. If there are ambiguities in the braille code, computer software will make such mistakes, including some that a human transcriber would not make—and it will make them much faster!

Extensibility is another key concept in UEB. The world’s writing systems, along with language generally, never stand still. So, we felt, it should be with braille. When new symbols come to be defined, or older symbols used in new ways and contexts, braille should be able to express those symbols naturally and with clarity, without any need for "transcriber's notes". In fact, if a blind mathematician were to need to define a new symbol, there is no reason that that symbol should not begin its life as a braille symbol, leaving the sighted to devise a counterpart in print!

Our basic design committee presented its first report, which we naively hoped would be its last, in November 1992. In the deliberations that led up to that report, we had gained some basic insights, which have stuck with us to this day. Among those insights were three I will mention here:

First, there is a fundamental need to ensure that the extent of any one braille symbol, whether it be a single-cell or a multi-cell symbol, can be readily determined, even if the meaning of the symbol is not immediately known. Otherwise, the reader who encounters an unfamiliar symbol cannot be sure what to look up—and ambiguities are bound to arise when, say, a certain sequence of single-cell signs could also be interpreted as a multi-cell sign. This need is realized in UEB by symbol formation rules based upon a "prefix-root" structure that flows naturally from the principles evident in Louis Braille's original design. [1] This structure is one of the cornerstones of UEB that gives it extensibility as well as clarity.
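To make the prefix-root idea concrete, here is a minimal sketch in Python. The cell inventory is entirely hypothetical (ASCII stand-ins, not real UEB assignments); what it illustrates is only the structural rule that every symbol consists of zero or more prefix cells followed by exactly one root cell, so the extent of any symbol can be determined without knowing its meaning.

```python
# Hypothetical stand-ins for prefix cells; in this sketch every other
# cell is treated as a root. (Not the actual UEB inventories.)
PREFIXES = {"@", "^", "_"}

def split_symbols(cells):
    """Split a cell sequence into symbols: zero or more prefixes, then a root."""
    symbols, current = [], []
    for cell in cells:
        current.append(cell)
        if cell not in PREFIXES:      # a root cell always ends the symbol
            symbols.append("".join(current))
            current = []
    if current:                       # trailing prefixes with no root
        raise ValueError("incomplete symbol: " + "".join(current))
    return symbols

# The same cell stream always parses the same way, whether or not the
# individual symbols are familiar to the reader:
print(split_symbols(list("^a@@b_c")))   # ['^a', '@@b', '_c']
```

Because no prefix cell can ever end a symbol, a sequence of single-cell signs can never be mistaken for a multi-cell sign, which is exactly the ambiguity the rule is designed to prevent.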

Second, the assignment of symbols for the digits, and the general rules surrounding numbers, must be decided early in the process because everything else is affected by those decisions.

Third, the use of contractions in literary braille, which typically takes up all the single-cell symbols that are available after the alphabet, digits and basic punctuation are assigned, makes it hugely complicated to add capabilities for mathematics and science in a way that is both unambiguous and efficient.

In summary, we learned that unification is technically difficult, and that Louis Braille’s basic design principles and original decisions most often provided the best guidance to a way forward.

That 1992 report received a very mixed reception. Some people accepted its premises and liked its recommendations while others rejected both. Another lesson emerged: that unification is not only technically difficult, but politically difficult as well.

But at least the effort caught the attention of the English-speaking world beyond North America. In July 1993, at a meeting in Australia, the International Council on English Braille, which had been formed two years earlier in Canada, joined the UEB development effort as its first major project. Now UEB was attempting not only vertical unification within North America, but also to do the same for all English speakers in other regions—in other words, to carry out country-to-country unification as well. A particular challenge to this effort was that, while the diversity between English literary codes for different regions was relatively small, the diversity between the North American and British math codes was huge—practically every sign, right down to the digits, was different. The same was true of the two computer notation codes.

Naturally, the UEB committees were expanded to include representatives from the other ICEB countries. At this point, a principle was established that had been de facto realized in the earlier work and that had come to be recognized as important—namely, that all committees, from the working technical committees on up to the ICEB's General Assembly delegations, must include a majority of blind braille users. This common-sense principle assures that decisions are controlled by people who understand the problems most intimately and who are most affected by those decisions. Almost as importantly, such a principle is of some help in assuring braille readers generally that the code isn't just another "bright idea" brought on by a committee dominated by sighted users of print. In other words, it goes a little way towards addressing the political difficulties that naturally go with change.

The development phase of the UEB project came to a close in April 2004, when the ICEB General Assembly declared UEB ready for use and recommended that the various national authorities consider its adoption. Four of the seven ICEB countries, namely Australia, New Zealand, Nigeria and South Africa, have since officially adopted UEB and are working towards its implementation. The other three, namely Canada, the United Kingdom and the United States, are in various stages of decision-making at this writing (December 2008).

The details of the UEB development, including historical and current reports, can be found on the ICEB Web site [2]. A research report on UEB appeared in the April 2005 issue of the Journal of Visual Impairment and Blindness [3].

In the meantime, committees working with other languages and pursuing similar goals have produced unified braille codes. These include Spanish [4], French [5] and Japanese [6].

While UEB and similar efforts for other languages aim to bring about country-to-country and vertical unification, the third kind, namely the unification of specialist codes, generally remains unrealized, with one notable and happy exception—one for which, as usual, we have Louis Braille to thank. That exception is of course music, where an accepted international braille code—based upon the music code that Louis Braille himself designed—is used by blind musicians all around the world, no matter what language they may speak. But for other special notations, most notably mathematics but also for other scientific disciplines, international specialist codes are generally lacking.

It may at first seem contradictory to advocate the development of general codes, such as UEB, that incorporate not only a natural language but also mathematics and science notation on the one hand, and at the same time advocate the development of a separate code for mathematics that can cut across language boundaries. However, as we have already observed, braille codes for different languages are mostly different in fact, and such differences are necessary for braille to remain a practical writing and reading system. It shouldn’t surprise us if the technical provisions of a vertically unified code that is tied to a particular language, such as UEB, reflect the characteristics of that language to a certain extent and so differ from the corresponding provisions in some other unified code. This is largely because it is necessary to design first for the needs of the natural language, and even to optimize for those needs, because that is what most braille users—even those who work in technical fields—are reading and writing most of the time. Once those needs are met, the technical needs must be accommodated within the possibilities that are left, and those will vary from one language to the next. As an example, the standard Vietnamese alphabet has two forms of the letter d, one a "plain" d as in most Western alphabets and the other with a bar across the top in print; there is also no letter z. In the Vietnamese literary braille code, the braille assignment for the barred d has the same dot configuration as that used in most Western braille codes for the regular (unbarred) d, while the assignment for the unbarred d corresponds to the configuration used in Western codes for the z. A unified code for Vietnamese would clearly need to take these special characteristics of the base code into account when extending it to include mathematics or computer notation.

A related consequence of the need to optimize for the natural language is that the treatment of technical notation cannot be fully optimal. This is just common sense: when you design a code for a specific purpose as a first priority, it will consequently be better for that purpose than one designed primarily for some larger, or different, purpose. For example, the North American specialty code for computer notation, CBC, provides a single-cell symbol for the commercial at-sign. That makes sense because that is a fairly common symbol in computer notation, such as in email addresses. But in general literature, for which UEB is designed, the commercial at-sign is far too rare to permit such an assignment without severely impacting the choices available for other symbols. As a result of this and similar design considerations affecting both codes, CBC is typically more efficient at representing computer programs than UEB. On the other hand, UEB can represent not only computer programs but other mathematical and scientific notations as well as typical English text, whereas CBC can only represent computer notation—you must "switch out" of CBC to represent anything else.

Consequently, a common code designed specifically for computer programming would not only give programmers who speak different languages a way to read each other’s programs but also a way to work more efficiently in their occupational specialty or area of advanced study. Similar benefits would apply in mathematics and other technical disciplines. This is certainly useful and desirable. But arguably, this is not as important as unifying the general, language-based codes such as UEB—because that kind of unification should lead more people to a better understanding and appreciation of math and science notation in the first place. In other words, we can hope that UEB and codes like it would not only provide an adequate tool for most people when it comes to everyday technical notation but would also give rise to more people who would demand a more efficient code for their own specialty. Such a development would mean success for the unification movement—even if, ironically, it meant the birth of another code!

As I have mentioned briefly, one of the significant challenges to unification is the politics surrounding any changes to the basic literary code—braille as people read and write it every day. Changes in strictly technical codes are much easier, relatively speaking—people are willing to leave those to the specialists involved, as long as their own braille isn't affected. But, if UEB is typical, the kind of unification that allows the general code to embrace technical notation is likely to lead to some changes in basic braille that, although minor, are noticeable enough and likely to be rejected by many braille users unless carefully justified during a period of preparation, which may take some time. This isn't just obstructionism; people generally, whether blind or not, naturally resist change unless they see a real need and believe the proposed changes are well designed to fill that need. In places where strenuous efforts have been made to conduct workshops and otherwise educate braille users as to the need and eventual value of UEB, it has been well accepted; in other places, reaction has been much more negative. I don't think we need to lose confidence that desirable changes will come about eventually, but perhaps we need to be patient as well as diligent as we work to bring them about.

After all, though Louis Braille must have been encouraged to see that his system worked well for himself and others close to him, he never got to experience the universal acceptance that it enjoys today. He never gave up, though, but worked tirelessly towards improvements that would further the usefulness of braille and its implicit goal of bringing people together. We now know that the foundation he created is a masterwork of human engineering. Let us, the beneficiaries and stewards of that work, muster the will, skill and patience to carry it forward into tomorrow’s world.

Further reading and listening:

"Research Report: Selected Findings from the First International Evaluation of the Proposed Unified English Braille Code," by Darleen Bogart and Alan Koenig, Journal of Visual Impairment & Blindness, April 2005.

"Unified English Braille: A Literacy Bedrock in the Digital Age," paper and speech by William Jolley, Public Relations Officer, International Council on English Braille (ICEB), given at the Twelfth ICEVI World Conference of the International Council for Education of People with Visual Impairment, Kuala Lumpur, July 2006; online at: www.iceb.org/ICEVI2006_UEB_Paper_Jolley.htm

"Braille is Dead," speech by Peter Osborne, Chief Braille Officer of the Royal National Institute of Blind People, given at CNIB’s 2007 Braille Conference in Toronto, Canada

Notes and references:

[1] See "Historical Analysis and Critical Evaluation of Braille" by Pamela Lorimer, 1996, accessible online at: www.braille.org/papers/lorimer/cntnts.html. See chapter 2, section 5, where the classic seven-line organization of the various braille cells, and the thinking behind it, are described. In that design, all the signs used for primary symbols—that is, all those used for letters and numbers—have at least one dot in one of the two upper left positions and at least one (possibly the same) dot in the top row. Signs without that property are used for ancillary symbols such as the punctuation marks and the numeric prefix, which by their nature or definition are typically adjacent to primary signs. This "upper-left" bias keeps the dot positions clear while reading, and as we know from the historical research cited above, Louis Braille understood and intended this characteristic. The use of lower and especially right-hand signs for other needed prefixes, such as for capitals, is a natural consequence that is reflected, at least informally, in many of the world's braille systems.
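The upper-left criterion just described can be checked mechanically. The short Python sketch below enumerates the sixty-three nonempty dot patterns (dots conventionally numbered 1, 2, 3 down the left column and 4, 5, 6 down the right) and counts those meeting the criterion; it illustrates the classification only, and is not a reproduction of Louis Braille's actual seven-line tables.

```python
from itertools import combinations

# All nonempty subsets of the six dot positions: the 63 possible cells.
patterns = [set(c) for r in range(1, 7)
            for c in combinations(range(1, 7), r)]

# The "upper-left" criterion for primary signs: at least one dot in the
# two upper-left positions (1 or 2), and at least one dot in the top
# row (1 or 4).
primary = [p for p in patterns if (p & {1, 2}) and (p & {1, 4})]

print(len(patterns))  # 63 possible cells in all
print(len(primary))   # 40 cells satisfy the upper-left criterion
```

By this criterion, 40 of the 63 cells are candidates for primary signs, leaving the remaining 23 (lower and right-hand cells) available for ancillary symbols and prefixes, consistent with the division described above.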

[2] www.iceb.org; follow the links to "UEB Project Information". In particular, the basic design of UEB, from the reader's perspective, is in "The Reader Rules - The January 2004 Report of the Objective II Committee with corrections and amendments through February 15, 2004" at www.iceb.org/c2r0401.html.

[3] See "Research Report ..." under "Further Reading ..." above.

[4] "Código matemático unificado para la lengua castellana - aprobado por las imprentas Braille de habla hispana," Montevideo, June 1987.

[5] "Code de transcription en braille des textes imprimés - Réalisé dans le cadre de l'Accord de Coopération pour une uniformisation du braille français," December 2005; and "Notation Mathématique Braille - Document réalisé par la Commission pour l'Évolution du Braille Français," January 2007; both published by Institut National des Jeunes Aveugles and l'Association Valentin Haüy, Paris.

[6] "Development of the second edition of Unified Japanese Braille Code: Assignment braille characters to the Unicode" by Mamoru Fujiyoshi, Toro Ishida, Haruhiko Sawazaki and Nobuyuki Ohtake, cited at: ci.nii.ac.jp/naid/110003298105
