A Perspective on Braille Unification

by Joseph E. Sullivan

Since 1992, or over five years ago as I write this, there has been a project underway to research and develop a Unified Braille Code (UBC) for English speakers. Initiated by the Braille Authority of North America (BANA), the UBC Research Project originally centered on the concept of a single braille code for literary, mathematical and computer-related notation, replacing the three distinct codes now defined by BANA for those purposes. The Project was later adopted by the International Council on English Braille (ICEB), at which point "unification" took on the added meaning of bringing together not only BANA's codes but also the very different technical codes that are used in the United Kingdom and many other English-speaking areas. In all, if UBC's goals are realized, some five major braille codes stand to be unified -- actually more than that, if various local variations and extensions of those five are counted separately.

I have been an enthusiastic participant in the UBC project since its beginning, and remain so. From that experience, I have drawn certain conclusions that I think may apply to braille unification studies generally, and which I will present here. The reader should of course regard these conclusions as my own opinions, not necessarily shared by all my fellow laborers for the UBC cause.

Having given that customary disclaimer, I will venture an observation that I very much doubt will raise any disagreement, namely that any proposal for substantial braille unification, no matter how carefully drafted and no matter how deeply appealing it may be to some people, will at the same time be thoroughly abhorrent to other people. Of course this is partly due to the natural resistance we humans all have to any kind of change, but that is not my main point. I believe that it is also a direct consequence of the fact that no braille code -- or any kind of code, for that matter -- can be equally good for all the various purposes that may be envisioned for it, and different people typically have different kinds and levels of interest in those various purposes. For example, a computer programmer will naturally be concerned about the efficiency and clarity with which typical program source text is transcribed, whereas a history teacher may care little about computer programs but will want to be sure that ordinary prose is simple as well as clear. While it is possible to satisfy both needs up to a point, it is not possible to optimize either one without detriment to the other. In that simple fact lie the seeds of dissatisfaction -- especially for those people already accustomed to the efficiencies offered by the current technical codes, which were consciously designed for their respective special purposes.

Given this reality, we might first ask: Why pursue braille unification at all? The main reasons are given in the paper presented to BANA by Drs. Cranmer and Nemeth [Cranmer & Nemeth 1991], which was the catalyst for BANA's original launching of the UBC project. In it, the authors first note that the conditions under which the current braille codes were designed have changed. Blind people no longer study and work in relatively isolated spheres but rather in the mainstream, constantly sharing interests and communications with their sighted colleagues. And at the same time, in literature, various types of technical notation are increasingly likely to be found mixed in with other types and in general prose. From these observations, the authors go on to argue convincingly that using separate codes for literary and technical purposes causes undue difficulty: first, in learning braille; secondly, in reading or writing with precision; and thirdly, in the economics of conversion between print and braille. They use the example of the dollar sign ($), which has a different representation in each of the three BANA codes, to illustrate all those points. I particularly like that example, because it is a reminder that the production of braille under a multiple-code system requires the making of fine distinctions -- such as between dollar signs that are "literary" vs. those that are "computer notation". Such distinctions can be difficult for human transcribers and even more difficult for computer programs that are used for automatic transcription. That makes transcription cost more $$$ -- no matter how you write the dollar signs! In the context of limited budgets for braille production, that is to say in the real world, that means there is simply less braille produced.
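
The dollar-sign example can be sketched in miniature. In the toy Python sketch below, the bracketed "symbols" and the idea of passing a context label are invented placeholders, not actual braille assignments; the point is only that a multi-code system forces transcription software to classify each dollar sign before it can transcribe it, while a unified code makes the mapping context-free.

```python
# A toy model of the dollar-sign problem. The bracketed "symbols" are
# invented placeholders, not real braille assignments.

# Under the current multi-code system, '$' maps to a different braille
# symbol in each of the three BANA codes:
MULTI_CODE = {"literary": "<lit-$>", "math": "<nemeth-$>", "computer": "<cbc-$>"}

def transcribe_multi(ch, context):
    """Transcribe one character under the multi-code system.
    The hard (and costly) part is determining `context` correctly
    for every dollar sign in the source text."""
    return MULTI_CODE[context] if ch == "$" else ch

def transcribe_unified(ch):
    """Under a unified code there is one symbol everywhere,
    so no classification step is needed."""
    return "<ubc-$>" if ch == "$" else ch
```

The cost that Cranmer and Nemeth point to lives in the `context` argument: a human transcriber or a program must decide, for every dollar sign, which of three codes governs it -- a decision the unified mapping simply never has to make.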

So the motivation for UBC was easily established, and broadly accepted. From there, the overall project goals could be enumerated:

(1) UBC is to be based on the traditional 6-dot cell.

(2) UBC is to encompass literary notation, and to retain grade 1 and grade 2 English braille as it is already defined, with no major changes.

(3) UBC is also to encompass the notation for mathematics, computer programming, and related scientific and engineering disciplines in a single coherent and extensible system. Symbols learned at earlier stages remain the same even in advanced technical text, so that one need learn only specifically new symbols and meanings in the same way as the print reader, not a whole new code.

(4) While UBC is envisioned as supplanting only English codes (except for Music Braille, which is not affected), the design process is to consider all currently used braille codes, so as to avoid any proliferation of unnecessary differences.

(5) While remaining "readable", UBC is to convey symbols unambiguously, without reliance on "meaning", thereby enabling precise understanding and communication and also simplifying automated conversion in either direction.

(6) UBC is to be usable by both beginners and advanced users.

These goals, which I have slightly re-stated and re-ordered, have generally been seen as being derived from the overall concept of unification along with a common-sense desire not to discard what is good from the current system -- including literary works already in braille, and the hard-won skills of current braille readers. As such, these goals are broad enough to be generally regarded as desirable, and so have not been very controversial. But as a committee has worked towards those goals, following standard debating and voting procedures, the resulting concrete preliminary proposal [ICEB 1995] has indeed sparked controversy. It seems that with UBC, as with many other things in life, the old saying applies: "The devil is in the details."

For in one way or another, the source of the controversy comes down to one issue: the varying interpretations and degrees of importance that different people attribute to each of the project goals. For example, some people regard retention of the current grade 2 system as an absolute requirement, so that not even a few of the 189 contractions should be modified or dropped in order to remove ambiguities. Quite early, the UBC design committee recognized that each of the project goals, even the rather central one calling for nonambiguity, had to be regarded constructively rather than absolutely, if the work was even to be possible. But not everyone sees it that way.

Some of the more interesting and important examples of this effect are implicit in the last of the stated goals: "UBC is to be usable by both beginners and advanced users." As in other respects, the design committee believes that it has met both parts of this goal, having provided for technical symbology that is typical of very advanced levels of study, but in a way that remains consistent from the earliest stages of reading. However, the committee has also felt it necessary to consider the other stated goals and also where the greatest needs lie -- that is, where the most people will be using the code most of the time. It may be said that such considerations have caused the committee to lean, where it was necessary to lean one way or the other, more towards the "beginners", or more precisely towards general readers and "learners", rather than towards the "experienced professionals" in advanced subjects.

An example may help clarify the kind of leaning that I am talking about. In many kinds of mathematical and scientific notation, including Chemistry as one obvious example, numbers that immediately follow letters are quite likely to be in the subscript position. For that reason, existing braille codes that are designed for technical notation tend to optimize for that case. In BANA's mathematics code (Nemeth code), for instance, numbers written immediately after letters, without any intervening indicator, are implicitly in the subscript position. That of course means that a special indicator is needed to represent digits that are directly in line with preceding letters -- such as in catalog part numbers and similar designations that are common in literary context. In order to keep things simple, and to keep faith with both kinds of notation and the other project goals, UBC in its current (and not necessarily final) proposal requires an explicit "subscript" indicator in all cases where a subscript is used, even in cases where practically every number is a subscript, as in a chemical equation.
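
The leaning described above can be modeled in miniature. In the Python sketch below, the '_' and '^' markers are hypothetical stand-ins for braille indicators, not actual UBC or Nemeth symbols: under the implicit convention the chemical formula costs nothing but the in-line part number needs a baseline marker, while under the explicit convention the costs are reversed.

```python
# A toy model (not real braille) contrasting two encoding conventions for
# subscripted digits. The markers '_' (subscript) and '^' (baseline) are
# hypothetical stand-ins for braille indicators, used here for illustration.

def encode_implicit(tokens):
    """Implicit convention (Nemeth-style): a digit following a letter is
    read as a subscript by default, so only a *baseline* digit after a
    letter needs a marker."""
    out = ""
    prev_letter = False
    for ch, is_sub in tokens:
        if ch.isdigit() and prev_letter and not is_sub:
            out += "^" + ch          # mark: NOT a subscript
        else:
            out += ch                # the default reading is correct
        prev_letter = ch.isalpha()
    return out

def encode_explicit(tokens):
    """Explicit convention (UBC-style): every subscript carries an
    indicator, so baseline digits never need one."""
    out = ""
    for ch, is_sub in tokens:
        out += ("_" + ch) if is_sub else ch
    return out

water = [("H", False), ("2", True), ("O", False)]   # chemical formula
part = [("X", False), ("2", False)]                 # catalog part number

print(encode_implicit(water), encode_implicit(part))   # prints: H2O X^2
print(encode_explicit(water), encode_explicit(part))   # prints: H_2O X2
```

Neither convention avoids indicators altogether; each simply chooses which kind of text pays for them, which is exactly the trade-off at issue.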

The simplicity and consistency of that approach appeals to many people, because it means that the occasional chemical formula, which we all encounter in all kinds of contexts, is easily and accurately readable without any new learning. It also fosters learning, especially at the early stages, even about Chemistry, because the student using braille need not cope with some new way of understanding the notation itself. Rather, just like the student using print, his essential task will be to learn the subject matter, that is the meaning behind the notation. But predictably -- and understandably -- those people who regard Chemistry as their life's work are less enthusiastic about the prospect of writing and reading what they perceive to be great numbers of subscript indicators in chemical formulae, all just to avoid what they perceive to be relatively few indicators in catalog part numbers. Such perceptions, incidentally, may or may not be accurate in a given case -- we humans are notoriously prone to a lot of subjective skew when it comes to estimating statistics -- but in any event it is perceptions that matter when it comes to making judgments.

So does this mean that UBC has reached an inherent contradiction, a dead-end from which there is no escape? Not at all, in my opinion, but it does mean that we may have to be clearer about some of the limitations that any practical UBC is likely to have, as well as its benefits -- not to "oversell" the concept, in other words, lest we unconsciously encourage expectations that are unlikely to be met. In particular, we may need to contemplate the possibility that UBC may not totally eliminate all private and otherwise specialized braille codes. Rather, I believe, UBC will become the broad-spectrum "publishing" code that everyone will be able to read and write for just about every purpose, even if it is not necessarily what a professional always uses for private notes and direct notational work in his own specialty. By thus occupying more of the ground, so to speak, UBC will mean that other braille codes are likely to be even more specialized than they are now, but not eliminated altogether.

We should not be surprised, for instance, to see a Chemistry Specialist's Code evolve that takes full advantage of the bias towards subscripts and other predictable attributes of chemical formulae -- in other words, a code optimized so that the balancing of a chemical equation can be carried out without working around indicators that are really there for the benefit of other disciplines and the wider world. No doubt some such specialist's codes will start out simply as private codes, and no doubt they will borrow much from current specialty codes. But also, as I hope and expect will happen as UBC becomes established, specialty codes are likely to borrow a great deal from UBC itself, that is to remain as compatible with regular UBC as is consistent with the specialty discipline's needs. In a sense they may thus be regarded as variant extensions to UBC rather than as contradictory codes.

In fact, the current UBC proposal can be said to anticipate and enable such a trend. It is not hard to imagine that most users will simply omit most "grade 1 indicators" from their private notes, for instance, thereby working in an instantly available "shorthand". Furthermore, one aspect of the current proposal, called the Alignment Mode, may even be regarded as the first of the compatible specialty codes -- designed as it is to permit efficiency when manually carrying out aligned arithmetic operations on hexadecimal numbers (something that computer programmers may occasionally do, though not very commonly in my experience). When you think of it, even grade 2 can be considered a kind of specialty code -- optimized only for nontechnical prose -- but where the "specialty" notation is so often of interest to so many people that UBC already provides for it.

It may seem shocking to be expecting, even planning for, the continued existence of specialty codes while at the same time working towards a "unified" code. But it need not surprise us at all, if we consider what happens in practically all walks of life, for users of print as well as users of braille. When writing notes that are only for one's own later reference, how much do any of us pay any attention to the rules of capitalization, punctuation, spelling and grammar that normally apply to published writing? Very little, if my own notes in preparation for this paper are as typical as I believe them to be. Ad-hoc shortcuts of all kinds abound, flowing naturally from the writer's own familiarity with the subject matter. While such private "codes" usually remain informal and peculiar to the individual, there are similar though more formalized codes that tend to evolve for the use of larger groups. Examples would be the shorthand notations for chess moves and knitting instructions that are used by and for people who are already knowledgeable in those subjects. The tendency to create such shorthands is a natural one, and need not be feared, or forbidden, or controlled. It arises from the desire to be very efficient, one may even say focused, when working exclusively in a relatively narrow and well-known subject. In such cases, the strongly constrained context allows efficiencies that are simply impossible to match in a broader notation system, and so a specialty code is born. This tendency is the same for braille, although the break-points where specialty codes arise are not necessarily the same, for the simple reason that the mechanics -- the size of the basic characters, and their more limited number -- are different. 
In any case, any additional learning or other complexities associated with a specialty code will be experienced mainly by persons already skilled and actively working in that specialty, and common sense suggests that those are the very people who are the most able as well as the most motivated to deal with those complexities.

This is by no means a forecast that specialty codes will become so numerous or extensively used that the situation will be worse than at present. On the contrary, the broad expressiveness of UBC is bound to reduce their use to cases where the need for special efficiency is strongly felt, and those are not likely to be common. And initial estimates of the efficiency of UBC itself are surprisingly encouraging -- for sufficiently large samples, it should not be very different from that of today's braille codes.

In summary, UBC itself is not an absolute, no more so than any of its individual goals. It will not solve all problems, nor cause all specialty codes to disappear. But it will still bring about enormous improvements in the production and use of braille, and that is well worth doing.


Copyright Duxbury Systems, Inc. Thursday, July 25, 2013