At the last Unicode Conference in October, Computer Science professor Jiangping Wang gave a good talk on how to train new programmers (especially those in the U.S.) to write software that handles Unicode properly.
One issue Dr. Wang mentioned is that when encoding is taught in traditional computer science programs, the coverage is brief and sticks to ASCII only. This is obviously problematic, since character encoding moved beyond ASCII back in the 1980s. Another problem is that ASCII is nowhere near as complex as Unicode, so ASCII-only training leaves students unprepared for the harder cases.
Unicode isn’t just about expanding the character set; it also introduces additional typographic issues. For instance, Unicode contains characters that control text direction (left-to-right or right-to-left), which have no ASCII counterpart. In addition, Unicode can be represented in “several flavors” such as UTF-8, UTF-16 and so forth. ASCII also had a few national variants, but it was never dependent on byte order the way Unicode is.
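To make the “flavors” and byte-order point concrete, here is a minimal Python sketch (my own illustration, not something from Dr. Wang’s talk) encoding the same Cyrillic string in UTF-8 and in both byte orders of UTF-16:

    # The same text in several Unicode "flavors" (Python 3).
    text = "Привет"  # a short Cyrillic greeting

    # UTF-8 has one canonical byte sequence; byte order never matters.
    print(text.encode("utf-8"))

    # UTF-16 uses 16-bit code units, so the same text has two byte layouts.
    print(text.encode("utf-16-le"))  # little-endian
    print(text.encode("utf-16-be"))  # big-endian

    # Plain "utf-16" prepends a byte order mark (BOM) to flag the layout.
    print(text.encode("utf-16"))

    # Direction controls such as U+200F (RIGHT-TO-LEFT MARK) are ordinary
    # code points with no ASCII counterpart.
    print("\u200f".encode("utf-8"))

Running it shows three different byte sequences for the same six letters, which is exactly the kind of surprise an ASCII-only education never prepares you for.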
Of course Dr. Wang was “preaching to the choir” at Unicode 31 – we all know how important proper Unicode support is. The real challenge is convincing everyone else that Unicode is the wave of the future.
Will this ever happen? Actually, one thing that will probably accelerate the adoption of Unicode is the rise of online Web 2.0 technologies. Companies that want their tools to reach a global audience (e.g. Google, Yahoo, del.icio.us, Twitter) are building in Unicode support from the start. That way, anyone from Japan to Russia can tag their custom maps in their native script.
I don’t know about you, but nothing makes me feel more connected to the World Wide Web than seeing a Twitter posting in Cyrillic.
