Most other common encodings are single-byte, e.g. Latin-1/-15 or Windows-1252, depending on which operating system one is using.
If you still have text files in such encodings, there are several possibilities:
- convert all text files to UTF-8
- keep the text files in their current encoding, but convert the characters to Unicode when reading a file and back to the original encoding when writing it (Java does this, for example); see below.
- work with the current encoding, without converting anything. You can store such strings as char or ubyte and process them normally. Just be careful not to apply standard library functions that expect UTF-8 data to these strings; treat them as binary data when reading from or writing to files or the console (see next section).
You cannot use writefln for this, because you will get an "invalid UTF-8 sequence" error. You have to use a lower-level output function instead.
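A minimal sketch of this, assuming the bytes come from a Latin-1 file: `stdout.rawWrite` from std.stdio emits the bytes untouched, whereas `writefln` would try to validate them as UTF-8 and throw.

```d
import std.stdio;

void main()
{
    // Latin-1 bytes for "café" — 0xE9 is 'é' in Latin-1,
    // but is not a valid UTF-8 sequence on its own.
    ubyte[] latin1 = [0x63, 0x61, 0x66, 0xE9];

    // writefln(cast(string) latin1) would fail with a UTF error;
    // rawWrite treats the data as opaque bytes.
    stdout.rawWrite(latin1);
}
```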
This function can be used to convert Latin-1 to Unicode. The conversion is trivial, because Latin-1 and Unicode share the same first 256 code points.
The resulting dchars allow for easy character manipulation in the program (one dchar = one character).
similar (to be done)
? to/from UTF-8 conversion (reading/writing text files on Windows)
Latin-1 (= ISO 8859-1) is similar to some other encodings: Latin-9 (= ISO 8859-15) mainly adds the Euro sign (€), and Windows uses its own code pages such as Windows-1252, which also differ in a few places. Windows API calls should be used to convert these code pages to Unicode (see above).
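To illustrate where Windows-1252 diverges from Latin-1: the two agree everywhere except the 0x80–0x9F range, where Windows-1252 places printable characters such as the Euro sign and curly quotes. The sketch below shows a hypothetical partial mapping table with only a few entries; a complete converter would cover the whole range (or, on Windows, call the system API the text refers to).

```d
/// Hypothetical partial converter: Windows-1252 byte to Unicode code point.
/// Only a few entries of the 0x80–0x9F range are shown.
dchar cp1252ToUnicode(ubyte b)
{
    switch (b)
    {
        case 0x80: return '\u20AC'; // Euro sign €
        case 0x93: return '\u201C'; // left double quotation mark
        case 0x94: return '\u201D'; // right double quotation mark
        // ... remaining 0x80–0x9F entries omitted ...
        default:   return cast(dchar) b; // elsewhere cp1252 == Latin-1
    }
}
```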
- http://en.wikipedia.org/wiki/Windows-1252 (aka cp1252)