

Character Set Handling


Character sets used in the early days of computing had only six, seven, or eight bits for each character: there was never a case where more than eight bits (one byte) were used to represent a single character. The limitations of this approach became more apparent as more people grappled with non-Roman character sets, where not all the characters that make up a language's character set can be represented by @math{2^8} choices. This chapter shows the functionality which was added to the C library to support multiple character sets.

Introduction to Extended Characters

A variety of solutions exist to overcome the differences between character sets with a 1:1 relation between bytes and characters and character sets with ratios of 2:1 or 4:1. The remainder of this section gives a few examples to help understand the design decisions made while developing the functionality of the C library.

A distinction we have to make right away is between internal and external representation. Internal representation means the representation used by a program while keeping the text in memory. External representations are used when text is stored or transmitted through any kind of communication channel. Examples of external representations include files lying in a directory that are going to be read and parsed.

Traditionally there has been no difference between the two representations. It was equally comfortable and useful to use the same single-byte representation internally and externally. This changes with more and larger character sets.

One of the problems to overcome with the internal representation is handling text that is externally encoded using different character sets. Assume a program which reads two texts and compares them using some metric. The comparison can be usefully done only if the texts are internally kept in a common format.

For such a common format (@math{=} character set) eight bits are certainly no longer enough. So the smallest entity will have to grow: wide characters will now be used. Instead of one byte per character, two or four will be used. (Three bytes are awkward to address in memory and more than four seem not to be necessary.)

As shown in some other part of this manual, there exists a completely new family of functions which can handle texts of this kind in memory. The most commonly used character sets for such internal wide character representations are Unicode and ISO 10646 (also known as UCS, the Universal Character Set). Unicode was originally planned as a 16-bit character set, whereas ISO 10646 was designed to be a 31-bit code space. The two standards are practically identical: they have the same character repertoire and code table, but Unicode specifies added semantics. At the moment, only characters in the first 0x10000 code positions (the so-called Basic Multilingual Plane, BMP) have been assigned, but the assignment of more specialized characters outside this 16-bit space is already in progress. A number of encodings have been defined for Unicode and ISO 10646 characters:

UCS-2
A 16-bit word which can only represent characters from the BMP.
UCS-4
A 32-bit word which can represent any Unicode and ISO 10646 character.
UTF-8
An ASCII compatible encoding in which ASCII characters are represented by ASCII bytes and non-ASCII characters by sequences of 2-6 non-ASCII bytes.
UTF-16
An extension of UCS-2 in which pairs of certain UCS-2 words can be used to encode non-BMP characters up to 0x10ffff.
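
To make the UTF-8 scheme more concrete, the following sketch encodes a single ISO 10646 code point into its 1 to 6 byte UTF-8 form. It is only an illustration and not part of any library interface; the function name is invented for this example and the usual declarations from `stddef.h' are assumed.

size_t
encode_utf8 (unsigned long int code, unsigned char *buf)
{
  /* Upper bounds of the code points representable with 1 to 6 bytes
     and the corresponding leading byte patterns.  code must lie in
     the range 0 to 0x7fffffff.  */
  static const unsigned long int limits[] =
    { 0x7f, 0x7ff, 0xffff, 0x1fffff, 0x3ffffff, 0x7fffffff };
  static const unsigned char leads[] =
    { 0x00, 0xc0, 0xe0, 0xf0, 0xf8, 0xfc };
  size_t n = 0;
  size_t i;

  /* Determine how many continuation bytes are needed.  */
  while (code > limits[n])
    ++n;
  /* Store the continuation bytes from the end, six payload bits
     each, marked with the bit pattern 10xxxxxx.  */
  for (i = n; i > 0; --i)
    {
      buf[i] = 0x80 | (code & 0x3f);
      code >>= 6;
    }
  buf[0] = leads[n] | code;
  return n + 1;
}

For example, the EURO SIGN (code position 0x20ac) comes out as the three bytes 0xe2, 0x82, 0xac.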

To represent wide characters the char type is not suitable. For this reason the ISO C standard introduces a new type which is designed to keep one character of a wide character string. To maintain the similarity there is also a type corresponding to int for those functions which take a single wide character.

Data type: wchar_t
This data type is used as the base type for wide character strings. I.e., arrays of objects of this type are the equivalent of char[] for multibyte character strings. The type is defined in `stddef.h'.

The ISO C90 standard, where this type was introduced, does not say anything specific about the representation. It only requires that this type is capable of storing all elements of the basic character set. Therefore it would be legitimate to define wchar_t as char. This might make sense for embedded systems.

But for GNU systems this type is always 32 bits wide. It is therefore capable of representing all UCS-4 values and so covers all of ISO 10646. Some Unix systems define wchar_t as a 16-bit type and thereby follow Unicode very strictly. This is perfectly fine with the standard, but it also means that to represent all characters from Unicode and ISO 10646 one has to use UTF-16 surrogate characters, which is in fact a multi-wide-character encoding. But this contradicts the purpose of the wchar_t type.

Data type: wint_t
wint_t is a data type used for parameters and variables which contain a single wide character. As the name already suggests it is the equivalent to int when using the normal char strings. The types wchar_t and wint_t often have the same representation if their size is 32 bits, but if wchar_t is defined as char the type wint_t must be defined as int due to the parameter promotion.

This type is defined in `wchar.h' and was introduced in Amendment 1 to ISO C90.

As for the char data type, there also exist macros specifying the minimum and maximum value representable in an object of type wchar_t.

Macro: wint_t WCHAR_MIN
The macro WCHAR_MIN evaluates to the minimum value representable by an object of type wchar_t.

This macro was introduced in Amendment 1 to ISO C90.

Macro: wint_t WCHAR_MAX
The macro WCHAR_MAX evaluates to the maximum value representable by an object of type wchar_t.

This macro was introduced in Amendment 1 to ISO C90.

Another special wide character value is the equivalent to EOF.

Macro: wint_t WEOF
The macro WEOF evaluates to a constant expression of type wint_t whose value is different from any member of the extended character set.

WEOF need not be the same value as EOF and unlike EOF it also need not be negative. I.e., sloppy code like

{
  int c;
  ...
  while ((c = getc (fp)) >= 0)
    ...
}

has to be rewritten to explicitly use WEOF when wide characters are used.

{
  wint_t c;
  ...
  while ((c = getwc (fp)) != WEOF)
    ...
}

This macro was introduced in Amendment 1 to ISO C90 and is defined in `wchar.h'.

These internal representations present problems when it comes to storage and transmission. Since a single wide character consists of more than one byte, it is affected by byte ordering; i.e., machines with different endianness would see different values when accessing the same data. This also applies to communication protocols, which are all byte-based: the sender has to decide about splitting the wide character into bytes. A last (but not least important) point is that wide characters often require more storage space than a customized byte-oriented character set.

For all the above reasons, an external encoding which is different from the internal encoding is often used if the latter is UCS-2 or UCS-4. The external encoding is byte-based and can be chosen appropriately for the environment and for the texts to be handled. A variety of different character sets can be used for this external encoding; they will not be described exhaustively here--a description of the major groups will suffice. All of the ASCII-based character sets fulfill one requirement: they are "filesystem safe". This means that the character '/' is used in the encoding only to represent itself. Things are a bit different for character sets like EBCDIC (Extended Binary Coded Decimal Interchange Code, a character set family used by IBM), but if the operating system does not understand EBCDIC directly the parameters to system calls have to be converted first anyhow.

The question remaining is: how to select the character set or encoding to use. The answer: you cannot decide about it yourself, it is decided by the developers of the system or the majority of the users. Since the goal is interoperability one has to use whatever the other people one works with use. If there are no constraints the selection is based on the requirements the expected circle of users will have. I.e., if a project is expected to only be used in, say, Russia it is fine to use KOI8-R or a similar character set. But if at the same time people from, say, Greece are participating one should use a character set which allows all people to collaborate.

The most widely useful solution seems to be: go with the most general character set, namely ISO 10646. Use UTF-8 as the external encoding and problems about users not being able to use their own language adequately are a thing of the past.

One final comment about the choice of the wide character representation is necessary at this point. We have said above that the natural choice is using Unicode or ISO 10646. This is not required, but at least encouraged, by the ISO C standard. The standard defines the macro __STDC_ISO_10646__, which is only defined on systems where the wchar_t type encodes ISO 10646 characters. If this symbol is not defined one should avoid making assumptions about the wide character representation as much as possible. If the programmer uses only the functions provided by the C library to handle wide character strings there should not be any compatibility problems with other systems.
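
A hedged sketch of how a program might use this macro follows; the function name is invented for this example and the declarations from `wchar.h' are assumed.

void
set_euro (wchar_t *wc)
{
#ifdef __STDC_ISO_10646__
  /* wchar_t values are ISO 10646 code points, so the code position
     of the EURO SIGN (U+20AC) can be used directly.  */
  *wc = (wchar_t) 0x20ac;
#else
  /* No such guarantee; the value would have to be obtained through
     the conversion functions described below.  Store a plain
     question mark as a placeholder here.  */
  *wc = btowc ('?');
#endif
}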

Overview about Character Handling Functions

A Unix C library contains three different sets of functions in two families to handle character set conversion. One family of functions is specified in the ISO C standard and is therefore portable even beyond the Unix world.

The most commonly known set of functions, coming from the ISO C90 standard, is unfortunately the least useful one. In fact, these functions should be avoided whenever possible, especially when developing libraries (as opposed to applications).

The second family of functions was introduced in the early Unix standards (XPG2) and is still part of the latest and greatest Unix standard: Unix 98. It is also the most powerful and useful set of functions. But we will start with the functions defined in Amendment 1 to ISO C90.

Restartable Multibyte Conversion Functions

The ISO C standard defines functions to convert strings from a multibyte representation to wide character strings. There are a number of peculiarities: the character set assumed for the multibyte encoding is not an explicit argument of the functions but is taken from the LC_CTYPE category of the currently selected locale, and the functions which handle more than one character at a time expect NUL terminated strings, which makes converting arbitrary buffers of text awkward (this limitation is discussed together with mbsrtowcs below).

Despite these limitations the ISO C functions can very well be used in many contexts. In graphical user interfaces, for instance, it is not uncommon to have functions which require text to be displayed in a wide character string if it is not simple ASCII. The text itself might come from a file with translations and the user should decide about the current locale which determines the translation and therefore also the external encoding used. In such a situation (and many others) the functions described here are perfect. If more freedom while performing the conversion is necessary take a look at the iconv functions (see section Generic Charset Conversion).

Selecting the conversion and its properties

We already said above that the currently selected locale for the LC_CTYPE category decides about the conversion which is performed by the functions we are about to describe. Each locale uses its own character set (given as an argument to localedef) and this is the one assumed as the external multibyte encoding. The wide character set is always UCS-4, at least on GNU systems.

A characteristic of each multibyte character set is the maximum number of bytes which can be necessary to represent one character. This information is quite important when writing code which uses the conversion functions; we will see this in the examples below. The ISO C standard defines two macros which provide this information.

Macro: int MB_LEN_MAX
This macro specifies the maximum number of bytes in the multibyte sequence for a single character in any of the supported locales. It is a compile-time constant and it is defined in `limits.h'.

Macro: int MB_CUR_MAX
MB_CUR_MAX expands into a positive integer expression that is the maximum number of bytes in a multibyte character in the current locale. The value is never greater than MB_LEN_MAX. Unlike MB_LEN_MAX this macro need not be a compile-time constant and in fact, in the GNU C library it is not.

MB_CUR_MAX is defined in `stdlib.h'.

Two different macros are necessary since strict ISO C90 compilers do not allow variable length array definitions, but it is still desirable to avoid dynamic allocation. This incomplete piece of code shows the problem:

{
  char buf[MB_LEN_MAX];
  ssize_t len = 0;

  while (! feof (fp))
    {
      len += fread (&buf[len], 1, MB_CUR_MAX - len, fp);
      /* ... process one multibyte character from buf,
         consuming `used' bytes ... */
      len -= used;
    }
}

The code in the inner loop is expected to always have enough bytes in the array buf to convert one multibyte character. The array buf has to be sized statically since many compilers do not allow a variable size. The fread call makes sure that MB_CUR_MAX bytes are always available in buf. Note that it isn't a problem if MB_CUR_MAX is not a compile-time constant.

Representing the state of the conversion

In the introduction of this chapter it was said that certain character sets use a stateful encoding. I.e., the encoded values depend in some way on the previous bytes in the text.

Since the conversion functions allow converting a text in more than one step we must have a way to pass this information from one call of the functions to another.

Data type: mbstate_t
A variable of type mbstate_t can contain all the information about the shift state needed from one call to a conversion function to another.

This type is defined in `wchar.h'. It was introduced in Amendment 1 to ISO C90.

To use objects of this type the programmer has to define such objects (normally as local variables on the stack) and pass a pointer to the object to the conversion functions. This way the conversion function can update the object if the current multibyte character set is stateful.

There is no specific function or initializer to put the state object in any specific state. The rule is that the object must always represent the initial state before the first use, and this is achieved by clearing the whole variable with code such as the following:

{
  mbstate_t state;
  memset (&state, '\0', sizeof (state));
  /* from now on state can be used.  */
  ...
}

When using the conversion functions to generate output it is often necessary to test whether the current state corresponds to the initial state. This is necessary, for example, to decide whether or not to emit escape sequences to set the state to the initial state at certain sequence points. Communication protocols often require this.

Function: int mbsinit (const mbstate_t *ps)
This function determines whether the state object pointed to by ps is in the initial state or not. If ps is a null pointer or the object is in the initial state the return value is nonzero. Otherwise it is zero.

This function was introduced in Amendment 1 to ISO C90 and is declared in `wchar.h'.

Code using this function often looks similar to this:

{
  mbstate_t state;
  memset (&state, '\0', sizeof (state));
  /* Use state.  */
  ...
  if (! mbsinit (&state))
    {
      /* Emit code to return to initial state.  */
      const wchar_t empty[] = L"";
      const wchar_t *srcp = empty;
      wcsrtombs (outbuf, &srcp, outbuflen, &state);
    }
  ...
}

The code to emit the escape sequence to get back to the initial state is interesting. The wcsrtombs function can be used to determine the necessary output code (see section Converting Multibyte and Wide Character Strings). Please note that on GNU systems it is not necessary to perform this extra action for the conversion from multibyte text to wide character text since the wide character encoding is not stateful. But nothing in any standard prohibits wchar_t from using a stateful encoding.

Converting Single Characters

The most fundamental of the conversion functions are those dealing with single characters. Please note that this does not always mean single bytes. But since there is very often a subset of the multibyte character set which consists of single byte sequences there are functions to help with converting bytes. One very important and often applicable scenario is where ASCII is a subset of the multibyte character set; i.e., all ASCII characters stand for themselves and all other characters have at least a first byte which is outside the range @math{0} to @math{127}.

Function: wint_t btowc (int c)
The btowc function ("byte to wide character") converts a valid single byte character c in the initial shift state into the wide character equivalent using the conversion rules from the currently selected locale of the LC_CTYPE category.

If (unsigned char) c is not a valid single byte multibyte character or if c is EOF the function returns WEOF.

Please note the restriction of c being tested for validity only in the initial shift state. There is no mbstate_t object used from which the state information is taken and the function also does not use any static state.

This function was introduced in Amendment 1 to ISO C90 and is declared in `wchar.h'.

Despite the limitation that the single byte value is always interpreted in the initial state this function is actually useful most of the time. Most character sets are either entirely single-byte character sets or they are extensions of ASCII. But then it is possible to write code like this (not that this specific example is very useful):

wchar_t *
itow (unsigned long int val)
{
  static wchar_t buf[30];
  wchar_t *wcp = &buf[29];
  *wcp = L'\0';
  while (val != 0)
    {
      *--wcp = btowc ('0' + val % 10);
      val /= 10;
    }
  if (wcp == &buf[29])
    *--wcp = L'0';
  return wcp;
}

Why is it necessary to use such a complicated implementation and not simply cast '0' + val % 10 to a wide character? The answer is that there is no guarantee that one can perform this kind of arithmetic on the characters of the character set used for the wchar_t representation. In other situations the bytes are not constant at compile time and so the compiler cannot do the work. In situations like this, using btowc is necessary.

There is also a function for the conversion in the other direction.

Function: int wctob (wint_t c)
The wctob function ("wide character to byte") takes as the parameter a valid wide character. If the multibyte representation for this character in the initial state is exactly one byte long the return value of this function is this character. Otherwise the return value is EOF.

This function was introduced in Amendment 1 to ISO C90 and is declared in `wchar.h'.
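
As a small illustration (the function name is invented for this example and the declarations from `stdio.h' and `wchar.h' are assumed), wctob can be used to copy a wide character string into a byte buffer, but only if every character has a single byte representation in the initial state:

int
widetosingle (char *buf, size_t bufsize, const wchar_t *ws)
{
  size_t i;
  for (i = 0; ws[i] != L'\0'; ++i)
    {
      int b = wctob ((wint_t) ws[i]);
      if (b == EOF || i + 1 >= bufsize)
        /* The character has no single byte form, or the buffer
           is full.  */
        return 0;
      buf[i] = (char) b;
    }
  buf[i] = '\0';
  return 1;
}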

There are more general functions to convert single characters from the multibyte representation to wide characters and vice versa. These functions pose no limit on the length of the multibyte representation and they also do not require it to be in the initial state.

Function: size_t mbrtowc (wchar_t *restrict pwc, const char *restrict s, size_t n, mbstate_t *restrict ps)
The mbrtowc function ("multibyte restartable to wide character") converts the next multibyte character in the string pointed to by s into a wide character and stores it in the wide character string pointed to by pwc. The conversion is performed according to the locale currently selected for the LC_CTYPE category. If the conversion for the character set used in the locale requires a state the multibyte string is interpreted in the state represented by the object pointed to by ps. If ps is a null pointer, a static, internal state variable used only by the mbrtowc function is used.

If the next multibyte character corresponds to the NUL wide character the return value of the function is @math{0} and the state object is afterwards in the initial state. If the next n or fewer bytes form a correct multibyte character the return value is the number of bytes starting from s which form the multibyte character. The conversion state is updated according to the bytes consumed in the conversion. In both cases the wide character (either the L'\0' or the one found in the conversion) is stored in the string pointed to by pwc if pwc is not null.

If the first n bytes of the multibyte string possibly form a valid multibyte character but more than n bytes are needed to complete it the return value of the function is (size_t) -2 and no value is stored. Please note that this can happen even if n has a value greater than or equal to MB_CUR_MAX since the input might contain redundant shift sequences.

If the first n bytes of the multibyte string cannot possibly form a valid multibyte character, no value is stored, the global variable errno is set to EILSEQ, and the function returns (size_t) -1. The conversion state is afterwards undefined.

This function was introduced in Amendment 1 to ISO C90 and is declared in `wchar.h'.

Using this function is straightforward. A function which copies a multibyte string into a wide character string while at the same time converting all lowercase characters into uppercase could look like this (this is not the final version, just an example; the memory handling is discussed below):

wchar_t *
mbstouwcs (const char *s)
{
  /* Include the terminating NUL byte in the conversion.  */
  size_t len = strlen (s) + 1;
  wchar_t *result = malloc (len * sizeof (wchar_t));
  wchar_t *wcp = result;
  wchar_t tmp[1];
  mbstate_t state;
  size_t nbytes;

  if (result == NULL)
    return NULL;
  memset (&state, '\0', sizeof (state));
  while ((nbytes = mbrtowc (tmp, s, len, &state)) > 0)
    {
      if (nbytes >= (size_t) -2)
        {
          /* Invalid input string.  */
          free (result);
          return NULL;
        }
      /* Store the uppercase variant of the converted character.  */
      *wcp++ = towupper (tmp[0]);
      len -= nbytes;
      s += nbytes;
    }
  /* Terminate the constructed wide character string.  */
  *wcp = L'\0';
  return result;
}

The use of mbrtowc should be clear. A single wide character is stored in tmp[0] and the number of consumed bytes is stored in the variable nbytes. If the conversion was successful, the uppercase variant of the wide character is stored in the result array and the pointer to the input string and the number of available bytes are adjusted.

The only non-obvious thing about the function might be the way memory is allocated for the result. The above code uses the fact that there can never be more wide characters in the converted result than there are bytes in the multibyte input string. This method yields a pessimistic guess about the size of the result, and if many wide character strings have to be constructed this way or the strings are long, the extra memory allocated because the input string contains multibyte characters might be significant. It would be possible to resize the allocated memory block to the correct size before returning it. A better solution might be to allocate just the right amount of space for the result right away. Unfortunately there is no function to compute the length of the wide character string directly from the multibyte string. But there is a function which does part of the work.

Function: size_t mbrlen (const char *restrict s, size_t n, mbstate_t *ps)
The mbrlen function ("multibyte restartable length") computes the number of bytes, at most n, starting at s which form the next valid and complete multibyte character.

If the next multibyte character corresponds to the NUL wide character the return value is @math{0}. If the next n bytes form a valid multibyte character the number of bytes belonging to this multibyte character byte sequence is returned.

If the first n bytes possibly form a valid multibyte character but it is incomplete the return value is (size_t) -2. Otherwise the multibyte character sequence is invalid and the return value is (size_t) -1.

The multibyte sequence is interpreted in the state represented by the object pointed to by ps. If ps is a null pointer, a state object local to mbrlen is used.

This function was introduced in Amendment 1 to ISO C90 and is declared in `wchar.h'.

The attentive reader will of course note that mbrlen can be implemented as

mbrtowc (NULL, s, n, ps != NULL ? ps : &internal)

This is true and in fact is mentioned in the official specification. Now, how can this function be used to determine the length of the wide character string created from a multibyte character string? It is not directly usable but we can define a function mbslen using it:

size_t
mbslen (const char *s)
{
  mbstate_t state;
  size_t result = 0;
  size_t nbytes;
  memset (&state, '\0', sizeof (state));
  while ((nbytes = mbrlen (s, MB_LEN_MAX, &state)) > 0)
    {
      if (nbytes >= (size_t) -2)
        /* Something is wrong.  */
        return (size_t) -1;
      s += nbytes;
      ++result;
    }
  return result;
}

This function simply calls mbrlen for each multibyte character in the string and counts the number of function calls. Please note that we here use MB_LEN_MAX as the size argument in the mbrlen call. This is OK since a) this value is larger than the length of the longest multibyte character sequence and b) we know that the string s ends with a NUL byte which cannot be part of any other multibyte character sequence but the one representing the NUL wide character. Therefore the mbrlen function will never read invalid memory.

Now that this function is available (just to make this clear, this function is not part of the GNU C library) we can compute the number of wide characters required to store the converted multibyte character string s using

wcs_bytes = (mbslen (s) + 1) * sizeof (wchar_t);

Please note that the mbslen function is quite inefficient. An implementation of mbstouwcs based on mbslen would have to perform the conversion of the multibyte character input string twice, and this conversion might be quite expensive. So it is necessary to think about the consequences of using the easier but imprecise method before doing the work twice.

Function: size_t wcrtomb (char *restrict s, wchar_t wc, mbstate_t *restrict ps)
The wcrtomb function ("wide character restartable to multibyte") converts a single wide character into a multibyte string corresponding to that wide character.

If s is a null pointer the function resets the state stored in the object pointed to by ps (or the internal mbstate_t object) to the initial state. This can also be achieved by a call like this:

wcrtomb (temp_buf, L'\0', ps)

since if s is a null pointer wcrtomb performs as if it writes into an internal buffer which is guaranteed to be large enough.

If wc is the NUL wide character, wcrtomb emits, if necessary, a shift sequence to get the state ps into the initial state, followed by a single NUL byte, which is stored in the string s.

Otherwise a byte sequence (possibly including shift sequences) is written into the string s. This of course only happens if wc is a valid wide character, i.e., it has a multibyte representation in the character set selected by the locale of the LC_CTYPE category. If wc is not a valid wide character, nothing is stored in the string s, errno is set to EILSEQ, the conversion state in ps is undefined and the return value is (size_t) -1.

If no error occurred the function returns the number of bytes stored in the string s. This includes all bytes representing shift sequences.

One word about the interface of the function: there is no parameter specifying the length of the array s. Instead the function assumes that there are at least MB_CUR_MAX bytes available since this is the maximum length of any byte sequence representing a single character. So the caller has to make sure that there is enough space available, otherwise buffer overruns can occur.

This function was introduced in Amendment 1 to ISO C90 and is declared in `wchar.h'.
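
The usual calling pattern therefore looks like the following sketch, where wc, state, and fp are assumed to be provided by the surrounding code:

{
  /* MB_LEN_MAX is a compile-time constant and therefore usable as a
     static array size; it is at least as large as MB_CUR_MAX.  */
  char buf[MB_LEN_MAX];
  size_t nbytes = wcrtomb (buf, wc, &state);
  if (nbytes != (size_t) -1)
    /* The first nbytes bytes of buf form the multibyte sequence.  */
    fwrite (buf, 1, nbytes, fp);
}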

Using this function is as easy as using mbrtowc. The following example appends a wide character string to a multibyte character string. Again, the code is not really useful (or entirely correct); it is simply here to demonstrate the use and some problems.

char *
mbscatwcs (char *s, size_t len, const wchar_t *ws)
{
  mbstate_t state;
  /* Find the end of the existing string.  */
  char *wp = strchr (s, '\0');
  len -= wp - s;
  memset (&state, '\0', sizeof (state));
  do
    {
      size_t nbytes;
      if (len < MB_CUR_MAX)
        {
          /* We cannot guarantee that the next
             character fits into the buffer, so
             return an error.  */
          errno = E2BIG;
          return NULL;
        }
      nbytes = wcrtomb (wp, *ws, &state);
      if (nbytes == (size_t) -1)
        /* Error in the conversion.  */
        return NULL;
      len -= nbytes;
      wp += nbytes;
    }
  while (*ws++ != L'\0');
  return s;
}

First the function has to find the end of the string currently in the array s. The strchr call does this very efficiently since a requirement for multibyte character representations is that the NUL byte never is used except to represent itself (and in this context, the end of the string).

After initializing the state object the loop is entered where the first task is to make sure there is enough room in the array s. We abort if there are not at least MB_CUR_MAX bytes available. This is not always optimal but we have no other choice. We might have less than MB_CUR_MAX bytes available but the next multibyte character might also be only one byte long. At the time the wcrtomb call returns it is too late to decide whether the buffer was large enough or not. If this solution is really unsuitable there is a very slow but more accurate solution.

  ...
  if (len < MB_CUR_MAX)
    {
      mbstate_t temp_state;
      char temp_buf[MB_LEN_MAX];
      memcpy (&temp_state, &state, sizeof (state));
      if (wcrtomb (temp_buf, *ws, &temp_state) > len)
        {
          /* We cannot guarantee that the next
             character fits into the buffer, so
             return an error.  */
          errno = E2BIG;
          return NULL;
        }
    }
  ...

Here we perform the conversion that might overflow the buffer, but into a scratch buffer, so that afterwards we are in a position to make an exact decision about the available space. The temp_buf array only serves as a dumping ground; we are not interested in the converted text at this point. The most unusual thing about this piece of code certainly is the duplication of the conversion state object. But think about this: if a change of the state is necessary to emit the next multibyte character we want to have the same shift state change performed in the real conversion. Therefore we have to preserve the initial shift state information.

There are certainly many more and even better solutions to this problem. This example is only meant for educational purposes.

Converting Multibyte and Wide Character Strings

The functions described in the previous section only convert a single character at a time. Most operations to be performed in real-world programs include strings and therefore the ISO C standard also defines conversions on entire strings. However, the defined set of functions is quite limited, thus the GNU C library contains a few extensions which can help in some important situations.

Function: size_t mbsrtowcs (wchar_t *restrict dst, const char **restrict src, size_t len, mbstate_t *restrict ps)
The mbsrtowcs function ("multibyte string restartable to wide character string") converts the NUL terminated multibyte character string at *src into an equivalent wide character string, including the NUL wide character at the end. The conversion is started using the state information from the object pointed to by ps or from an internal object of mbsrtowcs if ps is a null pointer. Before returning, the state object is updated to match the state after the last converted character. The state is the initial state if the terminating NUL byte is reached and converted.

If dst is not a null pointer the result is stored in the array pointed to by dst, otherwise the conversion result is not available since it is stored in an internal buffer.

If len wide characters are stored in the array dst before reaching the end of the input string the conversion stops and len is returned. If dst is a null pointer len is never checked.

Another reason for a premature return from the function call is if the input string contains an invalid multibyte sequence. In this case the global variable errno is set to EILSEQ and the function returns (size_t) -1.

In all other cases the function returns the number of wide characters converted during this call. If dst is not null mbsrtowcs stores in the pointer pointed to by src a null pointer (if the NUL byte in the input string was reached) or the address of the byte following the last converted multibyte character.

This function was introduced in Amendment 1 to ISO C90 and is declared in `wchar.h'.
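
For complete strings the usual two-pass idiom can be used: a first call with a null dst determines the required size, and a second call performs the real conversion. The following sketch shows this; the function is not part of the GNU C library and the declarations from `stdlib.h', `string.h', and `wchar.h' are assumed.

wchar_t *
mbsrtowcs_alloc (const char *s)
{
  const char *srcp = s;
  mbstate_t state;
  size_t nwcs;
  wchar_t *result;

  /* First pass: only count the resulting wide characters.  */
  memset (&state, '\0', sizeof (state));
  nwcs = mbsrtowcs (NULL, &srcp, 0, &state);
  if (nwcs == (size_t) -1)
    return NULL;

  result = malloc ((nwcs + 1) * sizeof (wchar_t));
  if (result == NULL)
    return NULL;

  /* Second pass: restart from the beginning with a fresh state.  */
  srcp = s;
  memset (&state, '\0', sizeof (state));
  if (mbsrtowcs (result, &srcp, nwcs + 1, &state) == (size_t) -1)
    {
      free (result);
      return NULL;
    }
  return result;
}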

The definition of this function has one limitation which has to be understood. The requirement that the input string *src has to be NUL terminated creates problems if one wants to convert buffers with text. A buffer is normally not a collection of NUL terminated strings but instead a continuous run of lines, separated by newline characters. Now assume a function to convert one line from a buffer is needed. Since the line is not NUL terminated the source pointer cannot directly point into the unmodified text buffer. This means either one inserts the NUL byte at the appropriate place for the duration of the mbsrtowcs function call (which is not doable for a read-only buffer or in a multi-threaded application) or one copies the line into an extra buffer where it can be terminated by a NUL byte. Note that it is not in general possible to limit the number of characters to convert by setting the parameter len to any specific value. Since it is not known how many bytes long each multibyte character sequence is, one can only guess.

There is still a problem with the method of NUL terminating a line right after the newline character, which could lead to very strange results. As said in the description of the mbsrtowcs function above, the conversion state is guaranteed to be in the initial shift state after processing the NUL byte at the end of the input string. But this NUL byte is not really part of the text; i.e., the conversion state after the newline in the original text could be something different from the initial shift state, and therefore the first character of the next line is encoded using this state. But the state in question is never accessible to the user since the conversion stops after the NUL byte (which resets the state). Most stateful character sets in use today require that the shift state after a newline is the initial state--but this is not a strict guarantee. Therefore simply NUL terminating a piece of running text is not always an adequate solution and should never be used in general-purpose code.

The generic conversion interface (see section Generic Charset Conversion) does not have this limitation (it simply works on buffers, not strings), and the GNU C library contains a set of functions which take additional parameters specifying the maximal number of bytes which are consumed from the input string. This way the problem of the example above could be solved by determining the line length and passing this length to the function.

Function: size_t wcsrtombs (char *restrict dst, const wchar_t **restrict src, size_t len, mbstate_t *restrict ps)
The wcsrtombs function ("wide character string restartable to multibyte string") converts the NUL terminated wide character string at *src into an equivalent multibyte character string and stores the result in the array pointed to by dst. The NUL wide character is also converted. The conversion starts in the state described in the object pointed to by ps or by a state object local to wcsrtombs in case ps is a null pointer. If dst is a null pointer the conversion is performed as usual but the result is not available. If all characters of the input string were successfully converted and if dst is not a null pointer the pointer pointed to by src gets assigned a null pointer.

If one of the wide characters in the input string has no valid multibyte character equivalent the conversion stops early, sets the global variable errno to EILSEQ, and returns (size_t) -1.

Another reason for a premature stop is if dst is not a null pointer and the next converted character would require more than len bytes in total in the array dst. In this case (and if dst is not a null pointer) the pointer pointed to by src is assigned a value pointing to the wide character right after the last one successfully converted.

Except in the case of an encoding error the return value of the function is the number of bytes in all the multibyte character sequences stored in dst. Before returning the state in the object pointed to by ps (or the internal object in case ps is a null pointer) is updated to reflect the state after the last conversion. The state is the initial shift state in case the terminating NUL wide character was converted.

This function was introduced in Amendment 1 to ISO C90 and is declared in `wchar.h'.

The restriction mentioned above for the mbsrtowcs function also applies here. There is no way to directly control the number of input characters. One has to place the NUL wide character at the correct place or control the consumed input indirectly via the available output array size (the len parameter).
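
The null dst feature is nevertheless handy for sizing output buffers. A small sketch (again not part of the library; declarations from `string.h' and `wchar.h' are assumed) computes how many bytes the multibyte representation of a wide character string needs, so that a sufficiently large buffer can be allocated before the real conversion, analogous to the mbsrtowcs idiom shown above. A return value of (size_t) -1 indicates an unconvertible wide character and has to be checked by the caller.

size_t
wcstombs_size (const wchar_t *ws)
{
  const wchar_t *srcp = ws;
  mbstate_t state;

  memset (&state, '\0', sizeof (state));
  /* With a null dst only the size is computed; the return value does
     not count the terminating NUL byte.  */
  return wcsrtombs (NULL, &srcp, 0, &state);
}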

Function: size_t mbsnrtowcs (wchar_t *restrict dst, const char **restrict src, size_t nmc, size_t len, mbstate_t *restrict ps)
The mbsnrtowcs function is very similar to the mbsrtowcs function. All the parameters are the same except for nmc which is new. The return value is the same as for mbsrtowcs.

This new parameter specifies how many bytes at most can be used from the multibyte character string. I.e., the multibyte character string *src need not be NUL terminated. But if a NUL byte is found within the first nmc bytes of the string the conversion stops there.

This function is a GNU extension. It is meant to work around the problems mentioned above. Now it is possible to convert a buffer with multibyte character text piece by piece without having to care about inserting NUL bytes and the effect of NUL bytes on the conversion state.

A function to convert a multibyte string into a wide character string and display it could be written like this (this is not a really useful example):

void
showmbs (const char *src, FILE *fp)
{
  mbstate_t state;
  int cnt = 0;
  memset (&state, '\0', sizeof (state));
  while (1)
    {
      wchar_t linebuf[100];
      const char *endp = strchr (src, '\n');
      size_t n;

      /* Exit if there is no more line.  */
      if (endp == NULL)
        break;

      n = mbsnrtowcs (linebuf, &src, endp - src, 99, &state);
      if (n == (size_t) -1)
        /* Exit on an invalid multibyte sequence.  */
        break;
      linebuf[n] = L'\0';
      fprintf (fp, "line %d: \"%S\"\n", ++cnt, linebuf);
      /* Skip the newline byte which terminated this line.  */
      src = endp + 1;
    }
}

There is no problem with the state after a call to mbsnrtowcs. Since we do not insert characters into the string which were not there right from the beginning, and we use state only for the conversion of the given buffer, there is no problem with altering the state.

Function: size_t wcsnrtombs (char *restrict dst, const wchar_t **restrict src, size_t nwc, size_t len, mbstate_t *restrict ps)
The wcsnrtombs function implements the conversion from wide character strings to multibyte character strings. It is similar to wcsrtombs but it takes, just like mbsnrtowcs, an extra parameter which specifies the length of the input string.

No more than nwc wide characters from the input string *src are converted. If the input string contains a NUL wide character within the first nwc characters the conversion stops at that point.

This function is a GNU extension and, just like mbsnrtowcs, it helps in situations where no NUL terminated input strings are available.
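
A hedged sketch of its use follows (the function name is invented for this example; declarations from `stdio.h', `string.h', and `wchar.h' are assumed): write exactly nwc wide characters from a buffer which is not NUL terminated, converting and flushing them in chunks through a fixed-size output array.

int
writewcs (FILE *fp, const wchar_t *ws, size_t nwc)
{
  char outbuf[BUFSIZ];
  const wchar_t *srcp = ws;
  mbstate_t state;

  memset (&state, '\0', sizeof (state));
  while (nwc > 0)
    {
      const wchar_t *before = srcp;
      size_t nbytes = wcsnrtombs (outbuf, &srcp, nwc, sizeof (outbuf),
                                  &state);
      if (nbytes == (size_t) -1)
        return -1;
      if (fwrite (outbuf, 1, nbytes, fp) != nbytes)
        return -1;
      if (srcp == NULL)
        /* A NUL wide character within the first nwc characters
           terminated the conversion.  */
        break;
      /* srcp was advanced past the converted wide characters.  */
      nwc -= srcp - before;
    }
  return 0;
}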

A Complete Multibyte Conversion Example

The example programs given in the last sections are only brief and do not contain all the error checking etc. Presented here is a complete and documented example. It features the mbrtowc function but it should be easy to derive versions using the other functions.

int
file_mbsrtowcs (int input, int output)
{
  /* Note the use of MB_LEN_MAX.
     MB_CUR_MAX cannot portably be used here.  */
  char buffer[BUFSIZ + MB_LEN_MAX];
  mbstate_t state;
  int filled = 0;
  int eof = 0;

  /* Initialize the state.  */
  memset (&state, '\0', sizeof (state));

  while (!eof)
    {
      ssize_t nread;
      ssize_t nwrite;
      char *inp = buffer;
      wchar_t outbuf[BUFSIZ];
      wchar_t *outp = outbuf;

      /* Fill up the buffer from the input file.  */
      nread = read (input, buffer + filled, BUFSIZ);
      if (nread < 0)
        {
          perror ("read");
          return 0;
        }
      /* If we reach end of file, make a note to read no more. */
      if (nread == 0)
        eof = 1;

      /* filled is now the number of bytes in buffer. */
      filled += nread;

      /* Convert those bytes to wide characters--as many as we can. */
      while (1)
        {
          /* Save the state in case the remaining bytes form only
             the first part of a valid character.  */
          mbstate_t saved_state = state;
          size_t thislen = mbrtowc (outp, inp, filled, &state);
          /* Stop converting at an invalid character.  */
          if (thislen == (size_t) -1)
            break;
          /* Stop when the remaining bytes are just the beginning of
             a valid character; undo the state change so these bytes
             are reconverted together with the following input.  */
          if (thislen == (size_t) -2)
            {
              state = saved_state;
              break;
            }
          /* We want to handle embedded NUL bytes
             but the return value is 0.  Correct this.  */
          if (thislen == 0)
            thislen = 1;
          /* Advance past this character. */
          inp += thislen;
          filled -= thislen;
          ++outp;
        }

      /* Write the wide characters we just made.  */
      nwrite = write (output, outbuf,
                      (outp - outbuf) * sizeof (wchar_t));
      if (nwrite < 0)
        {
          perror ("write");
          return 0;
        }

      /* See if we have a real invalid character. */
      if ((eof && filled > 0) || filled >= MB_CUR_MAX)
        {
          error (0, 0, "invalid multibyte character");
          return 0;
        }

      /* If any characters must be carried forward,
         put them at the beginning of buffer. */
      if (filled > 0)
        memmove (buffer, inp, filled);
    }

  return 1;
}

Non-reentrant Conversion Function

The functions described in the previous sections are defined in Amendment 1 to ISO C90. But the original ISO C90 standard also contained functions for character set conversion. The reason that they were not described first is that they are almost entirely useless.

The problem is that all the functions for conversion defined in ISO C90 use a local state. This implies that multiple conversions at the same time (not only when using threads) cannot be done, and that you cannot first convert single characters and then strings since you cannot tell the conversion functions which state to use.

These functions are therefore usable only in a very limited set of situations. One must complete converting the entire string before starting a new one, and each string/text must be converted with the same function (there is no problem with the library itself; it is guaranteed that no library function changes the state of any of these functions). For the above reasons it is highly recommended that the functions described in the previous section be used in place of the non-reentrant conversion functions.

Non-reentrant Conversion of Single Characters

Function: int mbtowc (wchar_t *restrict result, const char *restrict string, size_t size)
The mbtowc ("multibyte to wide character") function when called with non-null string converts the first multibyte character beginning at string to its corresponding wide character code. It stores the result in *result.

mbtowc never examines more than size bytes. (The idea is to supply for size the number of bytes of data you have in hand.)

mbtowc with non-null string distinguishes three possibilities: the first size bytes at string start with a valid multibyte character, they start with an invalid byte sequence or just part of a character, or string points to an empty string (a null character).

For a valid multibyte character, mbtowc converts it to a wide character and stores that in *result, and returns the number of bytes in that character (always at least @math{1}, and never more than size).

For an invalid byte sequence, mbtowc returns @math{-1}. For an empty string, it returns @math{0}, also storing '\0' in *result.

If the multibyte character code uses shift characters, then mbtowc maintains and updates a shift state as it scans. If you call mbtowc with a null pointer for string, that initializes the shift state to its standard initial value. It also returns nonzero if the multibyte character code in use actually has a shift state. See section States in Non-reentrant Functions.
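
A brief sketch of non-reentrant use (the function name is invented for this example; declarations from `stdlib.h' are assumed): count the characters in a multibyte string with mbtowc. Passing MB_CUR_MAX as the size is safe for the same reason given for mbslen above: the terminating NUL byte stops the scan.

size_t
count_chars (const char *s)
{
  size_t count = 0;
  wchar_t tmp;
  int nbytes;

  /* Reset the internal shift state first.  */
  mbtowc (NULL, NULL, 0);
  while ((nbytes = mbtowc (&tmp, s, MB_CUR_MAX)) > 0)
    {
      s += nbytes;
      ++count;
    }
  /* nbytes is 0 at the end of the string and -1 for an invalid
     sequence.  */
  return nbytes < 0 ? (size_t) -1 : count;
}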

Function: int wctomb (char *string, wchar_t wchar)
The wctomb ("wide character to multibyte") function converts the wide character code wchar to its corresponding multibyte character sequence, and stores the result in bytes starting at string. At most MB_CUR_MAX characters are stored.

wctomb with non-null string distinguishes three possibilities for wchar: a valid wide character code (one that can be translated to a multibyte character), an invalid code, and L'\0'.

Given a valid code, wctomb converts it to a multibyte character, storing the bytes starting at string. Then it returns the number of bytes in that character (always at least @math{1}, and never more than MB_CUR_MAX).

If wchar is an invalid wide character code, wctomb returns @math{-1}. If wchar is L'\0', it returns 0, also storing '\0' in *string.

If the multibyte character code uses shift characters, then wctomb maintains and updates a shift state as it scans. If you call wctomb with a null pointer for string, that initializes the shift state to its standard initial value. It also returns nonzero if the multibyte character code in use actually has a shift state. See section States in Non-reentrant Functions.

Calling this function with a wchar argument of zero when string is not null has the side-effect of reinitializing the stored shift state as well as storing the multibyte character '\0' and returning @math{0}.
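
A short sketch of typical use (the function name is invented for this example; declarations from `stdio.h' and `stdlib.h' are assumed): write a single wide character to a stream in the multibyte encoding of the current locale.

int
put_wchar (wchar_t wc, FILE *fp)
{
  /* MB_LEN_MAX is a compile-time constant and never smaller than
     MB_CUR_MAX, so it can be used for the static buffer size.  */
  char buf[MB_LEN_MAX];
  int nbytes = wctomb (buf, wc);
  if (nbytes < 0)
    /* wc cannot be represented in the current locale.  */
    return EOF;
  return fwrite (buf, 1, nbytes, fp) == (size_t) nbytes ? 0 : EOF;
}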

Similar to mbrlen there is also a non-reentrant function which computes the length of a multibyte character. It can be defined in terms of mbtowc.

Function: int mblen (const char *string, size_t size)
The mblen function with a non-null string argument returns the number of bytes that make up the multibyte character beginning at string, never examining more than size bytes. (The idea is to supply for size the number of bytes of data you have in hand.)

The return value of mblen distinguishes three possibilities: the first size bytes at string start with a valid multibyte character, they start with an invalid byte sequence or just part of a character, or string points to an empty string (a null character).

For a valid multibyte character, mblen returns the number of bytes in that character (always at least 1, and never more than size). For an invalid byte sequence, mblen returns @math{-1}. For an empty string, it returns @math{0}.

If the multibyte character code uses shift characters, then mblen maintains and updates a shift state as it scans. If you call mblen with a null pointer for string, that initializes the shift state to its standard initial value. It also returns a nonzero value if the multibyte character code in use actually has a shift state. See section States in Non-reentrant Functions.

The function mblen is declared in `stdlib.h'.

Non-reentrant Conversion of Strings

For convenience the ISO C90 standard also defines functions to convert entire strings instead of single characters. These functions suffer from the same problems as their reentrant counterparts from Amendment 1 to ISO C90; see section Converting Multibyte and Wide Character Strings.

Function: size_t mbstowcs (wchar_t *wstring, const char *string, size_t size)
The mbstowcs ("multibyte string to wide character string") function converts the null-terminated string of multibyte characters string to an array of wide character codes, storing not more than size wide characters into the array beginning at wstring. The terminating null character counts towards the size, so if size is less than the actual number of wide characters resulting from string, no terminating null character is stored.

The conversion of characters from string begins in the initial shift state.

If an invalid multibyte character sequence is found, this function returns a value of @math{-1}. Otherwise, it returns the number of wide characters stored in the array wstring. This number does not include the terminating null character, which is present if the number is less than size.

Here is an example showing how to convert a string of multibyte characters, allocating enough space for the result.

wchar_t *
mbstowcs_alloc (const char *string)
{
  size_t size = strlen (string) + 1;
  wchar_t *buf = xmalloc (size * sizeof (wchar_t));

  size = mbstowcs (buf, string, size);
  if (size == (size_t) -1)
    {
      free (buf);
      return NULL;
    }
  buf = xrealloc (buf, (size + 1) * sizeof (wchar_t));
  return buf;
}

Function: size_t wcstombs (char *string, const wchar_t *wstring, size_t size)
The wcstombs ("wide character string to multibyte string") function converts the null-terminated wide character array wstring into a string containing multibyte characters, storing not more than size bytes starting at string, followed by a terminating null character if there is room. The conversion of characters begins in the initial shift state.

The terminating null character counts towards the size, so if size is less than or equal to the number of bytes needed to represent wstring, no terminating null character is stored.

If a code that does not correspond to a valid multibyte character is found, this function returns a value of @math{-1}. Otherwise, the return value is the number of bytes stored in the array string. This number does not include the terminating null character, which is present if the number is less than size.
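
By analogy with the mbstowcs_alloc example above, a conversion in the other direction can allocate for the worst case and shrink the result afterwards. This is only a sketch; xmalloc and xrealloc are used as in the example above and, like the function itself, are not part of the C library.

char *
wcstombs_alloc (const wchar_t *wstring)
{
  /* In the worst case every wide character needs MB_CUR_MAX bytes.  */
  size_t size = wcslen (wstring) * MB_CUR_MAX + 1;
  char *buf = xmalloc (size);

  size = wcstombs (buf, wstring, size);
  if (size == (size_t) -1)
    {
      free (buf);
      return NULL;
    }
  buf = xrealloc (buf, size + 1);
  return buf;
}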

States in Non-reentrant Functions

In some multibyte character codes, the meaning of any particular byte sequence is not fixed; it depends on what other sequences have come earlier in the same string. Typically there are just a few sequences that can change the meaning of other sequences; these few are called shift sequences and we say that they set the shift state for other sequences that follow.

To illustrate shift state and shift sequences, suppose we decide that the sequence 0200 (just one byte) enters Japanese mode, in which pairs of bytes in the range from 0240 to 0377 are single characters, while 0201 enters Latin-1 mode, in which single bytes in the range from 0240 to 0377 are characters, and interpreted according to the ISO Latin-1 character set. This is a multibyte code which has two alternative shift states ("Japanese mode" and "Latin-1 mode"), and two shift sequences that specify particular shift states.

When the multibyte character code in use has shift states, then mblen, mbtowc and wctomb must maintain and update the current shift state as they scan the string. To make this work properly, you must follow these rules: before scanning a new string, call the function with a null pointer for the multibyte character address to reset the shift state to its initial value, and then scan the string strictly in order, without backing up and without interleaving calls that operate on other strings.

Here is an example of using mblen following these rules:

void
scan_string (char *s)
{
  int length = strlen (s);

  /* Initialize shift state.  */
  mblen (NULL, 0);

  while (1)
    {
      int thischar = mblen (s, length);
      /* Deal with end of string and invalid characters.  */
      if (thischar == 0)
        break;
      if (thischar == -1)
        {
          error ("invalid multibyte character");
          break;
        }
      /* Advance past this character.  */
      s += thischar;
      length -= thischar;
    }
}

The functions mblen, mbtowc and wctomb are not reentrant when using a multibyte code that uses a shift state. However, no other library functions call these functions, so you don't have to worry that the shift state will be changed mysteriously.

Generic Charset Conversion

The conversion functions mentioned so far in this chapter all had in common that they operate on character sets which are not directly specified by the functions. The multibyte encoding used is specified by the currently selected locale for the LC_CTYPE category. The wide character set is fixed by the implementation (in the case of the GNU C library it is always UCS-4 encoded ISO 10646).

This has of course several problems when it comes to general character conversion: the multibyte encoding is tied to the currently selected locale, one side of every conversion is always the fixed wide character set, and a direct conversion between two arbitrary character sets is therefore not possible.

The XPG2 standard defines a completely new set of functions which has none of these limitations. They are not at all coupled to the selected locales and they put no constraints on the character sets selected for source and destination. Only the set of available conversions limits them. The standard does not specify that any conversion at all must be available; this is a measure of the quality of the implementation.

In the following text the interface to iconv, the conversion function, will be described first. Comparisons with other implementations will show what pitfalls lie in the way of portable applications. Finally, the implementation is described as far as it is of interest to the advanced user who wants to extend the conversion capabilities.

Generic Character Set Conversion Interface

This set of functions follows the traditional cycle of using a resource: open--use--close. The interface consists of three functions, each of which implements one step.

Before the interfaces are described it is necessary to introduce a data type. Just like other open--use--close interfaces, the functions introduced here work using handles, and the `iconv.h' header defines a special type for them.

Data Type: iconv_t
This data type is an abstract type defined in `iconv.h'. The user must not assume anything about the definition of this type; it must be treated as completely opaque.

Objects of this type can be assigned handles for conversions using the iconv functions. The objects themselves need not be freed, but the conversions for which the handles stand have to be.

The first step is the function to create a handle.

Function: iconv_t iconv_open (const char *tocode, const char *fromcode)
The iconv_open function has to be used before starting a conversion. The two parameters this function takes determine the source and destination character set for the conversion, and if the implementation is able to perform such a conversion the function returns a handle.

If the wanted conversion is not available the function returns (iconv_t) -1. In this case the global variable errno can have the following values:

EMFILE
The process already has OPEN_MAX file descriptors open.
ENFILE
The system limit on open files has been reached.
ENOMEM
Not enough memory to carry out the operation.
EINVAL
The conversion from fromcode to tocode is not supported.

It is not possible to use the same descriptor in different threads to perform independent conversions. Within the data structures associated with the descriptor there is information about the conversion state. This must not be messed up by using it in different conversions.

An iconv descriptor is like a file descriptor: for every use a new descriptor must be created. The descriptor does not stand for all of the conversions from fromcode to tocode.

The GNU C library implementation of iconv_open has one significant extension to other implementations. To ease the extension of the set of available conversions the implementation allows storing the necessary files with data and code in arbitrarily many directories. How this extension has to be written will be explained below (see section The iconv Implementation in the GNU C library). Here it is only important to say that all directories mentioned in the GCONV_PATH environment variable are considered if they contain a file `gconv-modules'. These directories need not necessarily be created by the system administrator. In fact, this extension is introduced to help users writing and using their own, new conversions. Of course this does not work for security reasons in SUID binaries; in this case only the system directory is considered and this normally is `prefix/lib/gconv'. The GCONV_PATH environment variable is examined exactly once at the first call of the iconv_open function. Later modifications of the variable have no effect.

This function was introduced early, in the X/Open Portability Guide, version 2. It is supported by all commercial Unices as it is required for the Unix branding. However, the quality and completeness of the implementation varies widely. The function is declared in `iconv.h'.

The iconv implementation can associate large data structures with the handle returned by iconv_open. Therefore it is crucial to free all the resources once all conversions are carried out and the descriptor is not needed any longer.

Function: int iconv_close (iconv_t cd)
The iconv_close function frees all resources associated with the handle cd which must have been returned by a successful call to the iconv_open function.

If the function call was successful the return value is @math{0}. Otherwise it is @math{-1} and errno is set appropriately. The defined errors are:

EBADF
The conversion descriptor is invalid.

This function was introduced together with the rest of the iconv functions in XPG2 and it is declared in `iconv.h'.

The standard defines only one actual conversion function. It therefore has the most general interface: it allows conversion from one buffer to another. Conversion from a file to a buffer, vice versa, or even file to file can be implemented on top of it.

Function: size_t iconv (iconv_t cd, char **inbuf, size_t *inbytesleft, char **outbuf, size_t *outbytesleft)
The iconv function converts the text in the input buffer according to the rules associated with the descriptor cd and stores the result in the output buffer. It is possible to call the function for the same text several times in a row since for stateful character sets the necessary state information is kept in the data structures associated with the descriptor.

The input buffer is specified by *inbuf and it contains *inbytesleft bytes. The extra indirection is necessary for communicating the amount of consumed input back to the caller (see below). It is important to note that the buffer pointer is a pointer to char and the length is measured in bytes, even if the input text is encoded in wide characters.

The output buffer is specified in a similar way. *outbuf points to the beginning of the buffer, with at least *outbytesleft bytes of room for the result. The buffer pointer again is a pointer to char and the length is measured in bytes. If outbuf or *outbuf is a null pointer, the conversion is performed but no output is available.

If inbuf is a null pointer, the iconv function performs the necessary actions to put the state of the conversion into the initial state. This is obviously a no-op for non-stateful encodings, but if the encoding has a state such a function call might put some byte sequences into the output buffer which perform the necessary state changes. The next call with inbuf not being a null pointer then simply goes on from the initial state. It is important that the programmer never make any assumptions about whether the conversion has to deal with states. Even if the input and output character sets are not stateful the implementation might still have to keep states. This is due to the implementation chosen for the GNU C library, as it is described below. Therefore an iconv call to reset the state should always be performed if some protocol requires this for the output text.
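
The following small helper sketches such a reset (the function name is only illustrative); outptr and outleft describe the remaining room in the output buffer:

#include <iconv.h>
#include <stddef.h>

/* Bring the conversion described by CD back into the initial state,
   writing any shift sequence needed for this into the buffer
   described by *OUTPTR and *OUTLEFT.  Returns 0 on success, -1 if
   there was not enough room for the sequence (errno is then E2BIG).  */
int
reset_conversion (iconv_t cd, char **outptr, size_t *outleft)
{
  /* A null inbuf argument asks iconv to emit the byte sequence which
     puts the conversion into the initial state; for non-stateful
     encodings this is a no-op.  */
  if (iconv (cd, NULL, NULL, outptr, outleft) == (size_t) -1)
    return -1;
  return 0;
}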

The conversion stops for one of three reasons. The first is that all characters from the input buffer have been converted. This actually can mean two things: either really all bytes from the input buffer have been consumed, or there are some bytes at the end of the buffer which could possibly form a complete character but the input is incomplete. The second reason for a stop is that the output buffer is full. And the third reason is that the input contains invalid characters.

In all these cases the buffer pointers after the last successful conversion, for input and output buffer, are stored in inbuf and outbuf and the available room in each buffer is stored in inbytesleft and outbytesleft.

Since the character sets selected in the iconv_open call can be almost arbitrary there can be situations where the input buffer contains valid characters which have no identical representation in the output character set. The behavior in this situation is undefined. The current behavior of the GNU C library in this situation is to return with an error immediately. This certainly is not the most desirable solution. Therefore future versions will provide better ones but they are not yet finished.

If all input from the input buffer is successfully converted and stored in the output buffer the function returns the number of non-reversible conversions performed. In all other cases the return value is (size_t) -1 and errno is set appropriately. In this case the value pointed to by inbytesleft is nonzero.

EILSEQ
The conversion stopped because of an invalid byte sequence in the input. After the call *inbuf points at the first byte of the invalid byte sequence.
E2BIG
The conversion stopped because it ran out of space in the output buffer.
EINVAL
The conversion stopped because of an incomplete byte sequence at the end of the input buffer.
EBADF
The cd argument is invalid.

This function was introduced in the XPG2 standard and is declared in the `iconv.h' header.
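
As a sketch of a typical usage pattern not shown in the file-oriented example below, the following function converts a complete string held in memory into a freshly allocated buffer, growing the output buffer whenever iconv reports E2BIG. The function name and the doubling strategy are only illustrative, and invalid or incomplete input simply makes the function give up:

#include <errno.h>
#include <iconv.h>
#include <stdlib.h>

/* Convert INSIZE bytes at INPUT from FROMCODE to TOCODE.  On success
   a malloc'd, NUL-terminated buffer is returned and *OUTSIZE is set
   to the number of bytes produced (not counting the NUL); on failure
   NULL is returned.  */
char *
convert_string (const char *tocode, const char *fromcode,
                const char *input, size_t insize, size_t *outsize)
{
  iconv_t cd = iconv_open (tocode, fromcode);
  if (cd == (iconv_t) -1)
    return NULL;

  size_t outalloc = insize + 16;        /* Initial guess.  */
  char *result = malloc (outalloc);
  if (result == NULL)
    {
      iconv_close (cd);
      return NULL;
    }

  char *inptr = (char *) input;
  size_t inleft = insize;
  char *outptr = result;
  size_t outleft = outalloc - 1;        /* Leave room for the NUL.  */

  while (inleft > 0)
    {
      if (iconv (cd, &inptr, &inleft, &outptr, &outleft) == (size_t) -1)
        {
          if (errno != E2BIG)
            break;              /* EILSEQ or EINVAL: give up.  */

          /* The output buffer is full: double it and continue.  */
          size_t used = outptr - result;
          char *bigger = realloc (result, 2 * outalloc);
          if (bigger == NULL)
            break;
          result = bigger;
          outptr = result + used;
          outleft += outalloc;
          outalloc *= 2;
        }
    }

  if (inleft != 0)
    {
      free (result);
      result = NULL;
    }
  else
    {
      /* Bring a stateful conversion back into the initial state
         (a possible E2BIG is ignored here for brevity).  */
      iconv (cd, NULL, NULL, &outptr, &outleft);
      *outptr = '\0';
      *outsize = outptr - result;
    }

  iconv_close (cd);
  return result;
}

Since the whole input is available in memory, an incomplete byte sequence at the end of the input (EINVAL) is a genuine error here; the next example shows how EINVAL is handled when the text arrives in pieces.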

The definition of the iconv function is quite good overall. It provides quite flexible functionality. The only problems lie in the boundary cases, which are incomplete byte sequences at the end of the input buffer and invalid input. A third problem, which is not really a design problem, is the way conversions are selected. The standard does not say anything about the legitimate names, nor about a minimal set of available conversions. We will see below how this negatively impacts other implementations.

A complete iconv example

The example below features a solution for a common problem. Given that one knows the internal encoding used by the system for wchar_t strings, one is often in the position of wanting to read text from a file and store it in wide character buffers. One can do this using mbsrtowcs, but then we run into the problems discussed above.

#include <errno.h>
#include <error.h>
#include <iconv.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <wchar.h>

int
file2wcs (int fd, const char *charset, wchar_t *outbuf, size_t avail)
{
  char inbuf[BUFSIZ];
  size_t insize = 0;
  char *wrptr = (char *) outbuf;
  int result = 0;
  iconv_t cd;

  cd = iconv_open ("WCHAR_T", charset);
  if (cd == (iconv_t) -1)
    {
      /* Something went wrong.  */
      if (errno == EINVAL)
        error (0, 0, "conversion from '%s' to wchar_t not available",
               charset);
      else
        perror ("iconv_open");

      /* Terminate the output string.  */
      *outbuf = L'\0';

      return -1;
    }

  while (avail > 0)
    {
      ssize_t nread;
      size_t nconv;
      char *inptr = inbuf;

      /* Read more input.  */
      nread = read (fd, inbuf + insize, sizeof (inbuf) - insize);
      if (nread == -1)
        {
          /* Reading from the file failed; give up.  */
          result = -1;
          break;
        }
      if (nread == 0)
        {
          /* When we come here the file is completely read.
             This still could mean there are some unused
             characters in the inbuf.  Put them back.  */
          if (lseek (fd, -insize, SEEK_CUR) == -1)
            result = -1;

          /* Now write out the byte sequence to get into the
             initial state if this is necessary.  */
          iconv (cd, NULL, NULL, &wrptr, &avail);

          break;
        }
      insize += nread;

      /* Do the conversion.  */
      nconv = iconv (cd, &inptr, &insize, &wrptr, &avail);
      if (nconv == (size_t) -1)
        {
          /* Not everything went right.  It might only be
             an unfinished byte sequence at the end of the
             buffer.  Or it is a real problem.  */
          if (errno == EINVAL)
            /* This is harmless.  Simply move the unused
               bytes to the beginning of the buffer so that
               they can be used in the next round.  */
            memmove (inbuf, inptr, insize);
          else
            {
              /* It is a real problem.  Maybe we ran out of
                 space in the output buffer or we have invalid
                 input.  In any case back the file pointer to
                 the position of the last processed byte.  */
              lseek (fd, -insize, SEEK_CUR);
              result = -1;
              break;
            }
        }
    }

  /* Terminate the output string.  */
  if (avail >= sizeof (wchar_t))
    *((wchar_t *) wrptr) = L'\0';

  if (iconv_close (cd) != 0)
    perror ("iconv_close");

  return result == -1 ? -1 : (wchar_t *) wrptr - outbuf;
}

This example shows the most important aspects of using the iconv functions. It shows how successive calls to iconv can be used to convert large amounts of text. The user does not have to care about stateful encodings as the functions take care of everything.

An interesting point is the case where iconv returns an error and errno is set to EINVAL. This is not really an error in the transformation. It can happen whenever the input character set contains byte sequences of more than one byte for some characters and a text is not processed in one piece. In this case there is a chance that a multibyte sequence is cut. The caller can then simply read the remainder of the text and feed the offending bytes, together with new characters from the input, to iconv and continue the work. The internal state kept in the descriptor is not unspecified after such an event, in contrast to the conversion functions from the ISO C standard.

The example also shows the problem of using wide character strings with iconv. As explained in the description of the iconv function above the function always takes a pointer to a char array and the available space is measured in bytes. In the example the output buffer is a wide character buffer. Therefore we use a local variable wrptr of type char * which is used in the iconv calls.

This looks rather innocent but can lead to problems on platforms which have tight restrictions on alignment. Therefore the caller of iconv has to make sure that the pointers passed are suitable for accessing characters of the appropriate character set. Since in the above case the input parameter to the function is a wchar_t pointer, this is the case (unless the user violates alignment when computing the parameter). But in other situations, especially when writing generic functions where one does not know what type of character set one uses and therefore treats text as a sequence of bytes, it might become tricky.
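
A small sketch of the safe approach (the function name is illustrative and error handling is omitted): declare or allocate the output buffer with the element type of the wide character set so that it is correctly aligned, and cast the pointer to char * only for the iconv call itself:

#include <iconv.h>
#include <stddef.h>
#include <stdint.h>

/* Convert into a buffer of uint32_t elements (e.g. for UCS-4
   output).  Because OUT is declared with the element type of the
   destination character set it is suitably aligned; only the pointer
   handed to iconv is cast to char *.  */
size_t
convert_to_ucs4 (iconv_t cd, char **inptr, size_t *inleft,
                 uint32_t *out, size_t out_elems)
{
  char *outptr = (char *) out;
  size_t outleft = out_elems * sizeof (uint32_t);

  iconv (cd, inptr, inleft, &outptr, &outleft);

  /* Number of complete UCS-4 characters stored in OUT.  */
  return (size_t) (outptr - (char *) out) / sizeof (uint32_t);
}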

Some Details about other iconv Implementations

This is not really the place to discuss the iconv implementation of other systems but it is necessary to know a bit about them to write portable programs. The above mentioned problems with the specification of the iconv functions can lead to portability issues.

The first thing to notice is that, due to the large number of character sets in use, it is certainly not practical to encode the conversions directly in the C library. Therefore the conversion information must come from files outside the C library. This is usually done in one or both of the following ways: the C library contains a set of generic conversion functions which read the needed conversion tables and other information from data files loaded when necessary, or the C library contains only a framework which can dynamically load shared objects containing the actual conversion code.

Some implementations of commercial Unices implement a mixture of these possibilities; the majority implement only the second solution. Using loadable modules moves the code out of the library itself and keeps the door open for extensions and improvements. But this design is also limiting on some platforms, since not many platforms support dynamic loading in statically linked programs. On platforms without this capability it is therefore not possible to use this interface in statically linked programs. The GNU C library has, on ELF platforms, no problems with dynamic loading in these situations, and therefore this point is moot. The danger is that one gets acquainted with this and forgets about the restrictions on other systems.

A second thing to know about other iconv implementations is that the number of available conversions is often very limited. Some implementations provide, in the standard release (not special international or developer releases), at most 100 to 200 conversion possibilities. This does not mean that 200 different character sets are supported. For example, conversions from one character set to a set of, say, 10 others count as 10 conversions. Together with the other direction this already makes 20. One can imagine the thin coverage these platforms provide. Some Unix vendors even provide only a handful of conversions, which renders them useless for almost all purposes.

This directly leads to a third and probably the most problematic point. On all known Unix systems the iconv conversion functions are implemented in such a way that the availability of a conversion from character set @math{@cal{A}} to @math{@cal{B}} and of a conversion from @math{@cal{B}} to @math{@cal{C}} does not imply that the conversion from @math{@cal{A}} to @math{@cal{C}} is available.

This might not seem unreasonable or problematic at first, but it is quite a big problem, as one will notice shortly after hitting it. To show the problem, assume we write a program which has to convert from @math{@cal{A}} to @math{@cal{C}}. A call like

cd = iconv_open ("@math{@cal{C}}", "@math{@cal{A}}");

fails, according to the assumption above. But what does the program do now? The conversion is really necessary, so simply giving up is not an option.

This is a nuisance. The iconv implementation should take care of this. But how should the program proceed from here on? If it tried to convert to character set @math{@cal{B}} first, the two iconv_open calls

cd1 = iconv_open ("@math{@cal{B}}", "@math{@cal{A}}");

and

cd2 = iconv_open ("@math{@cal{C}}", "@math{@cal{B}}");

will succeed, but how is @math{@cal{B}} to be found?

Unfortunately, the answer is: there is no general solution. On some systems guessing might help. On those systems most character sets can be converted to and from UTF-8 encoded ISO 10646 or Unicode text. Besides this, only some very system-specific methods can help. Since the conversion functions come from loadable modules and these modules must be stored somewhere in the filesystem, one could try to find them and determine from the available files which conversions are available and whether there is an indirect route from @math{@cal{A}} to @math{@cal{C}}.

This shows one of the design errors of iconv mentioned above. It should at least be possible to determine the list of available conversions programmatically, so that if iconv_open says there is no such conversion, one could make sure this also is true for indirect routes.
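
As a sketch of such a heuristic (the structure and function names are purely illustrative), a program could first try the direct conversion and, if that fails, fall back to two descriptors using UTF-8 as the intermediate character set; whether this helps depends entirely on which conversions the system happens to provide:

#include <iconv.h>

/* Either DIRECT is valid, or the pair TO_UTF8/FROM_UTF8 is.  */
struct conversion
{
  iconv_t direct;
  iconv_t to_utf8;
  iconv_t from_utf8;
};

/* Try to set up a conversion from FROMCODE to TOCODE, possibly going
   through UTF-8.  Returns 0 on success and -1 if neither route is
   available.  */
int
open_conversion (struct conversion *conv,
                 const char *tocode, const char *fromcode)
{
  conv->to_utf8 = conv->from_utf8 = (iconv_t) -1;

  conv->direct = iconv_open (tocode, fromcode);
  if (conv->direct != (iconv_t) -1)
    return 0;

  /* No direct conversion; try to triangulate through UTF-8.  */
  conv->to_utf8 = iconv_open ("UTF-8", fromcode);
  conv->from_utf8 = iconv_open (tocode, "UTF-8");
  if (conv->to_utf8 == (iconv_t) -1 || conv->from_utf8 == (iconv_t) -1)
    {
      if (conv->to_utf8 != (iconv_t) -1)
        iconv_close (conv->to_utf8);
      if (conv->from_utf8 != (iconv_t) -1)
        iconv_close (conv->from_utf8);
      conv->to_utf8 = conv->from_utf8 = (iconv_t) -1;
      return -1;
    }
  return 0;
}

In the indirect case the actual conversion then has to be run as two iconv calls, with a temporary buffer holding the UTF-8 text in between.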

The iconv Implementation in the GNU C library

After reading about the problems of iconv implementations in the last section it is certainly good to note that the implementation in the GNU C library has none of the problems mentioned above. What follows is a step-by-step analysis of the points raised above. The evaluation is based on the current state of the development (as of January 1999). The development of the iconv functions is not complete, but basic functionality has solidified.

The GNU C library's iconv implementation uses shared loadable modules to implement the conversions. A very small number of conversions are built into the library itself but these are only rather trivial conversions.

All the benefits of loadable modules are available in the GNU C library implementation. This is especially appealing since the interface is well documented (see below), and it therefore is easy to write new conversion modules. The drawback of using loadable objects is not a problem in the GNU C library, at least on ELF systems. Since the library is able to load shared objects even in statically linked binaries, static linking need not be forbidden in case one wants to use iconv.

The second mentioned problem is the number of supported conversions. Currently, the GNU C library supports more than 150 character sets. The way the implementation is designed the number of supported conversions is greater than 22350 (@math{150} times @math{149}). If any conversion from or to a character set is missing it can easily be added.

As impressive as this high number may be, it is due to the fact that the GNU C library implementation of iconv does not have the third problem mentioned above. I.e., whenever there is a conversion from a character set @math{@cal{A}} to @math{@cal{B}} and from @math{@cal{B}} to @math{@cal{C}}, it is always possible to convert from @math{@cal{A}} to @math{@cal{C}} directly. If iconv_open returns an error and sets errno to EINVAL, this really means there is no known way, directly or indirectly, to perform the wanted conversion.

This is achieved by providing for each character set a conversion from and to UCS-4 encoded ISO 10646. Using ISO 10646 as an intermediate representation it is possible to triangulate, i.e., to convert with an intermediate representation.

There is no inherent requirement to provide a conversion to ISO 10646 for a new character set and it is also possible to provide other conversions where neither source nor destination character set is ISO 10646. The currently existing set of conversions is simply meant to cover all conversions which might be of interest.

All currently available conversions use this triangulation method, which can make a conversion run unnecessarily slowly. If, e.g., somebody often needs the conversion from ISO-2022-JP to EUC-JP, a quicker solution would involve a direct conversion between the two character sets, skipping the intermediate step through ISO 10646. The two character sets of interest are much more similar to each other than to ISO 10646.

In such a situation one can easily write a new conversion and provide it as a better alternative. The GNU C library iconv implementation would automatically use the module implementing the conversion if it is specified to be more efficient.

Format of `gconv-modules' files

All information about the available conversions comes from a file named `gconv-modules' which can be found in any of the directories along the GCONV_PATH. The `gconv-modules' files are line-oriented text files, where each line is either an alias line, defining an alternative name for a character set, or a module line, naming the source character set, the destination character set, the file implementing the conversion, and the cost of the conversion.

Return to the example above where one has written a module to convert directly from ISO-2022-JP to EUC-JP and back. All that has to be done is to put the new module, say with the name ISO2022JP-EUCJP.so, in a directory and add a file `gconv-modules' with the following content in the same directory:

module  ISO-2022-JP//   EUC-JP//        ISO2022JP-EUCJP    1
module  EUC-JP//        ISO-2022-JP//   ISO2022JP-EUCJP    1

To see why this is sufficient, it is necessary to understand how the conversion used by iconv (and described in the descriptor) is selected. The approach to this problem is quite simple.

At the first call of the iconv_open function the program reads all available `gconv-modules' files and builds up two tables: one containing all the known aliases and another which contains the information about the conversions and which shared object implements them.
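
Alias lines in the `gconv-modules' files feed the first of these tables. As a purely hypothetical sketch (assuming the alias line format used in the GNU C library's own `gconv-modules' file), a line like the following would make the name MY-JP another name for ISO-2022-JP:

alias   MY-JP//         ISO-2022-JP//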

Finding the conversion path in iconv

The set of available conversions forms a directed graph with weighted edges. The weights on the edges are the costs specified in the `gconv-modules' files. The iconv_open function uses an algorithm suitable for searching for the best path in such a graph and so constructs a list of conversions which must be performed in succession to get from the source to the destination character set.

It is now straightforward to explain why the above `gconv-modules' file allows the iconv implementation to select the new ISO-2022-JP to EUC-JP conversion module instead of the conversion coming with the library itself. Since the latter conversion takes two steps (from ISO-2022-JP to ISO 10646 and then from ISO 10646 to EUC-JP) the cost is @math{1+1 = 2}. The above `gconv-modules' file, however, specifies that the new conversion module can perform this conversion at a cost of only @math{1}.

A mysterious piece of the `gconv-modules' file above (and also of the file coming with the GNU C library) is the names of the character sets specified in the module lines. Why do almost all the names end in //? And this is not all: the names can actually be regular expressions. At this point in time this mystery should not be revealed, unless you have the relevant spell-casting materials: ashes from an original DOS 6.2 boot disk burnt in effigy, a crucifix blessed by St. Emacs, assorted herbal roots from Central America, sand from Cebu, etc. Sorry! The part of the implementation where this is used is not yet finished. For now please simply follow the existing examples. It'll become clearer once it is. --drepper

A last remark about the `gconv-modules' files concerns names not ending in //. A character set named INTERNAL is often mentioned. From the discussion above and the chosen name it should have become clear that this is the name for the representation used in the intermediate step of the triangulation. We have said that this is UCS-4, but actually that is not quite right. The UCS-4 specification also includes the specification of the byte ordering used. Since a UCS-4 value consists of four bytes, a stored value is affected by byte ordering. The internal representation is not the same as UCS-4 in case the byte ordering of the processor (or at least the running process) is not the same as the one required for UCS-4. This is done for performance reasons, as one does not want to perform unnecessary byte-swapping operations if one is not interested in actually seeing the result in UCS-4. To avoid trouble with endianness the internal representation is consistently named INTERNAL, even on big-endian systems where the representations are identical.
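
A small stand-alone illustration of the difference, assuming a little-endian host:

#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  uint32_t ch = 0x41;   /* U+0041, `A', as a 32-bit character value.  */
  unsigned char *p = (unsigned char *) &ch;

  /* On a little-endian machine this prints "41 00 00 00".  The UCS-4
     encoding of the same character, which has a fixed byte order, is
     "00 00 00 41".  The INTERNAL character set simply keeps the host
     byte order, so no byte swapping is needed internally.  */
  printf ("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
  return 0;
}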

iconv module data structures

So far this section has described how modules are located and selected for use. What remains to be described is the interface of the modules, so that one can write new ones. This section describes the interface as it is in use in January 1999. The interface will change a bit in the future, but hopefully only in an upwardly compatible way.

The definitions necessary to write new modules are publicly available in the non-standard header `gconv.h'. The following text will therefore describe the definitions from this header file. But first it is necessary to get an overview.

From the perspective of the user of iconv the interface is quite simple: the iconv_open function returns a handle which can be used in calls to iconv, and finally the handle is freed with a call to iconv_close. The problem is: the handle has to be able to represent the possibly long sequence of conversion steps and also the state of each conversion, since the handle is all that is passed to the iconv function. Therefore the data structures are really the key to understanding the implementation.

We need two different kinds of data structures. The first describes the conversion and the second describes the state etc. There are two such type definitions in `gconv.h'.

Data type: struct __gconv_step
This data structure describes one conversion a module can perform. For each function in a loaded module with conversion functions there is exactly one object of this type. This object is shared by all users of the conversion. I.e., this object does not contain any information corresponding to an actual conversion. It only describes the conversion itself.

struct __gconv_loaded_object *__shlib_handle
const char *__modname
int __counter
All these elements of the structure are used internally in the C library to coordinate loading and unloading the shared object. One must not expect any of these elements to be available or initialized.
const char *__from_name
const char *__to_name
__from_name and __to_name contain the names of the source and destination character sets. They can be used to identify the actual conversion to be carried out since one module might implement conversions for more than one character set and/or direction.
gconv_fct __fct
gconv_init_fct __init_fct
gconv_end_fct __end_fct
These elements contain pointers to the functions in the loadable module. The interface will be explained below.
int __min_needed_from
int __max_needed_from
int __min_needed_to
int __max_needed_to;
These values have to be filled in by the init function of the module. The __min_needed_from value specifies the minimum number of bytes a character of the source character set needs. The __max_needed_from value specifies the maximum number of bytes, which also includes possible shift sequences. The __min_needed_to and __max_needed_to values serve the same purpose, but for the destination character set. It is crucial that these values are accurate, since otherwise the conversion functions will have problems or not work at all.
int __stateful
This element must also be initialized by the init function. It is nonzero if the source character set is stateful. Otherwise it is zero.
void *__data
This element can be used freely by the conversion functions in the module. It can be used to communicate extra information from one call to another. It need not be initialized if not needed at all. If this element gets assigned a pointer to dynamically allocated memory (presumably in the init function), it has to be made sure that the end function deallocates the memory. Otherwise the application will leak memory. It is important to be aware that this data structure is shared by all users of this particular conversion, and therefore the __data element must not contain data specific to one particular use of the conversion function.

Data type: struct __gconv_step_data
This is the data structure which contains the information specific to each use of the conversion functions.

char *__outbuf
char *__outbufend
These elements specify the output buffer for the conversion step. The __outbuf element points to the beginning of the buffer, and __outbufend points to the byte following the last byte in the buffer. The conversion function must not assume anything about the size of the buffer, but it can safely be assumed that there is room for at least one complete character in the output buffer. Once the conversion is finished, if this conversion step is the last one, the __outbuf element must be modified to point after the last byte written into the buffer, to signal how much output is available. If this conversion step is not the last one, the element must not be modified. The __outbufend element must never be modified.
int __is_last
This element is nonzero if this conversion step is the last one. This information is necessary for the recursion. See the description of the conversion function internals below. This element must never be modified.
int __invocation_counter
The conversion function can use this element to see how many calls of the conversion function have already happened. Some character sets require a certain prolog when generating output, and by comparing this value with zero one can find out whether this is the first call and therefore whether the prolog should be emitted. Apart from the increment performed by the conversion function at the end of each call (see the skeleton at the end of this section), this element must not be modified.
int __internal_use
This element is another one that is rarely used but needed in certain situations. It is assigned a nonzero value in case the conversion functions are used to implement mbsrtowcs etc., i.e., when the function is not used directly through the iconv interface. This sometimes makes a difference, as it is expected that the iconv functions are used to translate entire texts, while the mbsrtowcs functions are normally used only to convert single strings and might be used multiple times to convert entire texts. But in this situation we would have a problem complying with some rules of the character set specification. Some character sets require a prolog which must appear exactly once for an entire text. If a number of mbsrtowcs calls are used to convert the text, only the first call must add the prolog. But since there is no communication between the different calls of mbsrtowcs, the conversion functions have no way to find this out. The situation is different for sequences of iconv calls, since the handle allows access to the needed information. This element is mostly used together with __invocation_counter in a way like this:
if (!data->__internal_use
     && data->__invocation_counter == 0)
  /* Emit prolog.  */
  ...
This element must never be modified.
mbstate_t *__statep
The __statep element points to an object of type mbstate_t (see section Representing the state of the conversion). The conversion of a stateful character set must use the object pointed to by this element to store information about the conversion state. The __statep element itself must never be modified.
mbstate_t __state
This element must never be used directly. It is only part of this structure so that the needed space is allocated.

iconv module interfaces

With the knowledge about the data structures we can now describe the conversion functions themselves. To understand the interface, a bit of knowledge is necessary about the functionality in the C library which loads the objects with the conversions.

It is often the case that one conversion is used more than once, i.e., there are several iconv_open calls for the same set of character sets during one program run. The mbsrtowcs etc. functions in the GNU C library also use the iconv functionality, which increases the number of uses of the same functions even more.

For this reason the modules do not get loaded exclusively for one conversion. Instead a module once loaded can be used by arbitrarily many iconv or mbsrtowcs calls at the same time. The splitting of the information between conversion function specific information and conversion data makes this possible. The last section showed the two data structures used to do this.

This is of course also reflected in the interface and semantics of the functions the modules must provide. There are three functions which must have the following names:

gconv_init
The gconv_init function initializes the conversion-function-specific data structure. This very same object is shared by all uses of this conversion, and therefore no state information about an individual use of the conversion must be stored in here. If a module implements more than one conversion, the gconv_init function will be called multiple times.
gconv_end
The gconv_end function is responsible for freeing all resources allocated by the gconv_init function. If there is nothing to do, this function can be missing. Special care must be taken if the module implements more than one conversion and the gconv_init function does not allocate the same resources for all conversions.
gconv
This is the actual conversion function. It is called to convert one block of text. It gets passed the conversion step information initialized by gconv_init and the conversion data, specific to this use of the conversion functions.

There are three data types defined for the three module interface functions, and these define the interface.

Data type: int (*__gconv_init_fct) (struct __gconv_step *)
This specifies the interface of the initialization function of the module. It is called exactly once for each conversion the module implements.

As explained in the description of the struct __gconv_step data structure above, the initialization function has to initialize parts of it.

__min_needed_from
__max_needed_from
__min_needed_to
__max_needed_to
These elements must be initialized to the minimum and maximum number of bytes used by one character in the source and destination character sets, respectively. If the characters all have the same size, the minimum and maximum values are the same.
__stateful
This element must be initialized to a nonzero value if the source character set is stateful. Otherwise it must be zero.

If the initialization function needs to communicate some information to the conversion function, this can be done using the __data element of the __gconv_step structure. But since this data is shared by all uses of the conversion, it must not be modified by the conversion function. How this can be used is shown in the example below.

#define MIN_NEEDED_FROM         1
#define MAX_NEEDED_FROM         4
#define MIN_NEEDED_TO           4
#define MAX_NEEDED_TO           4

int
gconv_init (struct __gconv_step *step)
{
  /* Determine which direction.  */
  struct iso2022jp_data *new_data;
  enum direction dir = illegal_dir;
  enum variant var = illegal_var;
  int result;

  if (__strcasecmp (step->__from_name, "ISO-2022-JP//") == 0)
    {
      dir = from_iso2022jp;
      var = iso2022jp;
    }
  else if (__strcasecmp (step->__to_name, "ISO-2022-JP//") == 0)
    {
      dir = to_iso2022jp;
      var = iso2022jp;
    }
  else if (__strcasecmp (step->__from_name, "ISO-2022-JP-2//") == 0)
    {
      dir = from_iso2022jp;
      var = iso2022jp2;
    }
  else if (__strcasecmp (step->__to_name, "ISO-2022-JP-2//") == 0)
    {
      dir = to_iso2022jp;
      var = iso2022jp2;
    }

  result = __GCONV_NOCONV;
  if (dir != illegal_dir)
    {
      new_data = (struct iso2022jp_data *)
        malloc (sizeof (struct iso2022jp_data));

      result = __GCONV_NOMEM;
      if (new_data != NULL)
        {
          new_data->dir = dir;
          new_data->var = var;
          step->__data = new_data;

          if (dir == from_iso2022jp)
            {
              step->__min_needed_from = MIN_NEEDED_FROM;
              step->__max_needed_from = MAX_NEEDED_FROM;
              step->__min_needed_to = MIN_NEEDED_TO;
              step->__max_needed_to = MAX_NEEDED_TO;
            }
          else
            {
              step->__min_needed_from = MIN_NEEDED_TO;
              step->__max_needed_from = MAX_NEEDED_TO;
              step->__min_needed_to = MIN_NEEDED_FROM;
              step->__max_needed_to = MAX_NEEDED_FROM + 2;
            }

          /* Yes, this is a stateful encoding.  */
          step->__stateful = 1;

          result = __GCONV_OK;
        }
    }

  return result;
}

The function first checks which conversion is wanted. The module from which this function is taken implements four different conversions, and which one is selected can be determined by comparing the names. The comparison should always be done without paying attention to case.

Then a data structure is allocated which contains the necessary information about which conversion is selected. The data structure struct iso2022jp_data is defined locally, since outside the module this data is not used at all. Please note that if all four conversions this module supports are requested, there are four data blocks.

One interesting thing is the initialization of the __min_ and __max_ elements of the step data object. A single ISO-2022-JP character can consist of one to four bytes. Therefore the MIN_NEEDED_FROM and MAX_NEEDED_FROM macros are defined this way. The output is always the INTERNAL character set (aka UCS-4) and therefore each character consists of exactly four bytes. For the conversion from INTERNAL to ISO-2022-JP we have to take into account that escape sequences might be necessary to switch the character sets. Therefore the __max_needed_to element for this direction gets assigned MAX_NEEDED_FROM + 2. This takes into account the two bytes needed for the escape sequences to signal the switching. The asymmetry in the maximum values for the two directions can be explained easily: when reading ISO-2022-JP text, escape sequences can be handled alone, i.e., it is not necessary to process a real character, since the effect of the escape sequence can be recorded in the state information. The situation is different for the other direction. Since it is in general not known which character comes next, one cannot emit escape sequences to change the state in advance. This means the escape sequences have to be emitted together with the next character. Therefore one needs more room than just for the character itself.

The possible return values of the initialization function are:

__GCONV_OK
The initialization succeeded.
__GCONV_NOCONV
The requested conversion is not supported in the module. This can happen if the `gconv-modules' file has errors.
__GCONV_NOMEM
Memory required to store additional information could not be allocated.

The function called before the module is unloaded is significantly simpler. It often has nothing at all to do, in which case it can be left out completely.

Data type: void (*__gconv_end_fct) (struct __gconv_step *)
The task of this function is to free all resources allocated by the initialization function. Therefore only the __data element of the object pointed to by the argument is of interest. Continuing the example from the initialization function, the finalization function looks like this:

void
gconv_end (struct __gconv_step *data)
{
  free (data->__data);
}

The most important function is the conversion function itself. It can get quite complicated for complex character sets. But since this is not of interest here we will only describe a possible skeleton for the conversion function.

Data type: int (*__gconv_fct) (struct __gconv_step *, struct __gconv_step_data *, const char **, const char *, size_t *, int)
The conversion function can be called for two basic reasons: to convert text or to reset the state. From the description of the iconv function it can be seen why the flushing mode is necessary. Which mode is selected is determined by the sixth argument, an integer. A nonzero value means that flushing is selected.

Common to both modes is where the output buffer can be found. The information about this buffer is stored in the conversion step data. A pointer to this is passed as the second argument to this function. The description of the struct __gconv_step_data structure has more information on it.

What has to be done for flushing depends on the source character set. If it is not stateful, nothing has to be done. Otherwise the function has to emit a byte sequence to bring the state object into the initial state. Once this has happened, the other conversion modules in the chain of conversions have to get the same chance. Whether another step follows can be determined from the __is_last element of the step data structure to which the second parameter points.

The more interesting mode is when text actually has to be converted. The first step in this case is to convert as much text as possible from the input buffer and store the result in the output buffer. The start of the input buffer is determined by the third argument, which is a pointer to a pointer variable referencing the beginning of the buffer. The fourth argument is a pointer to the byte right after the last byte in the buffer.

The conversion has to be performed according to the current state if the character set is stateful. The state is stored in an object pointed to by the __statep element of the step data (second argument). Once either the input buffer is empty or the output buffer is full the conversion stops. At this point the pointer variable referenced by the third parameter must point to the byte following the last processed byte. I.e., if all of the input is consumed this pointer and the fourth parameter have the same value.

What happens now depends on whether this step is the last one or not. If it is the last step, the only thing which has to be done is to update the __outbuf element of the step data structure to point after the last written byte. This gives the caller the information on how much text is available in the output buffer. Besides this, the variable pointed to by the fifth parameter, which is of type size_t, must be incremented by the number of characters (not bytes) which were converted in a non-reversible way. Then the function can return.

In case the step is not the last one the later conversion functions have to get a chance to do their work. Therefore the appropriate conversion function has to be called. The information about the functions is stored in the conversion data structures, passed as the first parameter. This information and the step data are stored in arrays so the next element in both cases can be found by simple pointer arithmetic:

int
gconv (struct __gconv_step *step, struct __gconv_step_data *data,
       const char **inbuf, const char *inbufend, size_t *written,
       int do_flush)
{
  struct __gconv_step *next_step = step + 1;
  struct __gconv_step_data *next_data = data + 1;
  ...

The next_step pointer references the next step information and next_data the next data record. The call of the next function therefore will look similar to this:

  next_step->__fct (next_step, next_data, &outerr, outbuf,
                    written, 0)

But this is not yet all. Once the function call returns, the conversion function might have some more to do. If the return value of the function is __GCONV_EMPTY_INPUT, this means there is more room in the output buffer. Unless the input buffer is empty, the conversion function starts all over again and processes the rest of the input buffer. If the return value is not __GCONV_EMPTY_INPUT, something went wrong and we have to recover from this.

A requirement for the conversion function is that the input buffer pointer (the third argument) always point to the last character which was put in converted form into the output buffer. This is trivially true after the conversion performed in the current step, but if the conversion functions deeper down the stream stop prematurely, not all characters from the output buffer are consumed and therefore the input buffer pointers must be backed off to the right position.

This is easy to do if the input and output character sets have a fixed width for all characters. In this situation we can compute how many characters are left in the output buffer and therefore can correct the input buffer pointer appropriately with a similar computation. Things get tricky if either character set has characters represented with variable-length byte sequences, and it gets even more complicated if the conversion has to take care of the state. In these cases the conversion has to be performed once again, from the known state before the initial conversion, i.e., if necessary the state of the conversion has to be reset and the conversion loop has to be executed again. The difference now is that it is known how much output may be produced, so the conversion can stop before converting the first unused character. Once this is done the input buffer pointers must be updated again and the function can return.

One final thing should be mentioned. If it is necessary for the conversion to know whether it is the first invocation (in case a prolog has to be emitted) the conversion function should just before returning to the caller increment the __invocation_counter element of the step data structure. See the description of the struct __gconv_step_data structure above for more information on how this can be used.

The return value must be one of the following values:

__GCONV_EMPTY_INPUT
All input was consumed and there is room left in the output buffer.
__GCONV_FULL_OUTPUT
No more room in the output buffer. In case this is not the last step this value is propagated down from the call of the next conversion function in the chain.
__GCONV_INCOMPLETE_INPUT
The input buffer is not entirely empty since it contains an incomplete character sequence.

The following example provides a framework for a conversion function. In case a new conversion has to be written the holes in this implementation have to be filled and that is it.

int
gconv (struct __gconv_step *step, struct __gconv_step_data *data,
       const char **inbuf, const char *inbufend, size_t *written,
       int do_flush)
{
  struct __gconv_step *next_step = step + 1;
  struct __gconv_step_data *next_data = data + 1;
  gconv_fct fct = next_step->__fct;
  int status;

  /* If the function is called with no input this means we have
     to reset to the initial state.  The possibly partly
     converted input is dropped.  */
  if (do_flush)
    {
      status = __GCONV_OK;

      /* Possibly emit a byte sequence which puts the state object
         into the initial state.  */

      /* Call the steps down the chain if there are any but only
         if we successfully emitted the escape sequence.  */
      if (status == __GCONV_OK && ! data->__is_last)
        status = fct (next_step, next_data, NULL, NULL,
                      written, 1);
    }
  else
    {
      /* We preserve the initial values of the pointer variables.  */
      const char *inptr = *inbuf;
      char *outbuf = data->__outbuf;
      char *outend = data->__outbufend;
      char *outptr;

      do
        {
          /* Remember the start value for this round.  */
          inptr = *inbuf;
          /* The outbuf buffer is empty.  */
          outptr = outbuf;

          /* For stateful encodings the state must be safe here.  */

          /* Run the conversion loop.  status is set
             appropriately afterwards.  */

          /* If this is the last step leave the loop, there is
             nothing we can do.  */
          if (data->__is_last)
            {
              /* Store information about how many bytes are
                 available.  */
              data->__outbuf = outbuf;

              /* If any non-reversible conversions were performed,
                 add the number to *written.  */

              break;
            }

          /* Write out all output which was produced.  */
          if (outbuf > outptr)
            {
              const char *outerr = data->__outbuf;
              int result;

              result = fct (next_step, next_data, &outerr,
                            outbuf, written, 0);

              if (result != __GCONV_EMPTY_INPUT)
                {
                  if (outerr != outbuf)
                    {
                      /* Reset the input buffer pointer.  We
                         document here the complex case.  */
                      size_t nstatus;

                      /* Reload the pointers.  */
                      *inbuf = inptr;
                      outbuf = outptr;

                      /* Possibly reset the state.  */

                      /* Redo the conversion, but this time
                         the end of the output buffer is at
                         outerr.  */
                    }

                  /* Change the status.  */
                  status = result;
                }
              else
                /* All the output is consumed, we can make
                   another run if everything was ok.  */
                if (status == __GCONV_FULL_OUTPUT)
                  status = __GCONV_OK;
            }
        }
      while (status == __GCONV_OK);

      /* We finished one use of this step.  */
      ++data->__invocation_counter;
    }

  return status;
}

This information should be sufficient to write new modules. Anybody doing so should also take a look at the available source code in the GNU C library sources. It contains many examples of working and optimized modules.

