Efficient way to calculate byte length of a character, depending on the encoding

Posted by BalusC on Stack Overflow
Published on 2010-04-28T00:27:18Z

What's the most efficient way to calculate the byte length of a character, taking the character encoding into account? In UTF-8, for example, characters have a variable byte length, so each character's length needs to be determined individually. So far I've come up with this:

char c = getItSomehow();
String encoding = "UTF-8";

int length = new String(new char[] { c }).getBytes(encoding).length;

But this is clumsy and inefficient in a loop, since a new String needs to be created every time. I can't find any more efficient way to do this in the Java API. I imagine this can be done with bitwise operations, like bit shifting, but that's my weak point and I'm unsure how to take the encoding into account here :)
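One way to avoid the per-character String allocation is to reuse a java.nio.charset.CharsetEncoder together with pre-allocated buffers. This is a sketch, not a definitive answer; the class and method names (CharByteLength, byteLength) are made up for illustration, and it ignores the CoderResult returned by encode, which a robust version should check:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

public class CharByteLength {
    // Reusable encoder and buffers; no new String or byte[] per character.
    private final CharsetEncoder encoder;
    private final CharBuffer in = CharBuffer.allocate(1);
    private final ByteBuffer out = ByteBuffer.allocate(8); // generous upper bound per char

    public CharByteLength(String charsetName) {
        this.encoder = Charset.forName(charsetName).newEncoder();
    }

    public int byteLength(char c) {
        in.clear();
        out.clear();
        in.put(c);
        in.flip();
        // Encode the single char; the bytes written tell us its length.
        encoder.reset().encode(in, out, true);
        return out.position();
    }

    public static void main(String[] args) {
        CharByteLength utf8 = new CharByteLength("UTF-8");
        System.out.println(utf8.byteLength('a'));      // 1
        System.out.println(utf8.byteLength('\u00E9')); // 2 ('é')
        System.out.println(utf8.byteLength('\u20AC')); // 3 ('€')
    }
}
```

Note that a lone char can't represent supplementary code points: a surrogate half passed in isolation is malformed input, so for full correctness you'd work with code points (int) or char pairs rather than single chars.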

If you question the need for this, check this topic.
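As for the bitwise idea: it can't be done generically for an arbitrary encoding, but for UTF-8 specifically the byte length follows directly from the code point's range (per RFC 3629), so no encoder is needed at all. A minimal sketch, hard-coded to UTF-8 and taking a code point rather than a char (the method name utf8Length is made up here):

```java
public class Utf8Length {
    // UTF-8 byte length of one Unicode code point, per the ranges in RFC 3629.
    static int utf8Length(int codePoint) {
        if (codePoint <= 0x7F)   return 1; // ASCII
        if (codePoint <= 0x7FF)  return 2;
        if (codePoint <= 0xFFFF) return 3; // rest of the BMP (excluding surrogates)
        return 4;                          // supplementary planes
    }

    public static void main(String[] args) {
        System.out.println(utf8Length('a'));     // 1
        System.out.println(utf8Length(0x00E9));  // 2 ('é')
        System.out.println(utf8Length(0x20AC));  // 3 ('€')
        System.out.println(utf8Length(0x1F600)); // 4 (needs a surrogate pair as chars)
    }
}
```

This only works because UTF-8's length rule is fixed by the standard; for other encodings you'd still need a CharsetEncoder or equivalent lookup.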

© Stack Overflow or respective owner
