Search Results

Search found 18 results on 1 page for 'unicodestring'.

  • Storing UTF8 string in a UnicodeString

    - by Mick
    In Delphi 2007 you can store a UTF8 string in a WideString and then pass that onto a Win32 function, e.g.:

        var
          UnicodeStr: WideString;
          UTF8Str: WideString;
        begin
          UnicodeStr := 'some unicode text';
          UTF8Str := UTF8Encode(UnicodeStr);
          Windows.SomeFunction(PWideChar(UTF8Str), ...)
        end;

    Delphi 2007 does not interfere with the contents of UTF8Str, i.e. it is left as a UTF8-encoded string stored in a WideString. But in Delphi 2010 I'm struggling to find a way to do the same thing, i.e. store a UTF8-encoded string in a WideString without it being automatically converted from UTF8. I cannot pass a pointer to a UTF8 string (or RawByteString), e.g. the following will obviously not work:

        var
          UnicodeStr: WideString;
          UTF8Str: UTF8String;
        begin
          UnicodeStr := 'some unicode text';
          UTF8Str := UTF8Encode(UnicodeStr);
          Windows.SomeFunction(PWideChar(UTF8Str), ...)
        end;

    Any help appreciated. Thanks.

  • Converting UnicodeString to PAnsiChar in Delphi XE

    - by moodforaday
    In Delphi XE I am using the BASS audio library, which contains this function:

        function BASS_StreamCreateURL(url: PAnsiChar; offset: DWORD; flags: DWORD;
          proc: DOWNLOADPROC; user: Pointer): HSTREAM; stdcall; external bassdll;

    The 'url' parameter is of type PAnsiChar, so in my code I do a cast:

        FStreamHandle := BASS_StreamCreateURL(PAnsiChar( url ) [...]

    The compiler emits a warning on this line: "suspicious typecast of string to PAnsiChar". In trying to eliminate the warning, I found that the recommended way is to use a double cast:

        FStreamHandle := BASS_StreamCreateURL(PAnsiChar( AnsiString( url )) [...]

    This does eliminate the warning, but the BASS function now returns error code 2 ("cannot open file"), which tells me the URL string it receives is somehow broken. I cannot see what the BASS DLL actually receives, but at a breakpoint in the debugger the string looks good:

        var
          s: PAnsiChar;
        begin
          s := PAnsiChar( AnsiString( url ));

    At this point string s appears fine, but the BASS function fails when I pass it. My initial code, PAnsiChar( url ), works well with BASS but emits a warning. So what's the correct way of getting from UnicodeString to PAnsiChar without a warning?

  • Instruments (Leaks) and NSDateFormatter

    - by Cal
    When I run my iPhone app with Instruments Leaks and parse a bunch of NSDates using NSDateFormatter, my memory goes up about 1 MB and stays there, even though these NSDates should be dealloc'd after the parsing (I just discard them if they aren't new). I thought the malloc (in my heaviest stack trace below) could become part of the NSDate, but it could also be memory that is only used during some intermediate step in parsing. Does anyone know which one it is, or how to find out? Also, is there a way to put a breakpoint on NSDate dealloc to see if that memory is really being reclaimed? Here's what my date formatter looks like for parsing these dates:

        df = [[NSDateFormatter alloc] init];
        [df setDateFormat:@"EEE, d MMM yyyy H:m:s z"];

    Here's the heaviest stack trace when the memory bumps up and stays there:

        0  libSystem.B.dylib   208.80 Kb  malloc
        1  libicucore.A.dylib  868.19 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
        2  libicucore.A.dylib  868.66 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
        3  libicucore.A.dylib  868.67 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
        4  libicucore.A.dylib  868.67 Kb  icu::DateFormatSymbols::initZoneStringFormat()
        5  libicucore.A.dylib  868.67 Kb  icu::DateFormatSymbols::getZoneStringFormat() const
        6  libicucore.A.dylib  868.67 Kb  icu::SimpleDateFormat::subParse(icu::UnicodeString const&, int&, unsigned short, int, signed char, signed char, signed char*, icu::Calendar&) const
        7  libicucore.A.dylib  868.67 Kb  icu::SimpleDateFormat::parse(icu::UnicodeString const&, icu::Calendar&, icu::ParsePosition&) const
        8  libicucore.A.dylib  868.67 Kb  icu::DateFormat::parse(icu::UnicodeString const&, icu::ParsePosition&) const
        9  libicucore.A.dylib  868.67 Kb  udat_parse
        10 CoreFoundation      868.67 Kb  CFDateFormatterGetAbsoluteTimeFromString
        11 CoreFoundation      868.67 Kb  CFDateFormatterCreateDateFromString
        12 Foundation          868.67 Kb  -[NSDateFormatter getObjectValue:forString:range:error:]
        13 Foundation          868.75 Kb  -[NSDateFormatter getObjectValue:forString:errorDescription:]
        14 Foundation          868.75 Kb  -[NSDateFormatter dateFromString:]

    Thanks!

  • Code to strip diacritical marks using ICU

    - by Paul J. Lucas
    Can somebody please provide some sample code to strip diacritical marks (i.e., replace characters having accents, umlauts, etc. with their unaccented, unumlauted, etc. character equivalents, so that, e.g., every accented é becomes a plain ASCII e) from a UnicodeString using the ICU library in C++? E.g.:

        UnicodeString strip_diacritics( UnicodeString const &s ) {
            UnicodeString result;
            // ...
            return result;
        }

    Assume that s has already been normalized. Thanks.
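
    A minimal sketch of one common approach, using ICU's Transliterator with the stock compound rule "NFD; [:Nonspacing Mark:] Remove; NFC" (decompose, drop the combining marks, recompose). The rule string and both calls are documented ICU API, but treat this as an untested sketch rather than a vetted answer:

        #include <unicode/translit.h>
        #include <unicode/unistr.h>

        UnicodeString strip_diacritics( UnicodeString const &s ) {
            UErrorCode status = U_ZERO_ERROR;
            // Decompose, remove nonspacing marks, then recompose: é -> e.
            Transliterator *t = Transliterator::createInstance(
                UNICODE_STRING_SIMPLE("NFD; [:Nonspacing Mark:] Remove; NFC"),
                UTRANS_FORWARD, status );
            UnicodeString result( s );
            if ( U_SUCCESS( status ) && t )
                t->transliterate( result );  // edits the copy in place
            delete t;
            return result;
        }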

  • C#: How to print a unicode string to console?

    - by Lopper
    How do I print out the value of a Unicode string in C# to the console?

        byte[] unicodeBytes = new byte[] {
            0x61, 0x70, 0x70, 0x6C, 0x69, 0x63, 0x61, 0x74,
            0x69, 0x6F, 0x6E, 0x2F, 0x70, 0x63, 0x61, 0x70 };
        string unicodeString = Encoding.Unicode.GetString(unicodeBytes);
        Console.WriteLine(unicodeString);

    What I get for the above is "?????????". However, in debug mode I see the following in the Autos and Locals windows for the value of unicodeString, which is what I wanted to display:

        "??????????"

    How do I print out the correct result to the console, as the Autos and Locals debugging windows show?

  • C++ conversion operator between types in other libraries

    - by Dave
    For convenience, I'd like to be able to cast between two types defined in other libraries (specifically, QString from the Qt library and UnicodeString from the ICU library). Right now, I have created utility functions in a project namespace:

        namespace MyProject {
            const icu_44::UnicodeString ToUnicodeString(const QString& value);
            const QString ToQString(const icu_44::UnicodeString& value);
        }

    That's all well and good, but I'm wondering if there's a more elegant way. Ideally, I'd like to be able to convert between them using a cast operator. I do, however, want to retain the explicit nature of the conversion: an implicit conversion should not be possible. Is there a more elegant way to achieve this without modifying the source code of the libraries? Some operator overload syntax, perhaps?
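
    For what it's worth, the helper bodies can be simple buffer copies, since both types store UTF-16 code units internally. This sketch assumes only documented API (QString::utf16(), QString(const QChar*, int), UnicodeString(const UChar*, int32_t), UnicodeString::getBuffer()):

        #include <QString>
        #include <unicode/unistr.h>

        namespace MyProject {
            const icu_44::UnicodeString ToUnicodeString(const QString& value) {
                // QString exposes its UTF-16 code units via utf16().
                return icu_44::UnicodeString(
                    reinterpret_cast<const UChar*>(value.utf16()), value.size());
            }
            const QString ToQString(const icu_44::UnicodeString& value) {
                // getBuffer() is a read-only pointer; the length is passed explicitly.
                return QString(reinterpret_cast<const QChar*>(value.getBuffer()),
                               value.length());
            }
        }

    As for cast syntax: C++ only allows conversion operators as class members, so true casts would require modifying one of the two classes; explicit free functions like these are the usual compromise.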

  • Convert Virtual Key Code to unicode string

    - by Joshua Weinberg
    I have some code I've been using to get the current keyboard layout and convert a virtual key code into a string. This works great in most situations, but I'm having trouble with some specific cases. The one that brought this to light is the accent key next to the backspace key on German QWERTZ keyboards: http://en.wikipedia.org/wiki/File:KB_Germany.svg

    That key generates the VK code I'd expect, kVK_ANSI_Equal, but when using a QWERTZ keyboard layout I get no description back. It's ending up as a dead key, because it's supposed to be composed with another key. Is there any way to catch these cases and do the proper conversion? My current code is below.

        TISInputSourceRef currentKeyboard = TISCopyCurrentKeyboardInputSource();
        CFDataRef uchr = (CFDataRef)TISGetInputSourceProperty(currentKeyboard, kTISPropertyUnicodeKeyLayoutData);
        const UCKeyboardLayout *keyboardLayout = (const UCKeyboardLayout*)CFDataGetBytePtr(uchr);

        if(keyboardLayout)
        {
            UInt32 deadKeyState = 0;
            UniCharCount maxStringLength = 255;
            UniCharCount actualStringLength = 0;
            UniChar unicodeString[maxStringLength];

            OSStatus status = UCKeyTranslate(keyboardLayout,
                                             keyCode, kUCKeyActionDown, 0,
                                             LMGetKbdType(), kUCKeyTranslateNoDeadKeysBit,
                                             &deadKeyState,
                                             maxStringLength,
                                             &actualStringLength, unicodeString);

            if(actualStringLength > 0 && status == noErr)
                return [[NSString stringWithCharacters:unicodeString length:(NSInteger)actualStringLength] uppercaseString];
        }
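
    One approach that is often suggested for the dead-key case (an untested sketch reusing keyboardLayout and keyCode from the code above; kVK_Space is the Carbon virtual key code for the space bar): pass 0 instead of kUCKeyTranslateNoDeadKeysBit so that UCKeyTranslate records the dead-key state, and if no characters come back, translate a following space to get the standalone accent character.

        UInt32 deadKeyState = 0;
        UniChar chars[255];
        UniCharCount length = 0;

        OSStatus status = UCKeyTranslate(keyboardLayout, keyCode, kUCKeyActionDown, 0,
                                         LMGetKbdType(), 0 /* allow dead keys */,
                                         &deadKeyState, 255, &length, chars);

        if(status == noErr && length == 0 && deadKeyState != 0)
        {
            // Dead key detected: a follow-up space yields the accent itself.
            status = UCKeyTranslate(keyboardLayout, kVK_Space, kUCKeyActionDown, 0,
                                    LMGetKbdType(), 0, &deadKeyState,
                                    255, &length, chars);
        }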

  • problems with zend_pdf and right to left language and unicode?

    - by user1400
    Hi all, I am using Zend_Pdf to create PDF files in right-to-left languages such as Arabic, or East Asian languages. I use this code:

        $pdfDoc = new Zend_Pdf();
        $pdfPage = $pdfDoc->newPage(Zend_Pdf_Page::SIZE_A4);
        $font = Zend_Pdf_Font::fontWithPath(APPLICATION_PATH.'/Fonts/arial.ttf');
        $pdfPage->setFont($font, 36);
        $pdfDoc->pages[] = $pdfPage;
        $unicodeString = '????';
        $pdfPage->drawText($unicodeString, 72, 720, 'UTF-8');
        $pdfDoc->save('utf8.pdf');

    But there are two problems:

    1. How can I change the document's direction to right-to-left?
    2. The characters come out separated one by one, but in some languages the characters of a word are joined together.

    Any idea? Or should I use TCPDF instead? Thanks.

  • c# unicode string output

    - by Reg
    I have a function to convert a string to a Unicode string:

        private string UnicodeString(string text)
        {
            return Encoding.UTF8.GetString(Encoding.ASCII.GetBytes(text));
        }

    But when I call this function, the output result is wrong. It looks like my function is not working:

        Console.WriteLine(UnicodeString("????? ?????"))

    This prints just question marks to the console, like this:

        ????? ????

    Is there any way to tell the console to display it correctly?

    UPDATE: It looks like the problem is not in Unicode. Maybe it is displaying question marks because I do not have the correct locale in the system (Windows 7)? Is there any way to make it work without changing the locale?

  • How do I convert jstring to wchar_t *

    - by Obediah Stane
    Let's say that on the C++ side my function takes a variable of type jstring named myString. I can convert it to an ANSI string as follows:

        const char* ansiString = env->GetStringUTFChars(myString, 0);

    Is there a way of getting

        const wchar_t* unicodeString = ...
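
    A sketch of the usual route: JNI's GetStringChars() exposes the string's UTF-16 code units as jchar. On Windows, wchar_t is also 16-bit, so a cast is enough; on platforms with 32-bit wchar_t, each unit is widened as below (this sketch leaves surrogate pairs unpaired, so it is only correct for BMP text):

        #include <jni.h>
        #include <string>

        std::wstring toWString(JNIEnv* env, jstring myString)
        {
            const jchar* raw = env->GetStringChars(myString, 0);
            jsize len = env->GetStringLength(myString);
            std::wstring result(raw, raw + len);  // widens each UTF-16 unit
            env->ReleaseStringChars(myString, raw);
            return result;
        }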

  • Delphi Unicode String Type Stored Directly at its Address

    - by Andreas Rejbrand
    I want a string type that is Unicode and that stores the string directly at the address of the variable, as is the case with the (Ansi-only) ShortString type. I mean: if I declare S: ShortString and let S := 'My String', then at @S I will find the length of the string (as one byte, so the string cannot contain more than 255 characters), followed by the ANSI-encoded string itself. What I would like is a Unicode variant of this. That is, I want a string type such that, at @S, I will find an unsigned 32-bit integer containing the length of the string in bytes (or in characters, which is half the number of bytes), followed by the Unicode representation of the string. I have tried WideString, UnicodeString, and RawByteString, but they all appear to store only an address at @S, with the actual string somewhere else (I guess this has to do with reference counting and such). I suspect that there is no built-in type to use, and that I have to come up with my own way of storing text the way I want (which actually is fun). Am I right?

  • How do I hide an inherited __published property in the derived class in a VCL component?

    - by Gary Benade
    I have created a new VCL component based on an existing VCL component. What I want to do now is set the Password and Username properties from an INI file instead of the Object Inspector. I read on the Delphi forum (in a post by Robert Dunn) that you cannot unpublish a property and that the only workaround is to redeclare the property as read-only. I tried this, but all it does is make the property read-only and grayed out in the Object Inspector. While this could work, I would prefer the property not to be visible at all.

        __property System::UnicodeString Password = {read=FPassword};

    Thanks in advance for any help or links to C++ VCL component-writing tutorials. I am using C++Builder 2010.

  • Unicode in fpc doesn't work

    - by user1546454
    Hi, I'm Romanian and I can't write Unicode characters in the Free Pascal Compiler. I try to write ș, î, â, ă, ț and it doesn't work. I tried changing fonts in the DOS window and tried chcp. I even made a batch file that would do chcp 65001 and start the app. By default, when I tried to write these letters I got "?", but when I started the app with the batch file it just didn't write anything. I tried AnsiString, UnicodeString, and UTF8String, and none of them worked. I think the problem is in the compiler. Does anyone know a solution?

  • How to parse kanji numeric characters using ICU?

    - by Aki
    I'm writing a function using ICU to parse a Unicode string which consists of kanji numeric characters, and I want to return the integer value of the string:

        "五" = 5
        "三十一" = 31
        "五千九百七十二" = 5972

    I'm setting the locale to Locale::getJapan() and using NumberFormat::parse() to parse the character string. However, whenever I pass it any kanji characters, the parse() method returns U_INVALID_FORMAT_ERROR. Does anyone know if ICU supports kanji character strings in the NumberFormat::parse() method? I was hoping that since I'm setting the locale to Japanese it would be able to parse kanji numeric values. Thanks!

        #include <iostream>
        #include <unicode/numfmt.h>

        using namespace std;

        int main(int argc, char **argv)
        {
            const Locale &jaLocale = Locale::getJapan();
            UErrorCode status = U_ZERO_ERROR;
            NumberFormat *nf = NumberFormat::createInstance(jaLocale, status);

            UChar number[] = {0x4E94};  // Character for '5' in Japanese, '五'
            UnicodeString numStr(number, 1);
            Formattable formattable;
            nf->parse(numStr, formattable, status);

            if (U_FAILURE(status)) {
                cout << "error parsing as number: " << u_errorName(status) << endl;
                return(1);
            }

            cout << "long value: " << formattable.getLong() << endl;
        }
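
    For what it's worth, ICU's decimal NumberFormat only understands digit characters; kanji numerals are covered by the spellout rules instead. A sketch using RuleBasedNumberFormat with lenient parsing (whether the ja spellout rule set parses every written form varies by ICU version, so treat this as an untested starting point):

        #include <iostream>
        #include <unicode/rbnf.h>

        using namespace std;

        int main()
        {
            UErrorCode status = U_ZERO_ERROR;
            // The spellout rules, not the decimal formatter, know kanji numerals.
            RuleBasedNumberFormat rbnf(URBNF_SPELLOUT, Locale::getJapan(), status);
            rbnf.setLenient(TRUE);  // tolerate variant spellings while parsing

            UChar number[] = {0x4E94};  // '五'
            Formattable result;
            rbnf.parse(UnicodeString(number, 1), result, status);

            if (U_SUCCESS(status))
                cout << "long value: " << result.getLong() << endl;  // expect 5
            return 0;
        }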

  • Delphi Unicode String Type Stored Directly at its Address (or "Unicode ShortString")

    - by Andreas Rejbrand
    I want a string type that is Unicode and that stores the string directly at the address of the variable, as is the case with the (Ansi-only) ShortString type. I mean: if I declare S: ShortString and let S := 'My String', then at @S I will find the length of the string (as one byte, so the string cannot contain more than 255 characters), followed by the ANSI-encoded string itself. What I would like is a Unicode variant of this. That is, I want a string type such that, at @S, I will find an unsigned 32-bit integer (or a single byte would be enough, actually) containing the length of the string in bytes (or in characters, which is half the number of bytes), followed by the Unicode representation of the string. I have tried WideString, UnicodeString, and RawByteString, but they all appear to store only an address at @S, with the actual string somewhere else (I guess this has to do with reference counting and such).

    Update: The most important reason for this is probably that it would be very problematic if sizeof(string) were variable. I suspect that there is no built-in type to use, and that I have to come up with my own way of storing text the way I want (which actually is fun). Am I right?

    Update: I will, among other things, need to use these strings in packed records. I also need to manually read/write these strings to files/the heap. I could live with fixed-size strings, say <= 128 characters, and I could redesign the problem so it will work with null-terminated strings. But PChar will not work, since sizeof(PChar) = 4: it's merely an address. The approach I eventually settled for was to use a static array of bytes. I will post my implementation as a solution later today.
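
    To make the memory layout concrete, here is a hypothetical C++ rendering of the idea (not Delphi, and not the solution the poster settled on): a fixed-capacity, length-prefixed inline buffer whose sizeof is constant, so it can live inside a packed record.

        #include <cstdint>

        // The length word and the UTF-16 payload sit directly at the
        // variable's address; no pointer indirection, no reference count.
        struct InlineUnicodeString {
            std::uint32_t length;     // length in UTF-16 code units
            char16_t      data[128];  // fixed capacity, stored in place
        };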

  • Cross-platform iteration of Unicode string

    - by kizzx2
    I want to iterate over each character of a Unicode string, treating each surrogate pair and combining character sequence as a single unit (one grapheme).

    Example: the text "नमस्ते" is comprised of the code points U+0928, U+092E, U+0938, U+094D, U+0924, U+0947, of which U+094D and U+0947 are combining marks.

        static void Main(string[] args)
        {
            const string s = "नमस्ते";
            Console.WriteLine(s.Length);  // Outputs "6"

            var l = 0;
            var e = System.Globalization.StringInfo.GetTextElementEnumerator(s);
            while(e.MoveNext()) l++;
            Console.WriteLine(l);  // Outputs "4"
        }

    So there we have it in .NET. We also have Win32's CharNextW():

        #include <Windows.h>
        #include <iostream>
        #include <string>

        int main()
        {
            const wchar_t * s = L"नमस्ते";
            std::cout << std::wstring(s).length() << std::endl;  // Gives "6"

            int l = 0;
            while(CharNextW(s) != s)
            {
                s = CharNextW(s);
                ++l;
            }
            std::cout << l << std::endl;  // Gives "4"

            return 0;
        }

    Question: both ways I know of are specific to Microsoft. Are there portable ways to do it? I heard about ICU but I couldn't quickly find anything related (UnicodeString(s).length() still gives 6). It would be an acceptable answer to point to the related function/module in ICU. C++ doesn't have a notion of Unicode, so a lightweight cross-platform library for dealing with these issues would also make an acceptable answer.
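
    ICU does have the relevant facility: a "character" BreakIterator iterates grapheme clusters. A minimal sketch (ICU C++ API; for the example string the count should come out as 4):

        #include <unicode/brkiter.h>
        #include <unicode/unistr.h>

        int32_t countGraphemes(const UnicodeString &s)
        {
            UErrorCode status = U_ZERO_ERROR;
            BreakIterator *bi =
                BreakIterator::createCharacterInstance(Locale::getDefault(), status);
            if (U_FAILURE(status)) return -1;

            bi->setText(s);
            int32_t count = 0;
            bi->first();                               // position at the start
            while (bi->next() != BreakIterator::DONE)  // one step per grapheme
                ++count;
            delete bi;
            return count;
        }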

  • PDF (VisPDF component) Problem with DecimalSeparator in Delphi/C++Builder2009

    - by Katsumi
    Hello. I use the VisPDF component with Delphi/C++Builder 2009 and show text like this:

        ShowMessage(FloatToStrF(1.23, ffFixed, 6, 2)); // Output: 1,23 (right!)

        UnicodeString Text = "Hello world!";
        VPDF->CurrentPage->UnicodeTextOutStr( x, y, 0, Text);

        ShowMessage(FloatToStrF(1.23, ffFixed, 6, 2)); // Output: 1.23 (wrong!)

    After UnicodeTextOutStr() the DecimalSeparator is changed. I have looked in the VisPDF source and found this:

        Abscissa := Angle * Pi / 180;
        X := XProjection(X) + StrHeight * sin(Abscissa);
        Y := (YProjection(Y)) - StrHeight * cos(Abscissa);
        MtxA := cos(Abscissa);
        MtxB := sin(Abscissa);
        SetTextMatrix(MtxA, MtxB, -MtxB, MtxA, X, Y);

    The call to SetTextMatrix() exposes the bug: if I comment out this line, DecimalSeparator stays right, but there is no text in my PDF.

        procedure TVPDFPage.SetTextMatrix(a, b, c, d, x, y: Single);
        var
          S: AnsiString;
        begin
          S := _CutFloat(a) + ' ' + _CutFloat(b) + ' ' + _CutFloat(c) + ' ' +
               _CutFloat(d) + ' ' + _CutFloat(x) + ' ' + _CutFloat(y) + ' Tm';
          SaveToPageStream(S);
        end;

        procedure TVPDFPage.SaveToPageStream(ValStr: AnsiString);
        begin
          PageContent.Add(string(ValStr)); // PageContent: TStringList;
        end;

    I don't understand this function. Can somebody help? VisPDF does not use any DLLs or other software to create PDF files; it is very easy to use and has good examples.
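
    A plausible caller-side workaround, assuming the library sets the SysUtils global DecimalSeparator to '.' while serializing floats (as PDF writers often do) and forgets to restore it; an untested sketch reusing VPDF, x, y, and Text from the snippet above:

        #include <SysUtils.hpp>

        // Snapshot the global separator, let VisPDF write its text,
        // then restore it so later FloatToStrF calls honour the locale.
        System::WideChar saved = Sysutils::DecimalSeparator;
        VPDF->CurrentPage->UnicodeTextOutStr(x, y, 0, Text);
        Sysutils::DecimalSeparator = saved;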

  • C++ UTF-8 output with ICU

    - by Isaac
    I'm struggling to get started with the C++ ICU library. I have tried to get the simplest example to work, but even that has failed. I would just like to output a UTF-8 string and then go from there. Here is what I have:

        #include <unicode/unistr.h>
        #include <unicode/ustream.h>
        #include <iostream>

        int main()
        {
            UnicodeString s = UNICODE_STRING_SIMPLE("привет");
            std::cout << s << std::endl;
            return 0;
        }

    Here is the output:

        $ g++ -I/sw/include -licucore -Wall -Werror -o icu_test main.cpp
        $ ./icu_test
        пÑÐ¸Ð²ÐµÑ

    My terminal and font support UTF-8, and I regularly use the terminal with UTF-8. My source code is in UTF-8. I think that perhaps I somehow need to set the output stream to UTF-8, because ICU stores strings as UTF-16, but I'm really not sure, and I would have thought that the operators provided by ustream.h would do that anyway. Any help would be appreciated, thank you.
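
    A sketch of one way to make this work by round-tripping explicitly through UTF-8: UNICODE_STRING_SIMPLE is only meant for "invariant" (ASCII) literals, so a non-ASCII literal should go through UnicodeString::fromUTF8(), and toUTF8String() converts back for a UTF-8 terminal (both are documented ICU API, available since ICU 4.2):

        #include <unicode/unistr.h>
        #include <iostream>
        #include <string>

        int main()
        {
            // Build the UnicodeString from the UTF-8 source literal explicitly.
            UnicodeString s = UnicodeString::fromUTF8("привет");

            // Convert back to UTF-8 bytes; std::cout passes them through.
            std::string utf8;
            s.toUTF8String(utf8);
            std::cout << utf8 << std::endl;
            return 0;
        }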
