Search Results

Search found 9 results on 1 page for 'codepages'.


  • ANSI or OEM Codepage when using MME and DirectMusic?

    - by Carl Seleborg
    Hello, I noticed that when reading MIDI port names from MME, the names are multi-byte strings encoded with the ANSI codepage, which my app uses by default. When receiving those names from the DirectMusic driver, the names are wide-character strings encoded with the OEM codepage. See this article by Raymond Chen for a quick refresher on codepages. On my German system this means that, using the current codepage (which turns out to be the ANSI one), I get "Audiogerät" from MME but "Audiogeröt" from DirectMusic, the latter being wrong. This gets fixed when I treat that last name as OEM-encoded instead.

    So how do I know which codepage to decode those names with? Why does the name coming from DirectMusic get encoded differently? Does it come from the USB driver? The COM framework? DirectMusic itself? How can I know for sure which codepage to use when reading the names of my MIDI ports?

    For info: I use the MultiByteToWideChar() and WideCharToMultiByte() functions to perform the conversions, with CP_ACP or CP_OEMCP as the codepage argument (see the sketch after this entry). I use midiInGetDevCaps() to get MIDI port information from the MME subsystem, and convert MIDIINCAPS.szPname using the CP_ACP (ANSI) codepage. I use IDirectMusic8::EnumPort() to get port information from DirectMusic, and convert DMUS_PORTCAPS.wszDescription using the CP_OEMCP codepage.

    Read the article
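
    A minimal C++/Win32 sketch (not the asker's exact code) of the conversion being discussed: the same narrow buffer decodes to different wide strings depending on whether CP_ACP or CP_OEMCP is passed to MultiByteToWideChar(), which is exactly why picking the wrong codepage turns "ä" into the wrong character.

        // Sketch: decode a narrow MIDI port name with an explicit codepage
        // (CP_ACP for ANSI, CP_OEMCP for OEM). Which one is correct for the
        // DirectMusic name is the open question above.
        #include <windows.h>
        #include <mmsystem.h>
        #include <string>

        std::wstring DecodePortName(const char* narrowName, UINT codePage)
        {
            int len = MultiByteToWideChar(codePage, 0, narrowName, -1, nullptr, 0);
            if (len <= 0)
                return std::wstring();
            std::wstring wide(len, L'\0');
            MultiByteToWideChar(codePage, 0, narrowName, -1, &wide[0], len);
            wide.resize(len - 1);                  // drop the terminating NUL
            return wide;
        }

        // Usage with MME (ANSI build):
        //   MIDIINCAPSA caps;
        //   midiInGetDevCapsA(0, &caps, sizeof(caps));
        //   std::wstring name = DecodePortName(caps.szPname, CP_ACP);   // or CP_OEMCP to compare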

  • SQL Server -> 'SQL_Latin1_General_CP1_CI_AS' Collation -> Varchar Column -> Languages Supported

    - by Ajay Singh
    All, we are using SQL Server 2008 with the collation 'SQL_Latin1_General_CP1_CI_AS' and a varchar column to store textual data. We know that we cannot store double-byte data in a varchar column and hence cannot support languages like Japanese and Chinese without converting it to nvarchar. However, is it safe to say that all single-byte characters can be stored in a varchar column without any problem? If yes, where can I get the list of languages that need a single byte for storage and the list of languages that need a double byte? Any assistance in this regard is highly appreciated. Thanks in advance.

    Read the article

  • Programmatically change the default code page in Windows XP? (from Delphi)

    - by Duncan
    Could anyone advise how to programmatically change the default Windows XP code page (I'm doing this from Delphi)? This would be the equivalent of going into Control Panel - Regional Settings - Language for non-Unicode applications. In this case I want to switch to Chinese (PRC), and so am writing to the following registry strings under HKLM\SYSTEM\CurrentControlSet\Control\Nls\CodePage\ :

        ACP=936
        MACCP=10008
        OEMCP=936

    which is exactly what changing the non-Unicode codepage drop-down in Control Panel does (see the sketch after this entry). There must be another setting I need to change, and I'd prefer to use a Win API call (if available) rather than writing to the registry myself. I've also tried setting HKLM\SYSTEM\CurrentControlSet\Control\Nls\Language\ Default=0804 (Chinese PRC), to no avail. I don't want to change the 'locale' per se, as this would also change time/date settings, separators, etc. The background: I'm using an ANSI application that needs to render Chinese characters, and I'm writing a tool to automatically switch the system to show those characters (while leaving other aspects of the UI intact). Thanks! Duncan

    Read the article
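
    The asker is working in Delphi; purely to illustrate the registry writes described above, here is a hedged Win32/C++ sketch. Note that writing these values requires administrative rights, and a change to the system ANSI/OEM codepage normally only takes effect after a reboot; as far as I know, XP has no documented API to change the non-Unicode codepage on the fly, which may be why no extra setting seems to help.

        // Sketch of the registry writes from the question (illustration only).
        #include <windows.h>
        #include <cwchar>

        bool SetSystemCodePages(const wchar_t* acp, const wchar_t* oemcp, const wchar_t* maccp)
        {
            HKEY key = nullptr;
            if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                              L"SYSTEM\\CurrentControlSet\\Control\\Nls\\CodePage",
                              0, KEY_SET_VALUE, &key) != ERROR_SUCCESS)
                return false;

            auto setValue = [&](const wchar_t* name, const wchar_t* value) {
                DWORD cb = (DWORD)((wcslen(value) + 1) * sizeof(wchar_t));
                return RegSetValueExW(key, name, 0, REG_SZ,
                                      (const BYTE*)value, cb) == ERROR_SUCCESS;
            };

            bool ok = setValue(L"ACP", acp) && setValue(L"OEMCP", oemcp) && setValue(L"MACCP", maccp);
            RegCloseKey(key);
            return ok;
        }

        // SetSystemCodePages(L"936", L"936", L"10008");   // Chinese (PRC), as in the question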

  • Outlook is unable to accept French-accented characters in my mailto string?

    - by 4501
    Outlook is causing some problems when passed a mailto string with accented characters in it. Changing the codepage for the entire webpage that contains this string solves the problem, but that causes other problems in the system, so I would rather not do that. A string like the following returns a lot of garbage characters: "mailto:[email protected]?subject=Mon bâtiment / Départementé / Bureau n'est pas répertorié" Meanwhile, this one cuts off the character after the "D": "mailto:[email protected]?subject=Mon bâtiment / Départementé / Bureau n'est pas répertorié" What gives? Is there no way to make this work? I am in Canada, so some regional issue might be taking effect here? (A percent-encoding sketch follows this entry.)

    Read the article
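
    One workaround that is often suggested for this kind of mojibake (an assumption on my part, not something confirmed in the question) is to percent-encode the subject as UTF-8 before building the mailto: link, so the URI itself stays pure ASCII. Whether the receiving mail client then decodes it as UTF-8 still depends on the client and OS settings. A minimal C++ sketch with a hypothetical helper:

        // Percent-encode a UTF-8 string for use in a mailto: URI.
        #include <cstdio>
        #include <string>

        std::string PercentEncode(const std::string& utf8)
        {
            static const char hex[] = "0123456789ABCDEF";
            std::string out;
            for (unsigned char c : utf8) {
                bool unreserved = (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
                                  (c >= '0' && c <= '9') ||
                                  c == '-' || c == '_' || c == '.' || c == '~';
                if (unreserved)
                    out += (char)c;
                else {
                    out += '%';
                    out += hex[c >> 4];
                    out += hex[c & 0x0F];
                }
            }
            return out;
        }

        int main()
        {
            // The address below is the placeholder from the question; this source
            // file is assumed to be saved as UTF-8.
            std::string subject = "Mon bâtiment / Départementé / Bureau n'est pas répertorié";
            std::printf("mailto:[email protected]?subject=%s\n", PercentEncode(subject).c_str());
        }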

  • How to get encoding from MAPI message with PR_BODY_A tag (Windows Mobile)?

    - by SadSido
    Hi, everyone! I am developing a program that handles incoming e-mail and SMS through Windows Mobile MAPI. The code basically looks like this:

        ulBodyProp = PR_BODY_A;
        hr = piMessage->OpenProperty(ulBodyProp, NULL, STGM_READ, 0, (LPUNKNOWN*)&piStream);
        if (hr == S_OK)
        {
            // ... get body size in bytes ...
            STATSTG statstg;
            piStream->Stat(&statstg, 0);
            ULONG cbBody = statstg.cbSize.LowPart;

            // ... allocate memory for the buffer ...
            BYTE* pszBodyInBytes = NULL;
            boost::scoped_array<BYTE> szBodyInBytesPtr(pszBodyInBytes = new BYTE[cbBody+2]);

            // ... read body into pszBodyInBytes ...
        }

    That works and I have a message body. The problem is that this body is multibyte-encoded and I need to return a Unicode string. I guess I have to use the ::MultiByteToWideChar() function, but how can I tell which codepage I should apply? Using CP_UTF8 is naive, because the body may simply not be in UTF-8. Using CP_ACP works sometimes, but sometimes it does not. So my question is: how can I retrieve the message's codepage? Does MAPI provide any functions for it? Or is there a way to decode a multibyte string other than MultiByteToWideChar()? Thanks! (A sketch of one possible approach follows this entry.)

    Read the article
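
    One thing worth trying (I cannot confirm that the Windows Mobile MAPI implementation exposes it) is the PR_INTERNET_CPID property, which in desktop MAPI carries the codepage of the message body; if it is present, its value can be passed straight to MultiByteToWideChar(). A hedged sketch:

        // Sketch: ask the message for its body codepage, fall back to CP_ACP.
        // On Windows Mobile the MAPI header is cemapi.h rather than mapix.h,
        // and PR_INTERNET_CPID may not be set by every transport.
        #include <windows.h>
        #include <mapix.h>

        #ifndef PR_INTERNET_CPID
        #define PR_INTERNET_CPID PROP_TAG(PT_LONG, 0x3FDE)
        #endif

        UINT GetBodyCodePage(IMessage* piMessage)
        {
            UINT codePage = CP_ACP;                       // fallback: system ANSI codepage
            SizedSPropTagArray(1, tags) = { 1, { PR_INTERNET_CPID } };
            ULONG cValues = 0;
            LPSPropValue pProps = NULL;

            HRESULT hr = piMessage->GetProps((LPSPropTagArray)&tags, 0, &cValues, &pProps);
            if (SUCCEEDED(hr) && pProps && PROP_TYPE(pProps[0].ulPropTag) == PT_LONG)
                codePage = (UINT)pProps[0].Value.l;
            if (pProps)
                MAPIFreeBuffer(pProps);
            return codePage;
        }

        // ...then, with the buffer from the question:
        //   int cchWide = MultiByteToWideChar(codePage, 0, (LPCSTR)pszBodyInBytes, cbBody, NULL, 0);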

  • Convert extended ASCII characters to their right presentation using the Console.ReadKey() method and a ConsoleKeyInfo variable

    - by mishamosher
    I read for about 30 minutes and didn't find anything specific about this on this site. Suppose the following, in a C# console application:

        ConsoleKeyInfo cki;
        cki = Console.ReadKey(true);
        Console.WriteLine(cki.KeyChar.ToString()); // Or Console.WriteLine(cki.KeyChar) as well
        Console.ReadKey(true);

    Now, let's type ¿ at the console and assign it to cki via Console.ReadKey(true). What is shown isn't the ¿ symbol; the ¨ symbol is shown instead. The same happens with many other characters. Examples: ñ shows ¤, ¡ shows -, ´ shows ï. Now, let's take the same code snippet and add some things for a more Console.ReadLine()-like behavior:

        string data = string.Empty;
        ConsoleKeyInfo cki;
        for (int i = 0; i < 10; i++)
        {
            cki = Console.ReadKey(true);
            data += cki.KeyChar;
        }
        Console.WriteLine(data);
        Console.ReadKey(true);

    The question: how do I handle this the right way and end up printing the right characters that should be stored in data, not things like ¨, ¤, -, ï, etc.? Please note that I want a solution that works with ConsoleKeyInfo and Console.ReadKey(), not other variable types or read methods.

    EDIT: Because the ReadKey() method of the Console class depends on Kernel32.dll and definitely mishandles extended ASCII and Unicode, it's no longer an option to just find a valid conversion for what it returns. The only valid way to handle the bad behavior of ReadKey() is to use the cki.Key property written by the cki = Console.ReadKey(true) call, apply a switch to it, and return the right values depending on which key was pressed. For example, to handle pressing the Ñ key:

        string data = string.Empty;
        ConsoleKeyInfo cki;
        cki = Console.ReadKey(true);
        switch (cki.Key)
        {
            case ConsoleKey.Oem3:
                if (cki.Modifiers.ToString().Contains("Shift")) // Could add handlers for Alt and Control, but not included here to keep the code small and simple
                    data += "Ñ";
                else
                    data += "ñ";
                break;
        }
        Console.WriteLine(data);
        Console.ReadKey(true);

    So now the question has a wider focus... Which other functions complete their execution with only one key pressed and return what was pressed (a substitute for ReadKey())? I think there are no such substitutes, but a confirmed answer would be useful.

    EDIT2: HA! Found the way, in something I used so many times back on Windows 98 SE: codepages, which are responsible for how the info is presented in the console. ReadLine() reconfigures the codepage to handle extended ASCII and Unicode characters properly; ReadKey() leaves it at the EN-US default (codepage 850). Just use a codepage that prints the characters you want, and that's all. Refer to http://en.wikipedia.org/wiki/Code_page for some of them :) So, for the Ñ key press, the solution is this:

        Console.OutputEncoding = Encoding.GetEncoding(1252); // Also 28591 works for the `Ñ` key, and others too
        string data = string.Empty;
        ConsoleKeyInfo cki;
        cki = Console.ReadKey(true);
        data += cki.KeyChar;
        Console.WriteLine(data);
        Console.ReadKey(true);

    Simple :) Now I'm annoyed with myself... how could I forget those codepages!? Question answered, so no more about this!

    Read the article

  • Efficient mapping for a particular finite integer set

    - by R..
    I'm looking for a small, fast (in both directions) bijective mapping between the following list of integers and a subset of the range 0-127: 0x200C, 0x200D, 0x200E, 0x200F, 0x2013, 0x2014, 0x2015, 0x2017, 0x2018, 0x2019, 0x201A, 0x201C, 0x201D, 0x201E, 0x2020, 0x2021, 0x2022, 0x2026, 0x2030, 0x2039, 0x203A, 0x20AA, 0x20AB, 0x20AC, 0x20AF, 0x2116, 0x2122. One obvious solution is:

        y = x>>2 & 0x40 | x & 0x3f;
        x = 0x2000 | y<<2 & 0x100 | y & 0x3f;

    Edit: I was missing some of the values, particularly 0x20Ax, which don't work with the above. Another obvious solution is a lookup table, but without making it unnecessarily large, a lookup table would require some bit rearrangement anyway, and I suspect the whole task can be better accomplished with simple bit rearrangement (a table-based sketch follows this entry). For the curious, those magic numbers are the only "large" Unicode codepoints that appear in legacy ISO-8859 and Windows codepages.

    Read the article
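
    For comparison with the bit-rearrangement approach, here is a compact table-based sketch (C++, names are mine): the 27 sorted codepoints cost 54 bytes, the forward direction is a binary search over them, and the reverse direction is a direct array index; the mapped indices 0..26 are a subset of 0-127.

        #include <algorithm>
        #include <cstdint>

        static const uint16_t kTable[27] = {
            0x200C, 0x200D, 0x200E, 0x200F, 0x2013, 0x2014, 0x2015, 0x2017,
            0x2018, 0x2019, 0x201A, 0x201C, 0x201D, 0x201E, 0x2020, 0x2021,
            0x2022, 0x2026, 0x2030, 0x2039, 0x203A, 0x20AA, 0x20AB, 0x20AC,
            0x20AF, 0x2116, 0x2122
        };

        // Codepoint -> small index in 0..26, or -1 if not in the set.
        int Forward(uint16_t cp)
        {
            const uint16_t* end = kTable + 27;
            const uint16_t* it = std::lower_bound(kTable, end, cp);
            return (it != end && *it == cp) ? (int)(it - kTable) : -1;
        }

        // Small index -> codepoint.
        uint16_t Reverse(int y)
        {
            return kTable[y];
        }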

  • Space-saving character encoding for Japanese?

    - by Constantin
    In my opinion this is a common problem: character encoding in combination with a bitmap font. Most multi-language encodings have huge gaps between different character types and even a lot of unused code points. So if I want to use them, I waste a lot of memory (not only for saving multi-byte text; I mean especially the gaps in my bitmap font), and VRAM is usually really valuable... So the only reasonable thing seems to be using a custom mapping on my texture for, e.g., UTF-8 characters (so that no space is wasted). BUT: this effort seems to be the same as using my own proprietary character encoding (and thus my own order of characters in my texture). In my specific case I have texture space for 4096 different characters and need to display Latin languages as well as Japanese (it's a mess with UTF-8, which only supports the general CJK codepages). Has anybody ever had a similar problem (I'd really wonder if not)? Is there already an approach for this (see the sketch after this entry)? Edit: the same problem is described here: http://www.tonypottier.info/Unicode_And_Japanese_Kanji/ but it doesn't provide a real solution for how to store these bitmap-font mappings space-efficiently for UTF-8. So any further help is welcome!

    Read the article
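
    One common way to realise the "custom mapping" idea from the question (a sketch with assumed names, not a full answer to the storage question): keep the glyph atlas densely packed and resolve codepoints to glyph slots through a small hash map, so only the roughly 4096 characters actually baked into the texture occupy any space.

        // Sketch: dense glyph atlas with a codepoint -> slot lookup table.
        #include <cstdint>
        #include <unordered_map>

        struct GlyphAtlas {
            // UTF-32 codepoint -> slot index (0..4095) in the texture.
            std::unordered_map<uint32_t, uint16_t> slots;
            uint16_t nextSlot = 0;

            // While baking the font: register a character and get its slot.
            uint16_t Add(uint32_t codepoint)
            {
                auto it = slots.find(codepoint);
                if (it != slots.end())
                    return it->second;
                uint16_t slot = nextSlot++;        // assumes fewer than 4096 distinct adds
                slots[codepoint] = slot;
                return slot;
            }

            // While rendering: -1 means "draw a fallback glyph".
            int Find(uint32_t codepoint) const
            {
                auto it = slots.find(codepoint);
                return it == slots.end() ? -1 : (int)it->second;
            }
        };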
