Search Results

Search found 5333 results on 214 pages for 'chunked encoding'.


  • What encoding does InstallShield expect non-Latin-alphabet string table entries to use?

    - by DNS
    I work on an app that gets distributed via a single installer containing multiple localizations. The build process includes a script that updates the .ism string table with translations for each supported language. This works fine for languages like French and German. But when testing the installer in, e.g., Japanese, the text shows up as a series of squares. It's unlikely to be a font problem, since the InstallShield-supplied strings show up fine; only the string table entries are mangled. So the problem seems to be that the strings are in the wrong encoding. The .ism is in XML format, with UTF-8 declared as its encoding, so I assumed the strings needed to be UTF-8 encoded as well. Do they actually need to use the encoding of the target platform? Is there any concern, then, about targets having different encodings, e.g. Chinese systems using one GB encoding versus another? What is the right thing to do here?


  • How to test an application for correct encoding (e.g. UTF-8)

    - by Olaf
    Encoding issues are among the topics that have bitten me most often during development. Every platform insists on its own encoding, and most likely some non-UTF-8 defaults are in the game. (I usually work on Linux, defaulting to UTF-8; my colleagues mostly work on German Windows, defaulting to ISO-8859-1 or some similar Windows code page.) I believe that UTF-8 is a suitable standard for developing an i18n-able application. However, in my experience encoding bugs are usually discovered late (even though I'm located in Germany and we have some special characters that, along with ISO-8859-1, provide some detectable differences).

    I believe that developers with a completely non-ASCII character set (or those who know a language that uses one) get a head start in providing test data. But there must be a way to ease this for the rest of us as well. What [technique|tool|incentive] are people here using? How do you get your co-developers to care about these issues? How do you test for compliance? Are those tests conducted manually or automatically?

    Adding one possible answer upfront: I've recently discovered fliptitle.com (they provide an easy way to get weird characters written "uʍop ǝpısdn" *) and I'm planning to use them to provide easily verifiable UTF-8 character strings (as most of the characters used there sit at some unusual binary encoding position), but there surely must be more systematic tests, patterns, or techniques for ensuring UTF-8 compatibility/usage.

    Note: Even though there's an accepted answer, I'd like to know of more techniques and patterns if there are some. Please add more answers if you have more ideas. It has not been easy choosing only one answer for acceptance; I've chosen the regexp answer for the least expected angle to tackle the problem, although there would be reasons to choose other answers as well. Too bad only one answer can be accepted. Thank you for your input.

    *) that's "upside down" written upside down, for those who cannot see those characters due to font problems
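
    One automatable pattern (a sketch, independent of the answers the question refers to): keep a corpus of strings whose bytes differ between UTF-8 and the Latin-1 family, and assert that they survive a round trip through every boundary (file, database, HTTP) of the application. In Java:

        import java.nio.charset.StandardCharsets;

        public class EncodingRoundTrip {
            public static void main(String[] args) {
                // These characters have different byte sequences in UTF-8 than in
                // ISO-8859-1/windows-1252, so any wrongly-decoded boundary garbles them.
                String probe = "äöü ß € uʍop ǝpısdn";
                byte[] utf8 = probe.getBytes(StandardCharsets.UTF_8);
                String back = new String(utf8, StandardCharsets.UTF_8);
                if (!probe.equals(back)) {
                    throw new AssertionError("UTF-8 round trip failed");
                }
                // Decoding the same bytes as ISO-8859-1 shows the typical
                // "Ã¤"-style mojibake such a test is designed to catch.
                System.out.println(new String(utf8, StandardCharsets.ISO_8859_1));
            }
        }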


  • How can I specify the character encoding to be used by OLEDB when querying a DBF?

    - by Manga Lee
    Is it possible to specify which character encoding should be used by OLEDB when querying a DBF file? A possible workaround would be to encode the query string into the DBF file's character encoding before the OLEDB call and then decode all the results when they are returned. This would work, but it would be nice if OLEDB or possibly ADO.NET could do this for me.

    UPDATE: The suggestion by Viktor Jevdokimov does not seem to work automatically, but it made me investigate manual conversion of the strings. It is possible to use the TextInfo property of CultureInfo to find the OEMCodePage and the WindowsCodePage, and to get the corresponding Encoding instances from those to perform manual conversion. But I cannot get ADO.NET to use these encodings to perform the conversion for me.


  • IWebBrowser: How to specify the encoding when loading html from a stream?

    - by Ian Boyd
    Using the concepts from the sample code provided by Microsoft for loading HTML content into an IWebBrowser from an IStream using the web browser's IPersistStreamInit interface:

        HRESULT LoadWebBrowserFromStream(IWebBrowser* pWebBrowser, IStream* pStream)
        {
            [snip]
        }

    How can one specify the encoding of the HTML inside the IStream? The IStream will contain a series of bytes, but the problem is what those bytes represent. They could, for example, contain bytes where:

    - each byte represents a character from the current Windows code page (e.g. 1252)
    - each byte represents a character from the ISO-8859-1 character set
    - the bytes represent UTF-8 encoded characters
    - every 2 bytes represent a character, using UTF-16 encoding

    In my particular case, I am providing the IWebBrowser an IStream that contains a series of double-byte characters (UTF-16), but the browser (incorrectly) believes that UTF-8 encoding is in effect. This results in garbled characters.


  • Set a script to automatically detect character encoding in a plain text file in Python?

    - by Haidon
    I've set up a script that basically does a large-scale find-and-replace on a plain text document. At the moment it works fine with ASCII, UTF-8, and UTF-16 encoded documents (and possibly others, but I've only tested these three) so long as the encoding is specified inside the script (the example code below specifies UTF-16). Is there a way to make the script automatically detect which of these character encodings the input file uses, and set the output file's encoding to match?

        findreplace = [
            ('term1', 'term2'),
        ]

        inF = open(infile, 'rb')
        s = unicode(inF.read(), 'utf-16')
        inF.close()

        for couple in findreplace:
            outtext = s.replace(couple[0], couple[1])
            s = outtext

        outF = open(outFile, 'wb')
        outF.write(outtext.encode('utf-16'))
        outF.close()

    Thanks!


  • Reading chunked data from HttpEntity

    - by Gagan
    I have the following code:

        HttpClient FETCHER;
        HttpResponse response = FETCHER.execute(host, httpMethod);

    I'm trying to read its contents into a string like this:

        HttpEntity entity = response.getEntity();
        InputStream st = entity.getContent();
        StringWriter writer = new StringWriter();
        IOUtils.copy(st, writer);
        String content = writer.toString();

    The problem is, when I fetch the http://www.google.co.in/ page, the transfer encoding is chunked, and I get only the first chunk. It fetches till the first "". How do I get all the chunks at once so I can dump the complete output and do some processing on it?
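
    A sketch of the usual approach (assuming Apache HttpClient 4.x, where EntityUtils lives in org.apache.http.util): EntityUtils.toString() keeps reading until the underlying stream reports end-of-stream, so a chunked response is reassembled in full before the string is built.

        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.util.EntityUtils;

        HttpResponse response = FETCHER.execute(host, httpMethod);
        HttpEntity entity = response.getEntity();
        // toString() loops over read() until it returns -1, consuming
        // every chunk of a chunked transfer before building the string.
        String content = EntityUtils.toString(entity, "UTF-8");

    Note that IOUtils.copy() also copies to end-of-stream, so stopping after the first chunk usually points at the stream being consumed or closed elsewhere before the copy runs.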


  • Convert InputStream to String with encoding given in stream data

    - by Quentin
    Hi,

    My input is an InputStream that contains an XML document. The encoding used in the XML is unknown; it is defined in the first line of the XML document. From this InputStream, I want to read the whole document into a String.

    To do this, I use a BufferedInputStream to mark the beginning of the file and start reading the first line. I read this first line to get the encoding and then use an InputStreamReader to generate a String with the correct encoding. This seems not to be the best way to achieve this goal, because it produces an OutOfMemoryError. Any idea how to do it?

        public static String streamToString(final InputStream is) {
            String result = null;
            if (is != null) {
                BufferedInputStream bis = new BufferedInputStream(is);
                bis.mark(Integer.MAX_VALUE);
                final StringBuilder stringBuilder = new StringBuilder();
                try {
                    // stream reader that handles encoding
                    final InputStreamReader readerForEncoding = new InputStreamReader(bis, "UTF-8");
                    final BufferedReader bufferedReaderForEncoding = new BufferedReader(readerForEncoding);
                    String encoding = extractEncodingFromStream(bufferedReaderForEncoding);
                    if (encoding == null) {
                        encoding = DEFAULT_ENCODING;
                    }
                    // stream reader that handles content
                    bis.reset();
                    final InputStreamReader readerForContent = new InputStreamReader(bis, encoding);
                    final BufferedReader bufferedReaderForContent = new BufferedReader(readerForContent);
                    String line = bufferedReaderForContent.readLine();
                    while (line != null) {
                        stringBuilder.append(line);
                        line = bufferedReaderForContent.readLine();
                    }
                    bufferedReaderForContent.close();
                    bufferedReaderForEncoding.close();
                } catch (IOException e) {
                    // reset string builder
                    stringBuilder.delete(0, stringBuilder.length());
                }
                result = stringBuilder.toString();
            } else {
                result = null;
            }
            return result;
        }

    Regards,
    Quentin
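
    A likely cause, hedged: mark(Integer.MAX_VALUE) tells the BufferedInputStream it may have to replay the entire stream, so its internal buffer can grow until memory runs out. Since the XML declaration must appear at the very start of the document, a small, bounded mark is enough. A minimal sketch (extractEncoding is a hypothetical helper that pulls the encoding attribute out of the declaration; the byte-level sniff assumes an ASCII-compatible encoding, so it would miss UTF-16 documents):

        BufferedInputStream bis = new BufferedInputStream(is);
        bis.mark(4096);                          // bounded: the buffer can never grow past 4 KB
        byte[] head = new byte[4096];
        int n = bis.read(head, 0, head.length);  // reads at most 4096 bytes, so reset() stays valid
        String prolog = new String(head, 0, Math.max(n, 0), "US-ASCII");
        String encoding = extractEncoding(prolog); // hypothetical: parses encoding="..."
        if (encoding == null) {
            encoding = "UTF-8";
        }
        bis.reset();
        BufferedReader content = new BufferedReader(new InputStreamReader(bis, encoding));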


  • Identity Claims Encoding for SharePoint

    - by Shawn Cicoria
    Just to remind myself: the list of claim types and their encodings is documented here (tables at the bottom): http://msdn.microsoft.com/en-us/library/gg481769.aspx

    For example, in i:0#.w|contoso\scicoria:

    - 'i' = identity (could be 'c' for other claims)
    - '#' = SPClaimTypes.UserLogonName
    - '.' = Microsoft.IdentityModel.Claims.ClaimValueTypes.String

    Tables for reference:

    Table 1. Claim types encoding

        Character   Claim Type
        !           SPClaimTypes.IdentityProvider
        "           SPClaimTypes.UserIdentifier
        #           SPClaimTypes.UserLogonName
        $           SPClaimTypes.DistributionListClaimType
        %           SPClaimTypes.FarmId
        &           SPClaimTypes.ProcessIdentitySID
        '           SPClaimTypes.ProcessIdentityLogonName
        (           SPClaimTypes.IsAuthenticated
        )           Microsoft.IdentityModel.Claims.ClaimTypes.PrimarySid
        *           Microsoft.IdentityModel.Claims.ClaimTypes.PrimaryGroupSid
        +           Microsoft.IdentityModel.Claims.ClaimTypes.GroupSid
        -           Microsoft.IdentityModel.Claims.ClaimTypes.Role
        .           System.IdentityModel.Claims.ClaimTypes.Anonymous
        /           System.IdentityModel.Claims.ClaimTypes.Authentication
        0           System.IdentityModel.Claims.ClaimTypes.AuthorizationDecision
        1           System.IdentityModel.Claims.ClaimTypes.Country
        2           System.IdentityModel.Claims.ClaimTypes.DateOfBirth
        3           System.IdentityModel.Claims.ClaimTypes.DenyOnlySid
        4           System.IdentityModel.Claims.ClaimTypes.Dns
        5           System.IdentityModel.Claims.ClaimTypes.Email
        6           System.IdentityModel.Claims.ClaimTypes.Gender
        7           System.IdentityModel.Claims.ClaimTypes.GivenName
        8           System.IdentityModel.Claims.ClaimTypes.Hash
        9           System.IdentityModel.Claims.ClaimTypes.HomePhone
        <           System.IdentityModel.Claims.ClaimTypes.Locality
        =           System.IdentityModel.Claims.ClaimTypes.MobilePhone
        >           System.IdentityModel.Claims.ClaimTypes.Name
        ?           System.IdentityModel.Claims.ClaimTypes.NameIdentifier
        @           System.IdentityModel.Claims.ClaimTypes.OtherPhone
        [           System.IdentityModel.Claims.ClaimTypes.PostalCode
        \           System.IdentityModel.Claims.ClaimTypes.PPID
        ]           System.IdentityModel.Claims.ClaimTypes.Rsa
        ^           System.IdentityModel.Claims.ClaimTypes.Sid
        _           System.IdentityModel.Claims.ClaimTypes.Spn
        `           System.IdentityModel.Claims.ClaimTypes.StateOrProvince
        a           System.IdentityModel.Claims.ClaimTypes.StreetAddress
        b           System.IdentityModel.Claims.ClaimTypes.Surname
        c           System.IdentityModel.Claims.ClaimTypes.System
        d           System.IdentityModel.Claims.ClaimTypes.Thumbprint
        e           System.IdentityModel.Claims.ClaimTypes.Upn
        f           System.IdentityModel.Claims.ClaimTypes.Uri
        g           System.IdentityModel.Claims.ClaimTypes.Webpage

    Table 2. Claim value types encoding

        Character   Claim Value Type
        !           Microsoft.IdentityModel.Claims.ClaimValueTypes.Base64Binary
        "           Microsoft.IdentityModel.Claims.ClaimValueTypes.Boolean
        #           Microsoft.IdentityModel.Claims.ClaimValueTypes.Date
        $           Microsoft.IdentityModel.Claims.ClaimValueTypes.Datetime
        %           Microsoft.IdentityModel.Claims.ClaimValueTypes.DaytimeDuration
        &           Microsoft.IdentityModel.Claims.ClaimValueTypes.Double
        '           Microsoft.IdentityModel.Claims.ClaimValueTypes.DsaKeyValue
        (           Microsoft.IdentityModel.Claims.ClaimValueTypes.HexBinary
        )           Microsoft.IdentityModel.Claims.ClaimValueTypes.Integer
        *           Microsoft.IdentityModel.Claims.ClaimValueTypes.KeyInfo
        +           Microsoft.IdentityModel.Claims.ClaimValueTypes.Rfc822Name
        -           Microsoft.IdentityModel.Claims.ClaimValueTypes.RsaKeyValue
        .           Microsoft.IdentityModel.Claims.ClaimValueTypes.String
        /           Microsoft.IdentityModel.Claims.ClaimValueTypes.Time
        0           Microsoft.IdentityModel.Claims.ClaimValueTypes.X500Name
        1           Microsoft.IdentityModel.Claims.ClaimValueTypes.YearMonthDuration


  • UTF-8 encoding problem with flash mysql and php

    - by alibhp
    Hi,

    As you may know, I am programming an online game using Flash. I am connecting my Flash 8 movie with a MySQL database through PHP. I am doing very well with that and have everything working fine. The problems come when I try to insert (using the SQL INSERT function) data that is non-English, in other words UTF-8 data. I read a lot of articles about this and found and applied the following:

    1. In PHP 4, you need to tell PHP to use UTF-8 when using the xml_parser_create() function; in PHP 5 that is done automatically. Even so, I told PHP 5 to use UTF-8 when calling the function.
    2. Adding the header to the XML sent to PHP from Flash.
    3. Forcing Flash to use UTF-8 encoding in the preference options.
    4. Setting the encoding in MySQL to UTF-8 (utf8_unicode_ci with the InnoDB engine).

    I can also read and insert other-language data correctly in phpMyAdmin. I did all that in my coding, and still I can't insert such data. One more strange thing: when I open the same link that the Flash movie uses, with the XML that the Flash movie creates, in the browser (Google Chrome), the data gets inserted into the database correctly!

    I am about to go crazy over this. What am I missing? What causes the problem? Thank you in advance.


  • How to make TXMLDocument (with the MSXML Implementation) always include the encoding attribute?

    - by Fabricio Araujo
    I have legacy code (I didn't write it) that always included the encoding attribute, but after recompiling it under D2010, TXMLDocument doesn't include the encoding anymore. Because the XML data has accented characters both in tags and in data, TXMLDocument.LoadFromFile simply throws EDOMParseError saying that an invalid character was found in the file. Relevant code:

        Doc := TXMLDocument.Create(nil);
        try
          Doc.Active := True;
          Doc.Encoding := XMLEncoding;
          RootNode := Doc.CreateElement('Test', '');
          Doc.DocumentElement := RootNode;
          <snip>
          //Result := Doc.XMl.Text;
          Doc.SaveToXML(Result); // Both lines give the same result

    On older versions of Delphi, the following line is generated:

        <?xml version="1.0" encoding="ISO-8859-1"?>

    On D2010, this is generated:

        <?xml version="1.0"?>

    If I change the line manually, everything works as it has for years.

    UPDATE: XMLEncoding is a constant, defined as follows:

        XMLEncoding = 'ISO-8859-1';


  • How can I avoid encoding mixups of strings in a C/C++ API?

    - by Frerich Raabe
    I'm working on implementing different APIs in C and C++ and wondered what techniques are available to keep clients from getting the encoding wrong when receiving strings from the framework or passing them back. For instance, imagine a simple plugin API in C++ which customers can implement to influence translations. It might feature a function like this:

        const char *getTranslatedWord( const char *englishWord );

    Now, let's say that I'd like to enforce that all strings are passed as UTF-8. Of course I'd document this requirement, but I'd like the compiler to enforce the right encoding, maybe by using dedicated types. For instance, something like this:

        class Word
        {
        public:
            static Word fromUtf8( const char *data ) { return Word( data ); }
            const char *toUtf8() { return m_data; }
        private:
            Word( const char *data ) : m_data( data ) { }
            const char *m_data;
        };

    I could now use this specialized type in the API:

        Word getTranslatedWord( const Word &englishWord );

    Unfortunately, it's easy to make this very inefficient. The Word class lacks proper copy constructors, assignment operators, etc., and I'd like to avoid unnecessary copying of data as much as possible. Also, I see the danger that Word gets extended with more and more utility functions (like length or fromLatin1 or substr etc.), and I'd rather not write Yet Another String Class. I just want a little container which avoids accidental encoding mixups. I wonder whether anybody else has experience with this and can share some useful techniques.

    EDIT: In my particular case, the API is used on Windows and Linux, with MSVC 6 - MSVC 10 on Windows and gcc 3 & 4 on Linux.


  • How can I make changes to this file's encoding?

    - by SuperUserMan
    I have these 3 files:

        21/08/2014 07:15 PM    122 Tw2AWK.csv
        21/08/2014 07:15 PM    125 Tw2Notepad.csv
        21/08/2014 07:15 PM    119 Tw2REPL.csv

        C:\myfiles>file Tw2AWK.csv Tw2REPL.csv Tw2Notepad.csv
        Tw2AWK.csv:     UTF-8 Unicode text, with CRLF line terminators
        Tw2REPL.csv:    UTF-8 Unicode text
        Tw2Notepad.csv: UTF-8 Unicode (with BOM) text, with CRLF line terminators

    The hex of these files is as follows:

        C:\myfiles>xxd -p Tw2REPL.csv
        0a222344656c686947616e675261706520776173206120736d616c6c2069
        6e636964656e7420746f2023536d616c6c5261706973744a6169746c6579
        20646e61696e6469612e636f6d2f696e6469612f7265706f72742d69e280
        a6207069632e747769747465722e636f6d2f6762565070776637744f22

        C:\myfiles>xxd -p Tw2AWK.csv
        0d0a222344656c686947616e675261706520776173206120736d616c6c20
        696e636964656e7420746f2023536d616c6c5261706973744a6169746c65
        7920646e61696e6469612e636f6d2f696e6469612f7265706f72742d69e2
        80a6207069632e747769747465722e636f6d2f6762565070776637744f22
        0d0a

        C:\myfiles>xxd -p Tw2Notepad.csv
        efbbbf0d0a222344656c686947616e675261706520776173206120736d61
        6c6c20696e636964656e7420746f2023536d616c6c5261706973744a6169
        746c657920646e61696e6469612e636f6d2f696e6469612f7265706f7274
        2d69e280a6207069632e747769747465722e636f6d2f6762565070776637
        744f220d0a

    I want Tw2REPL.csv to look like Tw2Notepad.csv. How can I do it?

    NOTE: I have to do all this via the command line (batch). I can use any 3rd-party standalone exe, though. I am on Windows XP. Please help, it's very important for me.
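
    Comparing the dumps, Tw2Notepad.csv is just Tw2REPL.csv with a UTF-8 BOM (ef bb bf) in front and LF (0a) expanded to CRLF (0d 0a). A minimal sketch of a standalone converter, here in Java as an illustration (any tool that can ship as an exe and be called from a batch file would do; file names taken from the question):

        import java.io.*;

        public class AddBomCrlf {
            // Usage: java AddBomCrlf Tw2REPL.csv Tw2REPL-fixed.csv
            public static void main(String[] args) throws IOException {
                InputStream in = new BufferedInputStream(new FileInputStream(args[0]));
                OutputStream out = new BufferedOutputStream(new FileOutputStream(args[1]));
                out.write(0xEF); out.write(0xBB); out.write(0xBF); // UTF-8 BOM
                int prev = -1, b;
                while ((b = in.read()) != -1) {
                    if (b == '\n' && prev != '\r') {
                        out.write('\r');      // bare LF -> CRLF
                    }
                    out.write(b);
                    prev = b;
                }
                out.close();
                in.close();
            }
        }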


  • Determining default character set of platform in Java

    - by Anand
    I am programming in Java, and I have code like this:

        byte[] b = test.getBytes();

    The API specifies that if we do not pass a character encoding, it uses the platform's default character encoding. What is meant by "default platform character encoding"? Does it mean the Java encoding or the OS encoding? If it means the OS encoding, how can I check the default character encoding of Windows and Linux? Is there any way to get the default character encoding from the command line?
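
    A short sketch of the standard ways to inspect it from inside the JVM (both APIs are in the core JDK):

        import java.nio.charset.Charset;

        public class DefaultCharset {
            public static void main(String[] args) {
                // The JVM derives its default from the OS locale at startup and
                // exposes it through Charset.defaultCharset() and the
                // file.encoding system property; getBytes() without an
                // argument uses this default.
                System.out.println(Charset.defaultCharset());
                System.out.println(System.getProperty("file.encoding"));
            }
        }

    From the command line, `locale charmap` reports the locale's encoding on Linux; on Windows, `chcp` shows the console (OEM) code page, which can differ from the ANSI code page the JVM actually picks up.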


  • Two-byte character or one-byte character

    - by RBrattas
    Hi,

    How can I tell whether an input string uses two-byte or one-byte characters, and which encoding system the characters come from? I am using C# and Silverlight. I assume I could find the encoding the computer is running and then examine the character. Any code snippet?

    Thank you,
    Rune

        // Get a UTF-32 encoding by code page.
        Encoding Encoding_12000_instance = Encoding.GetEncoding(12000);

        // Get a UTF-32 encoding by name.
        Encoding Encoding_UTF32_instance = Encoding.GetEncoding("utf-32");


  • Why does xvid encoding lag/lock up windows 7?

    - by acidzombie24
    It seems to encode just fine, and you can see the results: http://www.sendspace.com/file/msku4q If you look at the mouse cursor you'll see Firefox locks up once I click it. Calculator seems fine, but when I try to move it, it locks up. The Resource Monitor and Task Manager are up so you can see whether the CPU is being used up. It isn't; as you can see, less than 30% was used.


  • Which encoding (code page) is used for file names in a ZIP archive under Mac OS X 10.6?

    - by bao
    I have a zip library, SharpZipLib, which is intended to work with ZIP archives using C#. It has a parameter, ICSharpCode.SharpZipLib.Zip.ZipConstants.DefaultCodePage, which specifies the encoding of file names in the zip archive. I know that Windows and OS X use different encodings to store file names.

    1) Which encodings (code pages) are used by each?
    2) How can I determine programmatically which encoding is used?

    When I open, in Win7, a zip file packed under Mac OS X, I see files with bad names (originally Cyrillic) and a folder called __MACOSX, so I can say the zip was prepared on a Mac box. Any other way? What about other UNIX-like systems?
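
    For background (from the zip specification, APPNOTE.TXT): entry names are historically stored in IBM code page 437, unless the archive sets the language encoding flag (EFS, general-purpose bit 11), in which case they are UTF-8; archivers on OS X and modern UNIX systems typically set that flag. The question is about SharpZipLib, but as an illustration, Java 7+ exposes the same knob directly:

        import java.io.File;
        import java.nio.charset.Charset;
        import java.util.Enumeration;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipFile;

        public class ListZipNames {
            public static void main(String[] args) throws Exception {
                // The charset is used only for entries *without* the UTF-8
                // flag; flagged entries are always decoded as UTF-8.
                ZipFile zip = new ZipFile(new File(args[0]), Charset.forName("Cp437"));
                for (Enumeration<? extends ZipEntry> e = zip.entries(); e.hasMoreElements(); ) {
                    System.out.println(e.nextElement().getName());
                }
                zip.close();
            }
        }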


  • Servlets: response.sendRedirect(String url) doesn't seem to send the encoding, why?

    - by Daziplqa
    Hi folks,

    I have a servlet that explicitly sets the character encoding and then redirects to another servlet:

        class Servlet1 extends HttpServlet {
            void doGet(..... ) {
                // ...
                request.setCharacterEncoding("UTF-8");
                response.setCharacterEncoding("UTF-8");
                // ...
                response.sendRedirect(servlet2);
            }
        }

        class Servlet2 extends HttpServlet {
            void doGet(..... ) {
                // ...
                request.getCharacterEncoding(); // prints null ?? why???
                // ...
            }
        }

    So why is the character encoding not being sent with the request?
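
    A likely explanation, hedged since the container isn't named: sendRedirect() tells the browser to issue a brand-new GET request, and nothing from the first request's encoding travels with it; getCharacterEncoding() returns null whenever a request carries no charset declaration of its own, which a redirected GET never does. A minimal sketch of the usual remedy, setting the encoding in the receiving servlet before the first parameter is read:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class Servlet2 extends HttpServlet {
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                // Must run before getParameter()/getReader(), or the container
                // has already decoded the request with its default charset.
                request.setCharacterEncoding("UTF-8");
                // ... handle the request ...
            }
        }

    Note that setCharacterEncoding() governs the request body; how query-string parameters are decoded is container-specific (e.g. Tomcat's URIEncoding connector attribute).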


  • Can I send a POST form in an encoding other than that of its body?

    - by Daziplqa
    Hi gang,

    I have an HTML page that looks like:

        <HTML>
        <meta http-equiv='Content-Type' content='text/html; charset=gb2312'>
        <BODY onload='document.forms[0].submit();'>
          <form name="form" method="post" action="/path/to/some/servlet">
            <input type="hidden" name="username" value="??"> <!-- UTF-8 characters -->
          </form>
        </BODY>
        </HTML>

    As you can see, the content of this page is UTF-8, but I need to send it with the GB2312 character encoding, as the servlet that I am sending this page to expects GB2312 from me. Is this a valid scenario? Because in the servlet, I couldn't retrieve these Chinese characters even using a filter that sets the character encoding to GB2312!! Please help.
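
    One thing worth knowing here: browsers encode form fields using the page's effective charset, so if the file is actually saved and served as UTF-8, the POST body arrives as UTF-8 no matter what the meta tag claims, and a GB2312 filter then garbles it. Assuming the page really is delivered as GB2312, a minimal filter sketch (standard javax.servlet API; register it in web.xml):

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;

        public class Gb2312Filter implements Filter {
            public void init(FilterConfig config) { }

            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                // Declare the body's encoding before any parameter is parsed;
                // calling this after parsing has started has no effect.
                if (request.getCharacterEncoding() == null) {
                    request.setCharacterEncoding("GB2312");
                }
                chain.doFilter(request, response);
            }

            public void destroy() { }
        }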


  • Why does Term::Size seem to mess up Perl's output encoding?

    - by sid_com
    Hello! The Term::Size module jumbles up the encoding. How can I fix this?

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.010;
        use utf8;
        binmode STDOUT, ':encoding(UTF-8)';
        use Term::Size;

        my $string = 'Hällö';
        say $string;

        my $columns = ( Term::Size::chars *STDOUT{IO} )[0];
        say $columns;
        say $string;

    Output:

        Hällö
        140
        H?ll?


  • Wrong encoding in DataReceivedEventArgs

    - by user2102508
    I start a cmd.exe process, redirect stdin to pass a script to it, and redirect stdout and stderr to read cmd's output. Here is the code of my DataReceivedEventHandler:

        (o, a) => {
            if (!String.IsNullOrEmpty(a.Data)) {
                bw.Write(a.Data.ToUTF8());
                bw.Write((byte)'\n');
            }
        }

    In the code, bw is an instance of BinaryWriter, and ToUTF8 is a string extension method that converts a string to a UTF-8 encoded byte array. When I use this code in a separate process it works well; however, when I use it as a shared library inside some other process, a.Data doesn't contain valid localized characters (like Russian characters, for example). So how should I convert the characters? How do I get cmd's OEM encoding? Why does the code work well in a separate process but not as a shared library inside some other process?

