Search Results

Search found 5303 results on 213 pages for 'encoding'.

Page 5/213 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • What Character Encoding Is This?

    - by Canoehead
    I need to clean up some files containing French text. The problem is that the files erroneously contain multiple encodings within the same file. I think some sections are ISO-8859-1 (Latin-1), but other parts have text encoded in single-byte characters that look like 'extended' ASCII. In other words, it is UTF-7 encoding plus the following: 0x82 for é (e acute), 0x8A for è (e grave), 0x88 for ê (e circumflex), 0x85 for à (a grave), 0x87 for ç (c cedilla). What encoding is this?

    Read the article
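
    The byte values listed above look like the old IBM PC code pages (CP437/CP850) rather than ISO-8859-1, so one way to probe the file is to decode the suspect bytes with a few candidate charsets and see which one yields the expected accented letters. A minimal Java sketch of that check (the candidate list is illustrative):

    ```java
    import java.nio.charset.Charset;

    public class GuessEncoding {
        public static void main(String[] args) {
            // The bytes the question lists: 0x82, 0x8A, 0x88, 0x85, 0x87
            byte[] suspect = { (byte) 0x82, (byte) 0x8A, (byte) 0x88, (byte) 0x85, (byte) 0x87 };

            // Candidate single-byte encodings to try
            String[] candidates = { "ISO-8859-1", "windows-1252", "IBM437", "IBM850" };

            for (String name : candidates) {
                String decoded = new String(suspect, Charset.forName(name));
                System.out.println(name + " -> " + decoded);
            }
            // IBM437/IBM850 print "éèêàç", which matches the French text described above.
        }
    }
    ```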

  • Strange encoding when using PHP with translated text.

    - by The Rook
    I am using Google Translate with PHP to translate text. 99% of the text comes back with the expected encoding. However, a few characters become malformed and appear to be encoded incorrectly. How can I account for this encoding using PHP? Example: Hierdie is \u0026#39;n. This is in Afrikaans, but other languages are also affected.

    Read the article
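
    In the Afrikaans sample above, "\u0026#39;" is not really a charset problem: \u0026 is just the JSON escape for "&", so after JSON decoding you are left with the HTML entity &#39; (an apostrophe) that the translation API emits. The question is about PHP, but here is a hedged Java illustration of the same cleanup, using Apache Commons Text as an assumed helper library:

    ```java
    import org.apache.commons.text.StringEscapeUtils;

    public class FixTranslateOutput {
        public static void main(String[] args) {
            // After JSON decoding, \u0026#39; has already become the HTML entity &#39;
            String fromApi = "Hierdie is &#39;n";

            // Unescape the HTML entity to recover the plain apostrophe
            String readable = StringEscapeUtils.unescapeHtml4(fromApi);
            System.out.println(readable); // Hierdie is 'n
        }
    }
    ```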

  • Autodetect console output encoding in perl

    - by n0rd
    I have a Perl script that prints some information to the console in Russian. The script will be executed on several OSes, so the console encoding can be cp866, koi8-r, utf-8, or something else. Is there a portable way to detect the console encoding so I can set up STDOUT accordingly and the text is printed correctly?

    Read the article
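
    This one is Perl-specific, so the snippet below is only an analogy for the kind of probe involved: ask the runtime which charset the attached console uses, then wrap STDOUT in it. A Java sketch, assuming a recent JDK (Console.charset() exists from Java 17 on):

    ```java
    import java.io.Console;
    import java.nio.charset.Charset;

    public class ConsoleCharsetProbe {
        public static void main(String[] args) {
            // If a real console is attached, ask it for its encoding directly (Java 17+).
            Console console = System.console();
            if (console != null) {
                System.out.println("Console charset: " + console.charset());
            }
            // Fallback: the JVM default charset, which tracks the OS locale (note that on
            // Windows the console's OEM code page, e.g. cp866, can differ from this default).
            System.out.println("Default charset: " + Charset.defaultCharset());
        }
    }
    ```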

  • Determining server response encoding

    - by user121196
    Not Java specific, but when I write OutputStream os = sock.getOutputStream(); is there a way to determine the stream's character encoding? Or do I have to know the encoding ahead of time to read it properly? This is for an arbitrary socket connection.

    Read the article
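
    As the question suspects, there is nothing to detect: a socket stream is just bytes and carries no charset of its own, so the encoding has to be agreed on ahead of time or negotiated inside the protocol (the way HTTP does with its Content-Type header). A small Java sketch of fixing the charset on both directions of a socket (the host and request line are illustrative):

    ```java
    import java.io.*;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class SocketCharsetDemo {
        public static void main(String[] args) throws IOException {
            try (Socket sock = new Socket("example.com", 80)) {
                // Wrap the raw byte streams in an agreed-upon charset; the stream
                // itself cannot tell you what encoding the other side used.
                Writer out = new OutputStreamWriter(sock.getOutputStream(), StandardCharsets.UTF_8);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(sock.getInputStream(), StandardCharsets.UTF_8));

                out.write("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");
                out.flush();
                System.out.println(in.readLine()); // first response line, decoded as UTF-8
            }
        }
    }
    ```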

  • Base X string encoding

    - by Paul Stone
    I'm looking for a routine that will encode a string (stream of bytes) into an arbitrary base/alphabet (like base64 encoding, but I get to choose the alphabet). I've seen a few routines that do base-X encoding for a number, but not for a string.

    Read the article
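
    One common approach is to treat the whole byte string as a single big integer and repeatedly divide by the alphabet size, the same trick Base58 uses. A Java sketch (the alphabet is arbitrary; note that this simple form drops leading zero bytes, which real implementations handle by prefixing one alphabet[0] per leading zero):

    ```java
    import java.math.BigInteger;

    public class BaseX {
        // Encode arbitrary bytes into an arbitrary alphabet by treating them as one big number.
        static String encode(byte[] data, String alphabet) {
            BigInteger base = BigInteger.valueOf(alphabet.length());
            BigInteger n = new BigInteger(1, data); // interpret the bytes as a positive integer
            StringBuilder sb = new StringBuilder();
            while (n.signum() > 0) {
                BigInteger[] qr = n.divideAndRemainder(base);
                sb.append(alphabet.charAt(qr[1].intValue()));
                n = qr[0];
            }
            return sb.reverse().toString();
        }

        public static void main(String[] args) {
            String alphabet = "0123456789ABCDEFGHJKLMNPQRSTUVWXYZ"; // any alphabet you like
            System.out.println(encode("hello".getBytes(), alphabet));
        }
    }
    ```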

  • HTML Encoding with ASP.NET

    - by Corin
    I am currently HTML-encoding all user-entered text before inserting/updating a DB table record. The problem is that on any subsequent update, the previously encoded string is re-encoded. This endless loop is starting to eat up a lot of column space in my tables. I am using parameterized queries for all SQL statements, but I am wondering: would it be safe to just let the .NET Framework handle this part without the HTML encoding?

    Read the article
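
    The pattern that avoids this loop is to store the user's raw text once and HTML-encode only when rendering it to the page, never before writing to the database. The item is about ASP.NET, so the snippet below is just a Java illustration of that encode-on-output idea, with a hand-rolled escaper:

    ```java
    public class OutputEncodingDemo {
        // Minimal HTML escaping applied at render time only, never before storage.
        static String escapeHtml(String s) {
            return s.replace("&", "&amp;")
                    .replace("<", "&lt;")
                    .replace(">", "&gt;")
                    .replace("\"", "&quot;");
        }

        public static void main(String[] args) {
            String stored = "Tom & Jerry <script>";   // raw user text, exactly as saved in the DB
            String rendered = escapeHtml(stored);     // encoded only when written to the page

            System.out.println(rendered); // Tom &amp; Jerry &lt;script&gt;
            // Because encoding happens only on output, re-saving the stored value
            // never re-encodes it, so columns stop filling up with &amp;amp;amp;...
        }
    }
    ```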

  • How to tell the Browser the character encoding of an HTML website regardless of Server Content-Type Headers?

    - by hakre
    I have an HTML page that correctly announces its Content-Type (the encoding of the physical file on disk matches it): <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <meta http-equiv="Content-Type" content= "text/html; charset=utf-8"> <title> ... Opening the file from disk in a browser (Google Chrome, Firefox) works fine. Requesting it via HTTP, the webserver sends a different Content-Type header: $ curl -I http://example.com/file.html HTTP/1.1 200 OK Date: Fri, 19 Oct 2012 10:57:13 GMT ... Content-Type: text/html; charset=ISO-8859-1 (see last line). The browser then uses ISO-8859-1 to display the page, which is an unwanted result. Is there a common way to override the server headers sent to the browser from within the HTML document?

    Read the article
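
    Browsers give the real HTTP Content-Type header precedence over the <meta> tag, so the reliable fix is to make the server send the right charset rather than to fight it from inside the document. A hedged, servlet-style Java sketch of that server-side fix (the class is hypothetical; on Apache the equivalent lever would be an AddDefaultCharset/AddCharset directive):

    ```java
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet: send the charset in the Content-Type header so the
    // header agrees with the <meta> declaration inside the document.
    public class Utf8PageServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.setContentType("text/html; charset=UTF-8"); // header now matches the page
            PrintWriter out = resp.getWriter();
            out.println("<!DOCTYPE html><html><head><meta charset=\"utf-8\"><title>...</title></head>");
            out.println("<body>éèê</body></html>");
        }
    }
    ```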

  • ASP.Net menu databinding encoding problem

    - by WtFudgE
    Hi, I have a menu where I bind data through: XmlDataSource xmlData = new XmlDataSource(); xmlData.DataFile = String.Format(@"{0}{1}\Navigation.xml", getXmlPath(), getLanguage()); xmlData.XPath = @"/Items/Item"; TopNavigation.DataSource = xmlData; TopNavigation.DataBind(); The problem is when my XML has special characters, since I use a lot of French words. As an alternative I tried using a stream instead and using an encoding to get the special characters, with the following code: StreamReader strm = new StreamReader(String.Format(@"{0}{1}\Navigation.xml", getXmlPath(), getLanguage()), Encoding.GetEncoding(1254)); XmlDocument xDoc = new XmlDocument(); xDoc.Load(strm); XmlDataSource xmlData = new XmlDataSource(); xmlData.ID = "TopNav"; xmlData.Data = xDoc.InnerXml; xmlData.XPath = @"/Items/Item"; TopNavigation.Items.Clear(); TopNavigation.DataSource = xmlData; TopNavigation.DataBind(); The problem I'm having now is that my data doesn't refresh when I change the path the stream gets read from. When I step through the code it does, but not on my page. So the thing is: either how do I get the data to be refreshed, or (which is actually preferred) how do I get the encoding right in the first piece of code? Help is highly appreciated!

    Read the article

  • How to keep character encoding with database queries.

    - by JasonS
    Hi, I am doing the following. 1) I am exporting a database and saving it to a file called dump.sql. 2) The file is then transferred to a different server via PHP FTP. 3) When the file has been successfully transferred, the administrator has the option to run a 'dbtransfer' script on the new host. 4) This script splits the dump into individual queries and runs them line by line. This works great; however, there is a problem with foreign-language encoding. We are using UTF-8. Step 1: This works fine, the file is in UTF-8 format. Step 3: When I test the contents of the dump.sql file using mb_check_encoding(), the string comes back as UTF-8. Step 4: This creates tables with utf8_general_ci encoding and the information is dumped in. When I check the table after the transfer I get records like this: 'ç,Ç,ö,Ö,ü,Ü,ı,İ,ş,Ş,ğ,Ğ'. I don't understand how a UTF-8 string can lose its encoding when it goes into the database. Am I missing a step? Do I need to run some sort of function to ensure the string is parsed as UTF-8? Once the system is installed I can save foreign-language queries; it is just the transfer that is messing up. Any ideas?

    Read the article
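
    Since the dump itself checks out as UTF-8, the usual culprit in step 4 is the database connection: if the client session talks to MySQL in its default charset, the UTF-8 bytes get reinterpreted on the way in even though the tables are utf8_general_ci. A hedged Java/JDBC sketch of pinning the connection charset (the URL parameters are MySQL Connector/J ones; database, credentials, and table are placeholders):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class Utf8ImportSketch {
        public static void main(String[] args) throws Exception {
            // Tell the driver to talk UTF-8 to the server; otherwise each INSERT is
            // reinterpreted in the connection's default character set.
            String url = "jdbc:mysql://localhost/mydb?useUnicode=true&characterEncoding=UTF-8";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 Statement st = con.createStatement()) {
                st.execute("SET NAMES utf8");  // fix the session charset as well
                st.execute("INSERT INTO t (txt) VALUES ('ç Ç ö Ö ü Ü ı İ ş Ş ğ Ğ')");
            }
        }
    }
    ```

    In the PHP transfer script, running the same SET NAMES utf8 statement right after connecting has the equivalent effect.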

  • Encode and Decode using UTF-8 on iPhone

    - by Ekra
    Hi friends, I wanted an example where I can encode and then decode the same string using UTF-8. Encode and then decode means I want to implement the method in two areas, where one can encode it and the other is able to decode it. I have seen the API but I didn't have much success: stringWithCString:encoding: stringWithUTF8String: stringWithCString:(const char *)cString encoding:(NSStringEncoding)enc; EDITED: I have a string "øæ-test-2.txt". When I am encoding it: char *s = "øæ-test-2.txt"; NSString *enc = [NSString stringWithCString:s encoding:NSASCIIStringEncoding]; I am getting "øæ-test-2.txt" as output. Now I want to get the original string back, i.e. "øæ-test-2.txt". EDITED: I am getting "øæ-test-2.txt" from the server and I need "øæ-test-2.txt" after decoding it. I am able to get the output from the link below: http://www.cafewebmaster.com/online_tools/utf_decode Please try to use the link and you will understand my concern. I need the solution on an urgent basis. It would be highly appreciated if anyone can give some hint or tutorial in the right direction. Regards

    Read the article
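
    The question is about Cocoa's NSString APIs, but the round trip it describes is easiest to see with explicit charsets: encoding turns characters into bytes, and decoding must use the same charset or you get exactly the "øæ"-style mojibake shown. A Java sketch of the round trip (the Objective-C equivalents would be dataUsingEncoding: and initWithData:encoding: with NSUTF8StringEncoding, not NSASCIIStringEncoding):

    ```java
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    public class Utf8RoundTrip {
        public static void main(String[] args) {
            String original = "øæ-test-2.txt";

            // Encode: characters -> UTF-8 bytes
            byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);
            System.out.println(Arrays.toString(utf8));

            // Decode: the SAME charset must be used to get the original back
            String decoded = new String(utf8, StandardCharsets.UTF_8);
            System.out.println(decoded.equals(original)); // true

            // Decoding the UTF-8 bytes as Latin-1 instead produces the mojibake form
            System.out.println(new String(utf8, StandardCharsets.ISO_8859_1)); // Ã¸Ã¦-test-2.txt
        }
    }
    ```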

  • StAX - Setting the version and encoding using XMLStreamWriter

    - by Anurag
    Hi, I am using StAX for creating XML files and then validating the file with an XSD. I am getting an error while creating the XML file: javax.xml.stream.XMLStreamException: Underlying stream encoding 'Cp1252' and input paramter for writeStartDocument() method 'UTF-8' do not match. at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.writeStartDocument(XMLStreamWriterImpl.java:1182) Here is the code snippet: XMLOutputFactory xof = XMLOutputFactory.newInstance(); try { XMLStreamWriter xtw = xof.createXMLStreamWriter(new FileWriter(fileName)); xtw.writeStartDocument("UTF-8","1.0"); } catch(XMLStreamException e) { e.printStackTrace(); } catch(IOException ie) { ie.printStackTrace(); } I am running this code on Unix. Does anybody know how to set the version and encoding?

    Read the article
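
    The error happens because new FileWriter(fileName) encodes with the platform default charset (Cp1252 in the error shown), which then disagrees with the "UTF-8" passed to writeStartDocument(). One way to keep the two in sync is to hand the factory a byte stream plus an explicit encoding. A sketch under that assumption (the file name is illustrative):

    ```java
    import java.io.FileOutputStream;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;

    public class StaxUtf8Writer {
        public static void main(String[] args) throws Exception {
            XMLOutputFactory xof = XMLOutputFactory.newInstance();
            try (FileOutputStream out = new FileOutputStream("out.xml")) {
                // Give StAX the raw stream and the charset, so the underlying stream
                // encoding matches the one declared in writeStartDocument().
                XMLStreamWriter xtw = xof.createXMLStreamWriter(out, "UTF-8");
                xtw.writeStartDocument("UTF-8", "1.0");
                xtw.writeStartElement("root");
                xtw.writeEndElement();
                xtw.writeEndDocument();
                xtw.close();
            }
        }
    }
    ```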

  • Encoding issues with Spring and Freemarker

    - by Cameron
    I'm working on a project using Freemarker and Spring running on Jetty. It will involve displaying characters from many different countries so I'm trying to set the encoding to UTF-8. However, no matter what I do, it remains ISO-8859-1. I tried to create a filter in my web.xml and I've tried putting this response.setCharacterEncoding("UTF-8"); response.setContentType("text/html; charset=utf-8"); just before rendering the view. But when I load the page and click "View Page Info", the encoding is always ISO-8859-1. I've also tried hitting my app server directly to see if it was being affected by Apache but got the same result. Any help is appreciated.

    Read the article
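
    One frequent cause with Spring MVC is that the view layer overwrites whatever the controller or a filter set earlier, so the charset has to be pinned where the view is resolved. A hedged sketch of that idea using Java configuration (bean names are illustrative, and an older Spring setup would express the same thing as an XML bean definition):

    ```java
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.servlet.view.freemarker.FreeMarkerViewResolver;

    @Configuration
    public class ViewConfig {
        @Bean
        public FreeMarkerViewResolver freemarkerViewResolver() {
            FreeMarkerViewResolver resolver = new FreeMarkerViewResolver();
            resolver.setSuffix(".ftl");
            // The resolver's contentType is what ends up on the response, so declare
            // the charset here instead of (or in addition to) the controller.
            resolver.setContentType("text/html;charset=UTF-8");
            return resolver;
        }
    }
    ```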

  • Encoding::UndefinedConversionError from email body

    - by raam86
    Using the mail gem for Ruby, I am getting this message: mail.rb:22:in `encode': "\xC7" from ASCII-8BIT to UTF-8 (Encoding::UndefinedConversionError) from mail.rb:22:in `<main>' If I remove encode I get this message: ruby /var/lib/gems/1.9.1/gems/bson-1.7.0/lib/bson/bson_ruby.rb:63:in `rescue in to_utf8_binary': String not valid utf-8: "<div dir=\"ltr\"><div class=\"gmail_quote\">l<br><br><br><div dir=\"ltr\"><div class=\"gmail_quote\"><br><br><br><div dir=\"ltr\"><div class=\"gmail_quote\"><br><br><br><div dir=\"ltr\"><div dir=\"rtl\">\xC7\xE1\xE4\xD5 \xC8\xC7\xE1\xE1\xDB\xC9 \xC7\xE1\xDA\xD1\xC8\xED\xC9</div></div>\r\n</div><br></div>\r\n</div><br></div>\r\n</div><br></div>" (BSON::InvalidStringEncoding) This is my code: require 'mail' require 'mongo' connection = Mongo::Connection.new db = connection.db("DB") db = Mongo::Connection.new.db("DB") newsCollection = db["news"] Mail.defaults do retriever_method :pop3, :address => "pop.gmail.com", :port => 995, :user_name => 'my_username', :password => '*****', :enable_ssl => true end emails = Mail.last #Checks if the email is multipart and decodes accordingly; used to extract UTF-8 from the body plain_part = emails.multipart? ? (emails.text_part ? emails.text_part.body.decoded : nil) : emails.body.decoded html_part = emails.html_part ? emails.html_part.body.decoded : nil mongoMessage = {"date" => emails.date.to_s , "subject" => emails.subject , "body" => plain_part.encode('UTF-8') } msgID = newsCollection.insert(mongoMessage) #adds the document to the database and returns its ID puts msgID For English and Hebrew it works perfectly, but it seems Gmail is sending Arabic with a different encoding. Replacing UTF-8 with ASCII-8BIT gives a similar error. I get the same result when using plain_part for plain email messages. I am handling emails from one specific source, so I can say with confidence html_part is not causing the error. To make it extra weird, the subject in Arabic is rendered perfectly. What encoding should I use?

    Read the article
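
    The escaped bytes in the error (\xC7\xE1\xE4\xD5 ...) look like Windows-1256, the legacy Arabic code page, arriving labelled as ASCII-8BIT; forcing them through encode('UTF-8') cannot work because Ruby does not know the source charset. The usual move is to re-interpret the bytes with the right source encoding first. A Java illustration of that re-interpretation (the Windows-1256 guess is an assumption based on the byte pattern):

    ```java
    import java.nio.charset.Charset;

    public class Windows1256Demo {
        public static void main(String[] args) {
            // The first bytes from the error message: \xC7 \xE1 \xE4 \xD5
            byte[] raw = { (byte) 0xC7, (byte) 0xE1, (byte) 0xE4, (byte) 0xD5 };

            // Decode them as Windows-1256 (Arabic); after this the string can be
            // converted to UTF-8 without an undefined-conversion failure.
            String text = new String(raw, Charset.forName("windows-1256"));
            System.out.println(text); // prints the Arabic word these bytes spell
        }
    }
    ```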

  • Dealing with ISO-encoding in AJAX requests (prototype)

    - by acme
    I have an HTML page that's encoded in ISO-8859-1 and a Prototype AJAX call that's built like this: new Ajax.Request('api.jsp', { method: 'get', parameters: {...}, onSuccess: function(transport) { var ajaxResponse = transport.responseJSON; alert(ajaxResponse.msg); } }); The api.jsp returns its data in ISO-8859-1. The response contains special characters (German umlauts) that are not displayed correctly, even if I add an "encoding: ISO-8859-1" to the AJAX request. Does anyone know how to fix this? If I call api.jsp in a new browser window separately, the special characters are also corrupt. And I can't get any information about the used encoding in the response header. The response header looks like this: Server Apache-Coyote/1.1 Content-Type application/json Content-Length 208 Date Thu, 29 Apr 2010 14:40:24 GMT Notice: Please don't advise the usage of UTF-8. I have to deal with ISO-8859-1.

    Read the article

  • Text encoding problem between NSImage, NSData, and NSXMLDocument

    - by andyvn22
    I'm attempting to take an NSImage and convert it to a string which I can write in an XML document. My current attempt looks something like this: [xmlDocument setCharacterEncoding: @"US-ASCII"]; NSData* data = [image TIFFRepresentation]; NSString* string = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding]; //Put string inside of NSXMLElement, write out NSXMLDocument. Reading back in looks something like this: NSXMLDocument* newXMLDocument = [[NSXMLDocument alloc] initWithData:data options:0 error:outError]; //Here's where it fails. I get: //Error Domain=NSXMLParserErrorDomain Code=9 UserInfo=0x100195310 "Line 7: Char 0x0 out of allowed range" I assume I'm missing something basic. What's up with this encoding issue?

    Read the article
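
    The parser error (Char 0x0 out of allowed range) is the giveaway: raw TIFF bytes contain NULs and other values that XML simply cannot carry as text, no matter which character encoding the document declares. The standard workaround is to Base64-encode the binary data before putting it in an element. The question is Objective-C, so this Java sketch only shows the shape of the fix:

    ```java
    import java.util.Base64;

    public class BinaryInXml {
        public static void main(String[] args) {
            byte[] imageBytes = { 0x4D, 0x4D, 0x00, 0x2A, 0x00 }; // stand-in for TIFF data

            // Encode the bytes as Base64 text, which is safe inside an XML element...
            String xmlSafe = Base64.getEncoder().encodeToString(imageBytes);
            System.out.println("<image>" + xmlSafe + "</image>");

            // ...and decode it on the way back in to recover the original bytes.
            byte[] roundTrip = Base64.getDecoder().decode(xmlSafe);
            System.out.println(roundTrip.length == imageBytes.length); // true
        }
    }
    ```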

  • Apache deflate with chunked encoding

    - by hoodoos
    I'm experiencing a problem with one of my data source services. As the HTTP response headers say, it's running on Apache-Coyote/1.1. The server gives responses with Transfer-Encoding: chunked; here is a sample response: HTTP/1.1 200 OK Server: Apache-Coyote/1.1 Content-Type: text/xml;charset=utf-8 Transfer-Encoding: chunked Date: Tue, 30 Mar 2010 06:13:52 GMT The problem is that when I ask the server to send a gzipped response, it often sends an incomplete one. I receive the response and see that the last chunk was received, but after ungzipping I see that the response is partial. So my question is: is this a common Apache issue? Maybe one of its mod_deflate plugins or something? Ask questions if you need more info. Thanks.

    Read the article

  • XML with special character, encoding UTF-8

    - by Sergio Morieca
    I have a few simple questions, because I got confused reading all the different responses. 1) If I have an XML file whose prolog declares UTF-8 and I'm going to unmarshal it with Java (for example with JAXB), I suppose that I can't put the CROSS OF LORRAINE character (http://www.fileformat.info/info/unicode/char/2628/index.htm) inside directly, but I can put "\u2628", correct? 2) I've also heard that UTF-8 doesn't contain it, but anything in Unicode can be saved with the UTF-8 (or UTF-16) encoding, and here is an example from that page: UTF-8 (hex) 0xE2 0x98 0xA8 (e298a8). Is my reasoning correct? Can I use this form and put it in the XML with UTF-8 encoding?

    Read the article
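
    To the two numbered questions: the CROSS OF LORRAINE (U+2628) is perfectly representable in a UTF-8 XML document, either typed literally or written as the numeric character reference &#x2628; (the \u2628 form is a Java string escape, not XML syntax). A small Java sketch showing that both spellings parse to the same character:

    ```java
    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class CrossOfLorraineXml {
        public static void main(String[] args) throws Exception {
            // U+2628 written literally (via a Java escape) and as an XML character reference
            String literal   = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><note>\u2628</note>";
            String reference = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><note>&#x2628;</note>";

            for (String xml : new String[] { literal, reference }) {
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new InputSource(new StringReader(xml)));
                System.out.println(doc.getDocumentElement().getTextContent()); // ☨ both times
            }
        }
    }
    ```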

  • File Encoding handling in Eclipse 3.5

    - by Cédric Girard
    Hi, I use Eclipse 3.5 on Windows, with the PDT and Subclipse plugins, with both legacy projects using ISO-8859-1 (Latin-1) encoding and newer ones which use UTF-8. I configured my workspace to use UTF-8, and I configured the old projects to use Latin-1. But every time I open an old project, it uses UTF-8. With a workspace using Latin-1 by default, I have the same problem with UTF-8 projects edited as ISO-8859-1. My encoding choice is written in the file .settings/org.eclipse.core.resources.prefs but seems to never be read. The only solution for now is to have a Latin-1 workspace and a UTF-8 one. Any better idea? Regards, Cédric

    Read the article

  • Handling Character Encoding in URI on Tomcat

    - by ZZ Coder
    On the web site I am trying to help with, users can type a URL containing Chinese characters into the browser, like http://localhost:8080?a=测试 On the server, we get GET /a=%E6%B5%8B%E8%AF%95 HTTP/1.1 As you can see, it's UTF-8 encoded, then URL-encoded. We can handle this correctly by setting the encoding to UTF-8 in Tomcat. However, sometimes we get Latin-1 encoding on certain browsers: http://localhost:8080?a=ß turns into GET /a=%DF HTTP/1.1 Is there any way to handle this correctly in Tomcat? It looks like the server has to do some intelligent guessing. We don't expect to handle Latin-1 correctly 100% of the time, but anything is better than what we are doing now by assuming everything is UTF-8. The server is Tomcat 5.5. The supported browsers are IE 6+, Firefox 2+ and Safari on iPhone.

    Read the article
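
    Tomcat decides per connector (the URIEncoding attribute on the <Connector> in server.xml), so it cannot switch charsets per request by itself; the "intelligent guessing" has to happen in application code against the raw bytes. A sketch of the usual heuristic, strict UTF-8 first with a Latin-1 fallback, in Java:

    ```java
    import java.nio.ByteBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.CharsetDecoder;
    import java.nio.charset.CodingErrorAction;
    import java.nio.charset.StandardCharsets;

    public class QueryCharsetGuess {
        // Try strict UTF-8 first; if the bytes are not valid UTF-8, assume Latin-1.
        static String decode(byte[] raw) {
            CharsetDecoder utf8 = StandardCharsets.UTF_8.newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .onUnmappableCharacter(CodingErrorAction.REPORT);
            try {
                return utf8.decode(ByteBuffer.wrap(raw)).toString();
            } catch (CharacterCodingException e) {
                return new String(raw, StandardCharsets.ISO_8859_1);
            }
        }

        public static void main(String[] args) {
            byte[] chinese = { (byte) 0xE6, (byte) 0xB5, (byte) 0x8B, (byte) 0xE8, (byte) 0xAF, (byte) 0x95 };
            byte[] latin1  = { (byte) 0xDF }; // ß as sent by a Latin-1 browser
            System.out.println(decode(chinese)); // 测试
            System.out.println(decode(latin1));  // ß
        }
    }
    ```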

  • JBOSS 7 encoding not working as expected

    - by Fofole
    I had problems with my list grids not showing diacritics correctly, and I found out that when I inserted from Java into the DB the values were already corrupted. A post here helped: I changed my project properties - Text encoding - Other - UTF-8 and this fixed my problem. Thing is, this only fixes my problem locally. What I need to do is also set the encoding somehow on my JBoss server. This is what I put in my configuration file: <?xml version='1.0' encoding='UTF-8'?> <server name="vali-ubuntu" xmlns="urn:jboss:domain:1.0"> <extensions> <extension module="org.jboss.as.clustering.infinispan"/> <extension module="org.jboss.as.connector"/> <extension module="org.jboss.as.deployment-scanner"/> <extension module="org.jboss.as.ee"/> <extension module="org.jboss.as.ejb3"/> <extension module="org.jboss.as.jaxrs"/> <extension module="org.jboss.as.jmx"/> <extension module="org.jboss.as.logging"/> <extension module="org.jboss.as.naming"/> <extension module="org.jboss.as.osgi"/> <extension module="org.jboss.as.remoting"/> <extension module="org.jboss.as.sar"/> <extension module="org.jboss.as.security"/> <extension module="org.jboss.as.threads"/> <extension module="org.jboss.as.transactions"/> <extension module="org.jboss.as.web"/> <extension module="org.jboss.as.weld"/> </extensions> <system-properties> <property name="org.apache.catalina.connector.URI_ENCODING" value="UTF-8"/> <property name="org.apache.catalina.connector.USE_BODY_ENCODING_FOR_QUERY_STRING" value="true"/> </system-properties> //..... This doesn't work, so maybe I need to add something else. I tried everything I could find with no success, so any help is appreciated. Thanks. EDIT: From what I read, this will work only in JBoss 7.1.0 Beta 1 or higher (URIEncoding), and I use JBoss 7.0.2, so I need a replacement for 7.0.2.

    Read the article

  • Having encoding problems in Aptana Studio

    - by keune
    A few months ago, I was working on a PHP project in Aptana Studio. It was version 1.5 or something. Later I installed Aptana 2.0 and created a new project with the same files. Back then it was UTF-8 so I chose UTF-8 for the project's text file encoding. When I make changes in any PHP file using Aptana, it gives the error: Warning: Cannot modify header information - headers already sent... I know it's a problem related to encoding. What can I do?

    Read the article
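
    One thing worth checking when "headers already sent" appears right after an encoding change is whether the editor saved the files as UTF-8 with a byte-order mark: the three BOM bytes are sent to the browser as output before any header() call runs. A small Java sketch of detecting a BOM at the top of a file (the path is illustrative; the fix is re-saving the file as UTF-8 without BOM):

    ```java
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class BomCheck {
        public static void main(String[] args) throws IOException {
            // A UTF-8 BOM (EF BB BF) at the start of a PHP file is emitted to the
            // browser before any header() call, triggering "headers already sent".
            byte[] bytes = Files.readAllBytes(Paths.get("index.php"));
            boolean hasBom = bytes.length >= 3
                    && (bytes[0] & 0xFF) == 0xEF
                    && (bytes[1] & 0xFF) == 0xBB
                    && (bytes[2] & 0xFF) == 0xBF;
            System.out.println(hasBom ? "BOM present - re-save without BOM" : "no BOM found");
        }
    }
    ```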

  • Wrong file encoding after Dist::Zilla

    - by xenoterracide
    How can I get the Mojibake test to pass? This might be a bug in the contributors plugin. The character does not render correctly in perldoc, but does in my vim and in the extracted git log. # Failed test 'Mojibake test for blib/lib/Pod/Spell.pm' # at /home/xenoterracide/perl5/perlbrew/perls/perl-5.18.1/lib/site_perl/5.18.1/Test/Mojibake.pm line 168. # Non-UTF-8 unexpected in blib/lib/Pod/Spell.pm, line 431 (POD) Here's a snippet from the source, which should probably be looked at directly since copy-paste may not preserve the encoding issue: =item * Olivier Mengué <[email protected]> =back A little more vim exploration shows that :set fileencoding is being changed to latin1. Editing the file in vim seems to fix this, but since the file is being generated, how can I get it generated with the correct encoding?

    Read the article

  • Oracle + Java encoding problem while inserting

    - by Ahmad
    Hi, I am kind of stuck on this one. I'm not a Java or Oracle guru, so please give detailed answers :) I have a web service that inserts something into the DB. The web service is hosted on Axis. The DB is Oracle with the following properties: NLS_LANGUAGE AMERICAN NLS_TERRITORY AMERICA NLS_CHARACTERSET ZHS16GBK The web service is hosted on Windows Server 2008, English version, but I have changed the locale of the system to Chinese. Now the data after insert has an encoding problem and shows strange characters like ????,exxk?? The .jws file has GBK encoding, and the data that is inserted into the DB is hard-coded in the file [we are not reading it from the REQUEST].

    Read the article
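
    Since the offending text is hard-coded in a GBK-encoded source file, one plausible failure point is that the file is compiled or read with a different charset, so the literals are already corrupted before JDBC or Oracle's ZHS16GBK character set ever sees them (checking the Oracle client NLS settings is the other half of the diagnosis). A small Java check of that hypothesis:

    ```java
    import java.nio.charset.Charset;

    public class GbkLiteralCheck {
        public static void main(String[] args) throws Exception {
            // Written as escapes so no source-file encoding can mangle it: 测试
            String original = "\u6d4b\u8bd5";

            byte[] gbkBytes = original.getBytes("GBK");
            // What the literal looks like if the GBK source is read with the wrong charset:
            String misread = new String(gbkBytes, Charset.defaultCharset());

            System.out.println("default charset: " + Charset.defaultCharset());
            System.out.println("correct: " + original);
            System.out.println("misread: " + misread); // garbage unless the default happens to be GBK
        }
    }
    ```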
