Search Results

Search found 4604 results on 185 pages for 'utf 8'.

Page 23 of 185

  • AWS RDS MySQL with Beanstalk Hibernate app: Character encoding issue

    - by TeraTon
    I'm running a webapp on Amazon RDS with Tomcat 7 and Spring, which uses Hibernate as the persistence layer. The application and UTF-8 encoding work properly on localhost, but for some reason the UTF-8 encoding breaks when I deploy to Amazon. I use MySQL 5.5.27 on Amazon RDS, and the table that we wish to update has its collation set to utf8 - utf8_unicode_ci. In Hibernate I have set: <prop key="hibernate.connection.charSet">UTF-8</prop>. UTF-8 characters get replaced by ???, and this is of course especially bad for passwords, usernames and email addresses, as it basically destroys them. Has anyone else encountered character encoding breaking when deploying to Amazon?
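
    A common culprit on RDS (a guess, not something confirmed in this thread): the instance keeps latin1 server-side defaults unless a DB parameter group overrides them, and the connector then negotiates latin1, turning anything outside it into ?. The usual first checks are forcing the encoding on the JDBC URL and aligning the parameter group; the variable names below are standard MySQL settings, while the endpoint and database name are placeholders:

        jdbc:mysql://<rds-endpoint>:3306/<db>?useUnicode=true&characterEncoding=UTF-8

        # DB parameter group (server side)
        character_set_server = utf8
        collation_server     = utf8_unicode_ci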

    Read the article

  • Typical text encoding+BOM, and EOL behavior on mobile devices

    - by Dan W
    Typical things to worry about when dealing with text are the BOM/signature, the encoding, and the end-of-line (EOL) character or characters. I know that Windows often favours \r\n (CR+LF) and Mac/Linux favour \n (LF), but what about mobile devices such as the iPhone and Android? Do typical apps on those platforms favour one or the other? Also, which text encodings are mobiles most likely to use: UTF-8, ISO-8859-1, Windows-1252 (or another default codepage), or maybe even UTF-16? And if they use UTF-8/16, are they likely to need a BOM/signature, or to require that one not be present? What is the typical behavior here?
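
    For the BOM part specifically, sniffing the signature is mechanical whatever the platform; a small sketch, not tied to any particular mobile API:

        public final class BomSniffer {
            /** Returns the charset implied by a leading BOM, or null if none is present. */
            public static String sniff(byte[] head) {
                if (head.length >= 3 && (head[0] & 0xFF) == 0xEF
                        && (head[1] & 0xFF) == 0xBB && (head[2] & 0xFF) == 0xBF)
                    return "UTF-8";        // EF BB BF
                if (head.length >= 2 && (head[0] & 0xFF) == 0xFE
                        && (head[1] & 0xFF) == 0xFF)
                    return "UTF-16BE";     // FE FF
                if (head.length >= 2 && (head[0] & 0xFF) == 0xFF
                        && (head[1] & 0xFF) == 0xFE)
                    return "UTF-16LE";     // FF FE
                return null;
            }
        }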

    Read the article

  • EPM 11.1.2 - Issues during configuration when using Oracle DB if not using UTF8

    - by Ahmed A
    If you see issues during configuration when using an Oracle DB that is not UTF8-enabled, here is the workaround:

    a. During configuration of EPM products, a warning message is displayed if the Oracle DB is not UTF8-enabled. If you continue with the configuration, certain products will not work, as they will not be able to read the contents of the tables because the format is wrong.
    b. The Oracle DB must be set up to use AL32UTF8 or a superset that contains AL32UTF8.
    c. The only difference between the AL32UTF8 and UTF8 character sets is that AL32UTF8 stores characters beyond U+FFFF as four bytes (exactly as Unicode defines UTF-8), whereas Oracle's "UTF8" stores these characters as a sequence of two UTF-16 surrogate characters encoded using UTF-8 (six bytes per character). Beyond this storage difference, AL32UTF8 also has better support for supplementary characters.
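
    To see which character set a given Oracle instance is actually using, a standard data-dictionary query (not EPM-specific) does the job:

        SELECT parameter, value
          FROM nls_database_parameters
         WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');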

    Read the article

  • What issues lead people to use Japanese-specific encodings rather than Unicode?

    - by Nicolas Raoul
    At work I come across a lot of Japanese text files in Shift-JIS and other encodings. It causes many mojibake (unreadable character) problems for all computer users. Unicode was intended to solve this sort of problem by defining a single character set for all languages, and the UTF-8 serialization is recommended for use on the Internet. So why doesn't everybody switch from Japanese-specific encodings to UTF-8? What issues with, or disadvantages of, UTF-8 are holding people back? EDIT: The W3C lists some known problems with Unicode; could this be a reason too?

    Read the article

  • How to auto-mount partitions on startup in Xubuntu?

    - by Bas
    I want to auto-mount all my partitions on startup, but I can't get it to work as I'm new to Xubuntu. This is what my fstab file looks like (the trailing $ marks appear to be where the lines were cut off when copied from the terminal):

        proc /proc proc nodev,noexec,nosuid 0 0
        #Entry for /dev/sda1 :
        UUID=3bf842ea-923b-43fe-b5f9-066fc920aaec / ext4 errors=remount-$
        #Entry for /dev/sdb1 :
        UUID=F0C859BDC8598330 /media/Bas ntfs-3g defaults,locale=en_US.UTF-8 $
        #Entry for /dev/sda3 :
        UUID=146213A76D02F7AD /media/sda3 ntfs-3g defaults,locale=en_US.UTF-8 $
        UUID=1C33A98704D941F1 /media/sda3 ntfs-3g defaults,locale=en_US.UTF-8 $
        #Entry for /dev/sda5 :
        UUID=ba48c631-5652-4ce7-85a3-bda96b353ca7 none swap sw 0 $

    The last line I added myself, but it's not working. I spent the whole day trying to figure this out without success. Can anyone help me with this? Thanks in advance.
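
    Two things stand out in that listing (observations, not a tested fix). First, two different UUIDs are given the same mount point /media/sda3, and the duplicate will fail to mount. Second, a complete swap line ends with dump and pass fields; since the trailing $ looks like terminal truncation rather than the actual file contents, the usual full form of that line would be:

        UUID=ba48c631-5652-4ce7-85a3-bda96b353ca7 none swap sw 0 0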

    Read the article

  • Why can't my JSP page read Chinese chars from MySQL? [migrated]

    - by Canking
    The MySQL charset is UTF-8, and the JSP page is also set to UTF-8. I use the method: DriverManager.getConnection("jdbc:mysql://localhost:3306/jsptest?" + "useUnicode=true&characterEncoding=UTF-8", "root", ""); but it does not help. When I insert Chinese characters into MySQL and select them back out, everything works properly. The problem is when I select the Chinese characters that I wrote into MySQL at first: they all come back as ? where the Chinese characters should be. [Screenshot omitted.]
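
    One way to tell whether those first rows are already corrupted on disk (a diagnostic sketch; the table and column names here are placeholders):

        SELECT name, HEX(name) FROM my_table WHERE id = 1;

    If HEX() shows 3F (the byte for '?') where each Chinese character should be, the data was mangled at insert time, probably over a connection that was not yet using characterEncoding=UTF-8, and no SELECT-side setting will bring it back; those rows have to be re-inserted.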

    Read the article

  • How to do proper Unicode and ANSI output redirection on cmd.exe?

    - by Sorin Sbarnea
    If you are doing automation on Windows and you are redirecting the output of different commands (internal to cmd.exe or external), you'll discover that your log files contain mixed Unicode and ANSI output (meaning they are invalid and will not load well in viewers/editors). Is it possible to make cmd.exe work with UTF-8? This question is not about display; it's about stdin/stdout/stderr redirection and Unicode. I am looking for a solution that would allow you to: redirect the output of internal commands to a file as UTF-8; redirect the output of external commands that support Unicode to files, also encoded as UTF-8. If it is impossible to obtain this kind of consistency using batch files, is there another way of solving the problem, like using Python scripting? In that case, I would like to know if it is possible to do the Unicode detection alone (the user of the script should not have to remember whether the called tools output Unicode; the script should just convert the output to UTF-8). For simplicity, we'll assume that if a tool's output is not Unicode, it can be treated as UTF-8 already (no codepage conversion).
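
    Two cmd.exe switches cover part of this (a sketch of known behavior, with the usual caveat that codepage 65001 is flaky on older Windows versions):

        rem internal commands redirect as UTF-8 (no BOM) after switching codepage
        chcp 65001
        dir > listing-utf8.txt

        rem alternatively, /U makes internal command output UTF-16LE on redirect
        cmd /u /c dir > listing-utf16.txt

    Neither switch converts the output of external tools, which write whatever encoding they like; that is why a wrapper script that sniffs each output (BOM or null-byte pattern) and transcodes to UTF-8 is the usual fallback.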

    Read the article

  • MySQL encoding problem

    - by Syom
    I have a problem when inserting something in a foreign language into the database. I have set the collation of the database to utf8_general_ci (and tried utf8_unicode_ci too), but when I insert some text into a table it gets saved like this:

        Õ€Õ¡ÕµÕ¥Ö€Õ¥Õ¶ Ô±Õ¶Õ¸Ö‚Õ¶

    Yet when I read from the database, the text shows in the correct form; it looks like that only inside the database. I have set the encoding in my HTML document to charset=UTF-8:

        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />

    and I run

        mysql_query("SET NAMES UTF-8");
        mysql_query("SET CHARACTER SET UTF-8");

    when connecting to the database. So I think I've done everything, but it is still saved in that unknown format. Could you help me? Thanks in advance.
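
    One likely culprit worth checking before anything else: MySQL's name for the character set is utf8, without the hyphen, so both statements above fail (the unquoted hyphen is not even valid syntax), and mysql_query() simply returns false without raising anything, leaving the connection on its default charset. A corrected sketch:

        mysql_query("SET NAMES utf8");
        mysql_query("SET CHARACTER SET utf8");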

    Read the article

  • How can I convert a .tpl file to a .php file? [closed]

    - by kim
    What do I do?? I am building a site and there is a categories.tpl that I want to go where sitemap.php is. Sorry, I am brand new to all this; let me try to be more clear (I'd show you a picture, but the site is marking it as spam). I have a menu at the top of my site, as with any retail site: [About, Cart, Account, Products]. When you click Products it takes you to the sitemap.php file; however, I need the content from categories.tpl to appear instead. (Categories in PrestaShop is another way of saying products.) Here is the categories.tpl code:

        {include file=$tpl_dir./breadcrumb.tpl}
        {include file=$tpl_dir./errors.tpl}
        {if $category->id AND $category->active}
            {$category->name|escape:'htmlall':'UTF-8'}
            {$nb_products|intval} {if $nb_products > 1}{l s='products'}{else}{l s='product'}{/if}
            {if $scenes}
                <!-- Scenes -->
                {include file=$tpl_dir./scenes.tpl scenes=$scenes}
            {else}
                <!-- Category image -->
                {if $category->id_image}
                    <img src="{$link->getCatImageLink($category->link_rewrite, $category->id_image, 'category')}" alt="{$category->name|escape:'htmlall':'UTF-8'}" title="{$category->name|escape:'htmlall':'UTF-8'}" id="categoryImage" />
                {/if}
            {/if}
            {if $category->description}
                <div class="cat_desc">{$category->description}</div>
            {/if}
            {if isset($subcategories)}
                <!-- Subcategories -->
                <div id="subcategories">
                    <h3>{l s='Subcategories'}</h3>
                    <ul class="inline_list">
                    {foreach from=$subcategories item=subcategory}
                        <li>
                            <a href="{$link->getCategoryLink($subcategory.id_category, $subcategory.link_rewrite)|escape:'htmlall':'UTF-8'}" title="{$subcategory.name|escape:'htmlall':'UTF-8'}">
                            {if $subcategory.id_image}
                                <img src="{$link->getCatImageLink($subcategory.link_rewrite, $subcategory.id_image, 'medium')}" alt="" />
                            {else}
                                <img src="{$img_cat_dir}default-medium.jpg" alt="" />
                            {/if}
                            </a>
                            <br />
                            <a href="{$link->getCategoryLink($subcategory.id_category, $subcategory.link_rewrite)|escape:'htmlall':'UTF-8'}">{$subcategory.name|escape:'htmlall':'UTF-8'}</a>
                        </li>
                    {/foreach}
                    </ul>
                    <br class="clear"/>
                </div>
            {/if}
            {if $products}
                {include file=$tpl_dir./product-sort.tpl}
                {include file=$tpl_dir./product-list.tpl products=$products}
                {include file=$tpl_dir./pagination.tpl}
            {elseif !isset($subcategories)}
                <p class="warning">{l s='There is no product in this category.'}</p>
            {/if}
        {elseif $category->id}
            {l s='This category is currently unavailable.'}
        {/if}

    And here is sitemap.php:

        <?php
        include(dirname(__FILE__).'/config/config.inc.php');
        include(dirname(__FILE__).'/header.php');
        include(dirname(__FILE__).'/product-sort.php');

        $nbProducts = intval(Product::getNewProducts(intval($cookie->id_lang), isset($p) ? intval($p) - 1 : NULL, isset($n) ? intval($n) : NULL, true));
        include(dirname(__FILE__).'/pagination.php');

        $smarty->assign(array(
            'products' => Product::getNewProducts(intval($cookie->id_lang), intval($p) - 1, intval($n), false, $orderBy, $orderWay),
            'nbProducts' => intval($nbProducts)));

        $smarty->display(_PS_THEME_DIR_.'new-products.tpl');

        include(dirname(__FILE__).'/footer.php');
        ?>

    Read the article

  • Decoding not reversing unicode encoding in Django/Python

    - by PhilGo20
    OK, I have a hardcoded string I declare like this: name = u"Par Catégorie". I have a # -*- coding: utf-8 -*- magic header, so I am guessing it's treated as UTF-8. Down the road it's output to XML through xml_output.toprettyxml(indent='....', encoding='utf-8'), and I get: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128). Most of my data is in French and is output correctly in CDATA nodes, but that one hardcoded string keeps breaking. I don't see why an ASCII codec is being called. What's wrong?
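
    The usual mechanism behind that exact error in Python 2 (a sketch; the element names below are made up): when toprettyxml() is given an encoding, it encodes every text node on the way out, and encoding a byte string makes Python first decode it with the default ascii codec, which blows up on the 0xC3 of a UTF-8 "é". The fix is to make sure only unicode objects enter the DOM:

        # -*- coding: utf-8 -*-
        from xml.dom.minidom import Document

        doc = Document()
        root = doc.createElement("categories")
        doc.appendChild(root)

        raw = "Par Cat\xc3\xa9gorie"                                # str holding UTF-8 bytes
        root.appendChild(doc.createTextNode(raw.decode("utf-8")))   # unicode in, no implicit ascii step

        xml_bytes = doc.toprettyxml(indent="    ", encoding="utf-8")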

    Read the article

  • Sinatra / Rack fails with non-ascii characters in url

    - by Piotr Zolnierek
    I am getting Encoding::UndefinedConversionError at /find/Wroclaw: "\xC5" from ASCII-8BIT to UTF-8. For some mysterious reason Sinatra is passing the string as ASCII-8BIT instead of UTF-8 as it should. I have found a somewhat ugly workaround: I don't know why Rack assumes the encoding is ASCII-8BIT, but one way around it is to call string.force_encoding("UTF-8"), though doing this for every param is tedious.

    Read the article

  • not sure if this is my mistake ~~ vs2010 Add Service Reference fails

    - by gerryLowry
    The link https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter is from Taleo's API guide. I'm trying to create a WCF client (e.g. "Creating Your First WCF Client", http://channel9.msdn.com/shows/Endpoint/Endpoint-Screencasts-Creating-Your-First-WCF-Client/ ). Likely my understanding is flawed: my assumption is that when the link from Taleo is entered into the vs2010 "Add Service Reference" dialog and GO is clicked, vs2010 should retrieve a proper WSDL/SOAP envelope back from the Taleo link. That does not happen; instead an error occurs. Fiddler2 (http://fiddler2.com) displays status code 500, "HTTP/1.1 500 Internal Server Error" [full details below]. WcfTestClient.exe gives a similar error [WcfTestClient details below]. QUESTION: is it me, or is the Taleo link flawed? Thank you, Gerry

    [FULL DETAILS: "Add Service Reference"] The HTML document does not contain Web service discovery information. Metadata contains a reference that cannot be resolved: 'https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter'. The content type text/xml;charset=utf-8 of the response message does not match the content type of the binding (application/soap+xml; charset=utf-8). If using a custom encoder, be sure that the IsContentTypeSupported method is implemented properly. The first 544 bytes of the response were: 'SOAP-ENV:Protocol Unsupported content type "application/soap+xml; charset=utf-8", must be: "text/xml". /MANAGER/dispatcher/servlet/rpcrouter'. The remote server returned an error: (500) Internal Server Error. If the service is defined in the current solution, try building the solution and adding the service reference again.

    [WcfTestClient DETAILS] Error: Cannot obtain Metadata from https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter. If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455. WS-Metadata Exchange Error, URI: https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter. Metadata contains a reference that cannot be resolved: 'https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter'. The content type text/xml;charset=utf-8 of the response message does not match the content type of the binding (application/soap+xml; charset=utf-8). If using a custom encoder, be sure that the IsContentTypeSupported method is implemented properly. The first 544 bytes of the response were: 'SOAP-ENV:Protocol Unsupported content type "application/soap+xml; charset=utf-8", must be: "text/xml". /MANAGER/dispatcher/servlet/rpcrouter'. The remote server returned an error: (500) Internal Server Error. HTTP GET Error, URI: https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter. The HTML document does not contain Web service discovery information.
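
    The content-type pair in the error is informative: application/soap+xml is SOAP 1.2, while the server insists on text/xml, which is SOAP 1.1. That usually means the endpoint is a plain SOAP 1.1 RPC router that publishes no WSDL at that address, so "Add Service Reference" has nothing to discover. A hedged client-side sketch, assuming the WSDL can be obtained some other way (for example from the API guide) and a contract generated from it; the contract name below is a placeholder. basicHttpBinding speaks SOAP 1.1 over text/xml:

        <system.serviceModel>
          <client>
            <endpoint address="https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter"
                      binding="basicHttpBinding"
                      contract="ITaleoRpcRouter" />
          </client>
        </system.serviceModel>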

    Read the article

  • Is there a way to generate a PDF containing non-ASCII symbols with pisa from a Django template?

    - by mihailt
    Hi. I'm trying to generate a PDF from a template using this snippet:

        def write_pdf(template_src, context_dict):
            template = get_template(template_src)
            context = Context(context_dict)
            html = template.render(context)
            result = StringIO.StringIO()
            pdf = pisa.pisaDocument(StringIO.StringIO(html.encode("UTF-8")), result)
            if not pdf.err:
                return http.HttpResponse(result.getvalue(), mimetype='application/pdf')
            raise Exception('PDF error')

    but all non-Latin symbols come out wrong, even though the template and view are saved using UTF-8 encoding. I've tried saving the view as ANSI and then using unicode(html, "UTF-8"), but it throws a TypeError. I also thought that maybe the default fonts somehow do not support UTF-8, so following the pisa documentation I tried to set a fontface in the template body's style section. That still gave no results. Does anyone have some ideas how to solve this issue?
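
    On the font side, pisa's built-in fonts have no glyphs outside Latin-1; the route the pisa documentation points at, registering a TrueType font via @font-face in the template's stylesheet and using it everywhere, is the usual fix. A sketch (the font path is an assumption; any TTF with the needed glyphs works):

        <style type="text/css">
            @font-face {
                font-family: DejaVu;
                src: url("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf");
            }
            body { font-family: DejaVu; }
        </style>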

    Read the article

  • No Norwegian characters in LaTeX

    - by DreamCodeR
    Hi, I have translated a document from English to Norwegian in LaTeX, and while using Norwegian special characters I get an error when using \usepackage[utf8x]{inputenc} to try to display the Norwegian (Scandinavian) special characters in PostScript/PDF/DVI output, saying: Package utf8x Error: Malformed UTF-8 sequence. Since that didn't work, I tried another possible solution:

        \usepackage{ucs}
        \usepackage[norsk]{babel}

    When I tried to save that in Emacs I got this message:

        These default coding systems were tried to encode text in the buffer `lol.tex':
        (utf-8-unix (905 . 4194277) (916 . 4194245) (945 . 4194278) (950 . 4194277)
        (954 . 4194296) (990 . 4194277) (1010 . 4194277) (1013 . 4194278)
        (1051 . 4194277) (1078 . 4194296) (1105 . 4194296))
        However, each of them encountered characters it couldn't encode:
        utf-8-unix cannot encode these: \345 \305 \346 \345 \370 \345 \345 \346 \345 \370 ...

    Thanks to Emacs I can check the properties of those characters, and the first one tells me:

        character: \345 (4194277, #o17777745, #x3fffe5)
        preferred charset: eight-bit (Raw bytes 128-255)
        code point: 0xE5
        syntax: w which means: word
        buffer code: #xE5
        file code: not encodable by coding system utf-8-unix
        display: not encodable for terminal

    which doesn't tell me much. When I build this with texi2dvi --dvipdf filename.text I get a perfectly fine PDF, except without the Norwegian special characters. When I am about to save, Emacs also asks me: "Select coding system (default raw-text):" and I type in utf-8 to choose its coding system. I have also tried choosing the default raw-text to see if I get a different result, but nothing. At last I tried

        \lstset{inputencoding=utf8x, extendedchars=\true}

    a line I came across while trying to google a solution to this problem, which gives me this error: Undefined control sequence. So basically, I have tried every encoding option I have been able to find and nothing works. I am desperately trying to make this work, since the Norwegian translation must be published before the deadline. As additional information I may add that I found out later that I only had en_US.UTF-8 in my locale, so I added nb_NO.UTF-8 and nb_NO.ISO-8859-15 and ran locale-gen plus a reboot, without any changes. I hope I provided enough information to get some assistance; the characters in question are æ, ø and å.
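
    The Emacs dump above actually contains the diagnosis: \345, \346 and \370 are raw bytes 0xE5, 0xE6 and 0xF8, which are exactly å, æ and ø in Latin-1. In other words the buffer holds Latin-1 bytes, so declaring utf8x in inputenc makes LaTeX choke on them ("Malformed UTF-8 sequence"). Either re-save the file as genuine UTF-8 or declare what the file really is; a sketch of the two consistent setups:

        % if the .tex file is saved as Latin-1 / Latin-9:
        \usepackage[latin1]{inputenc}   % or [latin9] for ISO-8859-15
        \usepackage[T1]{fontenc}
        \usepackage[norsk]{babel}

        % if the .tex file is re-saved as UTF-8:
        \usepackage[utf8]{inputenc}
        \usepackage[T1]{fontenc}
        \usepackage[norsk]{babel}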

    Read the article

  • IWebBrowser: How to specify the encoding when loading html from a stream?

    - by Ian Boyd
    Using the concepts from the sample code provided by Microsoft for loading HTML content into an IWebBrowser from an IStream using the web browser's IPersistStreamInit interface:

        HRESULT LoadWebBrowserFromStream(IWebBrowser* pWebBrowser, IStream* pStream)
        {
            [snip]
        }

    how can one specify the encoding of the HTML inside the IStream? The IStream will contain a series of bytes, but the problem is what those bytes represent. They could, for example, contain bytes where:

        - each byte represents a character from the current Windows code page (e.g. 1252);
        - each byte represents a character from the ISO-8859-1 character set;
        - the bytes represent UTF-8 encoded characters;
        - every 2 bytes represent a character, using UTF-16 encoding.

    In my particular case, I am providing the IWebBrowser an IStream that contains a series of double-byte characters (UTF-16), but the browser (incorrectly) believes that UTF-8 encoding is in effect. This results in garbled characters.
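
    One approach that sidesteps the guessing (a sketch, not part of Microsoft's sample): put a byte-order mark at the front of the stream, since MSHTML honors a BOM when it sniffs the encoding. For the UTF-16 case described above:

        // utf16Text / cbText stand in for the caller's UTF-16LE buffer and its
        // byte length; write the UTF-16LE BOM first so MSHTML detects the encoding
        const BYTE bom[2] = { 0xFF, 0xFE };
        ULONG written = 0;
        HRESULT hr = pStream->Write(bom, sizeof(bom), &written);
        if (SUCCEEDED(hr))
            hr = pStream->Write(utf16Text, cbText, &written);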

    Read the article

  • How do I write a std::codecvt facet?

    - by Billy ONeal
    How do I write a std::codecvt facet? I'd like to write ones that go from UTF-16 to UTF-8, from UTF-16 to the system's current code page (Windows, so CP_ACP), and from UTF-16 to the system's OEM code page (Windows, so CP_OEMCP). Cross-platform is preferred, but MSVC on Windows is fine too. Are there any tutorials or anything of that nature on how to use this class correctly?
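
    For the UTF-16/UTF-8 leg specifically, it is worth noting that C++11 later added ready-made facets (deprecated again in C++17), so a hand-written facet is only needed for the CP_ACP/CP_OEMCP legs, which on Windows still come down to MultiByteToWideChar/WideCharToMultiByte. A usage sketch of the standard facet:

        #include <codecvt>
        #include <locale>
        #include <string>

        int main() {
            // wraps std::codecvt_utf8_utf16, converting in both directions
            std::wstring_convert<std::codecvt_utf8_utf16<char16_t>, char16_t> conv;
            std::string utf8 = conv.to_bytes(u"caf\u00e9");   // UTF-16 -> UTF-8
            std::u16string utf16 = conv.from_bytes(utf8);     // UTF-8 -> UTF-16
            return 0;
        }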

    Read the article

  • Freemarker: lack of special character in email subject template causes email content to break

    - by freakman
    I'm fighting a strange error. I'm using separate FreeMarker templates for the mail subject and body, sent using org.springframework.mail.javamail.JavaMailSender. Only templates that contain some special Swedish character work in my application (yes, you read that right, not the other way around). If I delete it, my email content breaks and then contains:

        MIME-Version: 1.0
        Content-Type: text/html;charset=UTF-8
        Content-Transfer-Encoding: 7bit
        .. html code here ..

    My freemarker.properties file:

        locale=sv_SE
        classic_compatible=false
        number_format=
        date_format=yyyy-MM-dd
        time_format=HH:mm
        datetime_format=yyyy-MM-dd HH:mm
        output_encoding=UTF-8
        url_escaping_charset=UTF-8
        auto_import=spring.ftl as spring
        auto_include=
        default_encoding=UTF-8
        localized_lookup=true
        strict_syntax=true
        whitespace_stripping=true
        template_update_delay=10

    I've tried converting the subject file with the dos2unix tool. Using file -bi subject.ftl shows that the encoding is us-ascii; with the special character added, utf-8. This is surprisingly strange to me.

    SOLUTION: use :set bomb and save the file in Vim.

    Read the article

  • File Encoding handling in Eclipse 3.5

    - by Cédric Girard
    Hi, I use Eclipse 3.5 on Windows with the PDT and Subclipse plugins, with both legacy projects using ISO-8859-1 (Latin-1) encoding and newer ones which use UTF-8. I configured my workspace to use UTF-8, and I configured the old projects to use Latin-1, but every time I open an old project it uses UTF-8. With a workspace using Latin-1 by default, I have the same problem: UTF-8 projects get edited as ISO-8859-1. My encoding choice is written to the file .settings/org.eclipse.core.resources.prefs but seems never to be read. The only solution for now is to keep a Latin-1 workspace and a UTF-8 one. Any better idea? Regards, Cédric
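
    For reference, a sketch of what the project-scoped prefs file is expected to contain (the key names are Eclipse's own; the project name is a placeholder):

        # .settings/org.eclipse.core.resources.prefs
        eclipse.preferences.version=1
        encoding/<project>=ISO-8859-1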

    Read the article

  • Python: UTF-16LE file to UTF-8 encoding

    - by Qiao
    I have a big file with UTF-16LE (BOM) encoding. Is it possible to convert it to plain UTF-8 with Python? Something like:

        file_old = open('old.txt', mode='r', encoding='utf_16_le')
        file_new = open('new.txt', mode='w', encoding='utf-8')
        text = file_old.read()
        file_new.write(text.encode('utf-8'))

    (See http://docs.python.org/release/2.3/lib/node126.html: codec utf_16_le is UTF-16LE.) It's not working, and I can't understand the "TypeError: must be str, not bytes" error. This is Python 3.
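
    In Python 3, a file opened in text mode with an encoding argument does the encoding itself: read() already returns str, and write() wants str, so the manual .encode('utf-8') hands it bytes, hence the TypeError. A sketch of the fix:

        # text-mode files encode on write; pass the str straight through
        with open('old.txt', mode='r', encoding='utf_16_le') as src, \
             open('new.txt', mode='w', encoding='utf-8') as dst:
            dst.write(src.read())
        # note: encoding='utf-16' (without LE) would also consume the BOM automatically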

    Read the article

  • Send javax.mail.internet.MimeMessage to a recipient with non-ASCII name?

    - by phyzome
    I am writing a piece of Java code that needs to send mail to users with non-ASCII names. I have figured out how to use UTF-8 for the body, subject line, and generic headers, but I am still stuck on the recipients. Here's what I'd like in the "To:" field: "????????????" <foo@example.com> (the display name is non-ASCII and shows here as question marks). This lives, for our purposes today, in a String called recip.

    msg.addRecipients(MimeMessage.RecipientType.TO, recip) gives "?????S]" <foo@example.com>

    msg.addHeader("To", MimeUtility.encodeText(recip, "utf-8", "B")) throws AddressException: Local address contains control or whitespace in string ``=?utf-8?B?IuOCpuOCo+OCreODmuODh+OCo+OCouOBq+OCiOOBhuOBk+OBnSIgPA==?= =?utf-8?B?Zm9vQGV4YW1wbGUuY29tPg==?=''

    How the heck am I supposed to send this message?
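
    The second attempt fails because encodeText() is meant for unstructured header text, not a whole address: the encoded words end up covering the address itself, which the address parser then rejects. The usual fix is to let InternetAddress encode just the display name (a sketch; foo@example.com is the address recoverable from the base64 in the error above, and name is a placeholder):

        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

        String name = "..."; // the non-ASCII display name
        InternetAddress to = new InternetAddress("foo@example.com", name, "UTF-8");
        msg.addRecipient(MimeMessage.RecipientType.TO, to);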

    Read the article

  • Losing session after Login - Java

    - by Patrick Villela
    I'm building an application that needs to log in to a certain page and then navigate further. I can log in, provided that the response contains a string that identifies it. But when I navigate to the second page, I can't see the page as a logged-in user, only as anonymous. Here is my code:

        import java.net.*;
        import java.security.*;
        import java.security.cert.*;
        import javax.net.ssl.*;
        import java.io.*;
        import java.util.*;

        public class PostTest {

            static HttpsURLConnection conn = null;

            private static class DefaultTrustManager implements X509TrustManager {
                @Override
                public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}

                @Override
                public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}

                @Override
                public X509Certificate[] getAcceptedIssuers() { return null; }
            }

            public static void main(String[] args) {
                try {
                    SSLContext ctx = SSLContext.getInstance("TLS");
                    ctx.init(new KeyManager[0], new TrustManager[] {new DefaultTrustManager()}, new SecureRandom());
                    SSLContext.setDefault(ctx);

                    String data = URLEncoder.encode("txtUserName", "UTF-8") + "=" + URLEncoder.encode(/*username*/, "UTF-8");
                    data += "&" + URLEncoder.encode("txtPassword", "UTF-8") + "=" + URLEncoder.encode(/*password*/, "UTF-8");
                    data += "&" + URLEncoder.encode("envia", "UTF-8") + "=" + URLEncoder.encode("1", "UTF-8");

                    connectToSSL(/*login url*/);
                    conn.setDoOutput(true);
                    OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream());
                    wr.write(data);
                    wr.flush();

                    BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                    String line;
                    String resposta = "";
                    while ((line = rd.readLine()) != null) {
                        resposta += line + "\n";
                    }
                    System.out.println("valid login -> " + resposta.contains(/*string that assures me I'm logged in*/));

                    connectToSSL(/*first navigation page*/);
                    rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                    while ((line = rd.readLine()) != null) {
                        System.out.println(line);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            private static void connectToSSL(String address) {
                try {
                    URL url = new URL(address);
                    conn = (HttpsURLConnection) url.openConnection();
                    conn.setHostnameVerifier(new HostnameVerifier() {
                        @Override
                        public boolean verify(String arg0, SSLSession arg1) {
                            return true;
                        }
                    });
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        }

    Any further information, just ask. Thanks in advance.
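
    A likely reason for the anonymous second page (not confirmed, but it fits the code): each call to url.openConnection() starts a fresh exchange, and HttpsURLConnection does not remember cookies on its own, so the session cookie set at login never accompanies the follow-up request. Installing a process-wide cookie store before the first connection is the smallest fix:

        import java.net.CookieHandler;
        import java.net.CookieManager;
        import java.net.CookiePolicy;

        // replay session cookies (e.g. JSESSIONID) across connections
        CookieHandler.setDefault(new CookieManager(null, CookiePolicy.ACCEPT_ALL));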

    Read the article

  • How to enable reading non-ASCII characters in Servlets

    - by Daziplqa
    How do I make a servlet accept non-ASCII (Arabic, Chinese, etc.) characters passed from JSPs? I've tried adding the following at the top of the JSPs:

        <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %>

    and adding the following in each post/get method in the servlet:

        request.setCharacterEncoding("UTF-8");
        response.setCharacterEncoding("UTF-8");

    I've also tried adding a Filter that executes the above two statements instead of doing it in the servlet. To be quite honest, this was working in the past, but now it doesn't work anymore. I am using Tomcat 5.0.28/6.x.x on JDK 1.6, on both Windows and Linux boxes.
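
    One piece that page directives and setCharacterEncoding() do not cover: in Tomcat 5/6, GET query strings are decoded with the connector's URIEncoding (ISO-8859-1 by default), not with the request's character encoding, which only applies to POST bodies. A sketch of the server.xml change, with the other connector attributes elided:

        <Connector port="8080" protocol="HTTP/1.1" URIEncoding="UTF-8" />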

    Read the article

  • Special characters incongruence

    - by Enrique
    Hello, I'm building a Spring MVC web application that runs on Tomcat 6.0.20 and JDK 1.6.0_19. When I send some special characters through an HTML form, some of them are stored as question marks (?). For example, these symbols are stored correctly: €, á, é, í, ‰, etc., but some symbols are replaced with ?, like: £, ?, ?. The MySQL tables' charset is utf8. My JSPs also use UTF-8:

        <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %>

    I have included org.springframework.web.filter.CharacterEncodingFilter in web.xml, as suggested here. When I debug the POST request sending the 3 characters €a£ with Firebug, I get %E2%82%ACa%E2%82%A4, which is correct since E2 82 AC is the code for € and E2 82 A4 is the code for £, but £ is stored as ? in the database. When I save £ directly into the database, it is displayed correctly in the webpage. How can I fix this?
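
    A pattern worth noticing here (a diagnosis sketch, not confirmed): the characters that survive (€, á, é, í, ‰) are precisely ones that exist in Windows-1252, which is what MySQL calls latin1, while the posted bytes %E2%82%A4 are actually U+20A4 ₤ (LIRA SIGN), a character cp1252 lacks (£ itself is U+00A3 and would encode as %C2%A3). That split is the classic signature of a JDBC connection negotiating latin1 and converting on the way in; forcing UTF-8 on the connection is the usual first fix:

        jdbc:mysql://localhost:3306/<db>?useUnicode=true&characterEncoding=UTF-8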

    Read the article

  • In the JSON spec, what does "Since the first two characters of a JSON text will always be ASCII characters" mean?

    - by dan gibson
    The spec is http://www.ietf.org/rfc/rfc4627.txt?number=4627. It contains this: "Encoding. JSON text SHALL be encoded in Unicode. The default encoding is UTF-8. Since the first two characters of a JSON text will always be ASCII characters [RFC0020], it is possible to determine whether an octet stream is UTF-8, UTF-16 (BE or LE), or UTF-32 (BE or LE) by looking at the pattern of nulls in the first four octets." What does "Since the first two characters of a JSON text will always be ASCII characters [RFC0020]" mean? I've looked at RFC0020 but couldn't find anything about it. JSON could start with {" or { " (i.e. whitespace before the quote).
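
    For reference, the detection table from section 3 of the RFC, where xx is any non-null octet:

        00 00 00 xx   UTF-32BE
        00 xx 00 xx   UTF-16BE
        xx 00 00 00   UTF-32LE
        xx 00 xx 00   UTF-16LE
        xx xx xx xx   UTF-8

    The "first two characters are ASCII" premise is what makes this work: an RFC 4627 JSON text is an object or array, so its first two characters are drawn from a small set of structural characters all below U+0080, and any nulls among the first four octets can only come from the encoding, not the data. (The premise ignores leading whitespace, which is one of the things the question is poking at.)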

    Read the article

  • How to do i18n and create Windows Installer of Haskell programs?

    - by Aufheben
    I'm considering using Haskell for a little commercial project. The program must be internationalized (to Simplified Chinese, to be specific), and my customer requests that it be delivered as a one-click Windows installer. So basically these are the two problems I'm facing:

    1. I18n of Haskell programs: the method described in "Internationalization of Haskell programs" worked (partially) once I changed the command for running the program from LOCALE=zh_CN.UTF-8 ./Main to LANG=zh_CN.UTF-8 ./Main (I'm working on Ubuntu 10.10); however, the Chinese output is garbled, and I've no idea why.

    2. Distribution on Windows: I'm used to working under Linux and building and packaging my Haskell programs with Cabal, but what is the most natural way to create a one-click Windows installer from a cabalized Haskell package? Is the package bamse the right way to go?

    ------ Details for the first problem ------

    What I did was:

        $ hgettext -k __ -o messages.pot Main.hs
        $ msginit --input=messages.pot --locale=zh_CN.UTF-8
        (edit the zh_CN.po file, adding the Chinese translation)
        $ mkdir -p zh_CN/LC_MESSAGES
        $ msgfmt --output-file=zh_CN/LC_MESSAGES/hello.mo zh_CN.po
        $ ghc --make Main.hs
        $ LANG=zh_CN.UTF-8 ./Main

    and the output was garbled [screenshot omitted]. This indicates gettext is actually working, but for some reason the generated zh_CN.mo file is broken (my guess). I'm pretty sure my zh_CN.po file is encoded in UTF-8. Also, aside from System.IO.putStrLn, I tried System.IO.UTF8.putStrLn to output the string, which didn't work either.
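
    One way to separate the gettext side from the output side (a sketch; assumes GHC >= 6.12, where Handle encodings exist): if hard-coded Chinese prints correctly once the handle encoding is forced, the .mo pipeline is the culprit; if not, the runtime's output encoding is.

        import System.IO (hSetEncoding, stdout, utf8)

        main :: IO ()
        main = do
          hSetEncoding stdout utf8   -- force UTF-8 output regardless of locale
          putStrLn "\20320\22909"    -- the characters U+4F60 U+597D as numeric escapes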

    Read the article
