Search Results

Search found 4604 results on 185 pages for 'utf 8'.

Page 16 of 185

  • win32 read java preference from c++ code

    - by Jayan
    One of our programs writes program information (window title, memory, etc.) to Java Preferences. On Windows this ends up in the registry. How can I read the values written by the Java program from C (or C++)? It looks like the API I should use is RegGetValue. Is that guaranteed to work on 32-bit Windows XP? The strings written by Java are UTF-8 encoded. How do I read such strings on Windows (Win32 or VC++)? Cheers, Jayan
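
    The question asks for C/C++, where the usual combination would be RegGetValue followed by MultiByteToWideChar(CP_UTF8, ...) to turn the UTF-8 bytes into a wide string. As a language-neutral sketch of the registry side, here is the same idea in Python via the stdlib winreg module; the key path and value name are hypothetical (Java's user preferences normally live under HKEY_CURRENT_USER\Software\JavaSoft\Prefs, but the exact node and escaping depend on the program):

        import winreg

        key_path = r"Software\JavaSoft\Prefs\com\example\myapp"   # hypothetical node
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path) as key:
            raw, value_type = winreg.QueryValueEx(key, "windowTitle")  # hypothetical value name
            # REG_SZ values arrive as str; raw bytes (REG_BINARY) would need an explicit UTF-8 decode.
            title = raw.decode("utf-8") if isinstance(raw, (bytes, bytearray)) else raw
            print(title)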

  • PHP utf8 encoding problem

    - by shyam
    How can I encode strings in UTF-16BE format in PHP? For "Demo Message!!!" the encoded string should be '00440065006D006F0020004D00650073007300610067006'. Also, I need to encode Arabic characters into this format.
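
    The question asks for PHP (where mb_convert_encoding($str, 'UTF-16BE', 'UTF-8') is the usual tool, assuming the mbstring extension is available). As a language-neutral sketch of the expected byte layout, in Python:

        # Encode a string as UTF-16BE and render it as an uppercase hex string.
        def to_utf16be_hex(text: str) -> str:
            return text.encode("utf-16-be").hex().upper()

        print(to_utf16be_hex("Demo Message!!!"))
        # 00440065006D006F0020004D006500730073006100670065002100210021

    Arabic text works the same way: every character in the Basic Multilingual Plane becomes exactly two bytes in UTF-16BE.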

  • ForceType text/html;charset=utf-8 in .htaccess is causing all externally linked CSS to stop rendering

    - by chimerical
    I'm using: ForceType text/html;charset=utf-8 in my .htaccess file, but it's causing all externally linked CSS to stop rendering on their respective pages. So something like this: <link rel="stylesheet" type="text/css" href="somestyles.css" media="all" /> no longer works on the page using it. I've also been trying combinations of: AddCharset utf-8 .html AddCharset utf-8 .htm AddCharset utf-8 .css AddCharset utf-8 .js ForceType text/html;charset=utf-8 ForceType text/css;charset=utf-8 but no luck so far. Does anyone know what's wrong?
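
    A likely cause is that ForceType in .htaccess applies to every file in that directory, so the stylesheets are served with Content-Type: text/html and the browser refuses to apply them. A sketch of a narrower configuration (standard mod_mime/core directives, untested here and assuming AllowOverride permits them):

        <FilesMatch "\.html?$">
            ForceType text/html;charset=utf-8
        </FilesMatch>
        AddCharset utf-8 .css .js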

  • JAX-WS errors when SOAP body contains UTF-8 BOM

    - by Vinny Carpenter
    I have developed a Web Service using JAX-WS (v2.1.3 - Sun JDK 1.6.0_05) deployed on WebLogic 10.3 that works just fine when I use a Java client or SoapUI or other Web Services testing tools. I need to consume this service using 2005 Microsoft SQL Server Reporting Services and I get the following error Couldn't create SOAP message due to exception: XML reader error: unexpected character content SEVERE: Couldn't create SOAP message due to exception: XML reader error: unexpected character content: "?" com.sun.xml.ws.protocol.soap.MessageCreationException: Couldn't create SOAP message due to exception: XML reader error: unexpected character content: "?" at com.sun.xml.ws.encoding.SOAPBindingCodec.decode(SOAPBindingCodec.java:292) at com.sun.xml.ws.transport.http.HttpAdapter.decodePacket(HttpAdapter.java:276) at com.sun.xml.ws.transport.http.HttpAdapter.access$500(HttpAdapter.java:93) at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:432) at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:244) at com.sun.xml.ws.transport.http.servlet.ServletAdapter.handle(ServletAdapter.java:134) at com.sun.xml.ws.transport.http.servlet.WSServletDelegate.doGet(WSServletDelegate.java:129) at com.sun.xml.ws.transport.http.servlet.WSServletDelegate.doPost(WSServletDelegate.java:160) at com.sun.xml.ws.transport.http.servlet.WSServlet.doPost(WSServlet.java:75) at javax.servlet.http.HttpServlet.service(HttpServlet.java:727) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3498) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(Unknown Source) at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180) at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086) at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) at weblogic.work.ExecuteThread.run(ExecuteThread.java:173) Caused by: com.sun.xml.ws.streaming.XMLStreamReaderException: XML reader error: unexpected character content: "?" at com.sun.xml.ws.streaming.XMLStreamReaderUtil.nextElementContent(XMLStreamReaderUtil.java:102) at com.sun.xml.ws.encoding.StreamSOAPCodec.decode(StreamSOAPCodec.java:174) at com.sun.xml.ws.encoding.StreamSOAPCodec.decode(StreamSOAPCodec.java:296) at com.sun.xml.ws.encoding.StreamSOAPCodec.decode(StreamSOAPCodec.java:128) at com.sun.xml.ws.encoding.SOAPBindingCodec.decode(SOAPBindingCodec.java:287) ... 22 more If I use a HTTP proxy to sniff out what SSRS is sending to JAX-WS, I see EF BB BF as the beginning of the post body and JAX-WS doesn't like that. If I remove the special characters and resubmit the request using Fiddler, then the web-service invocation works. Why does JAX-WS blow up with the standard UTF-8 BOM? Is there a workaround to get past this issue? Any suggestions would be greatly appreciated. Thanks --Vinny
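
    A common workaround on the receiving side is to strip the UTF-8 byte order mark (EF BB BF) from the request body before it reaches the XML parser, for example in a servlet filter that wraps the input stream. A minimal sketch of just the byte handling, in Python:

        import codecs

        def strip_utf8_bom(body: bytes) -> bytes:
            # codecs.BOM_UTF8 is b'\xef\xbb\xbf', the bytes SSRS is prepending.
            if body.startswith(codecs.BOM_UTF8):
                return body[len(codecs.BOM_UTF8):]
            return body

        print(strip_utf8_bom(b"\xef\xbb\xbf<soap:Envelope/>"))   # b'<soap:Envelope/>'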

  • Convert non-breaking spaces to spaces in Ruby

    - by CoolAJ86
    I have cases where user-entered data from an HTML textarea or input is sometimes sent with \u00a0 (non-breaking spaces) instead of spaces when encoded as UTF-8 JSON. I believe that to be a bug in Firefox, as I know the user isn't intentionally typing non-breaking spaces instead of spaces. There are also two bugs in Ruby, one of which can be used to combat the other: for whatever reason \s doesn't match \u00a0, yet [^[:print:]] (which definitely should not match) and \xC2\xA0 both will, but I consider those less-than-ideal ways to deal with the issue. Are there other recommendations for getting around this?
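
    The question is about Ruby, but the underlying fix is language-agnostic: replace U+00A0 explicitly after decoding the JSON rather than relying on a character class. A minimal sketch in Python:

        import json

        raw = '{"comment": "foo\\u00a0bar"}'             # JSON carrying a non-breaking space
        data = json.loads(raw)
        cleaned = data["comment"].replace("\u00a0", " ")  # NBSP -> ordinary space
        print(cleaned)                                    # 'foo bar'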

  • std::basic_stringstream<unsigned char> won't compile with MSVC 10

    - by Michael J
    I'm trying to get UTF-8 chars to co-exist with ANSI 8-bit chars. My strategy has been to represent utf-8 chars as unsigned char so that appropriate overloads of functions can be used for the two character types. e.g. namespace MyStuff { typedef uchar utf8_t; typedef std::basic_string<utf8_t> U8string; } void SomeFunc(std::string &s); void SomeFunc(std::wstring &s); void SomeFunc(MyStuff::U8string &s); This all works pretty well until I try to use a stringstream. std::basic_ostringstream<MyStuff::utf8_t> ostr; ostr << 1; MSVC Visual C++ Express V10 won't compile this: c:\program files\microsoft visual studio 10.0\vc\include\xlocmon(213): warning C4273: 'id' : inconsistent dll linkage c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(65) : see previous definition of 'public: static std::locale::id std::numpunct<unsigned char>::id' c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(65) : while compiling class template static data member 'std::locale::id std::numpunct<_Elem>::id' with [ _Elem=Tk::utf8_t ] c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(1149) : see reference to function template instantiation 'const _Facet &std::use_facet<std::numpunct<_Elem>>(const std::locale &)' being compiled with [ _Facet=std::numpunct<Tk::utf8_t>, _Elem=Tk::utf8_t ] c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(1143) : while compiling class template member function 'std::ostreambuf_iterator<_Elem,_Traits> std::num_put<_Elem,_OutIt>:: do_put(_OutIt,std::ios_base &,_Elem,std::_Bool) const' with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t>, _OutIt=std::ostreambuf_iterator<Tk::utf8_t,std::char_traits<Tk::utf8_t>> ] c:\program files\microsoft visual studio 10.0\vc\include\ostream(295) : see reference to class template instantiation 'std::num_put<_Elem,_OutIt>' being compiled with [ _Elem=Tk::utf8_t, _OutIt=std::ostreambuf_iterator<Tk::utf8_t,std::char_traits<Tk::utf8_t>> ] c:\program files\microsoft visual studio 10.0\vc\include\ostream(281) : while compiling class template member function 'std::basic_ostream<_Elem,_Traits> & std::basic_ostream<_Elem,_Traits>::operator <<(int)' with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t> ] c:\program files\microsoft visual studio 10.0\vc\include\sstream(526) : see reference to class template instantiation 'std::basic_ostream<_Elem,_Traits>' being compiled with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t> ] c:\users\michael\dvl\tmp\console\console.cpp(23) : see reference to class template instantiation 'std::basic_ostringstream<_Elem,_Traits,_Alloc>' being compiled with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t>, _Alloc=std::allocator<uchar> ] . c:\program files\microsoft visual studio 10.0\vc\include\xlocmon(213): error C2491: 'std::numpunct<_Elem>::id' : definition of dllimport static data member not allowed with [ _Elem=Tk::utf8_t ] Any ideas? ** Edited 19 June 2012 ** OK, I've gotten closer to understanding this, but not how to solve it. As we all know, static class variables get defined twice: once in the class definition and once outside the class definition which establishes storage space. e.g. // in .h file class CFoo { // ... static int x; }; // in .cpp file int CFoo::x = 42; Now in the VC10 headers we get something like this: template<class _Elem> class numpunct : public locale::facet { // ... _CRTIMP2_PURE static locale::id id; // ... 
} When the header is included in an application, _CRTIMP2_PURE is defined as __declspec(dllimport), which means that the variable is imported from a dll. Now the header also contains the following template<class _Elem> locale::id numpunct<_Elem>::id; Note the absence of the __declspec(dllimport) qualifier. i.e. The class declaration says that the static linkage of the id variable is in the dll, but for the general case, it gets declared outside the dll. For the known cases, there are specialisations. template locale::id numpunct<char>::id; template locale::id numpunct<wchar_t>::id; These are protected by #ifs so that they are only included when building the DLL. They are excluded otherwise. i.e. the char and wchar_t versions of numpunct ARE inside the dll So we have the class definition saying that id's storage is in the DLL, but that is only true for the char and wchar_t specialisations, meaning that my unsigned char version is doomed. :-( The only way forward that I can think of is to create my own specialisation: basically copying it from the header file and fixing it. This raises many issues. Anybody have a better idea?

  • Unicode characters in URLs

    - by Pekka
    In 2010, would you serve URLs containing UTF-8 characters in a large web portal? Unicode characters are forbidden as per the RFC on URLs (see here). They would have to be percent-encoded to be standards compliant. My main point, though, is serving the unencoded characters for the sole purpose of having nice-looking URLs, so percent encoding is out. All major browsers seem to parse those URLs okay no matter what the RFC says. My general impression, though, is that it gets very shaky when leaving the domain of web browsers: URLs getting copy+pasted into text files, e-mails, even web sites with a different encoding; HTTP client libraries; exotic browsers, RSS readers. Is my impression correct that trouble is to be expected here, and thus it's not a practical solution (yet) if you're serving a non-technical audience and it's important that all your links work properly even if quoted and passed on? Is there some magic way of serving nice-looking URLs in HTML http://www.example.com/düsseldorf?neighbourhood=Lörick that can be copy+pasted with the special characters intact, but work correctly when re-used in older clients?
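
    For reference, percent-encoding the UTF-8 bytes is the standards-compliant form, and current browsers generally display such URLs with the characters decoded again in the address bar, so the two goals are not necessarily in conflict. A small Python sketch of the encoding step (internationalized host names via IDNA are a separate topic and omitted):

        from urllib.parse import quote

        path = "/düsseldorf"
        query = "neighbourhood=Lörick"
        url = "http://www.example.com" + quote(path) + "?" + quote(query, safe="=&")
        print(url)   # http://www.example.com/d%C3%BCsseldorf?neighbourhood=L%C3%B6rick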

  • pyODBC and Unicode Problem

    - by Aviv Giladi
    Hey guys, I'm working with pyODBC to communicate with an MS SQL 2005 Express server. The table to which I'm trying to save the data consists of nvarchar columns. query = u"INSERT INTO tblPersons (name, birthday, gender) VALUES('" query = query + name + u"', '" query = query + birthday + u"', '" query = query + gender + u"')" cur.execute(query) The variables name, birthday and gender are read from an Excel file and they are Unicode strings. When I execute the query and either look at the table with SQL Server Management Studio or run a query that fetches the data that was just inserted, all the data written in non-English languages turns into question marks. The data written in English is preserved and appears in the table correctly. I tried adding CHARSET=UTF16 to my connection string, but had no luck with that. I can use UTF-8, which works fine, but as a working convention I need all the data saved in my DB to be UTF-16. Thanks!
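
    One thing worth ruling out is the query construction itself: passing the values as parameters lets the driver bind them as Unicode (NVARCHAR) instead of embedding them in a plain string literal, and it also removes the SQL-injection risk of concatenation. A sketch of the parameterized form (connection details are placeholders, and this alone is not guaranteed to satisfy the UTF-16 requirement):

        import pyodbc

        conn = pyodbc.connect("DSN=mydsn")   # hypothetical DSN
        cur = conn.cursor()

        name, birthday, gender = "José", "1980-01-01", "M"   # example values (read from Excel in practice)
        cur.execute(
            "INSERT INTO tblPersons (name, birthday, gender) VALUES (?, ?, ?)",
            (name, birthday, gender),        # Unicode strings bound by the driver
        )
        conn.commit()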

  • Dealing with ISO-encoding in AJAX requests (prototype)

    - by acme
    I have an HTML page that's encoded in ISO-8859-1 and a Prototype AJAX call that's built like this: new Ajax.Request('api.jsp', { method: 'get', parameters: {...}, onSuccess: function(transport) { var ajaxResponse = transport.responseJSON; alert(ajaxResponse.msg); } }); The api.jsp returns its data in ISO-8859-1. The response contains special characters (German umlauts) that are not displayed correctly, even if I add "encoding: ISO-8859-1" to the AJAX request. Does anyone know how to fix this? If I call api.jsp separately in a new browser window, the special characters are also corrupt. And I can't get any information about the encoding used from the response header. The response header looks like this: Server Apache-Coyote/1.1 Content-Type application/json Content-Length 208 Date Thu, 29 Apr 2010 14:40:24 GMT Notice: please don't advise the use of UTF-8. I have to deal with ISO-8859-1.

  • JPA character encoding

    - by wheelie
    Hi there, I have a Java Web application using GlassFish 3, JSF2.0 (facelets) and JPA (EclipseLink) on MySQL (URL: jdbc:mysql://localhost:3306/administer). The problem I'm facing is that if I'm saving entities to the database with the update() method, String data loses integrity; '?' is shown instead of some characters. The server, pages and database is/are configured to use UTF-8. After I post form data, the next page shows the data correctly. Furthermore it "seems" in debug that the String property of the current entity stores the correct value too. Dunno if NetBeans debug can be trusted; might be that it decodes correctly, however it's incorrect. Any help would be appreciated, thanks in advance! Daniel

  • Why can I not view foreign language characters in my mysql DB?

    - by Chris
    I am inserting the following characters into my DB: ?? / ?? This is the meta tag on the page that is inserting the characters: <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> I have altered all the columns in the table holding the characters to be utf8_unicode_ci. The foreign characters show up like this in the DB: 汉字 / 漢字 When I use a SQL statement to display those foreign characters on a page, they display correctly again as: ?? / ?? I am guessing I have some setting that is not correct in my DB, since it stores the data correctly but does not display it correctly. What can I do to make the foreign-language characters display correctly in my DB?
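
    One common culprit with this class of problem is the connection character set rather than the column collation: every client that reads or writes the table (the web page and the tool you inspect it with) has to declare UTF-8 on its connection, otherwise the bytes get reinterpreted in transit. A hedged Python sketch of the idea using the pymysql driver (credentials and table are placeholders):

        import pymysql

        conn = pymysql.connect(
            host="localhost", user="user", password="secret",   # placeholders
            database="mydb",
            charset="utf8mb4",        # declare the connection character set explicitly
        )
        with conn.cursor() as cur:
            cur.execute("SELECT title FROM articles WHERE id = %s", (1,))
            print(cur.fetchone())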

  • Repair bad character due to encoding problem

    - by remi bourgarel
    Hi all, recently we had an encoding problem in our system: if we had the string "æ" in our db, it showed up as "Ã¦" on our web pages. That problem is now solved, but we are left with a lot of "Ã¦" in our database: users didn't notice it and validated pre-filled forms containing these characters. I found that if you read the bytes C3 A6 as UTF-8 you get "æ", while if you read them as ASCII/Latin-1 you get "Ã¦". It's strange, because if I execute "select convert(varbinary(40),N'æ'),convert(varbinary(40),'æ')" I don't get the same result... Do you have any idea how I can fix my database (i.e. change all "Ã¦" back to "æ")? thx
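
    The classic repair for this kind of double encoding is to reverse the wrong step: encode the mojibake text back to bytes as Latin-1, then decode those bytes as UTF-8. A Python illustration of the round trip (doing the same fix inside SQL Server would need the equivalent convert/varbinary manipulation):

        broken = "Ã¦"                                   # what ended up in the database
        fixed = broken.encode("latin-1").decode("utf-8")
        print(fixed)                                     # 'æ'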

  • Cleaning up nasty characters in PHP

    - by Simon Hume
    Hi folks, I've got a little issue where my client is pasting content from Word into my little text editor in a CMS. The double quotes are coming back encoded in what looks like some form of UTF. Any ideas on whether I can strip/replace these using PHP when they get displayed out of my MySQL table? Here is the link to the page that spits out the dodgy characters; you can see the 'black diamonds of doom' which are causing the headaches: http://linq.milkbarstudios.com/news_detail.php?id=3 Any suggestions would be greatly appreciated!
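
    The 'black diamonds' are the browser's replacement character for bytes that are not valid UTF-8, typically Word's curly quotes arriving as Windows-1252. One hedged approach is to transcode those bytes on the way in or out (in PHP, iconv or mb_convert_encoding plays this role); a Python sketch of the idea:

        raw = b"He said \x93hello\x94 and left."   # Word curly quotes as Windows-1252 bytes
        text = raw.decode("windows-1252")           # -> 'He said “hello” and left.'
        utf8_bytes = text.encode("utf-8")           # store/serve this as UTF-8 from now on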

  • Converting HTML special characters into their value using Python

    - by tipu
    I have a file that's littered with these: http://www.utexas.edu/learn/html/spchar.html That link just displays all sorts of HTML entities, such as – &ndash; — &mdash; ¡ &iexcl; and so on. Is it possible in Python to natively convert these characters back into their values so any occurrences of &ndash; will appear as – instead? My current approach was just to make a dict of key html entities and their utf-8 values and do search and replace, but I was wondering if there are any libraries that can take care of this for me.
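
    In current Python 3 the standard library handles this directly; at the time of the question one would have reached for htmlentitydefs/HTMLParser, but the idea is the same:

        import html

        text = "A dash &ndash; here, &iexcl;hola! and an em dash &mdash; too."
        print(html.unescape(text))   # 'A dash – here, ¡hola! and an em dash — too.'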

  • iPhone's NSXMLParser parsing RSS causes encoding problems

    - by Tankista
    Hi, I'm working on a simple RSS reader. This reader loads data from the internet via this code: NSXMLParser *rss = [[NSXMLParser alloc] initWithURL:[NSURL URLWithString:@"http://twitter.com/statuses/user_timeline/50405236.rss"]]; My problem is with encoding. The RSS 2.0 file is supposed to be UTF-8 encoded according to the encoding attribute in the XML file: <?xml version="1.0" encoding="utf-8"?> So when I download the URL's content I get text truncated after the first occurrence of a character with diacritics, for example: l š c t ž ý á í é, etc. I tried to solve the problem by downloading the URL as a UTF-8 string, using this code: NSString *rssXmlString = [NSString stringWithContentsOfURL: [NSURL URLWithString: @"http://www.macblog.sk/rss.xml"] encoding:NSUTF8StringEncoding error: nil]; NSData *rssXmlData = [rssXmlString dataUsingEncoding: NSUTF8StringEncoding]; It did not help. Thanks for your responses.

  • Python file input string: how to handle escaped unicode characters?

    - by Michi
    In a text file (test.txt), my string looks like this: Gro\u00DFbritannien Reading it, python escapes the backslash: >>> file = open('test.txt', 'r') >>> input = file.readline() >>> input 'Gro\\u00DFbritannien' How can I have this interpreted as unicode? decode() and unicode() won't do the job. The following code writes Gro\u00DFbritannien back to the file, but I want it to be Großbritannien >>> input.decode('latin-1') u'Gro\\u00DFbritannien' >>> out = codecs.open('out.txt', 'w', 'utf-8') >>> out.write(input)
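
    One way to interpret those backslash escapes after reading the file is the unicode_escape codec; a sketch in Python 3 (the escapes-only line here is pure ASCII, which is exactly the case unicode_escape handles safely):

        with open("test.txt", "r", encoding="ascii") as f:
            raw = f.readline().strip()                       # 'Gro\\u00DFbritannien'

        text = raw.encode("latin-1").decode("unicode_escape")
        print(text)                                           # 'Großbritannien'

        with open("out.txt", "w", encoding="utf-8") as out:
            out.write(text)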

  • Synchronize an SVN repo (svnsync) with encoding errors

    - by Hamish
    Is it possible to fix/bypass non-UTF-8-encoded svn:log records when synchronizing repositories with svnsync? Background: I'm in the process of taking over the maintenance of an open source module that is stored within a large (well over 10,000 revisions) Subversion (1.5.5) repository. I do not have admin access to the remote repository to dump/filter/load the module. The old repository is being discontinued and I am trying to sync the original submodule to my local (1.6+) repository with svnsync. For example: svnsync file://home/svn/temp-repo/ http://path.to.repo/modulename/ The problem is that the old repository didn't enforce UTF-8 encoding and I'm hitting errors like: svnsync: Cannot accept 'svn:log' property because it is not encoded in UTF-8 I can't modify the log property in the source repository, so I need to somehow modify or ignore the property value when the encoding is unknown/invalid. Any ideas? For example, is it possible to write a pre-revprop-change script to modify the log property in transit?

  • How to efficiently replace characters in XML document in Java?

    - by Pregzt
    I'm looking for a neat and efficient way to replace characters in XML document. There is a replacement table defined for almost 12.000 UTF-8 characters, most of them are to be replaced by single characters, but some must be replaced by two or even three characters (e.g. Greek theta should become TH). The documents can be bulky (100MB+). How to do it in Java? I came up with the idea of using XSLT, but I'm not too sure if this is the best option.
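
    The question asks for Java, but the core of the technique is a mapping-table pass over character data, ideally streamed so a 100 MB+ document never has to sit in memory twice (and, if markup must stay untouched, applied only to text events in a SAX/StAX filter). A language-neutral sketch of the table idea in Python; the sample mappings are placeholders:

        # Map Unicode code points to replacement strings (one to three characters each).
        table = {
            ord("Θ"): "TH",    # GREEK CAPITAL LETTER THETA
            ord("θ"): "th",
            ord("æ"): "ae",
        }

        def transliterate(chunk: str) -> str:
            return chunk.translate(table)

        with open("in.xml", encoding="utf-8") as src, \
             open("out.xml", "w", encoding="utf-8") as dst:
            for chunk in iter(lambda: src.read(1 << 16), ""):   # stream in 64 KB chunks
                dst.write(transliterate(chunk))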

  • Wrong file encoding after Dist::Zilla

    - by xenoterracide
    How can I get the mojibake test to pass? This might be a bug in the contributors plugin. The character does not render correctly in perldoc, but does in my vim and in the extracted git log. # Failed test 'Mojibake test for blib/lib/Pod/Spell.pm' # at /home/xenoterracide/perl5/perlbrew/perls/perl-5.18.1/lib/site_perl/5.18.1/Test/Mojibake.pm line 168. # Non-UTF-8 unexpected in blib/lib/Pod/Spell.pm, line 431 (POD) Here's a snippet from the source, which should probably be looked at directly, since copy-paste may not preserve the encoding issue: =item * Olivier Mengué <[email protected]> =back A little more vim exploration shows that :set fileencoding is being changed to latin1. Editing the file in vim seems to fix this, but since the file is being generated, how can I get it generated with the correct encoding?

  • UTF8 from web content .xml file to NSString

    - by mongeta
    Hello, I can't find a way to convert some UTF-8-encoded data into an NSString. I get some data from a URL; it's a .xml file and here is its content: <?xml version="1.0" encoding="UTF-8"?> <person> <name>Jim Fern&#225;ndez</name> <phone>555-1234</phone> </person> How can I convert the &#225; into an á? Some code that doesn't work: NSString* newStr = [[NSString alloc] initWithData:[NSData dataWithContentsOfURL:URL] encoding:NSUTF8StringEncoding];

  • UTF-8 conversion

    - by leachianus
    Hey guys, I am grabbing a JSON array and storing it in an NSArray; however, it includes JSON-encoded UTF-8 strings, for example pass\u00e9 represents passé. I need a way of converting all of these escaped strings into the actual characters. I have an entire NSArray to convert, or I can convert it when it is being displayed, whichever is easiest. I found this chart: http://tntluoma.com/sidebars/codes/ Is there a convenience method for this or a library I can download? Thanks. BTW, there is no way I can find to change the server, so I can only fix it on my end...
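
    Those \uXXXX sequences are ordinary JSON string escapes, and any conforming JSON decoder already turns them into the real characters while parsing, so no lookup chart is needed. An illustration in Python (the question is about an iPhone client, where the JSON framework in use should behave the same way):

        import json

        raw = '["pass\\u00e9", "caf\\u00e9"]'
        print(json.loads(raw))   # ['passé', 'café']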

  • Detect if PCRE was built without the --enable-unicode-properties or --enable-utf8 configuration switches

    - by Mark Baker
    I have a PHP library that uses a number of regular expressions featuring \P expressions for multibyte strings, e.g. ((((?:\P{M}\p{M}*)+?)|(\'[^\']*\')|(\"[^\"]*\"))!)?\$?([a-z]{1,3})\$?(\d+) While this works on most builds, I've had a few reports of the regexp returning an error. Depending on the operating platform, the error messages from PCRE are: Compilation failed: PCRE does not support \L, \l, \N, \P, \p, \U, \u, or \X at offset n or Compilation failed: support for \\P, \\p, and \\X has not been compiled at offset n I know that I can probably test a regexp that uses \P at the beginning of my code, trap for a returned error, use that to set a compatibility flag, and provide a degraded (non-UTF-8) regexp without the \P in the main body of my code based on that flag; but I was wondering if there was any simpler way to identify whether PCRE had been built without the --enable-unicode-properties or --enable-utf8 configuration switches. PHP provides access to the PCRE_VERSION constant, but that won't help identify whether \P support is enabled or not.

  • How to encode 'á' to '&#225;' with C#? (UTF-8)

    - by Llorens Marti
    Hi all, I'm trying to write an XML file with UTF-8 encoding, and the original string can contain characters like 'á' that are invalid for my output, so I need to change these characters into valid ones. I know that there is an encoding method that takes, for example, the character 'á' and transforms it into the group of characters '&#225;'. I am trying to achieve this with C# but have had no success. I am using the Encoding.UTF8 functions, but I only end up with the same character (i.e. á) or a '?' character. So, do you know which is the correct way to achieve this character change with C#? Thanks for your time and help :) LLORENS
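
    What the question describes is replacing non-ASCII characters with XML numeric character references. As a language-neutral illustration (the question asks for C#; in Python the ASCII codec's xmlcharrefreplace error handler does exactly this):

        text = "Llorens está aquí: á"
        encoded = text.encode("ascii", errors="xmlcharrefreplace").decode("ascii")
        print(encoded)   # 'Llorens est&#225; aqu&#237;: &#225;'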

  • JSF - database character encoding

    - by wheelie
    Hi there, I have a Java Web application using GlassFish 3, JSF2.0 (facelets) and JPA (EclipseLink). The problem I'm facing, is that if I'm saving entities to the database with the update() method, String data loses integrity; '?' is shown instead of some characters. The server, pages and database is/are configured to use UTF-8. After I post form data, the next page shows the data correctly. Furthermore it "seems" in debug that the String property of the current entity stores the correct value too. Dunno if NetBeans debug can be trusted; might be that it decodes correctly, however it's incorrect. Any help would be appreciated, thanks in advance! Daniel
