Search Results

Search found 1714 results on 69 pages for 'utf8 decode'.

Page 5/69

  • Replacing characters with UTF-8 after using the mysql_set_charset('utf8') function

    - by Ahmet vardar
    I converted all my MySQL tables to utf8_unicode_ci and started using the mysql_set_charset('utf8') function. But after this, characters like Ş and Ö started showing up as Åž and Ã–. How can I replace these characters in MySQL with their proper UTF-8 forms? In short, can I find a list of all such characters to replace? EDIT: This article actually explains the issue, but I can't properly understand it: http://www.oreillynet.com/onlamp/blog/2006/01/turning_mysql_data_in_latin1_t.html
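
    Those garbled pairs are classic double encoding: text that is already UTF-8 being reinterpreted as a single-byte codepage. A minimal Python sketch of the mechanism and its reversal (cp1252 rather than strict Latin-1, since bytes like 0x9E are only printable there):

        # How 'Ş' turns into 'Åž': its UTF-8 bytes get read as cp1252.
        s = "Ş"                                      # U+015E
        mangled = s.encode("utf-8").decode("cp1252")
        print(mangled)                               # Åž
        # Undoing the damage: re-encode the garbled text as cp1252 and
        # decode the resulting bytes as UTF-8.
        print(mangled.encode("cp1252").decode("utf-8"))   # Ş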

  • MySQL SET NAMES utf8 - how to get rid of it?

    - by Nir
    In a very busy PHP script we have a call at the beginning to SET NAMES utf8, which sets the character set MySQL should use to interpret the data and send it back from the server to the client (http://dev.mysql.com/doc/refman/5.0/en/charset-applications.html). I want to get rid of it, so I set default-character-set=utf8 in our server ini file (see the link above). The setting seems to be working, since the relevant server variables are:

        character_set_client      utf8
        character_set_connection  utf8
        character_set_database    latin1
        character_set_filesystem  binary
        character_set_results     utf8
        character_set_server      latin1
        character_set_system      utf8

    But after this change, with the SET NAMES utf8 call commented out, the data still comes out garbled. Please advise.
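
    Since default-character-set on the server side only changes server defaults, the character set the PHP connection actually negotiates in its handshake is worth checking from the application itself, not from a separate mysql shell. A sketch of that check in Python (pymysql and the credentials are assumptions; any client that runs raw SQL works):

        import pymysql

        # Setting the charset at connect time is the client-side equivalent
        # of SET NAMES, negotiated once in the handshake.
        conn = pymysql.connect(host="localhost", user="app",
                               password="secret", db="mydb", charset="utf8")
        with conn.cursor() as cur:
            cur.execute("SHOW VARIABLES LIKE 'character_set_%'")
            for name, value in cur.fetchall():
                print(name, "=", value)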

  • Problem with SVN filename encoding on Mac OS X

    - by Albert
    I have some filenames with Unicode characters in them. All filenames on Mac OS X are UTF-8 encoded, and $LANG is set to en_US.UTF-8. However, it seems svn has some problems with that:

        az@ip212 1054 (Integration) %ls
        Abbildungen                  Verbesserungsvorschläge_Applets.odt
        AllgemeineAnmerkungen.rtf    Verbesserungsvorschläge_Applets.rtf
        Geogebra                     Vorlagen
        Texte
        az@ip212 1055 (Integration) %svn ls
        Abbildungen/
        AllgemeineAnmerkungen.rtf
        Geogebra/
        Texte/
        Verbesserungsvorschläge_Applets.rtf
        Verbesserungsvorschläge_Applets.odt
        Vorlagen/
        az@ip212 1056 (Integration) %svn del Verb*.odt
        svn: Use --force to override this restriction
        svn: 'Verbesserungsvorschläge_Applets.odt' is not under version control
        az@ip212 1057 (Integration) %svn status
        ?       Verbesserungsvorschläge_Applets.odt
        !       Verbesserungsvorschläge_Applets.odt
        az@ip212 1058 (Integration) %

    As you can see, svn del does not recognize the filename, and even svn status gets confused by it. How can I fix this? I also tried LC_CTYPE=$LANG LC_ALL=$LANG LC=$LANG, but nothing changed.
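
    The usual cause on Mac OS X is Unicode normalization rather than UTF-8 itself: HFS+ stores filenames in decomposed form (NFD), while the name recorded in the repository is precomposed (NFC), so the two names print identically but compare unequal -- which matches the ?/! pair that svn status shows. A minimal Python sketch of the mismatch:

        import unicodedata

        repo_name = "Verbesserungsvorschl\u00e4ge_Applets.odt"    # NFC: one 'ä' code point
        disk_name = "Verbesserungsvorschla\u0308ge_Applets.odt"   # NFD: 'a' + combining diaeresis

        print(repo_name == disk_name)                                # False
        print(unicodedata.normalize("NFC", disk_name) == repo_name)  # True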

  • UTF-8 locale portability (and ssh)

    - by kine
    I spend a lot of my time sshed into various machines, all of which are different (some are embedded, some run Linux, some run BSD, &c.). On my own local machines, however, I use OS X, which of course has a userland based on FreeBSD. My locale on those machines is set to en_GB.UTF-8, which is one of the available options:

        % echo `sw_vers`
        ProductName: Mac OS X ProductVersion: 10.8.2 BuildVersion: 12C60
        % locale -a | grep -i 'en_gb.utf'
        en_GB.UTF-8

    Several of the more capable Linux systems I use appear to have an equivalent option, but I note that on Linux the name is slightly different:

        % lsb_release -d
        Description: Debian GNU/Linux 6.0.3 (squeeze)
        % locale -a | grep -i 'en_gb.utf'
        en_GB.utf8

    This makes me wonder: when I ssh into a Linux machine from my Mac, and it forwards all of my LC_* variables with that 'UTF-8' suffix, does the Linux machine even understand what is being asked of it? Or is it just falling back to some other locale? In either case, what is the mechanism behind its behaviour, and is it dependent on any particular set-up (e.g., will I see the same behaviour on a BusyBox-based system as on a GNU-based one)?
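
    On glibc systems the codeset part of a locale name is normalized before lookup -- roughly: lowercase it and drop every non-alphanumeric character -- so 'UTF-8' and 'utf8' name the same codeset and the forwarded variables should still resolve. A small Python sketch of that rule (the exact normalization is an assumption modelled on glibc's _nl_normalize_codeset):

        import re

        def normalize_codeset(codeset: str) -> str:
            # lowercase, keep only [a-z0-9]: 'UTF-8' -> 'utf8'
            return re.sub(r"[^a-z0-9]", "", codeset.lower())

        print(normalize_codeset("UTF-8"))                               # utf8
        print(normalize_codeset("UTF-8") == normalize_codeset("utf8"))  # True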

  • UTF-8 from web content (.xml file) to NSString

    - by mongeta
    Hello, I can't find a way to convert some UTF-8 content into an NSString. I get some data from a URL; it's a .xml file, and here is its content:

        <?xml version="1.0" encoding="UTF-8"?>
        <person>
          <name>Jim Fern&#225;ndez</name>
          <phone>555-1234</phone>
        </person>

    How can I convert the &#225; into an á? Some code that doesn't work:

        NSString* newStr = [[NSString alloc] initWithData:[NSData dataWithContentsOfURL:URL]
                                                 encoding:NSUTF8StringEncoding];
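
    The initWithData: call isn't failing on the encoding -- byte-wise, the file is plain ASCII. &#225; is an XML numeric character reference, which only an XML parser resolves; a raw bytes-to-string conversion leaves it untouched. A minimal sketch of that distinction in Python:

        import xml.etree.ElementTree as ET

        doc = '<person><name>Jim Fern&#225;ndez</name><phone>555-1234</phone></person>'
        print(doc)                                   # the reference is still literal text here
        print(ET.fromstring(doc).findtext("name"))   # Jim Fernández -- parser resolved &#225;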

  • UTF8 issues on Linux

    - by user363808
    Hi, I have some code that fetches data from a database whose codepage is UTF-8. When I run the code on a Linux box, some characters come out as question marks (?), but when I run the same code on a Windows server, all characters appear correctly. When I check the locale:

        $ echo $LANG
        en_SG.UTF-8

    The en_SG part doesn't look correct to me -- I expected en_US -- but the latter part of the string is UTF-8, which is good. Is there anything else I can look into to fix the character corruption problem?
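
    For what it's worth, en_SG is simply the Singapore-English locale, not a corruption; the codeset suffix is the part that governs character handling. A quick Python sketch for checking what encoding the runtime actually picks up from the environment:

        import locale
        import sys

        locale.setlocale(locale.LC_ALL, "")    # adopt the environment's locale
        print(locale.getpreferredencoding())   # expect UTF-8 under en_SG.UTF-8
        print(sys.stdout.encoding)             # the encoding used for printed output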

  • Detect if PCRE was built without the --enable-unicode-properties or --enable-utf8 configuration switches

    - by Mark Baker
    I have a PHP library that uses a number of regular expressions featuring the \P construct for multibyte strings, e.g.:

        ((((?:\P{M}\p{M}*)+?)|(\'[^\']*\')|(\"[^\"]*\"))!)?\$?([a-z]{1,3})\$?(\d+)

    While this works on most builds, I've had a few reports of the regexp returning an error. Depending on the platform, the error message from PCRE is either:

        Compilation failed: PCRE does not support \L, \l, \N, \P, \p, \U, \u, or \X at offset n

    or:

        Compilation failed: support for \\P, \\p, and \\X has not been compiled at offset n

    I know that I can probably test a regexp that uses \P at the beginning of my code, trap the returned error, use that response to set a compatibility flag, and provide a degraded (non-UTF-8) regexp without the \P in the main body of my code based on that flag; but I was wondering if there is any simpler way to identify whether PCRE was built without the --enable-unicode-properties or --enable-utf8 configuration switches. PHP provides access to the PCRE_VERSION constant, but that won't help identify whether \P support is enabled.

  • Don't know how to select a few records from a table as UTF-8

    - by kwokwai
    Hi all, I don't have phpMyAdmin installed on my web site. Sometimes I run SELECT commands at the back end, but when I typed in this command to show all records from the table Users:

        SELECT * FROM Users;

    the records were printed as ???? | ??? ??? ??? |. I don't want to make any permanent changes to the charset in the database, so how is it possible to temporarily display a few records as UTF-8 when needed?
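
    SET NAMES only lasts for the current connection, so it is exactly the kind of temporary switch being asked about -- nothing stored in the database changes. In the mysql command-line client that means running SET NAMES utf8; before the SELECT. The same idea from code, sketched with Python's pymysql (the library and credentials are assumptions):

        import pymysql

        conn = pymysql.connect(host="localhost", user="app",
                               password="secret", db="mydb")
        with conn.cursor() as cur:
            cur.execute("SET NAMES utf8")       # session-scoped; gone on disconnect
            cur.execute("SELECT * FROM Users")
            for row in cur.fetchall():
                print(row)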

  • ActiveSupport::JSON.decode crashes on this

    - by Waheedi
    I wonder why I can't decode this JSON string. All I want is to convert it to a proper Ruby hash. Anyone have an idea? I think the array of objects is breaking it. Parameters:

        {"{\"origins\":"=>{"{\"origin\":\"this\"},{\"origin\":\"dont\"},{\"origin\":\"dont me please\"},{\"origin\":\"and me please\"},{\"origin\":\"dont\"},{\"origin\":\"dont\"},{\"origin\":\"dont\"},{\"origin\":\"okay\"},{\"origin\":\"dont\"},{\"origin\":\"go\"},{\"origin\":\"go\"}"=>{",\"url\":\"file:///Users/waheed/Desktop/untitled.html\",\"apik\":\"helloapik\",\"host\":\"http://localhost:3000/\"}"=>nil}}}

    In my JavaScript I'm doing this (I'm using the JSON.org library, which has the stringify method; Ajax is the XHConn class from http://xkr.us/code/javascript/XHConn/):

        // the object I'm trying to send over XMLHttpRequest
        function tObject(origins, url, apik) {
            this.origins = origins;   // an array of strings
            this.url = url;
            this.apik = apik;
        }

        var t = new tObject(myStringArr, "www.foo.com", "welcome guys");
        ajax = new Ajax();
        ajax.connect("http://localhost:3000/", "POST", JSON.stringify(t), callback);

    In my Rails app the posted parameters look like this:

        Parameters: {"{\"origins\":"=>{"{\"origin\":\"this\"},{\"origin\":\"yo yo\"},{\"origin\":\" me please\"},{\"origin\":\"and me please\"},{\"origin\":\"here\"},{\"origin\":\"and again\"},{\"origin\":\"again\"},{\"origin\":\"okay\"},{\"origin\":\"yes\"},{\"origin\":\"go\"},{\"origin\":\"go\"}"=>{",\"url\":\"www.foo.com\",\"apik\":\"welcome guys\"}"=>nil}}}

    Why does it end with nil? I've tried to decode it, but it doesn't work because the parser complains that the string is not a JSON string. TIA, waheedi
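
    The shape of those params -- JSON fragments as hash keys with nil at the end -- is what Rails produces when it form-decodes a raw JSON body, which typically happens when the POST is sent without a Content-Type of application/json. The stringified object itself is fine; a sketch of the expected round trip in Python (the values are hypothetical):

        import json

        body = '{"origins":[{"origin":"this"},{"origin":"go"}],"url":"www.foo.com","apik":"welcome guys"}'
        data = json.loads(body)               # decodes cleanly as an object
        print(data["origins"][0]["origin"])   # this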

  • Why is python decode replacing more than the invalid bytes from an encoded string?

    - by dangra
    Trying to decode an invalidly encoded UTF-8 HTML page gives different results in Python, Firefox, and Chrome. The invalid fragment from the test page looks like 'PREFIX\xe3\xabSUFFIX':

        >>> fragment = 'PREFIX\xe3\xabSUFFIX'
        >>> fragment.decode('utf-8', 'strict')
        ...
        UnicodeDecodeError: 'utf8' codec can't decode bytes in position 6-8: invalid data

    What follows is a summary of the replacement policies used to handle decoding errors by Python, Firefox, and Chrome. Note how the three differ, and especially how the Python builtin removes the valid S (along with the invalid sequence of bytes).

    By Python. The builtin replace error handler replaces the invalid \xe3\xab plus the S from SUFFIX with U+FFFD:

        >>> fragment.decode('utf-8', 'replace')
        u'PREFIX\ufffdUFFIX'
        >>> print _
        PREFIX?UFFIX

    A reimplementation of the builtin replace error handler looks like:

        >>> python_replace = lambda exc: (u'\ufffd', exc.end)

    As expected, trying this gives the same result as the builtin:

        >>> import codecs
        >>> codecs.register_error('python_replace', python_replace)
        >>> fragment.decode('utf-8', 'python_replace')
        u'PREFIX\ufffdUFFIX'
        >>> print _
        PREFIX?UFFIX

    By Firefox. Firefox replaces each invalid byte with U+FFFD:

        >>> firefox_replace = lambda exc: (u'\ufffd', exc.start+1)
        >>> codecs.register_error('firefox_replace', firefox_replace)
        >>> fragment.decode('utf-8', 'firefox_replace')
        u'PREFIX\ufffd\ufffdSUFFIX'
        >>> print _
        PREFIX??SUFFIX

    By Chrome. Chrome replaces each invalid sequence of bytes with U+FFFD:

        >>> chrome_replace = lambda exc: (u'\ufffd', exc.end-1)
        >>> codecs.register_error('chrome_replace', chrome_replace)
        >>> fragment.decode('utf-8', 'chrome_replace')
        u'PREFIX\ufffdSUFFIX'
        >>> print _
        PREFIX?SUFFIX

    The main question is why the builtin replace error handler for str.decode removes the S from SUFFIX. Also, is there an official Unicode-recommended way to handle decoding replacements?
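
    For reference, modern Python 3 answers both questions at once: its 'replace' handler follows the Unicode standard's recommended practice of substituting one U+FFFD per maximal invalid subpart, so the S survives. A quick sketch:

        fragment = b"PREFIX\xe3\xabSUFFIX"
        # \xe3\xab is a truncated three-byte sequence; 'S' is valid and kept.
        print(fragment.decode("utf-8", "replace"))   # PREFIX�SUFFIX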

  • OS X: Terminal output of javac is garbled.

    - by Don Werve
    I've got my computer set up in Japanese (hey, it's good language practice), and everything is all fine and dandy... except javac. It prints localized error messages to the console, but they're in Shift-JIS, not UTF-8:

        $ javac this-file-doesnt-exist.java
        javac: ?t?@?C??????????????: this-file-doesnt-exist.java
        ?g????: javac <options> <source files>
        ?g?p?\??I?v?V?????~??X?g?????A-help ???g?p????

    If I pipe the output through nkf -w, it's readable, but that's not really much of a solution:

        $ javac this-file-doesnt-exist.java 2>&1 | nkf -w
        javac: ????????????: this-file-doesnt-exist.java
        ???: javac <options> <source files>
        ????????????????????-help ??????

    Everything else works fine (with UTF-8) from the command line; I can type filenames in Japanese, tab completion works fine, vi can edit UTF-8 files, etc. Java itself spits out all its messages in English (which is fine). Here are the relevant bits of my environment:

        LC_CTYPE=UTF-8
        LANG=ja_JP.UTF-8

    From the looks of it, javac isn't picking up the encoding properly, and java isn't picking up the language at all. I've tried -Dfile.encoding=utf8 as well, but that does nada, and documentation on the localization of the JVM toolchain is pretty much nonexistent, at least via Google.
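
    Assuming the messages really are Shift-JIS, the damage is recoverable after the fact by transcoding javac's stderr -- essentially what nkf is doing. A sketch of the same workaround driven from Python:

        import subprocess

        proc = subprocess.run(["javac", "this-file-doesnt-exist.java"],
                              capture_output=True)
        # javac is emitting Shift-JIS bytes; re-decode them for a UTF-8 terminal.
        print(proc.stderr.decode("shift_jis", errors="replace"))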

  • Is there an STL- and UTF-8-friendly C++ wrapper for ICU, or another powerful Unicode library?

    - by artyom
    Hello, I need a good Unicode library for C++, one that handles transformations in a Unicode-sensitive way. For example:

    - sort all strings case-insensitively and get their first characters for an index;
    - convert various Unicode strings to upper and to lower case;
    - split text at reasonable positions -- into words, in a way that works for Chinese and Japanese as well;
    - format numbers and dates in a locale-sensitive way (should be thread safe);
    - transparent support of UTF-8 (as the primary internal representation).

    As far as I know the best library is ICU. However, I can't find normal developer-friendly API documentation with examples, and as far as I can see it is not too friendly with modern C++ design, working with the STL, and so on. I'd like something like this:

        std::string msg;
        unistring umsg.from_utf8(msg);
        unistring::word_iterator wi;
        for(wi=umsg.words().begin(),n=0; wi!=umsg.words().wi_end(),n<10; ++wi,++n)
            ;
        msg=umsg.substr(umsg.words().begin(),wi).to_utf8();
        cout<<_("First 10 words are ")<<msg;

    Does anybody know of a good STL-friendly ICU wrapper released under an open-source license, preferably permissive like MIT or Boost, though LGPLv2-compatible is OK as well? Is there another high-quality library similar to ICU? Platform: UNIX/POSIX; Windows support is not required. Thanks, Artyom. Edit: Unfortunately I wasn't logged in, so I can't mark the answer accepted -- I attached the answer myself.

  • Reading the Twitter API with a JSON framework

    - by iPixFolio
    Hi, I'm building a Twitter reader into an app. I'm using this JSON library to parse the Twitter API, and I'm seeing some odd results on certain messages. I know that the Twitter API returns results in UTF-8 format, so I'm wondering if I'm doing something wrong when reading the JSON-parsed fields. My code is spread out across multiple classes, so it's hard to give a concise code drop with the symptoms, but here's what I've got. I'm using ASIHTTP for async HTTP processing; here is the handling of an ASIHTTP response:

        NSMutableString* tempString = [[NSMutableString alloc] initWithString:[request responseString]];
        NSError *error;
        SBJSON *json = [[SBJSON alloc] init];
        id JSONresponse = [json objectWithString:tempString error:&error];
        [tempString release];
        [json release];
        if (JSONresponse) {
            self.response = JSONresponse;
        ...

    self.response holds the JSON representation of the result of the Twitter call. Next I take the JSON response and write each tweet into a container object (Tweet); in the following code, the response from above is referenced as request.response:

        // save the list of tweets to a local cache
        for (NSDictionary* response in request.response) {
            Tweet* tweet = [[Tweet alloc] init];
            tweet.text = [response objectForKey:@"text"];
            tweet.id = [response objectForKey:@"id"];
            tweet.created = [response objectForKey:@"created_at"];
            [Tweet addTweet:tweet];
            [tweet release];
        }

    At this point, I have a container holding the tweets. I keep only three fields from each tweet: "id", "text", and "created_at"; the "text" field is the problem field. To display the tweets, I build an HTML page from the container of tweets, like this:

        Tweet* tweet = nil;
        for (int i = 0; i < [Tweet tweetCount]; i++) {
            tweet = [Tweet tweetAtIndex:i];
            [html appendString:@"<div class='tweet'>"];
            [html appendFormat:@"<div class='tweet-date'>%@</div>", tweet.created];
            [html appendFormat:@"<div class='tweet-text'>%@</div>", tweet.text];
            [html appendString:@"</div>"];
        }

    In another routine, I save the HTML page to a temp file:

        if (html && [html length] > 0) {
            NSString* uniqueString = [[NSProcessInfo processInfo] globallyUniqueString];
            NSString* filename = [NSString stringWithFormat:@"%@.html", uniqueString];
            filename = [tempDir stringByAppendingPathComponent:filename];
            NSError* error = nil;
            [html writeToFile:filename atomically:NO encoding:NSUTF8StringEncoding error:&error];
        ...

    I then create a URL request from the file and load it into a UIWebView:

        NSURL* url = [NSURL fileURLWithPath:filename];
        NSURLRequest* request = [NSURLRequest requestWithURL:url];
        [self.webView loadRequest:request];

    At this point, I can see the tweets in a browser window, but some of them show invalid characters, like this:

        iPhone 4 ad spoofed with Glee’s Jane Lynch

    ... Glee’s should be Glee's. Can anybody shed light on what I'm doing wrong and offer suggestions on how to fix it? To summarize:

    - I read a UTF-8 feed with JSON.
    - I write the UTF-8 strings into an HTML file.
    - I display the HTML file with a UIWebView.

    Some of the UTF-8 strings are not properly decoded. I need to know where to decode them and how to do it. Thanks! Mark
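
    The 'Glee’s' symptom is UTF-8 being rendered as Windows-1252: U+2019 (the right single quote) is three bytes in UTF-8, and those three bytes read as cp1252 are exactly â€™. A likely fix (an assumption, not stated in the question) is declaring the encoding in the generated page, e.g. a <meta charset="utf-8"> tag in the HTML head, since a local file loaded without one may be read as Latin-1/cp1252. The mechanism, sketched in Python:

        s = "Glee\u2019s"
        print(s.encode("utf-8").decode("cp1252"))          # Glee’s -- the observed garbage
        # and the reverse repair:
        print("Glee’s".encode("cp1252").decode("utf-8"))  # Glee’s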

  • ATI GPU (video accel, decode, encode, ATI Stream, DXVA)

    - by Shiki
    Okay, it's a long question title for sure. I'm looking for a new video card (yes, SU is not a page for that, but bear with me). I've been a loyal NVIDIA customer ever since, currently using an 8600 GTS. It's old but still somewhat good, if a bit slow. I want an upgrade because the 8600 GTS won't support better VDPAU and newer features. I checked the prices and the documentation; I would need a GTX 260 card, which costs... well... a lot. ATI performs much better for that price -- at least in every test it outperforms the GTX 260. However, as far as I know there is no GPU acceleration with ATI; the only thing you can use is DXVA, no other method. Could you correct me on this? Will there be GPU acceleration for ATI as well, or is one already available? (DXVA is not bad, but it's slow compared to NVIDIA's CUDA.) What about OpenCL -- how does ATI support that? (I'm talking about the ATI 5850 card at the moment; I would buy that instead of the NVIDIA.)

  • OpenSSL decode not working

    - by JL
    I am trying to use the following command:

        openssl enc -base64 -in myfile -out myfile.b64

    (For more info, this link has full instructions.) Nothing happens in a DOS window; it just doesn't work. Any suggestions why?
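
    Two things worth noting: the command as written base64-*encodes* (decoding needs the -d flag, i.e. openssl enc -base64 -d), and on success it writes to myfile.b64 silently, so "nothing happens" on the console is expected. For cross-checking the output, a sketch of the same transform in Python:

        import base64

        with open("myfile", "rb") as f:
            encoded = base64.encodebytes(f.read())   # wraps at 76 chars; openssl wraps at 64
        with open("myfile.b64", "wb") as f:
            f.write(encoded)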

  • How to decode an XFS lost+found directory

    - by Satpal
    I have managed to trash my homebrew NAS box (an old HP d530 + 2x 750 GB SATA software RAID 1 + a 17 GB boot disk with Ubuntu Server 8.10). I have searched the web and tried to repair the file system, but to no avail :( I was thinking that the dirs/files located under the root of the lost+found directory are named with 64-bit numbers. Is there any way I could decode each number and from there reconstruct the directory/file structure? More to the point, can anyone point me to information on how XFS inodes are broken down (does that make sense)?
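
    Those names are inode numbers: xfs_repair moves orphaned inodes into lost+found and names each entry after its inode, so the original paths are gone and have to be inferred from the contents. A small Python sketch that inventories the entries as a starting point (the mount path is an assumption):

        import os
        import stat

        root = "/mnt/raid/lost+found"
        for name in sorted(os.listdir(root)):
            st = os.lstat(os.path.join(root, name))
            kind = "dir" if stat.S_ISDIR(st.st_mode) else "file"
            print(f"{name}\t{kind}\t{st.st_size} bytes")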

  • Alternative to System.Web.HttpUtility.HtmlEncode/Decode?

    - by Jörg Battermann
    Is there any 'slimmer' alternative to the System.Web.HttpUtility.HtmlEncode/HtmlDecode functions in .NET 3.5 (SP1)? A separate library is fine, or even wanted -- at least something that does not pull in the 'whole new world' of dependencies that System.Web requires. I only want to convert a normal string into its XML/XHTML-compliant equivalent (and back).

  • How to decode ASN.1 to XML with Erlang

    - by shian
    Hi, I use the asn1 module in Erlang to decode. The output looks like this:

        {'UL-CCCH-Message',asn1_NOVALUE,
         {rrcConnectionRequest,
          {'RRCConnectionRequest',
           {'tmsi-and-LAI',
            {'TMSI-and-LAI-GSM-MAP',
             [1,0,0,0,0,1,1,1,1,1,1,0,1,0,0,0,0,1,1,0,1,1,0,1,0,1,1,1,1,0,1,0],
             {'LAI',
              {'PLMN-Identity',[2,2,6],[0,1]},
              [0,1,1,1,1,0,0,1,1,0,0,0,1,1,0,1]}}},
           terminatingBackgroundCall,noError,
           {'MeasuredResultsOnRACH',
            {'MeasuredResultsOnRACH_currentCell',
             {fdd,
              {'MeasuredResultsOnRACH_currentCell_modeSpecificInfo_fdd',
               {'cpich-Ec-N0',39}}}},
            asn1_NOVALUE},
           asn1_NOVALUE}}}

    How can I output XML instead of an Erlang term?
