Search Results

Search found 1714 results on 69 pages for 'utf8 decode'.

Page 8/69 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Problem unpacking a zip archive with UTF-8 file names in OS X when the zip was made in Windows

    - by Andrei
    I packed my files in Windows 7 using Total Commander, asking it to use UTF-8 for file names. Then I tried to unpack the files in OS X, but the Cyrillic names came out garbled. I tried several programs -- none of them helped, so I had to fall back on Parallels with Windows and Total Commander to get what I wanted. Is there any other way to do it? Is it a fault of Total Commander, or do I need to tune some OS X setting?
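
    A repair sketch in Python, assuming the common failure mode: the archive stores UTF-8 file name bytes but does not set the zip format's UTF-8 name flag, so unzippers fall back to CP437. Re-encoding the misdecoded names as CP437 recovers the original UTF-8 bytes ('archive.zip' is a placeholder):

        import zipfile

        with zipfile.ZipFile('archive.zip') as zf:
            for info in zf.infolist():
                name = info.filename
                if not (info.flag_bits & 0x800):  # UTF-8 flag unset: the name was decoded as CP437
                    name = name.encode('cp437').decode('utf-8', errors='replace')
                print(name)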

    Read the article

  • UTF-8 bit representation

    - by Yanick Rochon
    I'm learning about the UTF-8 standard and this is what I'm learning.

    Definition and bytes used -- UTF-8 binary representation, then meaning:

        0xxxxxxx                             1 byte for 1 to 7 bit chars
        110xxxxx 10xxxxxx                    2 bytes for 8 to 11 bit chars
        1110xxxx 10xxxxxx 10xxxxxx           3 bytes for 12 to 16 bit chars
        11110xxx 10xxxxxx 10xxxxxx 10xxxxxx  4 bytes for 17 to 21 bit chars

    And I'm wondering: why isn't the 2-byte UTF-8 code 10xxxxxx 10xxxxxx instead, thus gaining 1 bit all the way up to 22 bits with a 4-byte UTF-8 code? The way it is right now, 64 possible values are lost (from 10000000 to 10111111). I'm not trying to argue with the standard, but I'm wondering why this is so?

    ** EDIT ** And even, why isn't it:

        0xxxxxxx                             1 byte for 1 to 7 bit chars
        110xxxxx xxxxxxxx                    2 bytes for 8 to 13 bit chars
        1110xxxx xxxxxxxx xxxxxxxx           3 bytes for 14 to 20 bit chars
        11110xxx xxxxxxxx xxxxxxxx xxxxxxxx  4 bytes for 21 to 27 bit chars

    ...? Thanks!
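
    For reference, a minimal Python sketch of the table above -- a hand-rolled encoder for a single code point, for illustration only (real code should just use str.encode('utf-8')):

        def utf8_encode(cp: int) -> bytes:
            """Encode one Unicode code point by hand, mirroring the bit layout above."""
            if cp < 0x80:      # 1 byte:  0xxxxxxx
                return bytes([cp])
            if cp < 0x800:     # 2 bytes: 110xxxxx 10xxxxxx
                return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])
            if cp < 0x10000:   # 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
                return bytes([0xE0 | (cp >> 12), 0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])
            # 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
            return bytes([0xF0 | (cp >> 18), 0x80 | ((cp >> 12) & 0x3F),
                          0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])

        assert utf8_encode(ord('é')) == 'é'.encode('utf-8')

    The fixed 10 prefix on continuation bytes is what lets a decoder resynchronize mid-stream and keeps every byte of a multi-byte sequence distinct from ASCII, which is the usual rationale for "losing" those values.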

    Read the article

  • Is there a MySQL performance benchmark to measure the impact of utf8_unicode_ci versus utf8_general_ci?

    - by MiniQuark
    I read here and there that using the utf8_unicode_ci collation ensures better treatment of Unicode text (for example, it knows how to expand characters such as 'œ' into 'oe' for searching and ordering) compared to the default utf8_general_ci, which basically just strips diacritics. Unfortunately, both sources indicate that utf8_unicode_ci is slightly slower than utf8_general_ci. So my question is: what does "slightly slower" mean? Has anyone run benchmarks? Are we talking about a -0.01% performance impact, or rather something like -25%? Thanks for your help.

    Read the article

  • In UTF-8 collation, why is '11-' less than '1-'?

    - by ???
    I found that the sort result in ASCII is:

        1-
        11-

    and in UTF-8:

        11-
        1-

    This feels counter-intuitive to me, and it's not dictionary order. Isn't the character '-' (U+002D) always less than [0-9] (U+0030-U+0039)? What's the general rule in UTF-8 collation? And how can I bypass it, making '-' sort before [0-9] while keeping other characters unchanged, for UTF-8 in Linux? (So that it affects the results of ls --sort, sort, etc.)
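
    A way to see the locale dependence from Python (a sketch; the exact output depends on your system's collation tables, and setlocale raises locale.Error if a locale isn't installed). Unicode-style collation typically gives punctuation such as '-' a very low weight instead of comparing raw code points; setting LC_COLLATE=C (or LC_ALL=C) in the environment restores plain byte order for ls and sort:

        import locale

        words = ['1-', '11-', '1-a', '11-a']

        locale.setlocale(locale.LC_COLLATE, 'C')            # plain code-point order
        print(sorted(words, key=locale.strxfrm))

        locale.setlocale(locale.LC_COLLATE, 'en_US.UTF-8')  # table-driven order: '-' nearly ignored
        print(sorted(words, key=locale.strxfrm))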

    Read the article

  • Decoding a PHP-encoded string in JavaScript

    - by Pankaj Khurana
    Hi, I am working on a Facebook page in which I have used Ajax, and the response is returned in JSON format. I have encoded the string in PHP; now I want to decode that string in JavaScript. The PHP side:

        foreach ($feedbackdetails as $feedbackdetail) {
            $str .= '<div class="tweet">
                <img style="cursor:pointer;" id="imgVoteUp" src="http://myserver/facebook/vote_up.gif" alt="Vote Up" title="Vote Up" onclick="saveVote('.$feedbackdetail['pk_feedbackid'].',1)" /> : '.$feedbackdetail['upvotecount'].'
                <img style="cursor:pointer;" id="imgVoteDown" src="http://myserver/facebook/vote_down.gif" alt="Vote Down" title="Vote Down" onclick="saveVote('.$feedbackdetail['pk_feedbackid'].',0)" /> : '.$feedbackdetail['downvotecount'].'
                <p class="'.$pclass.'">'.$feedbackdetail['title'].' by '.$feedbackdetail['name'].'<br>'.$feedbackdetail['description'].'</p>
            </div>';
        }
        $str = urlencode($str);
        echo '{"fbml_test":"'.$str.'"}';

    The JavaScript function ('class' is a reserved word in JavaScript, so the third parameter is named pclass here):

        function saveVote(id, type, pclass) {
            var contentdiv = 'div_' + id;
            var processdiv = 'processdiv_' + id;
            document.getElementById(processdiv).setInnerXHTML('<span id="caric"><center><img src="http://static.ak.fbcdn.net/rsrc.php/z5R48/hash/ejut8v2y.gif" /></center></span>');
            var posturl = 'http://myserver/facebook/vote.php';
            pclass = (pclass == 0) ? 'firstmessage' : 'message';
            var queryString = '?id=' + id + '&type=' + type + '&pclass=' + pclass;
            posturl = posturl + queryString;
            var ajax = new Ajax();
            ajax.responseType = Ajax.JSON;
            ajax.requireLogin = true;
            ajax.ondone = function(data) {
                document.getElementById('caric').setStyle('display', 'none');
                //new Dialog().showMessage('Dialog', data);
                if (data.error) {
                    new Dialog().showMessage('Dialog', data.error);
                }
                if (data.fbml_test) {
                    document.getElementById(contentdiv).setInnerFBML(data);
                }
                //div_id.setInnerFBML(data);
            };
            ajax.post(posturl);
        }

    Right now I am just getting the encoded string back; how can I decode it? Please help me with this. Thanks, Pankaj

    Read the article

  • Detect CJK characters in PHP

    - by Jasie
    Hello, I've got an input box that allows UTF-8 characters -- can I detect programmatically whether the characters are Chinese, Japanese, or Korean (part of some Unicode range, perhaps)? I would change search methods depending on whether MySQL's fulltext search would work (it won't work for CJK characters). Thanks!
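
    A sketch of the range-based approach, shown in Python rather than PHP (PHP's preg_match can express the same character class with the /u modifier); the ranges below cover only the core CJK blocks, not every extension:

        import re

        # Hiragana, Katakana, CJK Ext A, CJK Unified Ideographs, Hangul syllables, CJK Compatibility.
        CJK_RE = re.compile(r'[\u3040-\u30ff\u3400-\u4dbf\u4e00-\u9fff\uac00-\ud7af\uf900-\ufaff]')

        def contains_cjk(text: str) -> bool:
            return CJK_RE.search(text) is not None

        print(contains_cjk('hello world'))  # False
        print(contains_cjk('日本語'))        # True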

    Read the article

  • Rails 2.3.5, Ruby 1.9, SQLite 3 incompatible character encodings: UTF-8 and ASCII-8BIT

    - by Daniil Harik
    Hello, I know that a question with the same title was asked almost 6 months ago. I have Googled for this problem and I have not found any working solution. Have there been any fixes for this very critical problem? I need to get my website running ASAP. Just to get the site up and running, I'm even ready to add UTF-8 conversion methods to all my variables, or to risk upgrading to Rails 3 beta. Thank you in advance!

    Read the article

  • UTF-8 character encoding in Java

    - by user332523
    Hello, I am having some problems getting some French text to convert to UTF-8 so that it can be displayed properly, whether in a console, a text file or a GUI element. The original string is HANDICAP+ES, which is supposed to be HANDICAPÉES. No matter how I try converting it, it appears the same way. Any ideas on how I can do this conversion? Thanks, Cam

    Read the article

  • Python UnicodeDecodeError with Suds

    - by PylonsN00b
    OK, so I have

        # -*- coding: utf-8 -*-

    at the top of my script, and it worked for pulling data from the database that had funny chars (Ñ, Õ, é, —, –, ’, …) in it and storing that data into variables. But I have run into other problems. I pull my data, organize it, and then dump it into variables like so:

        title = product[1]

    where product[1] comes from my database result set. Then I load it up for Suds like so:

        array_of_inventory_item_submit = ca_client_inventory.factory.create('ArrayOfInventoryItemSubmit')
        for product in products:
            inventory_item_submit = ca_client_inventory.factory.create('InventoryItemSubmit')
            inventory_item_list = get_item_list(product)
            inventory_item_submit = [inventory_item_list]
            array_of_inventory_item_submit.InventoryItemSubmit.append(inventory_item_submit)
        # Call that service baby!
        ca_client_inventory.service.SynchInventoryItemList(accountID, array_of_inventory_item_submit)

    where get_item_list assigns product[1] to title and (among a whole bunch of other nodes) sets:

        inventory_item_submit.Title = title

    Everything runs fine until I call ca_client_inventory.service.SynchInventoryItemList, which receives array_of_inventory_item_submit containing the title with the funky char. Here is the error:

        Traceback (most recent call last):
          File "upload_all_inventory_ebay.py", line 421, in <module>
            ca_client_inventory.service.SynchInventoryItemList(accountID, array_of_inventory_item_submit)
          File "build/bdist.macosx-10.6-i386/egg/suds/client.py", line 539, in __call__
          File "build/bdist.macosx-10.6-i386/egg/suds/client.py", line 592, in invoke
          File "build/bdist.macosx-10.6-i386/egg/suds/bindings/binding.py", line 118, in get_message
          File "build/bdist.macosx-10.6-i386/egg/suds/bindings/document.py", line 63, in bodycontent
          File "build/bdist.macosx-10.6-i386/egg/suds/bindings/document.py", line 105, in mkparam
          File "build/bdist.macosx-10.6-i386/egg/suds/bindings/binding.py", line 260, in mkparam
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 62, in process
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 243, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 298, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 298, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 243, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 198, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/sax/element.py", line 251, in setText
          File "build/bdist.macosx-10.6-i386/egg/suds/sax/text.py", line 43, in __new__
        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 116: ordinal not in range(128)

    Now what? My guess is that my script can take in these funky chars because I have # -*- coding: utf-8 -*- at the top, but Suds does NOT have that at the top of its files. Do I really want to go and change the Suds files? We all know this is the least desirable, last-resort solution. What can I do?

    Read the article

  • Code for decoding/encoding a modified base64 URL

    - by Kirk Liemohn
    I want to base64 encode data to put it in a URL and then decode it within my HttpHandler. I have found that base64 encoding allows a '/' character, which will mess up my UriTemplate matching. Then I found that there is a concept of a "modified Base64 for URL", from Wikipedia:

        A modified Base64 for URL variant exists, where no padding '=' will be used, and the '+' and '/' characters of standard Base64 are respectively replaced by '-' and '_', so that using URL encoders/decoders is no longer necessary and has no impact on the length of the encoded value, leaving the same encoded form intact for use in relational databases, web forms, and object identifiers in general.

    Using .NET, I want to modify my current code from doing basic base64 encoding and decoding to using the "modified base64 for URL" method. Has anyone done this? To decode, I know it starts out with something like:

        string base64EncodedText = base64UrlEncodedText.Replace('-', '+').Replace('_', '/');
        // Append '=' char(s) if necessary - how best to do this?
        // My normal base64 decoding now uses base64EncodedText

    But I need to potentially add one or two '=' chars to the end, which looks a little more complex. My encoding logic should be a little simpler:

        // Perform normal base64 encoding
        byte[] encodedBytes = Encoding.UTF8.GetBytes(unencodedText);
        string base64EncodedText = Convert.ToBase64String(encodedBytes);

        // Apply URL variant
        string base64UrlEncodedText = base64EncodedText.Replace("=", String.Empty).Replace('+', '-').Replace('/', '_');

    I have seen the Guid to Base64 for URL Stack Overflow entry, but that has a known length, and therefore they can hardcode the number of equal signs needed at the end.
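
    The padding arithmetic is mechanical: valid base64 always has a length that is a multiple of 4, so append '=' until it is (at most two of them). A sketch of the full round trip in Python for reference; the same length calculation applies in C# before calling Convert.FromBase64String:

        import base64

        def b64url_encode(data: bytes) -> str:
            text = base64.b64encode(data).decode('ascii')
            return text.rstrip('=').replace('+', '-').replace('/', '_')

        def b64url_decode(text: str) -> bytes:
            text = text.replace('-', '+').replace('_', '/')
            text += '=' * (-len(text) % 4)   # pad length up to the next multiple of 4
            return base64.b64decode(text)

        assert b64url_decode(b64url_encode(b'any/data+here')) == b'any/data+here'

    (Python's standard library also ships this variant as base64.urlsafe_b64encode/urlsafe_b64decode, minus the padding removal.)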

    Read the article

  • cannot output a json encoded dict containing accents (noob inside)

    - by user296546
    Hi all, here is a fairly simple example which has been driving me nuts for a couple of days. Consider the following script:

        # -*- coding: utf-8 -*-
        from json import dumps as json_dumps

        machaine = u"une personne émérite"
        print(machaine)
        output = {}
        output[1] = machaine
        jsonoutput = json_dumps(output)
        print(jsonoutput)

    The result of this from the CLI:

        une personne émérite
        {"1": "une personne \u00e9m\u00e9rite"}

    I don't understand why there is such a difference between the two strings. I have been trying all sorts of encode, decode, etc., but I can't seem to find the right way to do it. Does anybody have an idea? Thanks in advance. Matthieu
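
    A sketch of the usual explanation and fix: json.dumps escapes non-ASCII by default (ensure_ascii=True), which yields equivalent, pure-ASCII JSON; any conforming parser turns \u00e9 back into é. To keep the accents in the serialized text itself, pass ensure_ascii=False:

        # -*- coding: utf-8 -*-
        from json import dumps as json_dumps

        machaine = u"une personne émérite"
        print(json_dumps({1: machaine}))                      # {"1": "une personne \u00e9m\u00e9rite"}
        print(json_dumps({1: machaine}, ensure_ascii=False))  # {"1": "une personne émérite"}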

    Read the article

  • Why does this base64 function stop working when the max length is increased?

    - by flyout
    I am using this class to encode/decode text to base64. It works fine with MAX_LEN up to 512, but if I increase it to 1024 the decode function returns an empty value. This is the function:

        char* Base64::encode(char *src) {
            char* ptr = dst + 0;
            unsigned triad;
            unsigned int d_len = MAX_LEN;
            memset(dst, '\0', MAX_LEN);
            unsigned s_len = strlen(src);
            for (triad = 0; triad < s_len; triad += 3) {
                unsigned long int sr = 0;
                unsigned byte;
                for (byte = 0; (byte < 3) && (triad + byte < s_len); ++byte) {
                    sr <<= 8;
                    sr |= (*(src + triad + byte) & 0xff);
                }
                sr <<= (6 - ((8 * byte) % 6)) % 6;  // shift left to next 6-bit alignment
                if (d_len < 4) return NULL;         // error - dest too short
                *(ptr+0) = *(ptr+1) = *(ptr+2) = *(ptr+3) = '=';
                switch (byte) {
                case 3: *(ptr+3) = base64[sr & 0x3f]; sr >>= 6;
                case 2: *(ptr+2) = base64[sr & 0x3f]; sr >>= 6;
                case 1: *(ptr+1) = base64[sr & 0x3f]; sr >>= 6;
                        *(ptr+0) = base64[sr & 0x3f];
                }
                ptr += 4;
                d_len -= 4;
            }
            return dst;
        }

    What could be causing this?
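
    One thing worth checking (a guess, sketched in Python): base64 inflates data by a factor of 4/3, so a destination buffer sized to MAX_LEN cannot hold the encoding of an input that is itself close to MAX_LEN; the d_len < 4 guard above then returns NULL partway through:

        import math

        def b64_len(n: int) -> int:
            # base64 output size for n input bytes, including '=' padding
            return 4 * math.ceil(n / 3)

        print(b64_len(512))   # 684: a 512-byte input needs 684 output bytes (plus a NUL)
        print(b64_len(1024))  # 1368: a 1024-byte input overflows a 1024-byte destination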

    Read the article

  • Need help with PHP URL encoding/decoding

    - by Kenan
    On one page I'm "masking"/encoding a URL which is passed to another page; there I decode the URL and start delivering the file to the user. I found some functions for encoding/decoding URLs, but sometimes the encoded URL contains '+' or '/' and the decoded link is broken. I must use a "folder structure" for the link and cannot use a query string! Here is the encoding:

        $urll = 'SomeUrl.zip';
        $key = '123';
        $result = '';
        for ($i = 0; $i < strlen($urll); $i++) {
            $char = substr($urll, $i, 1);
            $keychar = substr($key, ($i % strlen($key)) - 1, 1);
            $char = chr(ord($char) + ord($keychar));
            $result .= $char;
        }
        $result = urlencode(base64_encode($result));
        echo '<a href="/user/download/'.$result.'/">PC</a>';

    Here is the decoding:

        $urll = 'segment_3'; // don't worry about this one, it's the CMS retrieving the 3rd "folder"
        $key = '123';
        $resultt = '';
        $string = '';
        $string = base64_decode(urldecode($urll));
        for ($i = 0; $i < strlen($string); $i++) {
            $char = substr($string, $i, 1);
            $keychar = substr($key, ($i % strlen($key)) - 1, 1);
            $char = chr(ord($char) - ord($keychar));
            $resultt .= $char;
        }
        echo '<br />DEC: '.$resultt;

    So how do I encode and decode the URL? Thanks

    Read the article

  • Unicode and URI encoding, decoding and escaping in JavaScript

    - by apphacker
    If you look at this table here, it has a list of escape sequences for Unicode characters that don't actually work for me. For example, for "%96", which should be a –, I get an error when trying to decode:

        decodeURIComponent("%96");
        URIError: URI malformed

    If I encode "–" I actually get:

        encodeURIComponent("–");
        "%E2%80%93"

    I searched the internet and saw this page, which mentions using escape and unescape with decodeURIComponent and encodeURIComponent respectively. This doesn't seem to help, because %96 doesn't show up as "–" no matter what I try, and this of course doesn't work either:

        decodeURIComponent(escape("%96"));
        "%96"

    Not very helpful. How can I get "%96" to be a "–" with JavaScript (without hardcoding a map for every single possible Unicode character I may run into)?
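
    For context, a sketch in Python of what is going on: %96 is the single byte 0x96, which is the en dash in the legacy windows-1252 encoding, but a lone 0x96 is not valid UTF-8, and decodeURIComponent expects UTF-8 percent-encoded bytes, hence the URIError. Tables that list %96 for – are describing windows-1252, not URI encoding:

        from urllib.parse import quote, unquote

        print(bytes([0x96]).decode('cp1252'))  # '–': 0x96 is the en dash in windows-1252
        print(quote('–'))                      # '%E2%80%93': its UTF-8 percent-encoding
        unquote('%96', errors='strict')        # UnicodeDecodeError: a lone 0x96 is not valid UTF-8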

    Read the article

  • How is this website fixing the encoding?

    - by Tal Galili
    Hi all, I am trying to turn this text:

        ×וויר. העתיד של רשתות חברתיות והתקשורת ×©×œ× ×•

    into this text:

        ?????. ????? ?? ????? ??????? ???????? ????

    Somehow, the website http://www.pixiesoft.com/flip/ can do it, and I would like to know how I might be able to do it myself (with whatever programming language or software). Just saving the file as UTF-8 won't do it. My motivation for this question is that I have a friend's exported XML file with the garbled text, which I want to turn into a corrected Hebrew text file. The XML export was originally garbled by MySQL imports and exports, but I don't have the information needed to fix it or trace the problem back. Thanks.
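
    A repair sketch in Python, assuming the classic failure mode behind text like this: UTF-8 bytes were decoded as latin1/windows-1252 somewhere in the MySQL round trip. Re-encoding with the wrong codec recovers the original bytes, which can then be decoded as UTF-8; this works only where no bytes were already flattened to '?':

        def fix_mojibake(garbled):
            # Reverse a utf-8 -> windows-1252 misdecode; characters already lost stay lost.
            return garbled.encode('cp1252', errors='replace').decode('utf-8', errors='replace')

        print(fix_mojibake(u'×©×œ× ×•'))  # recovers the Hebrew letters wherever byte pairs survived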

    Read the article

  • .Net using Chr() to parse text

    - by Marcx
    I'm building a simple client-server chat system. The clients send data to the server, and the server resends the data to all the other clients. I'm using the TcpListener and NetworkStream classes to send the data between the client and the server. The fields I need to send are, for example: name, text, timestamp, etc. I separate them using ASCII character 29 (group separator), and I use ASCII character 30 (record separator) to mark the end of the streamed data. The data is encoded with UTF-8. Is this a good approach? Will I run into problems? Are there better methods?

    UPDATE: My question was probably misunderstood, so let me explain it better. Suppose you have a list of fields to send from client to server, all in a single stream. How do you send them?

    - Using markup
    - Using a delimiter character (my current approach)
    - Using a fixed length for every field (a length-prefix sketch follows below)
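
    A sketch of the fixed-length-header option in Python: length-prefixed framing, where each field travels as a 4-byte big-endian byte count followed by its UTF-8 bytes, so no character ever has to be reserved as a delimiter:

        import struct

        def pack_fields(fields):
            out = bytearray()
            for f in fields:
                data = f.encode('utf-8')
                out += struct.pack('>I', len(data))  # 4-byte big-endian length prefix
                out += data
            return bytes(out)

        def unpack_fields(buf):
            fields, pos = [], 0
            while pos < len(buf):
                (n,) = struct.unpack_from('>I', buf, pos)
                pos += 4
                fields.append(buf[pos:pos + n].decode('utf-8'))
                pos += n
            return fields

        assert unpack_fields(pack_fields(['name', 'text', 'timestamp'])) == ['name', 'text', 'timestamp']

    Delimiters in the ASCII 28-31 range do work, but they require rejecting or escaping those bytes if they ever appear in user input; a length prefix sidesteps that entirely.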

    Read the article

  • Can MySQL automatically specify `_utf8` for inserts to UTF-8 columns?

    - by Neil
    I have a table like this, where one column is latin1, the other is UTF-8:

        CREATE TABLE `names` (
          `name_english` varchar(255) NOT NULL,
          `name_chinese` varchar(255) CHARACTER SET utf8 DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    When I do an insert, I have to type _utf8 before values being inserted into UTF-8 columns:

        INSERT INTO names SET name_english = 'hooey', name_chinese = _utf8 '??';

    However, since MySQL should know that name_chinese is a UTF-8 column, it should be able to apply _utf8 automatically. Is there any way to tell MySQL to use _utf8 automatically, so that when I'm programmatically making prepared statements, I don't have to worry about including it with the right parameters?
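
    For what it's worth, a sketch of the usual arrangement: the _utf8 introducer is only needed when the connection character set differs from the bytes in the literal. If the client connection itself is set to utf8 (SET NAMES utf8, or the driver's charset option), literals arrive as UTF-8 and MySQL converts per column on its own. Shown here with Python's MySQLdb driver and placeholder credentials:

        import MySQLdb  # assumption: the MySQL-python / mysqlclient driver

        conn = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='test',
                               charset='utf8', use_unicode=True)  # equivalent to SET NAMES utf8
        cur = conn.cursor()
        cur.execute("INSERT INTO names SET name_english = %s, name_chinese = %s",
                    ('hooey', u'\u4f60\u597d'))  # example value; no _utf8 introducer needed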

    Read the article

  • Storing Tamil values in a database

    - by rekhasathvika
    I have stored Tamil content as something like &agrave.......... But some of the content is stored as #2220....... This creates a problem while retrieving, when I try to decode it back to the original Tamil content. How do I convert the values from #2220........ to &agrave.......?
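
    If those fragments are HTML entity references (a guess from their shape: &agrave; is a named entity, &#....; a decimal numeric one), Python's html module decodes both forms back to Unicode; the Tamil code point below is only an example:

        import html

        print(html.unescape('&agrave;'))  # 'à'  (named entity)
        print(html.unescape('&#2980;'))   # 'த'  (numeric entity: U+0BA4, TAMIL LETTER TA)

    Storing the decoded UTF-8 text directly, in a utf8 column, avoids the mixed-representation problem in the first place.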

    Read the article

  • Javascript replace query string + with a space

    - by BruceClark
    I'm grabbing the query string parameters and trying to do this:

        var hello = unescape(helloQueryString);

    and it returns:

        this+is+the+string

    instead of:

        this is the string

    It works great if %20s are in there, but not with +'s. Is there any way to decode these properly so the + signs become spaces? Thanks.
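
    For reference, the + convention comes from application/x-www-form-urlencoded, which plain percent-decoders deliberately ignore; a sketch of the two behaviors in Python (in JavaScript the usual fix is decodeURIComponent(s.replace(/\+/g, ' '))):

        from urllib.parse import unquote, unquote_plus

        print(unquote('this+is+the+string'))       # 'this+is+the+string': + is left alone
        print(unquote_plus('this+is+the+string'))  # 'this is the string': form-style decoding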

    Read the article

  • Rails, MySQL charsets & encoding: binary

    - by Benjamin Vetter
    Hi, I have a Rails app that runs using UTF-8. It uses a MySQL database; all tables have MySQL's default charset and collation (i.e. latin1). Therefore the latin1 tables contain UTF-8 data. Sure, that's not nice, but I'm not really interested in that. Everything works fine, because the connection encoding is latin1 as well, and therefore MySQL does not convert between charsets. Only one problem: I need a UTF-8 fulltext index for one table:

        mysql> show create table autocompletephrases;
        ...
        AUTO_INCREMENT=310095 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

    But I don't want to convert between charsets in my Rails app. Therefore I would like to know if I could just set, in config/database.yml:

        production:
          adapter: mysql
          encoding: binary   # <-- this line
          ...

    which just calls SET NAMES 'binary' when connecting to MySQL. It looks like it works for my case, because I guess it forces MySQL not to convert between charsets (see the MySQL docs). Does anyone know about problems with doing this? Any side effects? Or do you have any other suggestions? I'd like to avoid converting my whole database to UTF-8. Many thanks! Benjamin

    Read the article

  • Problem with MySQL character set & GWT

    - by Ehsan Khodarahmi
    Hi, I have a SmartGWT application which interacts with a MySQL database using RPC services. Think of it as a simple form with a textbox and save & load buttons. The collation of my database, my tables, and all their fields is utf8_persian_ci. All Java source files and the module HTML & XML files have been saved with the UTF-8 character set, and I also have a meta tag in the module HTML file which contains my form:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    My application works correctly in Eclipse development mode and also on my local Tomcat server. But when I put it on the remote server (I compress it using jar.exe into a war file with the -cvf flag and then upload it using my server's Plesk control panel), loading data from a MySQL table (any record from any table) works with no problem, but when I want to save some data (in Persian), MySQL just writes question marks (?) into the corresponding table fields. Any ideas?

    Read the article

  • Android - Read PNG image without alpha and decode as ARGB_8888 Android

    - by loki666
    I try to read an image from the SD card (in the emulator) and then create a Bitmap image with the BitmapFactory.decodeByteArray method. I set the options:

        options.inPreferredConfig = Bitmap.Config.ARGB_8888;
        options.inDither = false;

    Then I extract the pixels into a ByteBuffer:

        ByteBuffer buffer = ByteBuffer.allocateDirect(width * height * 4);
        bitmap.copyPixelsToBuffer(buffer);

    I then use this ByteBuffer in the JNI to convert it into RGB format and want to calculate on it. But I always get wrong data -- and I am testing without modifying the ByteBuffer at all. The only thing I do is pass it into the native method in the JNI, cast it to an unsigned char*, convert it back into a ByteBuffer and return it to Java. Before displaying the image, I read the data back with:

        bitmap.copyPixelsFromBuffer(buffer);

    But then it has wrong data in it. My question is whether this happens because the image is internally converted to RGB 565, or is something else wrong here? If anybody has an idea, it would be great!

    Read the article
