Search Results

Search found 7190 results on 288 pages for 'character codes'.

  • What are Windows code pages?

    - by Mike D
    I'm trying to gain a basic understanding of what is meant by a Windows code page. I kind of get the feeling it's a translation between a given 8-bit value and some 'abstraction' for a given character graphic. I made the following experiment. I created a "üü" character literal with two versions of the letter u with an umlaut: one created using the ALT 129 (uses code page 437) value and one using the ALT 0252 (uses code page 1252) value. When I examined the literal, both characters had the value 252. Is 252 the universal 8-bit abstraction for u with an umlaut? Is it the Unicode value? Aside from keyboard input, are there any library routines or system calls that use code pages? For example, is there a function to translate a string using a given code table (as above for the ALT 129 value)?
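
    As a side note, a minimal Python sketch (purely illustrative, outside the Windows API) of how the two ALT codes land on the same character: code page 437 maps byte 0x81 to ü and code page 1252 maps byte 0xFC to ü, and both decode to the same Unicode code point.

        # Decode the single byte each ALT code produces under its code page.
        b_437 = bytes([0x81])    # ALT 129 under code page 437
        b_1252 = bytes([0xFC])   # ALT 0252 under code page 1252

        print(b_437.decode("cp437"), b_1252.decode("cp1252"))   # ü ü
        print(ord(b_437.decode("cp437")))                       # 252 (U+00FC)
        print(ord(b_1252.decode("cp1252")))                     # 252 (U+00FC)

    So 252 is the Unicode (and Latin-1/cp1252) code point for ü rather than a universal 8-bit value; on Windows, the MultiByteToWideChar and WideCharToMultiByte functions perform this kind of code-page translation.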

    Read the article

  • java regex illegal escape character error not occurring from command line arguments

    - by Shades88
    This simple regex program import java.util.regex.*; class Regex { public static void main(String [] args) { System.out.println(args[0]); /* #1 */ Pattern p = Pattern.compile(args[0]); /* #2 */ Matcher m = p.matcher(args[1]); boolean b = false; while(b = m.find()) { System.out.println(m.start()+" "+m.group()); } } } invoked by java regex "\d" "sfdd1" compiles and runs fine. But if #1 is replaced by Pattern p = Pattern.compile("\d");, it gives a compiler error saying illegal escape character. In #1 I also tried printing the pattern specified in the command-line arguments. It prints \d, which means the pattern passed to #2 is just \d. So then why won't it throw any exception? In the end it is a string argument that Pattern.compile() is taking, so doesn't it detect the illegal escape character then? Can someone please explain why this behaviour occurs?
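
    In a Java string literal \d is not a recognised escape sequence, so javac rejects Pattern.compile("\d") at compile time, whereas the command-line argument arrives at run time as the two plain characters \ and d, which the regex engine is happy to interpret. A small Python sketch of the same literal-versus-runtime-string distinction, for illustration only:

        import re

        # In source you escape the backslash; a raw literal spells the same two characters.
        pattern_text = "\\d"
        assert pattern_text == r"\d" and len(pattern_text) == 2

        # The regex engine, not the string literal, interprets the \d escape.
        m = re.search(pattern_text, "sfdd1")
        print(m.start(), m.group())   # 4 1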

    Read the article

  • how to read character behind some text

    - by klox
    This is the code for reading the two characters at the end of the text "KD-R411ED": var code = data[0].substr(data[0].length - 2); How can I read the ED characters if the text is like KD-R411H2EDT? I want new code that can be combined with the code above. Please help! Look at this: $("#tags1").change(function() { var barCode = $("#tags1").val(); var data = barCode.split(" "); $("#tags1").val(data[0]); $("#tags2").val(data[1]); var code = data[0].substr(data[0].length - 2); /* suggested by Jan Willem B */ if (code == 'UD') { $('#check1').attr('checked','checked'); } else { if (code == 'ED') { $('#check2').attr('checked','checked'); } } });
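
    A hedged sketch of one way to do this, in Python rather than jQuery, assuming the destination code is always UD or ED and may be followed by a single-letter suffix as in KD-R411H2EDT: match the code with a small regular expression instead of always taking the last two characters.

        import re

        def destination_code(model):
            """Return 'UD' or 'ED' from the model number, or None if absent."""
            m = re.search(r"(UD|ED)[A-Z]?$", model)
            return m.group(1) if m else None

        print(destination_code("KD-R411ED"))     # ED
        print(destination_code("KD-R411H2EDT"))  # ED

    The same pattern could be used in JavaScript with String.match to feed the existing if/else checks.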

    Read the article

  • Handling newline character in input between Windows and Linux

    - by Fazal
    I think this is a standard problem which may have been asked before, but I could not find the exact answer, so I am posting the issue. Our server is running on a Linux box. We access the server through a browser on a Windows box to enter data into a field which is supposed to contain multiple lines, which the user enters by pressing the enter key after each line: Abc Def GHI When this input field (a text area) is read on the Linux machine, we want to split the data on the newline character. I have three questions on this. Does the incoming data contain "\r\n" or "\n"? If the incoming data does contain "\r\n", the Linux line.separator property (a VM property) would not work for me, as it would be "\n" and may therefore leave "\r" in the data. If "\r" is left in the data and I open the file on a Windows machine, will it count as a newline character? Finally, can anyone tell me the standard way to deal with this issue?
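
    For what it's worth, browsers normally submit textarea content with "\r\n" line breaks, so splitting only on the platform line separator can indeed leave stray "\r" characters. A minimal Python sketch of normalising before splitting (illustrative only; in Java the equivalent is splitting on the regex \r?\n):

        raw = "Abc\r\nDef\r\nGHI"      # what a Windows browser typically submits

        # splitlines() treats \r\n, \n and \r uniformly, so no stray \r survives
        print(raw.splitlines())                          # ['Abc', 'Def', 'GHI']

        # the same thing with an explicit normalisation step
        print(raw.replace("\r\n", "\n").split("\n"))     # ['Abc', 'Def', 'GHI']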

    Read the article

  • Mysql SET NAMES UTF8 - how to get rid of?

    - by Nir
    In a very busy PHP script we have a call at the beginning to "SET NAMES utf8", which sets the character set in which MySQL should interpret and send data back from the server to the client. http://dev.mysql.com/doc/refman/5.0/en/charset-applications.html I want to get rid of it, so I set default-character-set=utf8 in our server ini file (see link above). The setting seems to be working, since the relevant server parameters are: 'character_set_client', 'utf8' 'character_set_connection', 'utf8' 'character_set_database', 'latin1' 'character_set_filesystem', 'binary' 'character_set_results', 'utf8' 'character_set_server', 'latin1' 'character_set_system', 'utf8' But after this change and commenting out the SET NAMES utf8 call, the data starts to come out garbled. Please advise.
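
    Note that character_set_server and character_set_database are still latin1 above, so the stored column data may not be utf8 even when the connection is. If it helps to see what a fresh client connection actually negotiates, here is a small diagnostic sketch in Python (assuming the PyMySQL driver and placeholder credentials; it is not the PHP fix itself):

        import pymysql

        conn = pymysql.connect(host="localhost", user="app", password="secret",
                               db="mydb", charset="utf8")

        with conn.cursor() as cur:
            cur.execute("SHOW VARIABLES LIKE 'character_set%'")
            for name, value in cur.fetchall():
                print(name, "=", value)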

    Read the article

  • Load JSON in Python using the header character set

    - by mridang
    Hi everyone, I've always found character sets and encodings complicated to understand, and here I'm faced with another problem. My apologies for any inaccuracies; I'll do my best. I'm requesting data from a server which returns JSON. In the HTTP headers it also returns the character set, like so: Content-Type: text/html; charset=UTF-8 I'm using the JSON library in Python to load the JSON using the json.loads method. When I pass it the returned JSON, it gives me a dictionary in Unicode. I've Googled around and I know that JSON should return Unicode, as JavaScript strings are Unicode objects. How can I load the JSON as UTF-8? I would like to use the same encoding as specified in the response header. I've read this post but it didn't help. Thank you.
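
    A hedged Python sketch of the usual approach: decode the response body with the charset declared in the header, let json.loads give you Unicode text, and encode back to UTF-8 only at the points where raw bytes are actually required (the response body below is made up):

        import json

        body = b'{"name": "Ti\\u00ebsto"}'      # hypothetical HTTP response body
        charset = "UTF-8"                        # taken from the Content-Type header

        data = json.loads(body.decode(charset))  # parsed string values are Unicode text
        name = data["name"]                      # 'Tiësto'

        print(name.encode("utf-8"))              # b'Ti\xc3\xabsto' where bytes are needed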

    Read the article

  • [C++] instantiating bitset using hex character.

    - by bndz
    Hey, I'm trying to figure out how to instantiate a 4-bit bitset based on a hex character. For instance, if I have a character with value 'F', I want to create a bitset of size 4 initialized to 1111, or if it is 'A', I want to initialize it to 1010. I could use a bunch of if statements like so: void fn(char c) { bitset<4> temp; if(c == 'F') temp.set(); /* ... */ if(c == '9') { temp.set(0); temp.set(3); } /* ... */ } This isn't efficient; is there a way of easily converting the character to a decimal integer and constructing the bitset using the last 4 bits of the int? Thanks for any help.
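
    The usual trick is to parse the hex digit as an integer and feed that to the bitset constructor (in C++, roughly bitset<4>(std::stoul(std::string(1, c), nullptr, 16))). A Python sketch of the same conversion, for illustration:

        def hex_to_bits(c):
            """Return the 4-bit pattern for one hex digit, e.g. 'F' -> '1111'."""
            value = int(c, 16)             # parse the digit as base 16
            return format(value, "04b")    # render as 4 binary digits, zero-padded

        print(hex_to_bits("F"))  # 1111
        print(hex_to_bits("A"))  # 1010
        print(hex_to_bits("9"))  # 1001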

    Read the article

  • Removing the Default Wrap Character From all records

    - by aceinthehole
    I am using BizTalk 2009 and I have a flat file that is similar to the following: "0162892172","TIM ","LastName ","760 "," ","COMANCHE ","LN " "0143248282","GEORGE ","LastName ","625 "," ","ENID ","AVE " When I parse it and start mapping it, I need to get rid of the quotation marks. I have marked the Wrap Character attribute for the schema as a quotation mark but it doesn't remove it when BizTalk is parsing the file. Is there an easy way to specify the removal of a wrap character or am I going to have to run it through a script functoid every time? Also I would like to be able to remove the trailing spaces as well, if at all possible.
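
    Whether the BizTalk schema can do this on its own is a separate question, but as a point of comparison, outside the pipeline the same cleanup is a one-liner per record; a Python sketch (illustrative only) that honours the quote character and then trims the padded fields:

        import csv, io

        raw = '"0162892172","TIM ","LastName ","760 "," ","COMANCHE ","LN "\n'

        for record in csv.reader(io.StringIO(raw), quotechar='"'):
            fields = [f.rstrip() for f in record]    # drop the trailing pad spaces
            print(fields)   # ['0162892172', 'TIM', 'LastName', '760', '', 'COMANCHE', 'LN']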

    Read the article

  • Accented character replacement for search then reinserted afterwards

    - by user314573
    Basically my issue is that users would like to search for a French word that has accented characters but without typing in the accented characters, and then have the actual accented word appear highlighted if found... So for example they would type in "declare" but in the result sets it would look like "déclare", and if found "déclare" would be highlighted. My first thought was to just replace the characters with a regex but then I remembered that I would need to re-insert the replaced characters after the search... I was then thinking of using some sort of character map that would track position and the character so that when the search was finished I could put the result set back to the way it was. This seems a little brute force to me and I was wondering if anyone had a better alternative? I'm using Visual Studio 2005 with this app. Any advice would be much appreciated! Thanks
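
    One common approach, sketched here in Python rather than in the Visual Studio 2005 stack and assuming the accented characters are precomposed (so each folds to exactly one base letter), is to search a folded copy of the text and use the match positions to highlight the original string, which removes the need to re-insert anything afterwards:

        import unicodedata

        def fold(s):
            """Strip accents: 'déclare' -> 'declare' (same length for precomposed input)."""
            decomposed = unicodedata.normalize("NFD", s)
            return "".join(c for c in decomposed if not unicodedata.combining(c))

        def highlight(text, query):
            folded = fold(text).lower()
            start = folded.find(fold(query).lower())
            if start == -1:
                return text
            end = start + len(query)
            # positions in the folded copy line up with the original (see assumption above)
            return text[:start] + "[" + text[start:end] + "]" + text[end:]

        print(highlight("il déclare que...", "declare"))   # il [déclare] que...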

    Read the article

  • Custom iPad keyboard backspace method last character crash

    - by isaaclimdc
    I'm making a custom UI iPad keyboard app, and I'm doing something similar to this post for the backspace method: custom backspace button crashes iPhone app However, the app will always crash when I "backspace" for the last character in the UITextView. I know -substringToIndex won't work for an empty string, but I tried using a temporary mutable string then using -deleteCharactersInRange, and that crashed it too. I'm guessing the crash is due to me manually setting the -selectedRange property for the text view after deleting a character? But even if I do: textView.selectedRange = NSMakeRange(0, 0); the app will crash. Any ideas?

    Read the article

  • Incorrect string encodings

    - by James
    Note: I have read all of the related PHP, UTF-8, character encoding articles that are usually suggested, but my question relates to data inserted before I applied such techniques. I am wishing to retrospectively fix all character encoding problems. Now all connections are set as utf8 using PDO. PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8' Unfortunately, a large amount of data was inserted that is of questionable encoding before I had implemented correct character encoding practices. As displayed by: $sql = "SELECT name FROM data LIMIT 3"; foreach ($pdo->query($sql) as $row) { $name = $row['name']; echo $name . "\n"; echo utf8_encode($name) . "\n"; echo utf8_decode($name) . "\n"; echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8') . "\n"; echo htmlspecialchars(utf8_encode($name), ENT_QUOTES, 'UTF-8') . "\n"; echo htmlspecialchars(utf8_decode($name), ENT_QUOTES, 'UTF-8') . "\n"; echo '<hr/>'; } Which produces: Antonín Dvořák AntonÃÆín DvoÃâ¦Ãâ¢ÃÆák Anton??­n Dvo??????¡k Antonín Dvořák AntonÃÆín DvoÃâ¦Ãâ¢ÃÆák ---------- Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶ ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶ ????? ?????????? Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶ ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶ ---------- Tiësto Tiësto Tiësto Tiësto Tiësto Tiësto ---------- When removing 'SET NAMES utf8' with PDO it produces the data: Antonín DvoÅák Antonín DvoÃÂák Antonín Dvorák Antonín DvoÅák Antonín DvoÃÂák Antonín Dvorák ---------- ???? ????????? Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶ ???? ????????? ???? ????????? Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶ ???? ????????? ---------- Tiësto Tiësto Ti?sto Tiësto Tiësto ---------- And here is a dump of the database rows concerned: DROP TABLE IF EXISTS `data`; CREATE TABLE IF NOT EXISTS `data` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(80) NOT NULL, PRIMARY KEY (`id`), KEY `name` (`name`(10)), ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=0; INSERT INTO `data` (`id`, `name`) VALUES (0, 'Antonín Dvořák'), (1, 'Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶'), (2, 'Tiësto'); The 3rd and 6th lines of the 3rd row "Tiësto" are then correctly echoed. I'm just unsure what is the best way to correct encodings/detect the encodings of bad strings and correct, etc.
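
    The first and third samples look like the classic double-encoding pattern: UTF-8 bytes that were decoded as a single-byte code page (Latin-1 or cp1252) and re-encoded as UTF-8. If that is what happened, the usual repair is to reverse the round trip, best rehearsed on a copy of the table first. A Python sketch of the idea, using Latin-1 for the simulation:

        original = "Antonín Dvořák"

        # Simulate the damage: UTF-8 bytes read back as if they were Latin-1 text.
        garbled = original.encode("utf-8").decode("latin-1")
        print(repr(garbled))      # 'AntonÃ\xadn DvoÅ\x99Ã¡k'

        # The repair reverses the same round trip.
        repaired = garbled.encode("latin-1").decode("utf-8")
        assert repaired == original

    The equivalent in SQL or PHP depends on exactly which conversion happened, so checking a few known rows (such as the Tiësto one, which already echoes correctly on some paths) before a bulk update seems prudent.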

    Read the article

  • In python writing from XML to CSV, encoding error

    - by user574435
    Hi, I am trying to convert an XML file to CSV, but the XML (encoded as "ISO-8859-1") apparently contains characters that are not in the ASCII codec which Python uses to write the rows. I get the error: Traceback (most recent call last): File "convert_folder_to_csv_PLAYER.py", line 139, in <module> xml2csv_PLAYER(filename) File "convert_folder_to_csv_PLAYER.py", line 121, in xml2csv_PLAYER fout.writerow(row) UnicodeEncodeError: 'ascii' codec can't encode character u'\xe1' in position 4: ordinal not in range(128) I have tried opening the file as follows: dom1 = parse(input_filename.encode( "utf-8" ) ) and I have tried replacing the \xe1 character in each row before it is written. Any suggestions?
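
    Since the traceback shows Python 2 (the u'\xe1' value), the usual workaround is to encode each unicode field to UTF-8 just before writerow, because the Python 2 csv module writes byte strings. A hedged sketch with made-up values; on Python 3 you would instead open the output file with an explicit encoding:

        import csv

        row = [u"Mart\u00edn", u"L\u00f3pez", u"M\u00e1laga"]   # hypothetical values from the XML

        with open("players.csv", "wb") as f:   # Python 2: csv expects a binary file
            writer = csv.writer(f)
            writer.writerow([field.encode("utf-8") for field in row])

        # Python 3 equivalent:
        # with open("players.csv", "w", newline="", encoding="utf-8") as f:
        #     csv.writer(f).writerow([u"Mart\u00edn", u"L\u00f3pez", u"M\u00e1laga"])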

    Read the article

  • Visual Studio 2005 - strange characters rendered for ANSI text

    - by Apogee
    Hi all, Has anyone seen this odd text rendering issue in VS2005 before? The first line of using statements actually says "using System;". If I copy the line as it is displayed and paste it into Notepad, the text appears correctly, so clearly the character codes are correct. In addition, the solution compiles and runs correctly. I was thinking it might be due to ClearCase using a different character encoding, as all the solutions we're using were freshly checked out yesterday onto a new build machine, but this is only happening in 2 of our ~30 solutions. Incidentally, the same .cs files render correctly when opened in VS2008 on this machine; could this be a corruption in VS2005?

    Read the article

  • Running python batch file that has a path with SPACE character

    - by prosseek
    The batch file is something like this; I put Python in a directory that has a SPACE character in its path. C:\"Documents and Settings"\Administrator\Desktop\bracket\python\python C:\\"Documents and Settings"\\Administrator\\Desktop\\bracket\\[10,20]\\brackettest.py When I run it, I get this error: C:\Documents and Settings\Administrator\Desktop\bracket\python\python: can't open file 'C:\Documents and Settings\\Administrator\\Desktop\\bracket\\[10,20]\\brackettest.py': [Errno 2] No such file or directory C:\Documents and Settings\Administrator\Desktop\bracket What might be wrong? Wrapping the path doesn't solve this problem. "C:\\Documents and Settings\\Administrator\\Desktop\\bracket\\[10,20]\\brackettest.py" Are the brackets ('[]') the cause of the problem? On Mac, Python works fine with bracket characters.
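
    For comparison, quoting each complete path exactly once (rather than only the "Documents and Settings" segment) is the usual batch-file fix; alternatively, launching from Python itself sidesteps the shell quoting entirely, since subprocess passes each list element as one argument and neither spaces nor brackets need escaping. A sketch with the paths from above (the python.exe name is assumed):

        import subprocess

        python_exe = r"C:\Documents and Settings\Administrator\Desktop\bracket\python\python.exe"
        script = r"C:\Documents and Settings\Administrator\Desktop\bracket\[10,20]\brackettest.py"

        # Each list element is delivered as a single argument, spaces and brackets included.
        subprocess.call([python_exe, script])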

    Read the article

  • Trouble converting string/character to byte in lisp

    - by WanderingPhd
    I have some data that I'm reading in using read-line and I want to convert it into a byte array. babel:string-to-octet works for the most part, except when the character/byte is larger (above 200), in which case it returns two numbers. As an example, if the character is ú, babel:string-to-octet returns (195 185) instead of 250, which is what I'm looking for. I tried a number of encodings in babel but none of them seem to work. If I use read-byte or read-sequence it does read in 250. But for reasons of backward compatibility I'm left with using read-line, and I would like to know if there is something I'm missing when using babel:string-to-octet to convert ú to 250. I'm using CCL 1.8, by the way.
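
    The pair of numbers is just the UTF-8 encoding of the character, while a single octet of 250 is what Latin-1 (ISO-8859-1) produces, so the encoding passed to babel is what matters here (it takes an :encoding argument, e.g. :latin-1, though the exact keyword is worth checking against its documentation). A Python sketch of the same distinction, for illustration only:

        print(list("ú".encode("utf-8")))     # [195, 186] -- two octets under UTF-8
        print(list("ú".encode("latin-1")))   # [250]      -- the single octet 250

    Incidentally, 195 186 is ú; the reported 195 185 decodes to ù (U+00F9), so the source character may actually be ù, but the principle is the same.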

    Read the article
