Search Results

Search found 1714 results on 69 pages for 'utf8 decode'.

Page 17/69 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • How do I avoid the "S to Skip" message on boot?

    - by Marty
    After upgrading my laptop from karmic to lucid, my FAT32 partition won't mount automatically. I get the message: "The disk drive for /osshare is not ready yet or not present. Continue to wait; or Press S to skip mounting or M for manual recovery." The funny thing is, if I skip, then /osshare/ is mounted once I log in. I have a similar setup on my desktop, and it works fine. /etc/fstab on desktop: UUID=4663-6853 /osshare vfat utf8,umask=007,gid=46 0 1 /etc/fstab on laptop: UUID=1234-5678 /osshare vfat utf8,auto,rw,user 0 0
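
    A common lucid-era workaround (hedged -- it relies on nobootwait, an option understood by Ubuntu's mountall rather than by mount itself) is to tell the boot process not to block on this partition at all:

        # /etc/fstab sketch for the laptop -- same UUID and options as above,
        # plus nobootwait so mountall stops prompting "S to skip" at boot
        UUID=1234-5678 /osshare vfat utf8,auto,rw,user,nobootwait 0 0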

    Read the article

  • Nautilus uses different permissions for mounted drives

    - by farhad0011
    I've written two bash scripts to give read-only or read/write access to my NTFS partition: read-only access: sudo umount /media/Data_Drive/ sudo mount -t ntfs-3g -o ro,user,auto,nls=utf8,umask=0000,uid=1000 /dev/sda2 /media/Data_Drive read/write access: sudo umount /media/Data_Drive/ sudo mount -t ntfs-3g -o rw,user,auto,nls=utf8,umask=0000,uid=1000 /dev/sda2 /media/Data_Drive This works perfectly if I only use the terminal to work with the files. It also works with Nautilus in read-only mode, but not in read/write mode. In fact, Nautilus gives me an error when I try to copy a file to Data_Drive, saying "The destination is read-only". Funnier still, when I look at the permissions (by right-clicking on Data_Drive and then Properties > Permissions), I have all the required permissions to write a file to Data_Drive! I am confused as to why Nautilus behaves so strangely; I'd appreciate it if anybody could solve the mystery!
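
    One thing worth trying (a sketch, not a verified fix) is to take the scripts out of the picture and let the system mount the partition from /etc/fstab, so that Nautilus/GVFS sees the same mount flags the shell does:

        # /etc/fstab sketch -- device, uid and umask taken from the scripts above
        /dev/sda2  /media/Data_Drive  ntfs-3g  rw,auto,user,nls=utf8,umask=0000,uid=1000  0  0

    After editing, sudo umount /media/Data_Drive followed by sudo mount -a remounts the partition with the new options.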

    Read the article

  • How to support 3-byte UTF-8 Characters in ANT

    - by efelton
    I am trying to support UTF-8 characters in my ANT script. As long as the character strings are made up of 2-byte UTF-8 characters, such as: Lògìñ Ùsèr ÌÐ then things work fine. When I use the Unicode Han character 我 which, according to this site: http://www.fileformat.info/info/unicode/char/6211/index.htm has the UTF-8 encoding 0xE6 0x88 0x91, I can see in UltraEdit that my input properties file has the values "E6 88 91" all in a row, so I'm fairly confident that my input is correct. And when I open the same file in Notepad++ I can see all the characters correctly. Here is my Build Script: <?xml version="1.0" encoding="UTF-8" ?> <project name="utf8test" default="all" basedir="."> <target name="all"> <loadproperties encoding="UTF-8" srcfile="./apps.properties.all.txt" /> <echo>No encoding ${common.app.name}</echo> <echo encoding="UTF-8">UTF-8 ${common.app.name}</echo> <echo encoding="UnicodeLittle">UnicodeLittle ${common.app.name}</echo> <echo encoding="UnicodeLittleUnmarked">UnicodeLittleUnmarked ${common.app.name}</echo> <echo>${common.app.ServerName}</echo> <echo>${bb.vendor}</echo> <echo>No encoding ${common.app.UserIdText}</echo> <echo encoding="UTF-8">UTF-8 ${common.app.UserIdText}</echo> <echo encoding="UnicodeLittle">UnicodeLittle ${common.app.UserIdText}</echo> <echo encoding="UnicodeLittleUnmarked">UnicodeLittleUnmarked ${common.app.UserIdText}</echo> <echoproperties /> </target> </project> And here is my properties file: common.app=VrvPsLTst common.app.name=?? common.app.description=Pseudo Loc Test App for Build Script testing common.app.ServerName=http://Vèrìvò.com bb.vendor=Vèrìvò common.app.PasswordText=Pàsswòrð bb.override.list=MP_COPYRIGHTTEXT, "Çòpÿrìght 2012 Vèrívó Bùîlð TéàM" common.app.LoginButtonText=Lògìñ common.app.UserIdText=Ùsèr ÌÐ bb.SMSSuccess=Mèssàgéß Sùççêssfúllÿ Sëñt common.app.LoginScreenMessage=WèlçòMé Mêssàgë common.app.LoginProgressMessage=Àùthèñtìçàtíòñ îñ prógréss... ios.RegistrationText=Règìstràtíòñ Téxt ios.RegistrationURL=http://www.josscrowcroft.com/2011/code/utf-8-multibyte-characters-in-url-parameters-%E2%9C%93/ Here is what the output looks like: Buildfile: C:\Temp\utf8\build.xml all: [echo] No encoding ?? [echo] UTF-8 ?? [echo] ÿþU n i c o d e L i t t l e ? ? [echo] U n i c o d e L i t t l e U n m a r k e d ? ? [echo] http://Vèrìvò.com [echo] Vèrìvò [echo] No encoding Ùsèr ÌÐ [echo] UTF-8 Ùsèr ÃŒÃ? [echo] ÿþU n i c o d e L i t t l e Ù s è r Ì Ð [echo] U n i c o d e L i t t l e U n m a r k e d Ù s è r Ì Ð [echoproperties] #Ant properties [echoproperties] #Mon Jun 18 15:25:13 EDT 2012 [echoproperties] ant.core.lib=C\:\\ant\\lib\\ant.jar [echoproperties] ant.file=C\:\\Temp\\utf8\\build.xml [echoproperties] ant.file.type=file [echoproperties] ant.file.type.utf8test=file [echoproperties] ant.file.utf8test=C\:\\Temp\\utf8\\build.xml [echoproperties] ant.home=c\:\\ant\\bin\\.. [echoproperties] ant.java.version=1.6 [echoproperties] ant.library.dir=C\:\\ant\\lib [echoproperties] ant.project.default-target=all [echoproperties] ant.project.invoked-targets=all [echoproperties] ant.project.name=utf8test [echoproperties] ant.version=Apache Ant version 1.8.1 compiled on April 30 2010 [echoproperties] awt.toolkit=sun.awt.windows.WToolkit [echoproperties] basedir=C\:\\Temp\\utf8 [echoproperties] bb.SMSSuccess=M\u00E8ss\u00E0g\u00E9\u00DF S\u00F9\u00E7\u00E7\u00EAssf\u00FAll\u00FF S\u00EB\u00F1t [echoproperties] bb.override.list=MP_COPYRIGHTTEXT, "\u00C7\u00F2p\u00FFr\u00ECght 2012 V\u00E8r\u00EDv\u00F3 B\u00F9\u00EEl\u00F0 T\u00E9\u00E0?" 
[echoproperties] bb.vendor=V\u00E8r\u00ECv\u00F2 [echoproperties] common.app=VrvPsLTst [echoproperties] common.app.LoginButtonText=L\u00F2g\u00EC\u00F1 [echoproperties] common.app.LoginProgressMessage=\u00C0\u00F9th\u00E8\u00F1t\u00EC\u00E7\u00E0t\u00ED\u00F2\u00F1 \u00EE\u00F1 pr\u00F3gr\u00E9ss... [echoproperties] common.app.LoginScreenMessage=W\u00E8l\u00E7\u00F2?\u00E9 M\u00EAss\u00E0g\u00EB [echoproperties] common.app.PasswordText=P\u00E0ssw\u00F2r\u00F0 [echoproperties] common.app.ServerName=http\://V\u00E8r\u00ECv\u00F2.com [echoproperties] common.app.UserIdText=\u00D9s\u00E8r \u00CC\u00D0 [echoproperties] common.app.description=Pseudo Loc Test App for Build Script testing [echoproperties] common.app.name=?? [echoproperties] file.encoding=Cp1252 [echoproperties] file.encoding.pkg=sun.io [echoproperties] file.separator=\\ [echoproperties] ios.RegistrationText=R\u00E8g\u00ECstr\u00E0t\u00ED\u00F2\u00F1 T\u00E9xt [echoproperties] ios.RegistrationURL=http\://www.josscrowcroft.com/2011/code/utf-8-multibyte-characters-in-url-parameters-%E2%9C%93/ [echoproperties] java.awt.graphicsenv=sun.awt.Win32GraphicsEnvironment [echoproperties] java.awt.printerjob=sun.awt.windows.WPrinterJob [echoproperties] java.class.path=c\:\\ant\\bin\\..\\lib\\ant-launcher.jar;C\:\\Temp\\utf8\\.\\;C\:\\Program Files (x86)\\Java\\jre7\\lib\\ext\\QTJava.zip;C\:\\ant\\lib\\ant-antlr.jar;C\:\\ant\\lib\\ant-apache-bcel.jar;C\:\\ant\\lib\\ant-apache-bsf.jar;C\:\\ant\\lib\\ant-apache-log4j.jar;C\:\\ant\\lib\\ant-apache-oro.jar;C\:\\ant\\lib\\ant-apache-regexp.jar;C\:\\ant\\lib\\ant-apache-resolver.jar;C\:\\ant\\lib\\ant-apache-xalan2.jar;C\:\\ant\\lib\\ant-commons-logging.jar;C\:\\ant\\lib\\ant-commons-net.jar;C\:\\ant\\lib\\ant-contrib-1.0b3.jar;C\:\\ant\\lib\\ant-jai.jar;C\:\\ant\\lib\\ant-javamail.jar;C\:\\ant\\lib\\ant-jdepend.jar;C\:\\ant\\lib\\ant-jmf.jar;C\:\\ant\\lib\\ant-jsch.jar;C\:\\ant\\lib\\ant-junit.jar;C\:\\ant\\lib\\ant-launcher.jar;C\:\\ant\\lib\\ant-netrexx.jar;C\:\\ant\\lib\\ant-nodeps.jar;C\:\\ant\\lib\\ant-starteam.jar;C\:\\ant\\lib\\ant-stylebook.jar;C\:\\ant\\lib\\ant-swing.jar;C\:\\ant\\lib\\ant-testutil.jar;C\:\\ant\\lib\\ant-trax.jar;C\:\\ant\\lib\\ant-weblogic.jar;C\:\\ant\\lib\\ant.jar;C\:\\ant\\lib\\bb-ant-tools.jar;C\:\\ant\\lib\\xercesImpl.jar;C\:\\ant\\lib\\xml-apis.jar;C\:\\Program Files\\Java\\jre7\\lib\\tools.jar [echoproperties] java.class.version=51.0 [echoproperties] java.endorsed.dirs=C\:\\Program Files\\Java\\jre7\\lib\\endorsed [echoproperties] java.ext.dirs=C\:\\Program Files\\Java\\jre7\\lib\\ext;C\:\\Windows\\Sun\\Java\\lib\\ext [echoproperties] java.home=C\:\\Program Files\\Java\\jre7 [echoproperties] java.io.tmpdir=C\:\\Users\\efelton\\AppData\\Local\\Temp\\ [echoproperties] java.library.path=C\:\\Windows\\SYSTEM32;C\:\\Windows\\Sun\\Java\\bin;C\:\\Windows\\system32;C\:\\Windows;C\:\\Windows\\SYSTEM32;C\:\\Windows;C\:\\Windows\\SYSTEM32\\WBEM;C\:\\Windows\\SYSTEM32\\WINDOWSPOWERSHELL\\V1.0\\;C\:\\PROGRAM FILES\\INTEL\\WIFI\\BIN\\;C\:\\PROGRAM FILES\\COMMON FILES\\INTEL\\WIRELESSCOMMON\\;C\:\\PROGRAM FILES (X86)\\MICROSOFT SQL SERVER\\100\\TOOLS\\BINN\\;C\:\\PROGRAM FILES\\MICROSOFT SQL SERVER\\100\\TOOLS\\BINN\\;C\:\\PROGRAM FILES\\MICROSOFT SQL SERVER\\100\\DTS\\BINN\\;C\:\\PROGRAM FILES (X86)\\MICROSOFT SQL SERVER\\100\\TOOLS\\BINN\\VSSHELL\\COMMON7\\IDE\\;C\:\\PROGRAM FILES (X86)\\MICROSOFT SQL SERVER\\100\\DTS\\BINN\\;C\:\\Program Files\\ThinkPad\\Bluetooth Software\\;C\:\\Program Files\\ThinkPad\\Bluetooth Software\\syswow64;C\:\\Program Files 
(x86)\\QuickTime\\QTSystem\\;C\:\\Program Files (x86)\\AccuRev\\bin;C\:\\Program Files\\Java\\jdk1.7.0_04\\bin;C\:\\Program Files (x86)\\IDM Computer Solutions\\UltraEdit\\;. [echoproperties] java.runtime.name=Java(TM) SE Runtime Environment [echoproperties] java.runtime.version=1.7.0_04-b22 [echoproperties] java.specification.name=Java Platform API Specification [echoproperties] java.specification.vendor=Oracle Corporation [echoproperties] java.specification.version=1.7 [echoproperties] java.vendor=Oracle Corporation [echoproperties] java.vendor.url=http\://java.oracle.com/ [echoproperties] java.vendor.url.bug=http\://bugreport.sun.com/bugreport/ [echoproperties] java.version=1.7.0_04 [echoproperties] java.vm.info=mixed mode [echoproperties] java.vm.name=Java HotSpot(TM) 64-Bit Server VM [echoproperties] java.vm.specification.name=Java Virtual Machine Specification [echoproperties] java.vm.specification.vendor=Oracle Corporation [echoproperties] java.vm.specification.version=1.7 [echoproperties] java.vm.vendor=Oracle Corporation [echoproperties] java.vm.version=23.0-b21 [echoproperties] line.separator=\r\n [echoproperties] os.arch=amd64 [echoproperties] os.name=Windows 7 [echoproperties] os.version=6.1 [echoproperties] path.separator=; [echoproperties] sun.arch.data.model=64 [echoproperties] sun.boot.class.path=C\:\\Program Files\\Java\\jre7\\lib\\resources.jar;C\:\\Program Files\\Java\\jre7\\lib\\rt.jar;C\:\\Program Files\\Java\\jre7\\lib\\sunrsasign.jar;C\:\\Program Files\\Java\\jre7\\lib\\jsse.jar;C\:\\Program Files\\Java\\jre7\\lib\\jce.jar;C\:\\Program Files\\Java\\jre7\\lib\\charsets.jar;C\:\\Program Files\\Java\\jre7\\lib\\jfr.jar;C\:\\Program Files\\Java\\jre7\\classes [echoproperties] sun.boot.library.path=C\:\\Program Files\\Java\\jre7\\bin [echoproperties] sun.cpu.endian=little [echoproperties] sun.cpu.isalist=amd64 [echoproperties] sun.desktop=windows [echoproperties] sun.io.unicode.encoding=UnicodeLittle [echoproperties] sun.java.command=org.apache.tools.ant.launch.Launcher -cp .;C\:\\Program Files (x86)\\Java\\jre7\\lib\\ext\\QTJava.zip [echoproperties] sun.java.launcher=SUN_STANDARD [echoproperties] sun.jnu.encoding=Cp1252 [echoproperties] sun.management.compiler=HotSpot 64-Bit Tiered Compilers [echoproperties] sun.os.patch.level=Service Pack 1 [echoproperties] user.country=US [echoproperties] user.dir=C\:\\Temp\\utf8 [echoproperties] user.home=C\:\\Users\\efelton [echoproperties] user.language=en [echoproperties] user.name=efelton [echoproperties] user.script= [echoproperties] user.timezone= [echoproperties] user.variant= BUILD SUCCESSFUL Total time: 1 second Thank you for your help EDIT\UPDATE 6/19/2012 I am developing in a Windows environment. I have installed a TTF from: http://freedesktop.org/wiki/Software/CJKUnifonts/Download I have updated UltraEdit to use the TTF and I can see the Chinese characters. <?xml version="1.0" encoding="UTF-8" ?> <project name="utf8test" default="all" basedir="."> <target name="all"> <echo>??</echo> <echo encoding="ISO-8859-1">ISO-8859-1 ??</echo> <echo encoding="UTF-8">UTF-8 ??</echo> <echo file="echo_output.txt" append="true" >?? ${line.separator}</echo> <echo file="echo_output.txt" append="true" encoding="ISO-8859-1">ISO-8859-1 ?? ${line.separator}</echo> <echo file="echo_output.txt" append="true" encoding="UTF-8">UTF-8 ?? ${line.separator}</echo> <echo file="echo_output.txt" append="true" encoding="UnicodeLittle">UnicodeLittle ?? 
${line.separator}</echo> <echo file="echo_output.txt" append="true" encoding="UnicodeLittleUnmarked">UnicodeLittleUnmarked ?? ${line.separator}</echo> </target> </project> The output captured by running inside UltraEdit is: Buildfile: E:\temp\utf8\build.xml all: [echo] ?? [echo] ISO-8859-1 ?? [echo] UTF-8 ?? BUILD SUCCESSFUL Total time: 1 second And the echo_output.txt file shows up like this: ?? ISO-8859-1 ?? UTF-8 ?? ÿþU n i c o d e L i t t l e ? ? U n i c o d e L i t t l e U n m a r k e d ? ? So there appears to be something fundamentally wrong with how my ANT environment is set up, since I cannot simply echo the character to the screen or to a file.
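
    The property dump above contains the likely culprit: file.encoding=Cp1252. Ant's <echo> writes through the JVM's default encoding, and Cp1252 has no mapping for Han characters, so they degrade to ?. A hedged sketch of one workaround on Windows (ANT_OPTS is honoured by ant.bat for JVM arguments; the console font must still contain the glyphs):

        rem force the JVM default encoding and switch the console to UTF-8
        chcp 65001
        set ANT_OPTS=-Dfile.encoding=UTF-8
        ant -f build.xml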

    Read the article

  • lftp cannot connect to IIS

    - by ruyrocha
    I cannot connect to IIS using lftp, as you can see here: <--- 200 Language is now English, UTF-8 encoding. ---> OPTS UTF8 ON <--- 200 OPTS UTF8 command successful - UTF8 encoding now ON. ---> HOST x.x.x.x <--- 504 Server cannot accept argument. ---> USER bla <--- 331 Password required for hgtrf. ---> PASS blabla <--- 230 User logged in. ---> PWD <--- 257 "/" is current directory. ---> PBSZ 0 <--- 200 PBSZ command successful. ---> PROT P <--- 534 Policy denies SSL. ---> PASV <--- 227 Entering Passive Mode (x.x.x.x,194,118). ---- Connecting data socket to (x.x.x.x) port 49782 **** Socket error (Connection refused) - reconnecting ---> LIST ---> ABOR ---- Closing aborted data socket ---- Closing control socket I could connect, list, retrieve and send files using the standard ftp command. Do you have any suggestions?
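
    Given the "534 Policy denies SSL" reply followed by the refused passive data connection, one sketch of a fix (assumptions: the server only speaks plain FTP, and the passive port range is filtered) is to disable lftp's TLS negotiation and, if needed, passive mode:

        # ~/.lftprc, or typed at the lftp prompt
        set ftp:ssl-allow no
        set ftp:passive-mode off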

    Read the article

  • Problem with diacritics on psql 9.0 (PostgreSQL)

    - by Gaks
    I have two instances of PostgreSQL installed on my server: 8.3 and 9.0. There seems to be some problem with Polish diacritic characters (like ó, ł, ę, ą, ś, ż, ź, ć) in the PostgreSQL 9.0 client, psql. When I connect to a DB (either 8.3 or 9.0) with psql 8.3, I can type all diacritics on the terminal without any problems: www:/tmp# sudo -u postgres /usr/lib/postgresql/8.3/bin/psql -q postgres=# ółśćń However, when I connect to the same DBs with the psql 9.0 client, I can't type diacritics on the terminal anymore: www:/tmp# sudo -u postgres /usr/lib/postgresql/9.0/bin/psql -q Here are some encoding settings: www:/tmp# sudo -u postgres /usr/lib/postgresql/9.0/bin/psql -q -c "show client_encoding" client_encoding ----------------- UTF8 (1 row) . www:/tmp# sudo -u postgres /usr/lib/postgresql/8.3/bin/psql -q -c "show client_encoding" client_encoding ----------------- UTF8 (1 row) . www:/tmp# sudo -u postgres /usr/lib/postgresql/9.0/bin/psql -q -l List of databases Name | Owner | Encoding | Collation | Ctype | Access privileges ---------------------+--------------+----------+-------------+-------------+----------------------- postgres | postgres | UTF8 | pl_PL.UTF-8 | pl_PL.UTF-8 | . www:/tmp# echo $LANG pl_PL.UTF-8 It looks like the DB/cluster configuration doesn't matter - psql 8.x works fine on the terminal and psql 9.x does not. Any idea how to fix that?
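
    A frequent cause in this exact setup (an assumption worth checking, not a certainty) is that the 9.0 psql binary was built against libedit rather than GNU readline, and libedit handles multibyte input badly. Checking the linkage is one command:

        # if this prints libedit rather than libreadline, the line-editing
        # library -- not the database encoding -- is eating the diacritics
        ldd /usr/lib/postgresql/9.0/bin/psql | grep -iE 'readline|edit'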

    Read the article

  • SQL Error (1064) when importing data from SQL file

    - by mejpark
    I have a MySQL database, which was originally set up with the default latin1 character set and latin1_swedish_ci collation. I was using the database like this for some time, until I noticed strange characters on my production web site, which is powered by a database exported from my development machine. At this point, I changed the default character set of the database and tables to utf8 and the collation to utf8_unicode_ci, converted the latin1 data inside each table to utf8 (using the 'convert data' option) and exported the database as a single SQL file using HeidiSQL. When the resulting SQL file is opened in Notepad++, several characters are rendered incorrectly. For example, en dashes (–) are displayed as – and e-acute (é) is displayed as é. I changed the encoding of the file from ANSI to UTF-8 (using the encoding menu option in Notepad++) and the offending characters are rendered correctly. I saved the new UTF-8-encoded SQL file and attempted to import the contents into the MySQL database on my production server. The import process fails with the following error: /* SQL Error (1064): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '?# -------------------------------------------------------- # Host: ' at line 1 */ /* Error with snippets directory: The specified path was not found */ The head of the SQL file: # -------------------------------------------------------- # Host: 127.0.0.1 # Server version: 5.1.33-community # Server OS: Win32 # HeidiSQL version: 6.0.0.3773 # Date/time: 2011-04-20 09:48:36 # -------------------------------------------------------- It chokes on the first line of the file, which is commented out. Why is this happening? I didn't have a problem loading data from SQL files until I changed the character set and collation of the database. I came up with an ugly workaround to this problem by performing the following steps: Export the database as a single SQL file using HeidiSQL Open the resulting file in Notepad++ and convert from ANSI to UTF-8 encoding Create a new empty file in Notepad++, paste in the UTF-8 content and save the file normally What am I missing here?
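
    The '?#' in the error message is the giveaway: Notepad++'s "UTF-8" conversion writes a byte order mark, and MySQL reads those three bytes (EF BB BF) as garbage in front of the first comment. Saving as "UTF-8 without BOM" avoids the manual copy-paste step; alternatively, a small sketch that strips the BOM from an existing dump (file names hypothetical):

        # Python: drop a leading UTF-8 BOM before importing the dump
        import codecs

        with open('dump.sql', 'rb') as src:
            data = src.read()
        if data.startswith(codecs.BOM_UTF8):
            data = data[len(codecs.BOM_UTF8):]
        with open('dump_nobom.sql', 'wb') as dst:
            dst.write(data)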

    Read the article

  • Encoding Issue [NSFW]

    - by azz0r
    I am having issues correcting an encoding problem on a site. Unfortunately the site is not work safe (gay porn). For the brave: http://www.alphamalemedia.com/index/news I've tried setting the meta content type from utf8 to iso-8859-1. I've switched tables over to utf8 from latin1_swedish_ci, but no luck.
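
    For reference, switching a table's declared character set and converting the stored bytes can be done in one MySQL statement, and every connection must declare utf8 as well -- a sketch (table name hypothetical):

        ALTER TABLE news CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
        SET NAMES utf8;  -- per connection, before reading or writing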

    Read the article

  • Vietnamese character in .NET Console Application (UTF-8)

    - by DucDigital
    I'm trying to write a UTF-8 (Vietnamese) string to the C# console, but with no success. I'm running on Windows 7. I tried to use the Encoding class to convert the string to char[], then to byte[] and back to String, but no help; the string is input directly from the database. Here is an example: Tôi tên là Ð?c, cu?c s?ng th?t vui v? tuy?t v?i It does not show special characters like Ð or ?...; instead they show up as ?, and it is much worse with the Encoding class. Can anyone try this out, or does anyone know about this problem? Thank you. My code: static void Main(string[] args) { XDataContext _new = new XDataContext(); Console.OutputEncoding = Encoding.GetEncoding("UTF-8"); string srcString = _new.Posts.First().TITLE; Console.WriteLine(srcString); // Convert the UTF-16 encoded source string to UTF-8 and ASCII. byte[] utf8String = Encoding.UTF8.GetBytes(srcString); byte[] asciiString = Encoding.ASCII.GetBytes(srcString); // Write the UTF-8 and ASCII encoded byte arrays. Console.WriteLine("UTF-8 Bytes: {0}", BitConverter.ToString(utf8String)); Console.WriteLine("ASCII Bytes: {0}", BitConverter.ToString(asciiString)); // Convert UTF-8 and ASCII encoded bytes back to UTF-16 encoded // string and write. Console.WriteLine("UTF-8 Text : {0}", Encoding.UTF8.GetString(utf8String)); Console.WriteLine("ASCII Text : {0}", Encoding.ASCII.GetString(asciiString)); Console.WriteLine(Encoding.UTF8.GetString(utf8String)); Console.WriteLine(Encoding.ASCII.GetString(asciiString)); } and here is the output: Nhà báo Ä‘i há»™i báo Xuân UTF-8 Bytes: 4E-68-C3-A0-20-62-C3-A1-6F-20-C4-91-69-20-68-E1-BB-99-69-20-62-C3- A1-6F-20-58-75-C3-A2-6E ASCII Bytes: 4E-68-3F-20-62-3F-6F-20-3F-69-20-68-3F-69-20-62-3F-6F-20-58-75-3F- 6E UTF-8 Text : Nhà báo Ä‘i há»™i báo Xuân ASCII Text : Nh? b?o ?i h?i b?o Xu?n Nhà báo Ä‘i há»™i báo Xuân Nh? b?o ?i h?i b?o Xu?n Press any key to continue . . .
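
    Two things have to line up on Windows 7 before this can work, and only one of them is code: the console font must actually contain the glyphs (the default raster font does not -- switch the cmd window to Lucida Console or Consolas), and the output encoding must be Unicode-capable. A minimal sketch; the Vietnamese sample is a hypothetical reconstruction of the question's mojibake:

        using System;
        using System.Text;

        class VietnameseConsole
        {
            static void Main()
            {
                // Encoding.UTF8 is equivalent to GetEncoding("UTF-8")
                Console.OutputEncoding = Encoding.UTF8;
                Console.WriteLine("Tôi tên là Đức"); // renders only with a Unicode console font
            }
        }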

    Read the article

  • Decoding TCP packets using Python

    - by mikip
    I am trying to decode data received over a TCP connection. The packets are small, no more than 100 bytes. However, when there are a lot of them, I receive some of the packets joined together. Is there a way to prevent this? I am using Python. I have tried to separate the packets; my source is below. The packets start with an STX byte and end with an ETX byte; the byte following the STX is the packet length (packet lengths less than 5 are invalid), and the checksum bytes are the last bytes before the ETX. def decode(data): while True: start = data.find(STX) if start == -1: #no stx in message pkt = '' data = '' break #stx found , next byte is the length pktlen = ord(data[1]) #check message ends in ETX (pktken -1) or checksum invalid if pktlen < 5 or data[pktlen-1] != ETX or checksum_valid(data[start:pktlen]) == False: print "Invalid Pkt" data = data[start+1:] continue else: pkt = data[start:pktlen] data = data[pktlen:] break return data , pkt I use it like this: #process reports try: data = sock.recv(256) except: continue else: while data: data, pkt = decode(data) if pkt: process(pkt) Also, if there are multiple packets in the data stream, is it best to return the packets as a collection of lists, or just return the first packet? I am not that familiar with Python, only C; is this method OK? Any advice would be most appreciated. Thanks in advance.
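
    Coalescing cannot be prevented -- TCP is a byte stream with no packet boundaries -- so the receiver has to keep its own buffer and peel off complete frames. A sketch along the same lines as the code above (same STX/ETX/length/checksum assumptions and the question's own checksum_valid helper, but with offsets taken relative to start, which the original misses), returning every complete packet:

        def extract_packets(buf):
            """Return (leftover_bytes, [complete packets]) from buf."""
            packets = []
            while True:
                start = buf.find(STX)
                if start == -1:
                    return '', packets            # no frame start in buffer
                if len(buf) - start < 2:
                    return buf[start:], packets   # length byte not here yet
                pktlen = ord(buf[start + 1])      # length measured from STX
                if pktlen < 5:
                    buf = buf[start + 1:]         # corrupt header, resync
                    continue
                if len(buf) - start < pktlen:
                    return buf[start:], packets   # frame incomplete, wait
                pkt = buf[start:start + pktlen]
                if pkt[pktlen - 1] == ETX and checksum_valid(pkt):
                    packets.append(pkt)
                buf = buf[start + pktlen:]

    The caller appends each sock.recv() result to the leftover bytes and processes everything returned -- which also answers the second question: return a list, since one read may hold several frames.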

    Read the article

  • What is the proper way to URL encode Unicode characters?

    - by Josh Gibson
    I know of the non-standard %uxxxx scheme but that doesn't seem like a wise choice since the scheme has been rejected by the W3C. Some interesting examples: The heart character. If I type this into my browser: http://www.google.com/search?q=♥ then copy and paste it, I see this URL http://www.google.com/search?q=%E2%99%A5 which makes it seem like Firefox (or Safari) is doing this: urllib.quote_plus(x.encode("latin-1")) '%E2%99%A5' which makes sense, except for things that can't be encoded in Latin-1, like the triple dot character. … If I type the URL http://www.google.com/search?q=… into my browser then copy and paste, I get http://www.google.com/search?q=%E2%80%A6 back, which seems to be the result of doing urllib.quote_plus(x.encode("utf-8")) which makes sense since … can't be encoded with Latin-1. But then it's not clear to me how the browser knows whether to decode with UTF-8 or Latin-1, since this seems to be ambiguous: In [67]: u"…".encode('utf-8').decode('latin-1') Out[67]: u'\xc3\xa2\xc2\x80\xc2\xa6' works, so I don't know how the browser figures out whether to decode that with UTF-8 or Latin-1. What's the right thing to be doing with the special characters I need to deal with?
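
    In practice the ambiguity is resolved by convention: browsers percent-encode the UTF-8 bytes of characters typed into the address bar, and RFC 3987 specifies exactly that mapping from IRIs to URIs, so Latin-1 only shows up when a legacy page's own encoding dictates it. A sketch of the safe default in the question's Python 2:

        import urllib

        def encode_query_value(text):
            # percent-encode the UTF-8 bytes, matching what browsers send
            return urllib.quote_plus(text.encode('utf-8'))

        print encode_query_value(u'\u2665')  # -> %E2%99%A5, as observed above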

    Read the article

  • Problem encoding and decoding strings in the iPhone SDK

    - by monish
    I'm having a problem encoding/decoding strings. I had a string which I encoded using base64, and that was working fine. Now I need to decode the previously encoded string and print it. The code I have written: I imported the base64.h and base64.m files into my application, which contain the methods: + (NSData *) dataWithBase64EncodedString:(NSString *) string; - (id) initWithBase64EncodedString:(NSString *) string; - (NSString *) base64EncodingWithLineLength:(unsigned int) lineLength; And the code in my view controller where I encode the string is: - (id)init { if (self = [super init]) { // Custom initialization userName = @"Sekhar"; password = @"Bethalam"; } return self; } -(void)reloadView { NSString *authStr = [NSString stringWithFormat:@"%@:%@",userName,password]; NSData *authData = [authStr dataUsingEncoding:NSASCIIStringEncoding]; NSString *authValue = [NSString stringWithFormat:@"%@", [authData base64EncodingWithLineLength:30]]; NSLog(authValue); //const char *str = authValue; //NSString *decStr = [StringEncryption DecryptString:authValue]; //NSLog(decStr); //NSData *decodeData = [NSData decode:authValue]; //NSString *decStr = [NSString stringWithFormat:@"%@",decodeData]; //NSStr //NSLog(decStr); } -(void)viewWillAppear:(BOOL)animated { [self reloadView]; } and now I want to decode the string that I encoded, but I don't know how to do that. Can anyone suggest code for this? Any help will be much appreciated. Thank you, Monish.
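
    Assuming the imported base64.m pairs the encoder used above with the declared class method, decoding is the mirror image -- a sketch (NSASCIIStringEncoding matches the encoding step; dataWithBase64EncodedString: comes from the category, not from Foundation itself):

        NSData *decodedData = [NSData dataWithBase64EncodedString:authValue];
        NSString *decodedStr = [[NSString alloc] initWithData:decodedData
                                                     encoding:NSASCIIStringEncoding];
        NSLog(@"%@", decodedStr);   // should print Sekhar:Bethalam
        [decodedStr release];       // pre-ARC, as in the surrounding code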

    Read the article

  • Is it possible to create your own custom locale?

    - by smerlin
    Since Windows doesn't have a C++ locale with UTF-8 support by default, I would like to construct a custom locale object which supports UTF-8 (by creating it with a custom ctype facet). How can I construct a locale object with my own ctype implementation? (I only found functions to construct a locale using an already existing locale as a base.) If C++ does not support construction of locales with a custom ctype facet at all, why is that so?
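
    The standard does support this, via a constructor template taking an existing locale plus a facet pointer; there is no from-scratch constructor because a locale must always carry a complete set of facets, so you overlay yours on an existing one. A sketch (MyUtf8Ctype is a hypothetical placeholder -- note that actual UTF-8 conversion logic would normally live in a codecvt facet rather than a ctype one):

        #include <locale>

        // hypothetical facet; a real one would override the virtual do_* members
        struct MyUtf8Ctype : std::ctype<wchar_t> {};

        std::locale make_utf8_locale() {
            // the locale takes ownership of the facet via reference counting
            return std::locale(std::locale::classic(), new MyUtf8Ctype);
        }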

    Read the article

  • When is it better to use a method versus a property for a class definition?

    - by ccomet
    Partially related to an earlier question of mine, I have a system in which I have to store complex data as a string. Instead of parsing these strings as all kinds of separate objects, I just created one class that contains all of those objects, and it has some parser logic that will encode all properties into strings, or decode a string to get those objects. That's all fine and good. This question is not about the parser itself, but about where I should house the logic for the parser. Is it a better choice to put it as a property, or as a method? In the case of a property, say public string DataAsString, the get accessor would house the logic to encode all of the data into a string, while the set accessor would decode the input value and set all of the data in the class instance. It seems convenient because the input/output is indeed a string. In the case of a method, one method would be Encode(), which returns the encoded string. Then, either the constructor itself would house the logic for the decoding a string and require the string argument, or I write a Decode(string str) method which is called separately. In either case, it would be using a method instead of a property. So, is there a functional difference between these paths, in terms of the actual running of the code? Or are they basically equivalent and it then boils down to a choice of personal preference or which one looks better? And in that kind of question... which would look cleaner anyway?

    Read the article

  • Protocol specific channel handlers

    - by Mickael Marrache
    I'm writing an application server that will receive SIP and DNS messages from the network. When I receive a message from the network, I understand from the documentation that at first I get a ChannelBuffer. I would like to determine which kind of message has been received (SIP or DNS) and decode it. To determine the message type, I could dedicate a port to each type of message, but I would be interested to know if there exists another solution. My question is more about how to decode the ChannelBuffer. Is there a ChannelHandler provided by Netty to decode SIP or DNS messages? If not, what would be the right place in the type hierarchy to write my custom ChannelHandler? To illustrate my question, let's take the HttpRequestDecoder as an example; its hierarchy is: java.lang.Object org.jboss.netty.channel.SimpleChannelUpstreamHandler org.jboss.netty.handler.codec.frame.FrameDecoder org.jboss.netty.handler.codec.replay.ReplayingDecoder<HttpMessageDecoder.State> org.jboss.netty.handler.codec.http.HttpMessageDecoder org.jboss.netty.handler.codec.http.HttpRequestDecoder Also, do I need to use two different ChannelHandlers for decoding and encoding, or is there a possibility to use a single ChannelHandler for both? Thanks
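
    As far as I know, Netty 3 ships neither a SIP nor a DNS codec, so a custom decoder is needed either way; FrameDecoder is the usual base class when messages can arrive fragmented. A hedged skeleton (SipMessageDecoder and parseSip are hypothetical; only the override signature is Netty's):

        import org.jboss.netty.buffer.ChannelBuffer;
        import org.jboss.netty.channel.Channel;
        import org.jboss.netty.channel.ChannelHandlerContext;
        import org.jboss.netty.handler.codec.frame.FrameDecoder;

        public class SipMessageDecoder extends FrameDecoder {
            @Override
            protected Object decode(ChannelHandlerContext ctx, Channel channel,
                                    ChannelBuffer buffer) throws Exception {
                if (buffer.readableBytes() < 4) {
                    return null;            // too few bytes; Netty calls decode again later
                }
                return parseSip(buffer);    // hypothetical: consume a full message or return null
            }

            private Object parseSip(ChannelBuffer buffer) {
                buffer.markReaderIndex();   // sketch only -- real parsing goes here
                return null;
            }
        }

    On the last question: decoding and encoding are separate directions in Netty 3 (upstream vs downstream), but one class can handle both by extending SimpleChannelHandler; keeping a decoder and an encoder as two handlers in the same ChannelPipeline is the more common arrangement.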

    Read the article

  • How to convert from hex-encoded string to a "human readable" string?

    - by John Jensen
    I'm using the Net-SNMP bindings for python and I'm attempting to grab an ARP cache from a Brocade switch. Here's what my code looks like: #!/usr/bin/env python import netsnmp def get_arp(): oid = netsnmp.VarList(netsnmp.Varbind('ipNetToMediaPhysAddress')) res = netsnmp.snmpwalk(oid, Version=2, DestHost='10.0.1.243', Community='public') return res arp_table = get_arp() print arp_table The SNMP code itself is working fine. Output from snmpwalk looks like this: <snip> IP-MIB::ipNetToMediaPhysAddress.128.10.200.6.158 = STRING: 0:1b:ed:a3:ec:c1 IP-MIB::ipNetToMediaPhysAddress.129.10.200.6.162 = STRING: 0:1b:ed:a4:ac:c1 IP-MIB::ipNetToMediaPhysAddress.130.10.200.6.166 = STRING: 0:1b:ed:38:24:1 IP-MIB::ipNetToMediaPhysAddress.131.10.200.6.170 = STRING: 74:8e:f8:62:84:1 </snip> But my output from the python script yields a tuple of hex-encoded strings that looks like this: ('\x00$8C\x98\xc1', '\x00\x1b\xed;_A', '\x00\x1b\xed\xb4\x8f\x81', '\x00$86\x15\x81', '\x00$8C\x98\x81', '\x00\x1b\xed\x9f\xadA', ...etc) I've spent some time googling and came across the struct module and the .decode("hex") string method, but the .decode("hex") method doesn't seem to work: Python 2.7.3 (default, Apr 10 2013, 06:20:15) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> hexstring = '\x00$8C\x98\xc1' >>> newstring = hexstring.decode("hex") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.7/encodings/hex_codec.py", line 42, in hex_decode output = binascii.a2b_hex(input) TypeError: Non-hexadecimal digit found >>> And the documentation for struct is a bit over my head.
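
    The tuple members are not hex-encoded text at all -- they are the raw six bytes of each MAC address (the '$' and 'C' in '\x00$8C\x98\xc1' are just the printable bytes 0x24 and 0x43), which is why .decode("hex") rejects them. The task is formatting, not decoding; a sketch in the question's Python 2:

        def pretty_mac(raw):
            # use '%02x' instead for zero-padded octets like 00:1b:ed:a3:ec:c1
            return ':'.join('%x' % ord(b) for b in raw)

        print pretty_mac('\x00\x1b\xed\xa3\xec\xc1')  # -> 0:1b:ed:a3:ec:c1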

    Read the article

  • SQL query -- need some suggestions

    - by benjamin button
    I have a table with a list of cycle codes, CYCLE_DEFINITION. Each cycle_code has 12 month entries in another table (PM1_CYCLE_STATE), and each month has a cycle_start_date and a cycle_close_date. I check against a particular date (let's say sysdate) to find the current month of every cycle; additionally, I also get the next 3 months of that particular cycle. The query I have written is below: SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,LTRIM(pcs.cycle_month,'0')+0 CM, pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B') UNION SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,DECODE(LTRIM(pcs.cycle_month,'0')+1,13,1,14,2,15,3,LTRIM(pcs.cycle_month,'0')+1) CM ,pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B') UNION SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,DECODE(LTRIM(pcs.cycle_month,'0')+2,13,1,14,2,15,3,LTRIM(pcs.cycle_month,'0')+2) CM ,pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B') UNION SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,DECODE(LTRIM(pcs.cycle_month,'0')+3,13,1,14,2,15,3,LTRIM(pcs.cycle_month,'0')+3) CM ,pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B') This query is running perfectly fine. It returns all the cycle_codes, with exactly 4 rows per cycle for the current month and the following 3 months. Now the requirement is: if any of the months is missing, how can I show it? For example, the output of the above query is: cycd cm 102 1 102 10 102 11 102 12 103 1 103 10 103 11 103 12 104 1 104 10 104 11 104 12 Now let's say the row with cycd=104 and cm=11 is not present in the table; then the above query will not return the row 104 11. I want to display only those missing rows. How can I do it?
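
    One way to surface only the absent rows is to invert the problem: generate the four month numbers every cycle is supposed to have, then anti-join against the real data. A rough sketch (current_month stands in for the decoded current-month expression already used above, and MOD replaces the DECODE chains for the wrap past 12):

        SELECT c.cycle_code,
               MOD(current_month + o.n - 1, 12) + 1 AS missing_month
        FROM   cycle_definition c
               CROSS JOIN (SELECT LEVEL - 1 AS n FROM dual CONNECT BY LEVEL <= 4) o
        WHERE  NOT EXISTS (
                 SELECT 1
                 FROM   pm1_cycle_state pcs
                 WHERE  pcs.cycle_code = c.cycle_code
                 AND    LTRIM(pcs.cycle_month, '0') + 0 =
                        MOD(current_month + o.n - 1, 12) + 1)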

    Read the article

  • UTF-8 HTML and CSS files with BOM (and how to remove the BOM with Python)

    - by Cameron
    First, some background: I'm developing a web application using Python. All of my (text) files are currently stored in UTF-8 with the BOM. This includes all my HTML templates and CSS files. These resources are stored as binary data (BOM and all) in my DB. When I retrieve the templates from the DB, I decode them using template.decode('utf-8'). When the HTML arrives in the browser, the BOM is present at the beginning of the HTTP response body. This generates a very interesting error in Chrome: Extra <html> encountered. Migrating attributes back to the original <html> element and ignoring the tag. Chrome seems to generate an <html> tag automatically when it sees the BOM and mistakes it for content, making the real <html> tag an error. So, using Python, what is the best way to remove the BOM from my UTF-8 encoded templates (if it exists -- I can't guarantee this in the future)? For other text-based files like CSS, will major browsers correctly interpret (or ignore) the BOM? They are being sent as plain binary data without .decode('utf-8'). Note: I am using Python 2.5. Thanks!
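
    Since the templates are already decoded explicitly, the least invasive fix is to strip the mark at decode time -- a sketch; codecs.BOM_UTF8 is in the stdlib, and Python 2.5 also understands the 'utf-8-sig' codec, which does the same thing implicitly:

        import codecs

        def decode_template(raw):
            # tolerate templates saved with or without a BOM
            if raw.startswith(codecs.BOM_UTF8):
                raw = raw[len(codecs.BOM_UTF8):]
            return raw.decode('utf-8')

        # equivalently: raw.decode('utf-8-sig')

    For CSS, browsers treat an initial BOM as an encoding hint rather than content, so those files can stay as they are, though serving them BOM-free is the more predictable choice.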

    Read the article

  • Oracle SQL: ROLLUP not summing correctly

    - by tommy-o-dell
    Rollup seems to be working correctly to count the number of units, but not the number of trains. Any idea what could be causing that? The output from the query looks like this. The sum of the Units column in yellow is 53, but the rollup is showing 51. The number of units adds up correctly, though... And here's the Oracle SQL query: select t.year, t.week, decode(t.mine_id,NULL,'PF',t.mine_id) as mine_id, decode(t.product,Null,'LF',t.product) as product, decode(t.mine_id||'-'||t.product,'-','PF',t.mine_id||'-'||t.product) as code, count(distinct t.tpps_train_id) as trains, count(1) as units from ( select trn.mine_code as mine_id, trn.train_tpps_id as tpps_train_id, round((con.calibrated_weight_total - con.empty_weight_total),2) as tonnes from widsys.train trn INNER JOIN widsys.consist con USING (train_record_id) where trn.direction = 'N' and (con.calibrated_weight_total-con.empty_weight_total) > 10 and trn.num_cars > 10 and con.consist_no not like '_L%' ) w, ( select to_char(td.datetime_act_comp_dump-7/24, 'IYYY') as year, to_char(td.datetime_act_comp_dump-7/24, 'IW') as week, td.mine_code as mine_id, td.train_id as tpps_train_id, pt.product_type_code as product from tpps.train_details td inner join tpps.ore_products op using (ore_product_key) inner join tpps.product_types pt using (product_type_key) where to_char(td.datetime_act_comp_dump-7/24, 'IYYY') = 2010 and to_char(td.datetime_act_comp_dump-7/24, 'IW') = 12 order by td.datetime_act_comp_dump asc ) t where w.mine_id = t.mine_id and w.tpps_train_id = t.tpps_train_id having t.product is not null or t.mine_id is null group by t.year, t.week, rollup( t.mine_id, t.product)

    Read the article

  • Incorrect string encodings

    - by James
    Note: I have read all of the related PHP, UTF-8, character encoding articles that are usually suggested, but my question relates to data inserted before I applied such techniques. I am wishing to retrospectively fix all character encoding problems. Now all connections are set as utf8 using PDO. PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8' Unfortunately, a large amount of data was inserted that is of questionable encoding before I had implemented correct character encoding practices. As displayed by: $sql = "SELECT name FROM data LIMIT 3"; foreach ($pdo->query($sql) as $row) { $name = $row['name']; echo $name . "\n"; echo utf8_encode($name) . "\n"; echo utf8_decode($name) . "\n"; echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8') . "\n"; echo htmlspecialchars(utf8_encode($name), ENT_QUOTES, 'UTF-8') . "\n"; echo htmlspecialchars(utf8_decode($name), ENT_QUOTES, 'UTF-8') . "\n"; echo '<hr/>'; } Which produces: Antonín Dvořák AntonÃÆín DvoÃâ¦Ãâ¢ÃÆák Anton??­n Dvo??????¡k Antonín Dvořák AntonÃÆín DvoÃâ¦Ãâ¢ÃÆák ---------- Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶ ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶ ????? ?????????? Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶ ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶ ---------- Tiësto Tiësto Tiësto Tiësto Tiësto Tiësto ---------- When removing 'SET NAMES utf8' with PDO it produces the data: Antonín DvoÅák Antonín DvoÃÂák Antonín Dvorák Antonín DvoÅák Antonín DvoÃÂák Antonín Dvorák ---------- ???? ????????? Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶ ???? ????????? ???? ????????? Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶ ???? ????????? ---------- Tiësto Tiësto Ti?sto Tiësto Tiësto ---------- And here is a dump of the database rows concerned: DROP TABLE IF EXISTS `data`; CREATE TABLE IF NOT EXISTS `data` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(80) NOT NULL, PRIMARY KEY (`id`), KEY `name` (`name`(10)), ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=0; INSERT INTO `data` (`id`, `name`) VALUES (0, 'Antonín Dvořák'), (1, 'Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶'), (2, 'Tiësto'); The 3rd and 6th lines of the 3rd row "Tiësto" are then correctly echoed. I'm just unsure what is the best way to correct encodings/detect the encodings of bad strings and correct, etc.
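
    The first two rows are classic double encoding: UTF-8 bytes that were read back as latin1 and re-encoded into utf8 on the way in. Where that is the pattern, the usual repair is a round trip through the binary type -- a sketch only, to be tested on a copy first, and it assumes each bad row was mangled exactly once:

        -- undo one layer of "UTF-8 bytes misread as latin1" per run
        UPDATE data
        SET name = CONVERT(BINARY CONVERT(name USING latin1) USING utf8);

    The catch is visible in the third row: 'Tiësto' is already correct, and a blanket update would mangle it, so the broken rows have to be identified and converted selectively.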

    Read the article

  • Outputting a UTF-8 string in Mac OS's Terminal

    - by SuperBloup
    I have a program in Haskell that outputs UTF-8 using the package utf8-string, using only the output functions of this package. I set the encoding of each file I write to this way: hSetEncoding myFile utf8 {- myFile may be stdout -} but when I try to output: alpha = [fromEnum 0x03B1] {- a -} instead of the nice alpha letter I get on Linux (or in a file on Windows), I get the following: α The weird thing is that even if I write the output to a file, I can't read it back with mvim as a UTF-8 file. Is there any way to get the correct behaviour?
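
    For comparison, a minimal sketch that needs only base (assuming GHC 6.12 or later, where Handle encodings exist): if this prints α correctly, the terminal side is fine, and the suspect becomes the interaction between the utf8-string output functions and the already-UTF-8 Handle -- encoding the same text twice is a classic source of output like the garbled alpha above:

        import System.IO

        main :: IO ()
        main = do
          hSetEncoding stdout utf8
          putStrLn ['\x03B1']  -- plain putStrLn; the Handle does the UTF-8 encoding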

    Read the article

  • Linux - Scriptable email client?

    - by Phog
    I'm writing a simple UI for the visually impaired using a speech synthesizer. I've looked all over the internet for an email client which I can script to fit these purposes, but to no avail. I believe several CLI email clients (e.g. mutt) allow sending emails with command line arguments only, but I've yet to find a client that can download emails, decode them and then dump them to a text file. The best candidate so far seems to be mailx, but it looks like it needs quite a lot of babysitting to fit my needs. Any suggestions for scripting-friendly email clients? Am I missing something fundamental about mutt? Are there any libraries/programs that can help me decode the MIME encoding used in today's emails from a maildir? Should I just bite the bullet and write a script for mailx? Thanks in advance.
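
    If the mail can be fetched into a local Maildir first (fetchmail or getmail can do that part), the decoding step does not need an interactive client at all -- Python's stdlib handles both the mailbox and the MIME decoding. A sketch (paths hypothetical):

        import mailbox

        # walk a Maildir and dump each message's plain-text parts to a file
        for key, msg in mailbox.Maildir('/home/user/Maildir', factory=None).iteritems():
            with open('/tmp/mail-%s.txt' % key, 'w') as out:
                out.write('Subject: %s\n\n' % msg['subject'])
                for part in msg.walk():
                    if part.get_content_type() == 'text/plain':
                        out.write(part.get_payload(decode=True) or '')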

    Read the article

  • Cannot log in to Dashboard / Unable to find the server at mykeystoneurl

    - by neo0
    I installed Dashboard following this guide: http://wiki.openstack.org/OpenStackDashboard Everything went fine, but when I run the server, I cannot log in with the username and password from the DATABASES config in local_settings.py. Here's my config: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'dashboarddb', 'USER': 'nova', 'PASSWORD': 'nova', 'HOST': 'localhost', 'default-character-set': 'utf8' }, } When I run the Dashboard server and enter the username and password, it returns this error in the browser: Unable to find the server at mykeystoneurl (HTTP 400) And on the command line: DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar. DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar. Validating models... 0 errors found Django version 1.3.1, using settings 'openstack_dashboard.settings' Development server is running at http://0.0.0.0:8888/ Quit the server with CONTROL-C. Request returned failure status. Traceback (most recent call last): File "/home/us/horizon/.venv/src/python-keystoneclient/keystoneclient/client.py", line 121, in request body = json.loads(body) File "/usr/lib/python2.7/json/__init__.py", line 326, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 366, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded [06/Mar/2012 15:20:03] "POST /auth/login/ HTTP/1.1" 200 3735 I also tried logging in as "admin" with the password "password" or "secrete", but it didn't work. What's wrong? Thank you!
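
    The "Unable to find the server at mykeystoneurl" text suggests the dashboard is still pointing at a placeholder endpoint rather than a real Keystone -- the DATABASES block only configures session storage, not authentication. A sketch of the missing piece in local_settings.py (host assumed to be the machine running keystone):

        OPENSTACK_HOST = "127.0.0.1"
        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

    The credentials that work here are the Keystone user and password (what the keystone/nova CLI uses), not the MySQL ones from the DATABASES setting.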

    Read the article

  • How to squeeze the maximum performance out of Unity and GNOME 3?

    - by melvincv
    I see that I do not get good performance with the new Unity desktop, though I should say that Unity has improved a lot since the previous release, Ubuntu 11.10. How do I squeeze the maximum performance out of 1. Unity 2. GNOME 3? My system specs: -Processors- Intel(R) Pentium(R) Dual CPU E2180 @ 2.00GHz -Memory- Total Memory : 2049996 kB -PCI Devices- Host bridge : Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10) PCI bridge : Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 10) (prog-if 00 [Normal decode]) VGA compatible controller : Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10) (prog-if 00 [VGA controller]) USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01) (prog-if 00 [UHCI]) USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01) (prog-if 00 [UHCI]) USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01) (prog-if 00 [UHCI]) USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01) (prog-if 00 [UHCI]) USB controller : Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01) (prog-if 20 [EHCI]) PCI bridge : Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01 [Subtractive decode]) ISA bridge : Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) IDE interface : Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01) (prog-if 8a [Master SecP PriP]) IDE interface : Intel Corporation N10/ICH7 Family SATA Controller [IDE mode] (rev 01) (prog-if 8f [Master SecP SecO PriP PriO]) SMBus : Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01) Ethernet controller : Intel Corporation PRO/100 VE Network Connection (rev 01)

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >