Search Results

Search found 1490 results on 60 pages for 'rxvt unicode'.

Page 15/60

  • Cisco FWSM -> ASA upgrade broke our mail server

    - by Mike Pennington
    We send mail with unicode asian characters to our mail server on the other side of our WAN... immediately after upgrading from a FWSM running 2.3(2) to an ASA5550 running 8.2(5), we saw failures on mail jobs that contained unicode. The symptoms are pretty clear... using the ASA's packet capture utility, we snagged the traffic before and after it left the ASA... access-list PCAP line 1 extended permit tcp any host 192.0.2.25 eq 25 capture pcap_inside type raw-data access-list PCAP buffer 1500000 packet-length 9216 interface inside capture pcap_outside type raw-data access-list PCAP buffer 1500000 packet-length 9216 interface WAN I downloaded the pcaps from the ASA by going to https://<fw_addr>/pcap_inside/pcap and https://<fw_addr>/pcap_outside/pcap... when I looked at them with Wireshark Follow TCP Stream, the inside traffic going into the ASA looks like this EHLO metabike AUTH LOGIN YzFwbUlciXNlck== cZUplCVyXzRw But the same mail leaving the ASA on the outside interface looks like this... EHLO metabike AUTH LOGIN YzFwbUlciXNlck== XXXXXXXXXXXX The XXXX characters are concerning... I fixed the issue by disabling ESMTP inspection: wan-fw1(config)# policy-map global_policy wan-fw1(config-pmap)# class inspection_default wan-fw1(config-pmap-c)# no inspect esmtp wan-fw1(config-pmap-c)# end The $5 question... our old FWSM used SMTP fixup without issues... mail went down at the exact moment that we brought the new ASAs online... what specifically is different about the ASA that it is now breaking this mail? Note: usernames / passwords / app names were changed... don't bother trying to Base64-decode this text.

    Read the article

  • Python encoding for pipe.communicate

    - by Brian M. Hunt
    I'm calling pipe.communicate from Python's subprocess module from Python 2.6. I get the following error from this code: from subprocess import Popen pipe = Popen(cwd) pipe.communicate( data ) For an arbitrary cwd, and where data contains unicode (specifically 0xE9): Exec. exception: 'ascii' codec can't encode character u'\xe9' in position 507: ordinal not in range(128) Traceback (most recent call last): ... stdout, stderr = pipe.communicate( data ) File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 671, in communicate return self._communicate(input) File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 1177, in _communicate bytes_written = os.write(self.stdin.fileno(), chunk) This is happening, I presume, because pipe.communicate() is expecting an ASCII-encoded string, but data is unicode. Is this the problem I'm encountering, and is there a way to pass unicode to pipe.communicate()? Thank you for reading! Brian
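
    A minimal sketch of the usual fix, assuming the child process expects UTF-8 (the command and encoding below are placeholders, not from the question): encode the unicode to bytes before handing it to communicate(), since os.write() underneath only accepts byte strings.

        from subprocess import Popen, PIPE

        cmd = ['cat']                        # hypothetical child command
        data = u'Caf\xe9'                    # unicode input containing 0xE9

        pipe = Popen(cmd, stdin=PIPE, stdout=PIPE)
        # encode to bytes first; communicate() then writes them verbatim
        stdout, stderr = pipe.communicate(data.encode('utf-8'))
        print stdout                         # the UTF-8 bytes for Caf<e-acute>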

    Read the article

  • Load JSON in Python as header character set

    - by mridang
    Hi everyone, I've always found character sets and encodings complicated to understand and here I'm faced with another problem. My apologies for any inaccuracies. I'll do my best. I'm requesting data from a server which returns JSON. In the HTTP headers it also returns the character set like so: Content-Type: text/html; charset=UTF-8 I'm using the JSON library in Python to load the JSON using the json.loads method. When I pass it the returned JSON, it gives me a dictionary in Unicode. I've Googled around and I know that JSON should return Unicode as JavaScript strings are Unicode objects. How can I load the JSON as UTF-8? I would like to use the same encoding as specified in the response header. I've read this post but it didn't help. Thank you.
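
    A hedged Python 2 sketch of one way to do this for a flat dictionary of strings, assuming the charset taken from the header really is UTF-8: decode the JSON normally, then re-encode the resulting unicode strings.

        import json

        raw = '{"name": "D\\u00e9cor"}'          # hypothetical response body
        data = json.loads(raw)                   # strings come back as unicode
        encoded = dict((k.encode('utf-8'), v.encode('utf-8'))
                       for k, v in data.items()) # re-encode to the header charset
        print encoded                            # {'name': 'D\xc3\xa9cor'}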

    Read the article

  • How would you create a string of all UTF-8 characters? [PHP]

    - by Xeoncross
    There are many ways to represent the +1 million UTF-8 characters. Take the latin capital "A" with macron (Ā). This is unicode code point U+0100; its UTF-8 encoding is the hex bytes 0xc4 0x80, decimal 196 128, binary 11000100 10000000. I would like to create a collection of the first 65,535 UTF-8 characters for use in testing applications. These are all unicode characters up to code point U+FFFF (at most 3 bytes in UTF-8). Is it possible to do something like a for($x=0) loop and then convert the resulting decimal to another base (like hex) which would allow the creation of the matching unicode character? I can create the value Ā using something like this: $char = "\xc4\x80"; // or $char = chr(196).chr(128); However, I am not sure how to turn this into an automated process. // fail! $char = "\x". dechex($a). "\x". dechex($b);
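
    Not PHP, but the loop the asker describes is easy to sketch in Python 2 (a hedged illustration of the idea, not a PHP answer). The surrogate range U+D800-U+DFFF has to be skipped, since those code points have no legal UTF-8 encoding.

        # build the code points U+0001..U+FFFF as one string, then encode
        chars = u''.join(unichr(cp) for cp in xrange(0x1, 0x10000)
                         if not 0xD800 <= cp <= 0xDFFF)   # skip surrogates
        utf8 = chars.encode('utf-8')
        print len(chars), len(utf8)   # 63487 characters, 188287 bytes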

    Read the article

  • Is an embedded resource a good approach for a read-only XML database?

    - by Nasser Hajloo
    I have an open source application (here). This application gets a character or a sentence and gives some Unicode information about it. I use the Unicode Character Database provided by Unicode.org; this is an XML document (130 MB). At first I embedded this XML in my DLL, but I don't know whether that is a good approach, because the DLL size grows just because of this XML document. I can use it like any other resource, but then the user can see it. What should I do? What is the best pattern for this, and why? TIA

    Read the article

  • How do I read UTF-8 characters via a pointer?

    - by Jen
    Suppose I have UTF-8 content stored in memory, how do I read the characters using a pointer? I presume I need to watch for the 8th bit indicating a multi-byte character, but how exactly do I turn the sequence into a valid Unicode character? Also, is wchar_t the proper type to store a single Unicode character? This is what I have in mind: wchar_t readNextChar (char** p) { char ch = *(*p)++; if (ch & 128) { // This is a multi-byte character, what do I do now? // char chNext = *(*p)++; // ... but how do I assemble the Unicode character? ... } ... }
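
    Sketched in Python rather than C (a hedged illustration; the byte arithmetic is the same either way): the leading byte's high bits give the sequence length, and each continuation byte contributes its low six bits. No validation of continuation bytes is done here.

        def read_next_char(buf, i):
            """Decode one UTF-8 sequence from buf starting at index i."""
            b = buf[i]
            if b < 0x80:                     # 0xxxxxxx: plain ASCII
                return b, i + 1
            elif b >> 5 == 0b110:            # 110xxxxx: 2-byte sequence
                cp, n = b & 0x1F, 2
            elif b >> 4 == 0b1110:           # 1110xxxx: 3-byte sequence
                cp, n = b & 0x0F, 3
            elif b >> 3 == 0b11110:          # 11110xxx: 4-byte sequence
                cp, n = b & 0x07, 4
            else:
                raise ValueError('invalid leading byte')
            for cont in buf[i + 1:i + n]:    # each 10xxxxxx byte adds 6 bits
                cp = (cp << 6) | (cont & 0x3F)
            return cp, i + n

        print read_next_char(bytearray(b'\xc4\x80'), 0)   # (256, 2) -> U+0100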

    Read the article

  • Word Opening Text Files by default as RTL, should be LTR

    - by wonea
    I've noticed recently that Microsoft Word 2007 on Windows XP is opening plain text files as RTL by default instead of LTR. These are English-language files and contain no characters other than ASCII. I work in three or four languages on the computer, so I have the language bar open; this problem happens when the language is set to EN (United Kingdom). Is there a setting I'm missing somewhere, perhaps in Word itself?

    Read the article

  • Mathematica regular expressions on unicode strings

    - by dreeves
    This was a fascinating debugging experience. Can you spot the difference between the following two lines? StringReplace["–", RegularExpression@"[\\s\\S]" -> "abc"] StringReplace["-", RegularExpression@"[\\s\\S]" -> "abc"] They do very different things when you evaluate them. It turns out it's because the string being replaced in the first line consists of a unicode en dash, as opposed to a plain old ascii dash in the second line. In the case of the unicode string, the regular expression doesn't match. I meant the regex "[\s\S]" to mean "match any character (including newline)" but Mathematica apparently treats it as "match any ascii character". How can I fix the regular expression so the first line above evaluates the same as the second? Alternatively, is there an asciify filter I can apply to the strings first? PS: The Mathematica documentation says that its string pattern matching is built on top of the Perl-Compatible Regular Expressions library (http://pcre.org) so the problem I'm having may not be specific to Mathematica.

    Read the article

  • Opening a Unicode file with Perl

    - by Jaco Pretorius
    I'm using osql to run several sql scripts against a database and then I need to look at the results file to check if any errors occurred. The problem is that Perl doesn't seem to like the fact that the results files are unicode. I wrote a little test script and the output comes out all garbled. $file = shift; open OUTPUT, $file or die "Can't open $file: $!\n"; while (<OUTPUT>) { print $_; if (/Invalid|invalid|Cannot|cannot/) { push(@invalids, $file); print "invalid file - $file - schedule for retry\n"; last; } } Any ideas? I've tried decoding using decode_utf8 but it makes no difference. I've also tried to set the encoding when opening the file. I think the problem might be that osql puts the result file in UTF-16 format, but I'm not sure. When I open the file in TextPad it just tells me 'Unicode'. Edit: Using perl v5.8.8
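
    For comparison, the same check sketched in Python (a hedged sketch; the file name is a placeholder): the fix in either language is to tell the runtime the file is UTF-16 instead of reading raw bytes. In Perl 5.8 the equivalent is an encoding layer on open, e.g. open(OUTPUT, '<:encoding(UTF-16LE)', $file).

        import io

        with io.open('results.txt', encoding='utf-16') as fh:  # honors the BOM
            for line in fh:
                if 'invalid' in line.lower() or 'cannot' in line.lower():
                    print 'invalid file - schedule for retry'
                    break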

    Read the article

  • Orbited exception: "Data must not be unicode"

    - by Sid
    I am working with orbited and once I switch on orbited in production mode it throws the following error on my screen: -- <exception caught here> --- File "/usr/lib/python2.6/dist-packages/twisted/web/server.py", line 150, in process self.render(resrc) File "/usr/lib/python2.6/dist-packages/twisted/web/server.py", line 157, in render body = resrc.render(self) File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/base.py", line 21, in render self.conn.transportOpened(self) File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/cometsession.py", line 322, in transportOpened self.cometTransport.flush() File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/base.py", line 45, in flush self.write(self.packets) File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/htmlfile.py", line 42, in write self.request.write(payload); File "/usr/lib/python2.6/dist-packages/twisted/web/http.py", line 862, in write self.transport.write(data) File "/usr/lib/python2.6/dist-packages/twisted/internet/tcp.py", line 420, in write abstract.FileDescriptor.write(self, bytes) File "/usr/lib/python2.6/dist-packages/twisted/internet/abstract.py", line 170, in write raise TypeError("Data must not be unicode") exceptions.TypeError: Data must not be unicode I have absolutely no clue as to what could be the problem. Could anyone point me in the right direction?
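
    The exception itself is easy to reproduce: Twisted transports accept byte strings only, so any unicode payload has to be encoded before it reaches transport.write(). UTF-8 below is an assumption; use whatever encoding the Orbited clients expect.

        payload = u'D\xe9cor'
        data = payload.encode('utf-8')   # bytes, which transport.write() accepts
        assert isinstance(data, str)     # in Python 2, str is the byte type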

    Read the article

  • How do you get Matlab to write the BOM (byte order markers) for UTF-16 text files?

    - by Richard Povinelli
    I am creating UTF16 text files with Matlab, which I am later reading in using Java. In Matlab, I open a file called fileName and write to it as follows: fid = fopen(fileName, 'w','n','UTF16-LE'); fprintf(fid, 'Some stuff.'); In Java, I can read the text file using the following code: FileInputStream fileInputStream = new FileInputStream(fileName); Scanner scanner = new Scanner(fileInputStream, "UTF-16LE"); String s = scanner.nextLine(); Here is the hex output: Offset(h) 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 00000000 73 00 6F 00 6D 00 65 00 20 00 73 00 74 00 75 00 66 00 66 00 s.o.m.e. .s.t.u.f.f. The above approach works fine. But, I want to be able to write out the file using UTF16 with a BOM to give me more flexibility so that I don't have to worry about big or little endian. In Matlab, I've coded: fid = fopen(fileName, 'w','n','UTF16'); fprintf(fid, 'Some stuff.'); In Java, I change the code to: FileInputStream fileInputStream = new FileInputStream(fileName); Scanner scanner = new Scanner(fileInputStream, "UTF-16"); String s = scanner.nextLine(); In this case, the string s is garbled, because Matlab is not writing the BOM. I can get the Java code to work just fine if I add the BOM manually. With the added BOM, the following file works fine. Offset(h) 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 00000000 FF FE 73 00 6F 00 6D 00 65 00 20 00 73 00 74 00 75 00 66 00 66 00 ÿþs.o.m.e. .s.t.u.f.f. How can I get Matlab to write out the BOM? I know I could write the BOM out separately, but I'd rather have Matlab do it automatically. Addendum: I selected the answer below from Amro because it exactly solves the question I posed. One key discovery for me was the difference between the Unicode Standard and a UTF (Unicode transformation format) (see http://unicode.org/faq/utf_bom.html). The Unicode Standard provides unique identifiers (code points) for characters. UTFs provide mappings of every code point "to a unique byte sequence." Since all but a handful of the characters I am using are in the first 128 code points, I'm going to switch to using UTF-8 as Romeo suggests. UTF-8 is supported by Matlab (the warning shown below won't need to be suppressed) and Java, and for my application will generate smaller text files. I suppress the Matlab warning Warning: The encoding 'UTF-16LE' is not supported. with warning off MATLAB:iofun:UnsupportedEncoding;
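
    For comparison, a quick Python sketch of the distinction in play (file names are placeholders): a byte-order-agnostic UTF-16 codec writes the BOM itself, while an endian-specific one does not.

        import io

        with io.open('with_bom.txt', 'w', encoding='utf-16') as fh:
            fh.write(u'Some stuff.')   # file starts with a BOM (FF FE here)
        with io.open('no_bom.txt', 'w', encoding='utf-16-le') as fh:
            fh.write(u'Some stuff.')   # no BOM is written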

    Read the article

  • Dealing with wacky encodings in Python

    - by Tyson
    I have a Python script that pulls in data from many sources (databases, files, etc.). Supposedly, all the strings are unicode, but what I end up getting is any variation on the following theme (as returned by repr()): u'D\\xc3\\xa9cor' u'D\xc3\xa9cor' 'D\\xc3\\xa9cor' 'D\xc3\xa9cor' Is there a reliable way to take any of the four strings above and return the proper unicode string? u'D\xe9cor' # --> Décor The only way I can think of right now uses eval(), replace(), and a deep, burning shame that will never wash away.
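
    A hedged Python 2 sketch that normalizes all four variants, assuming the only manglings in play are UTF-8 bytes mis-decoded as Latin-1 and literal backslash escapes:

        def to_unicode(s):
            if isinstance(s, unicode):
                s = s.encode('latin-1')        # recover the raw bytes
            if '\\x' in s:
                s = s.decode('string_escape')  # undo literal \xNN escapes
            return s.decode('utf-8')           # finally decode the real UTF-8

        for s in (u'D\\xc3\\xa9cor', u'D\xc3\xa9cor',
                  'D\\xc3\\xa9cor', 'D\xc3\xa9cor'):
            assert to_unicode(s) == u'D\xe9cor'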

    Read the article

  • How can you print a string using raw_unicode_escape encoding in python 3?

    - by Sorin Sbarnea
    The following code will fail in Python 3.x with TypeError: must be str, not bytes because encode() now returns bytes and print() expects only str. #!/usr/bin/python from __future__ import print_function str2 = "some unicode text" print(str2.encode('raw_unicode_escape')) How can you print a Unicode string's escaped representation using print()? I'm looking for a solution that will work with Python 2.6 or newer, including 3.x
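
    One approach that works on both 2.6 and 3.x (a hedged sketch): decode the escaped bytes back to text before printing. Latin-1 is used for the round trip because raw_unicode_escape passes bytes below U+0100 through unescaped.

        from __future__ import print_function

        str2 = u'some unicode text \u2013'
        escaped = str2.encode('raw_unicode_escape')  # bytes on 3.x, str on 2.x
        print(escaped.decode('latin-1'))             # some unicode text \u2013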

    Read the article

  • How does the web server choose between unicode and utf-8 for accented characters?

    - by jacques
    I have a web server with my ISP which replaces accented characters in URLs with their unicode values: for instance é (eacute) is translated to %e9 (dec 233). For testing locally I use EasyPHP, which translates those characters with their UTF-8 equivalents: é is then replaced by the well-known sequence %c3%a9 (é)... Browsers served by EasyPHP don't decode unicode values, but they do if running locally (UTF-8 and non-converted accents also)... I have been unable to find where this behavior is configured in the server. This is a problem as some urls are built by my application using the php rawurlencode(), which seems to always encode with unicode values on both servers. Any idea?
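
    To make the difference concrete, a small Python sketch of the two percent-encodings of é (reading the ISP server's %e9 as a Latin-1 single-byte encoding is an assumption drawn from the example):

        import urllib

        e_acute = u'\xe9'                              # the character é
        print urllib.quote(e_acute.encode('latin-1'))  # %E9     (ISP server)
        print urllib.quote(e_acute.encode('utf-8'))    # %C3%A9  (EasyPHP)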

    Read the article

  • how to mod rewrite unicode byte sequence for the multibyte hyphen character

    - by ChickenFur
    We have a case where some Adobe PDF files format the hyphen character as %E2%80%90. See http://forums.adobe.com/message/2807241 - this is caused by the Calibri font, I guess. So these pdf files have been released and the links don't work. So I thought mod_rewrite would come to the rescue. I followed this post here: mod_ReWrite to remove part of a URL, but I can't seem to search for the % characters, according to this question. Is there anything else I can do? Here is the rewrite rule I want to use: RewriteRule ^foo%(.+)bar /foo-bar [L,R=301] I also tried this and it doesn't work: RewriteRule ^foo%E2%80%90bar /foo-bar [L,R=301] Any ideas?
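
    One relevant detail: RewriteRule patterns are matched against the already-decoded URL path, which is why the literal text %E2%80%90 never matches; the usual workaround is to match the decoded bytes instead (e.g. \xe2\x80\x90 in the regex, which Apache's PCRE engine understands). A quick Python check of what those escapes decode to:

        import urllib

        decoded = urllib.unquote('%E2%80%90').decode('utf-8')
        print repr(decoded)   # u'\u2010', i.e. U+2010 HYPHEN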

    Read the article

  • How do I detect unicode characters in a Java string to resolve sax parser exception

    - by Madhumita
    Suppose I have a string that contains '¿'. How would I find all such unicode characters? Should I test for their codes? How would I do that? I want to detect them to avoid the SAX parser exception I am getting while parsing XML saved as a CLOB in an Oracle 10g database. Exception: javax.servlet.ServletException: org.xml.sax.SAXParseException: Invalid byte 1 of 1-byte UTF-8 sequence.
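
    Sketched in Python rather than Java (a hedged illustration; the check is the same predicate either way): flag every character whose code point falls outside 7-bit ASCII before the text reaches the parser.

        def non_ascii_chars(s):
            # (index, character) pairs for everything outside 7-bit ASCII
            return [(i, ch) for i, ch in enumerate(s) if ord(ch) > 127]

        print non_ascii_chars(u'abc\xbfdef')   # [(3, u'\xbf')]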

    Read the article
