Search Results

Search found 581 results on 24 pages for 'codec'.

Page 17/24 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • pysqlite2: ProgrammingError - You must not use 8-bit bytestrings

    - by rfkrocktk
    I'm currently persisting filenames in a sqlite database for my own purposes. Whenever I try to insert a file that has a special character (like é etc.), it throws the following error: pysqlite2.dbapi2.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings. When I do "switch my application over to Unicode strings" by wrapping the value sent to pysqlite with the unicode method like: unicode(filename), it throws this error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 66: ordinal not in range(128) Is there something I can do to get rid of this? Modifying all of my files to conform isn't an option.
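
    One hedged sketch of a common fix, assuming the filenames on disk are byte strings in the filesystem encoding; to_unicode and the example table are made up here, not taken from the question:

        import sys

        def to_unicode(name, fallback='utf-8'):
            """Decode a raw filename byte string to a unicode object."""
            if isinstance(name, unicode):
                return name
            encoding = sys.getfilesystemencoding() or fallback
            # 'replace' avoids hard failures on bytes the guessed codec can't map
            return name.decode(encoding, 'replace')

        # e.g. cursor.execute("INSERT INTO files (name) VALUES (?)", (to_unicode(filename),))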

    Read the article

  • Audio Conversion C#

    - by Will
    What is the best way to convert various audio formats to PCM? For example: mp3, evrc, ogg, vox. Is there a library out there that will allow me to implement this relatively easily? EDIT: I guess my initial question wasn't really what I needed. Most of the libs I have found are file converters. What I need is a block converter, where I pass in a 1 KB block of vox data and it returns the converted PCM block. Of course I'll have to tell the converter what type of data it is and various pieces of codec information. The solution I am going for is to convert various VOIP formats into a common wav format and to play that conformed file in real time. I thought there should be an easy way to do this because all audio is eventually turned into PCM before it is output anyway.

    Read the article

  • Serializing Python bytestrings to JSON, preserving ordinal character values

    - by Doctor J
    I have some binary data produced as base-256 bytestrings in Python (2.x). I need to read these into JavaScript, preserving the ordinal value of each byte (char) in the string. If you'll allow me to mix languages, I want to encode a string s in Python such that ord(s[i]) == s.charCodeAt(i) after I've read it back into JavaScript. The cleanest way to do this seems to be to serialize my Python strings to JSON. However, json.dump doesn't like my bytestrings, despite fiddling with the ensure_ascii and encoding parameters. Is there a way to encode bytestrings to Unicode strings that preserves ordinal character values? Otherwise I think I need to encode the characters above the ASCII range into JSON-style \u1234 escapes; but a codec like this does not seem to be among Python's codecs. Is there an easy way to serialize Python bytestrings to JSON, preserving char values, or do I need to write my own encoder?
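
    A sketch of the latin-1 round trip, which maps byte value N to code point N and so preserves ord() values through JSON (assuming Python 2.x, as in the question):

        import json

        def bytes_to_json(s):
            # latin-1 decodes byte N to code point N, so ord(s[i]) in Python
            # equals s.charCodeAt(i) after JSON.parse in JavaScript
            return json.dumps(s.decode('latin-1'))

        def json_to_bytes(j):
            return json.loads(j).encode('latin-1')

        print bytes_to_json('\x00\x7f\x80\xff')   # "\u0000\u007f\u0080\u00ff"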

    Read the article

  • How to read special characters from stdin in Python?

    - by erickrf
    I'm having trouble reading special characters from stdin. Here are my attempts: import os dir = raw_input("Dir name: ") Dir name: c:/á os.chdir(dir) WindowsError: [Error 2] The system cannot find the file specified: 'c:/\x81\xe1' OK, so I tried to get the default system encoding and recode the string from stdin: import locale encoding = locale.getdefaultlocale()[1] print encoding cp1252 unicode(dir, encoding) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\Python26\lib\encodings\cp1252.py", line 15, in decode return codecs.charmap_decode(input,errors,decoding_table) UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 3: character maps to <undefined> Now, I don't know how to solve this, nor can I understand why there is a problem when I try to access a directory whose name is written in the system default encoding itself.
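
    A hedged guess at a fix: on Windows the console normally runs an OEM code page (something like cp850), not cp1252, and sys.stdin.encoding reports what it is actually using; the 'mbcs' fallback below is an assumption:

        import os
        import sys

        raw = raw_input("Dir name: ")
        console_encoding = sys.stdin.encoding or 'mbcs'
        # decode the console bytes with the console's own code page
        os.chdir(raw.decode(console_encoding))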

    Read the article

  • Write data to an xml file.

    - by Bobby
    My aim is to write an XML file with a few tags whose values are in the regional language. I'm using Python to do this and IDLE (Python GUI) for programming. When I try to write the words to an xml file it gives the following error: UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-4: ordinal not in range(128) For now, I'm not using any xml writer library; instead I'm opening a file "test.xml" and writing the data into it. This error is raised by the line: f.write("\t\" + data + "\"). If I replace the above write statement with a print statement then it prints the data properly in the Python Shell. I'm reading the data from an Excel file which is not in the UTF-8, -16, or -32 encoding formats; it's in some other format, and cp1252 reads the data properly. Any help in getting this data written to an xml file would be highly appreciated. Thanks in advance. Bob
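
    A minimal sketch, assuming the value read from Excel is a cp1252 byte string (skip the decode if it is already unicode); the <name> element and the sample text are placeholders, not taken from the question:

        import codecs

        def write_value(path, value, source_encoding='cp1252'):
            """Write one placeholder <name> element to an XML file as UTF-8."""
            if isinstance(value, str):
                value = value.decode(source_encoding)   # bytes -> unicode first
            f = codecs.open(path, 'w', encoding='utf-8')
            f.write(u'<?xml version="1.0" encoding="utf-8"?>\n')
            f.write(u'\t<name>%s</name>\n' % value)
            f.close()

        write_value('test.xml', u'\u0915\u093e\u0932')   # placeholder regional-language text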

    Read the article

  • Reading UDP Packets

    - by Thomas Mathiesen
    I am having some trouble dissecting a UDP packet. I am receiving the packets and storing the data and sender-address in variables 'data' and 'addr' with: data,addr = UDPSock.recvfrom(buf) This parses the data as a string, that I am now unable to turn into bytes. I know the structure of the datagram packet which is a total of 28 bytes, and that the data I am trying to get out is in bytes 17:28. I have tried doing this: mybytes = data[16:19] print struct.unpack('>I', mybytes) --> struct.error: unpack str size does not match format And this: response = (0, 0, data[16], data[17], 6) bytes = array('B', response[:-1]) print struct.unpack('>I', bytes) --> TypeError: Type not compatible with array type And this: print "\nData byte 17:", str.encode(data[17]) --> UnicodeEncodeError: 'ascii' codec can't encode character u'\xff' in position 0: ordinal not in range(128) And I am not sure what to try next. I am completely new to sockets and byte-conversions in Python, so any advice would be helpful :) Thanks, Thomas
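
    A sketch of what may be going wrong: '>I' expects exactly 4 bytes, while data[16:19] is only 3. Assuming the field of interest really occupies bytes 17-28, something like this (the packet contents below are a stand-in):

        import struct

        data = '\x00' * 16 + '\x00\x00\x00\x2a' + '\x00' * 8   # stand-in 28-byte packet

        payload = data[16:28]                          # bytes 17..28 (indices 16..27)
        (value,) = struct.unpack('>I', payload[:4])    # one big-endian unsigned 32-bit int
        values = struct.unpack('>III', payload)        # or the whole 12-byte tail as three
        print value, values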

    Read the article

  • Lost in UTF-8 hell. (Django and Python)

    - by user140314
    I am working through the Django RSS reader project here. The RSS feed will read something like "OKLAHOMA CITY (AP) — James Harden let". The RSS feed's encoding reads encoding="UTF-8" so I believe I am passing utf-8 to markdown in the code snippet below. The em dash is where it chokes. I get the Django error of "'ascii' codec can't encode character u'\u2014' in position 109: ordinal not in range(128)" which is an UnicodeEncodeError. In the variables being passed I see "OKLAHOMA CITY (AP) \u2014 James Harden". The code line that is not working is: content = content.encode(parsed_feed.encoding, "xmlcharrefreplace") I am using markdown 2.0, django 1.1, and python 2.4. What is the magic sequence of encoding and decoding that I need to do to make this work? Thanks.
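
    One guess at a workaround, assuming the layer that insists on ASCII cannot be changed: hand it pure ASCII with the non-ASCII characters turned into XML character references. The helper name and the isinstance check are assumptions, not the project's code:

        def to_ascii_bytes(content, feed_encoding='utf-8'):
            """Return ASCII bytes with non-ASCII characters as XML char references."""
            if isinstance(content, str):
                # raw feed bytes -> unicode first (the feed declares UTF-8)
                content = content.decode(feed_encoding, 'replace')
            # the em dash u'\u2014' becomes &#8212; instead of upsetting the ascii codec
            return content.encode('ascii', 'xmlcharrefreplace')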

    Read the article

  • exceptions with python unicode encode/decode functions (why doesn't errors=ignore actually ignore th

    - by gatoatigrado
    Does anyone know why the string conversion functions throw exceptions when errors="ignore" is passed? How can I convert from regular Python string objects to unicode without errors being thrown? Thanks very much! python -c "import codecs; codecs.open('tmp', 'wb', encoding='utf8', errors='ignore').write('?????')" returns Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/lib/python2.6/codecs.py", line 686, in write return self.writer.write(data) File "/usr/lib/python2.6/codecs.py", line 351, in write data, consumed = self.encode(object, self.errors) UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)
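
    A hedged sketch of why this happens: the writer returned by codecs.open only encodes unicode objects; handing it a byte string triggers an implicit ascii decode first, and errors='ignore' never applies to that step. The non-ASCII text below is a placeholder for the characters lost from the excerpt:

        import codecs

        f = codecs.open('tmp', 'wb', encoding='utf8', errors='ignore')
        # write a unicode object, not bytes, so only the utf8 *encode* runs
        f.write(u'caf\xe9 \u0440\u0443\u0441')
        f.close()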

    Read the article

  • problem using base64 encoder and InputStreamReader

    - by karoberts
    I have some CLOB columns in a database that I need to put Base64 encoded binary files in. These files can be large, so I need to stream them; I can't read the whole thing in at once. I'm using org.apache.commons.codec.binary.Base64InputStream to do the encoding, and I'm running into a problem. My code is essentially this: FileInputStream fis = new FileInputStream(file); Base64InputStream b64is = new Base64InputStream(fis, true, -1, null); InputStreamReader reader = new InputStreamReader(b64is); preparedStatement.setCharacterStream(1, reader); When I run the above code, I get one of these during the execution of the update: java.io.IOException: Underlying input stream returned zero bytes. It is thrown deep in the InputStreamReader code. Why would this not work? It seems to me like the reader would attempt to read from the base 64 stream, which would read from the file stream, and everything should be happy.

    Read the article

  • python read utf8 text file problem

    - by cpps
    I have a problem with Python reading and printing a UTF-8 text file. I have a test.txt in UTF-8 encoding without a BOM. This file has two characters in it: ?? The first character "?" is Chinese and the second "?" is Japanese. Now, when I use Ulipad (a Python editor) to run the following code to read the txt file and print these two characters: import codecs infile = "C:\\test.txt" f = codecs.open(infile, "r", "utf-8") s = f.read() print(s) I get this error: "UnicodeEncodeError: 'cp950' codec can't encode character '\u58f0' in position 1: illegal multibyte sequence" I found it is caused by the second character "?". But when I use the same code in Python's default GUI IDLE, it prints the two characters with no error. So, how can I fix the problem? My environment is Python 3.1, Windows XP Traditional Chinese.
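
    A sketch for Python 3, assuming the goal is just to see the text on a cp950 console: round-trip the string through cp950 with errors="replace" so unrepresentable characters become ? instead of raising:

        # Python 3: the built-in open() decodes the file for us
        with open(r"C:\test.txt", "r", encoding="utf-8") as f:
            s = f.read()

        # replace anything the cp950 console cannot represent, then print
        print(s.encode("cp950", "replace").decode("cp950"))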

    Read the article

  • Python encoding for pipe.communicate

    - by Brian M. Hunt
    I'm calling pipe.communicate from Python's subprocess module from Python 2.6. I get the following error from this code: from subprocess import Popen pipe = Popen(cwd) pipe.communicate( data ) For an arbitrary cwd, and data that contains unicode (specifically 0xE9): Exec. exception: 'ascii' codec can't encode character u'\xe9' in position 507: ordinal not in range(128) Traceback (most recent call last): ... stdout, stderr = pipe.communicate( data ) File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 671, in communicate return self._communicate(input) File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 1177, in _communicate bytes_written = os.write(self.stdin.fileno(), chunk) This is happening, I presume, because pipe.communicate() is expecting an ASCII-encoded string, but data is unicode. Is this the problem I'm encountering, and is there a way to pass unicode to pipe.communicate()? Thank you for reading! Brian
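
    A minimal sketch of the usual fix: communicate() writes raw bytes to the child's stdin, so encode the unicode payload first. The command, the payload and the utf-8 choice are assumptions, not the original code:

        from subprocess import Popen, PIPE

        data = u'r\xe9sum\xe9\n'                        # stand-in unicode payload
        pipe = Popen(['cat'], stdin=PIPE, stdout=PIPE)  # stand-in command

        # encode to the encoding the child process expects (assumed utf-8 here)
        stdout, stderr = pipe.communicate(data.encode('utf-8'))
        print stdout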

    Read the article

  • Simple but efficient way to store a series of small changes to an image?

    - by finnw
    I have a series of images. Each one is typically (but not always) similar to the previous one, with 3 or 4 small rectangular regions updated. I need to record these changes using a minimum of disk space. The source images are not compressed, but I would like the deltas to be compressed. I need to be able to recreate the images exactly as input (so a lossy video codec is not appropriate). I am thinking of something along the lines of: composite the new image with a negative of the old image; save the composited image in any common format that can compress using RLE (probably PNG); recreate the second image by compositing the previous image with the delta. Although the images have an alpha channel, I can ignore it for the purposes of this function. Is there an easy-to-implement algorithm or free Java library with this capability?

    Read the article

  • Python + PostgreSQL + strange ascii = UTF8 encoding error

    - by Claudiu
    I have ascii strings which contain the character "\x80" to represent the euro symbol: >>> print "\x80" € When inserting string data containing this character into my database, I get: psycopg2.DataError: invalid byte sequence for encoding "UTF8": 0x80 HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding". I'm a unicode newbie. How can I convert my strings containing "\x80" to valid UTF-8 containing that same euro symbol? I've tried calling .encode and .decode on various strings, but run into errors: >>> "\x80".encode("utf-8") Traceback (most recent call last): File "<pyshell#14>", line 1, in <module> "\x80".encode("utf-8") UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 0: ordinal not in range(128)
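
    One plausible conversion, assuming the strings really are cp1252 (where 0x80 is the euro sign): decode from cp1252 first, then encode to UTF-8 before handing the value to psycopg2:

        euro_bytes = "\x80"                         # the euro sign in cp1252
        euro_text = euro_bytes.decode('cp1252')     # u'\u20ac'
        euro_utf8 = euro_text.encode('utf-8')       # '\xe2\x82\xac', valid UTF-8
        print repr(euro_text), repr(euro_utf8)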

    Read the article

  • GAE error when I log in.

    - by zjm1126
    I am using http://code.google.com/p/gaema/source/browse/#hg/demos/webapp, and this is my traceback: Traceback (most recent call last): File "D:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 510, in __call__ handler.get(*groups) File "D:\gaema\demos\webapp\main.py", line 31, in get google_auth.get_authenticated_user(self._on_auth) File "D:\gaema\demos\webapp\gaema\auth.py", line 641, in get_authenticated_user OpenIdMixin.get_authenticated_user(self, callback) File "D:\gaema\demos\webapp\gaema\auth.py", line 83, in get_authenticated_user url = self._OPENID_ENDPOINT + "?" + urllib.urlencode(args) File "D:\Python25\lib\urllib.py", line 1250, in urlencode v = quote_plus(str(v)) UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128) How do I fix this? Thanks.
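
    A hedged sketch of one common fix: urllib.urlencode calls str() on every key and value, which fails for non-ASCII unicode, so encode them to byte strings first. The helper and the sample parameters are made up for illustration:

        import urllib

        def encode_params(params, encoding='utf-8'):
            """Return a copy of params with unicode keys/values as byte strings."""
            out = {}
            for k, v in params.iteritems():
                if isinstance(k, unicode):
                    k = k.encode(encoding)
                if isinstance(v, unicode):
                    v = v.encode(encoding)
                out[k] = v
            return out

        print urllib.urlencode(encode_params({u'openid.mode': u'check_authentication',
                                              u'name': u'\u4f60\u597d'}))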

    Read the article

  • python unichr problem

    - by jacob
    I've got some problem with unichr() on my server. Please see below: On my server (Ubuntu 9.04): >>> print unichr(255) Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeEncodeError: 'ascii' codec can't encode character u'\xff' in position 0: ordinal not in range(128) On my desktop (Ubuntu 9.10): >>> print unichr(255) ÿ I'm fairly new to python so I don't know how to solve this. Anyone care to help? Thanks.
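
    A hedged guess: on the server, stdout is probably a pipe or terminal for which Python reports no usable encoding, so print falls back to ascii. Encoding explicitly sidesteps that; the utf-8 fallback is an assumption:

        import sys

        ch = unichr(255)
        # sys.stdout.encoding is often None or 'ascii' over ssh or cron
        print ch.encode(sys.stdout.encoding or 'utf-8', 'replace')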

    Read the article

  • java.io in debian

    - by Stig
    Hello, I try to compile a Java program but the import section of the code fails: import java.net.*; import java.io.*; import java.util.*; import java.text.*; import java.awt.*; //import java.awt.image.*; import java.awt.event.*; //import java.awt.image.renderable.*; import javax.swing.*; import javax.swing.border.*; //import javax.swing.border.EtchedBorder; //import javax.media.jai.*; //import javax.media.jai.operator.*; //import com.sun.media.jai.codec.*; //import java.lang.reflect.*; How can I fix the problem on a Linux Debian machine? Thanks

    Read the article

  • How to preserve data integrity while minimizing the transmission size

    - by user1500578
    We have sensors in the wild that send their data to a server every day via TCP/IP, either through 3G or through satellite for the physical layer. The sensors can automatically switch from one to the other depending on their location and the quality of the signal with the local 3G operator. Given that the 3G and satellite communications are very expensive, we want to minimize the amount of data to send. But also, we want to protect ourselves from lost data. What would be the best strategy to ensure with reasonable certainty that the integrity of our data is preserved, while minimizing the amount of redundancy, i.e. the amount of data transmitted? I've read about the zfec codec, but I'm not sure if we need to transmit all the chunks, or if we need to send a hash code along with each chunk.
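
    Not an answer on erasure coding, but a sketch of the cheap end of the trade-off: a short per-chunk digest lets the server detect a corrupted or truncated chunk and request a resend, costing only a few bytes per chunk. The framing and the 8-byte truncation are arbitrary choices for illustration:

        import hashlib
        import struct

        def frame_chunk(chunk):
            """Prefix a chunk with its length and a truncated SHA-256 digest."""
            digest = hashlib.sha256(chunk).digest()[:8]
            return struct.pack('>I', len(chunk)) + digest + chunk

        def verify_chunk(frame):
            """Return the chunk if its digest matches, otherwise None."""
            (length,) = struct.unpack('>I', frame[:4])
            digest, chunk = frame[4:12], frame[12:12 + length]
            return chunk if hashlib.sha256(chunk).digest()[:8] == digest else None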

    Read the article

  • Google App Engine error when I log in.

    - by zjm1126
    I am using http://code.google.com/p/gaema/source/browse/#hg/demos/webapp, and this is my traceback: Traceback (most recent call last): File "D:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 510, in __call__ handler.get(*groups) File "D:\gaema\demos\webapp\main.py", line 31, in get google_auth.get_authenticated_user(self._on_auth) File "D:\gaema\demos\webapp\gaema\auth.py", line 641, in get_authenticated_user OpenIdMixin.get_authenticated_user(self, callback) File "D:\gaema\demos\webapp\gaema\auth.py", line 83, in get_authenticated_user url = self._OPENID_ENDPOINT + "?" + urllib.urlencode(args) File "D:\Python25\lib\urllib.py", line 1250, in urlencode v = quote_plus(str(v)) UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128) How do I fix this? Thanks. Update: I changed the code from args = dict((k, v[-1]) for k, v in self.request.arguments.iteritems()) args["openid.mode"] = u"check_authentication" url = self._OPENID_ENDPOINT + "?" + urllib.urlencode(args) to args = dict((k, v[-1].encode('utf-8')) for k, v in self.request.arguments.iteritems()) args["openid.mode"] = u"check_authentication" url = self._OPENID_ENDPOINT + "?" + urllib.urlencode(args) but I still get the error.
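
    Since encoding the values alone did not help, one hedged next step is purely diagnostic: find out which key or value is still non-ASCII unicode before urlencode's str() call sees it. find_non_ascii is a made-up helper, not part of gaema:

        def find_non_ascii(args):
            """Print any key/value pair that still holds non-ASCII unicode."""
            for k, v in args.iteritems():
                for item in (k, v):
                    if isinstance(item, unicode):
                        try:
                            item.encode('ascii')
                        except UnicodeEncodeError:
                            print 'offending pair:', repr(k), repr(v)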

    Read the article

  • How do I get Cabal to bypass my Windows proxy settings?

    - by Brent.Longborough
    When retrieving packages with Cabal, I frequently get errors with this message: user error (Codec.Compression.Zlib: premature end of compressed stream) It looks like Cabal is using my Windows Networking proxy settings (for Privoxy). From digging around Google, Cabal or its libraries appear to have (had) a problem in this area. Possible solutions I can see are: Turn off proxying while using Cabal (not very keen on this one); or Get a patch and start hacking. I'm hesitant to go down this path, as I'm a complete Haskell noob and I'm not yet comfortable with Darcs; or Give it the magic "can I haz no proxy" parameter. Hence the question.

    Read the article

  • Python and hebrew encoding/decoding error

    - by user340495
    Hey, I have an sqlite database into which I would like to insert Hebrew values, but I keep getting the following error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xd7 in position 0: ordinal not in range(128) My code is as follows: runsql(u'INSERT into personal values(%(ID)d,%(name)s)' % {'ID':1,'name':fabricate_hebrew_name()}) def fabricate_hebrew_name(): hebrew_names = [u'????',u'???',u'???',u'???',u'????',u'???',u'????',u'???',u'????',u'?????',u'????',u'???',u'????'] return random.sample(names,1)[0].encode('utf-8') Note: runsql executes the query on the sqlite database; fabricate_hebrew_name() should return a string which can be used in my SQL query. Any help is much appreciated.
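
    A sketch of one approach, assuming plain sqlite3 underneath the runsql helper: keep the name as a unicode object (no .encode('utf-8')) and pass it as a query parameter instead of interpolating it with %. The two names here are placeholders, and the sketch also reads from hebrew_names rather than the undefined names in the original:

        import random
        import sqlite3

        hebrew_names = [u'\u05d3\u05e0\u05d9', u'\u05e8\u05d5\u05ea']   # placeholder names

        def fabricate_hebrew_name():
            return random.choice(hebrew_names)   # stays unicode

        conn = sqlite3.connect('people.db')
        conn.execute(u'CREATE TABLE IF NOT EXISTS personal (id INTEGER, name TEXT)')
        conn.execute(u'INSERT INTO personal VALUES (?, ?)', (1, fabricate_hebrew_name()))
        conn.commit()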

    Read the article

  • HTML Audio performance

    - by user1888309
    I'm working on an HTML drum machine and I've hit some performance issues: the rhythm starts to break if the BPM is higher than 110, but I expect it to work at BPM over 180. I guess it could be related to the format or codec of the audio files; however, it may also be that my code is not very optimised (as I can see from JS CPU profiling, it's not). So I'm hoping you guys can give me some code review or some hints on optimisation, although all similar projects I've found on the internet didn't work well either, so maybe it's just a restriction of the Audio API. By the way, it's very raw and sound works only in Chrome under Mac OS, so any advice on audio encoding for the web would also be great. Project on GitHub pages. Screenshot of the groove which breaks. UPDATE: OK, I've found that I was encoding the audio files incorrectly; after fixing that, the rhythm stopped breaking and it also started working in Mozilla. But there are still issues on Windows.

    Read the article

  • Using Finch for the first time. How to play mp3, ogg or other formats (wav files too big)?

    - by Allisone
    My *.wav's work as expected, but wav files are too big, so I want to play *.mp3 or *.ogg, which doesn't work. I use these lines of code found in the Finch demo project: engine = [[Finch alloc] init]; sitar = [[Sound alloc] initWithFile:RSRC(@"sitar.wav")]; [sitar play]; So I only change sitar.wav to my .mp3 filename. Note 1: It doesn't have to be mp3 or ogg; any file format not as huge as wav should be OK, but which? Note 2: I didn't know how to use sound, so I searched and found Finch here at Stack Overflow. It looks easy, so I would like to use that, but if you know some other easy way to play these sound files (ambient + effect sounds with a compressed codec) I would also switch to that technique.

    Read the article

  • Encoding with url and api

    - by user2950824
    So I have this web app set up and running and it works fine for any username that you request, but when I try http://mrcastelo.pythonanywhere.com/lol/euw/Nazaré, it simply doesn't work; the error that I get on the server is the following: iddata= getJSON(urllolbase+region+urlid+username) #SummonerID UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 5: ordinal not in range(128) It is annoying me greatly; I've tried some other threads but none of them led to a fix. The API that I am using (www.legendaryapi.com) does accept this, because this works. Any idea how to fix this?
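
    One hedged guess: the username arrives as UTF-8 bytes and gets mixed with unicode strings while the request URL is built. Keeping everything as bytes and percent-quoting the name is one way out; the base and path below are placeholders for the question's urllolbase and urlid:

        import urllib

        def build_url(base, region, path, username):
            if isinstance(username, unicode):
                username = username.encode('utf-8')
            # percent-encode the UTF-8 bytes so the accented name survives the URL
            return base + region + path + urllib.quote(username)

        print build_url('https://example.invalid/api/', 'euw', '/summoner/', u'Nazar\xe9')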

    Read the article

  • How to show the right word in my code; my code is: os.urandom(64)

    - by zjm1126
    My code is: print os.urandom(64) which outputs: > "D:\Python25\pythonw.exe" "D:\zjm_code\a.py" \xd0\xc8=<\xdbD' \xdf\xf0\xb3>\xfc\xf2\x99\x93 =S\xb2\xcd'\xdbD\x8d\xd0\\xbc{&YkD[\xdd\x8b\xbd\x82\x9e\xad\xd5\x90\x90\xdcD9\xbf9.\xeb\x9b>\xef#n\x84 which isn't readable, so I tried this: print os.urandom(64).decode("utf-8") but then I get: > "D:\Python25\pythonw.exe" "D:\zjm_code\a.py" Traceback (most recent call last): File "D:\zjm_code\a.py", line 17, in <module> print os.urandom(64).decode("utf-8") File "D:\Python25\lib\encodings\utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode bytes in position 0-3: invalid data What should I do to get human-readable output?
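
    os.urandom returns raw random bytes, not text in any codec, so decoding it as UTF-8 is expected to fail. If the goal is just a printable form, a sketch (Python 2):

        import base64
        import os

        raw = os.urandom(64)           # 64 random bytes, not encoded text
        print raw.encode('hex')        # hexadecimal representation
        print base64.b64encode(raw)    # or base64, if a shorter string is wanted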

    Read the article

  • Python: why does str() on some text from a UTF-8 file give a UnicodeDecodeError?

    - by AP257
    I'm processing a UTF-8 file in Python, and have used simplejson to load it into a dictionary. However, I'm getting a UnicodeDecodeError when I try to turn one of the dictionary values into a string: f = open('my_json.json', 'r') master_dictionary = json.load(f) #some json wrangling, then it fails on this line... mysql_string += " ('" + str(v_dict['code']) Traceback (most recent call last): File "my_file.py", line 25, in <module> str(v_dict['code']) + "'), " UnicodeEncodeError: 'ascii' codec can't encode character u'\xf4' in position 35: ordinal not in range(128) Why is Python even using ASCII? I thought it used UTF-8 by default, and this is a UTF-8 file. What is the problem?
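
    A hedged sketch of the usual fix: json.load hands back unicode objects, and str() implicitly encodes them with the ascii codec, which is what raises. Encoding explicitly avoids that; as_utf8 is a made-up helper:

        def as_utf8(value):
            """Return a UTF-8 byte string for a value coming out of json.load."""
            if isinstance(value, unicode):
                return value.encode('utf-8')
            return str(value)

        # from the question: mysql_string += " ('" + as_utf8(v_dict['code']) + "'), "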

    Read the article
