Search Results

Search found 14723 results on 589 pages for 'video encoding'.

  • ruby 1.9: invalid byte sequence in UTF-8

    - by Marc Seeger
    I'm writing a crawler in Ruby (1.9) that consumes lots of HTML from a lot of random sites. When trying to extract links, I decided to just use .scan(/href="(.*?)"/i) instead of nokogiri/hpricot (major speedup). The problem is that I now receive a lot of "invalid byte sequence in UTF-8" errors. From what I understand, the net/http library doesn't have any encoding-specific options, and the data that comes in is basically not properly tagged. What would be the best way to actually work with that incoming data? I tried .encode with the replace and invalid options set, but no success so far...
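
    A common approach (sketched here in Python, since the same idea applies to whatever raw bytes the HTTP layer hands back) is to decode with a lossy error handler, so stray bytes become U+FFFD instead of raising before the link scan:

        import re
        import urllib.request

        # Fetch raw bytes; sites frequently mis-declare or omit their encoding.
        raw = urllib.request.urlopen("http://example.com/").read()

        # Decode as UTF-8, substituting U+FFFD for invalid sequences instead of raising.
        html = raw.decode("utf-8", errors="replace")
        links = re.findall(r'href="(.*?)"', html, re.IGNORECASE)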

    Read the article

  • Saving CSV in Cocoa

    - by happyCoding25
    Hello, I need to make a CSV file in Cocoa. To see how to set it up, I created one in Numbers and opened it with TextEdit; it looked like this:

        Results,,,,,,,,,,,,
        ,,,,,,,,,,,,
        A,10,,,,,,,,,,,
        B,10,,,,,,,,,,,
        C,10,,,,,,,,,,,
        D,10,,,,,,,,,,,
        E,10,,,,,,,,,,,

    So to replicate this in Cocoa I used:

        NSString *CVSData = [NSString stringWithFormat:@"Results\n,,,,,,,,,,,,\nA,%@,,,,,,,,,,,\nB,%@,,,,,,,,,,,\nC,%@,,,,,,,,,,,\nD,%@,,,,,,,,,,,\nE,%@,,,,,,,,,,,", [dataA stringValue], [dataB stringValue], [dataC stringValue], [dataD stringValue], [dataE stringValue]];

    and then:

        [CVSData writeToFile:[savePanel filename] atomically:YES];

    But when I try to open the saved file with Numbers I get the error “Untitled.cvs” could not be handled because Numbers cannot open files in the “Numbers Document” format. Could this be something with the way Cocoa is encoding the file? Thanks for any help.

    Read the article

  • Stack Overflow on Marshal.PtrToStructure reading wmv files

    - by Nick Udell
    Hi, I'm using a frame grabber class in order to capture and process each frame in a video. The class can be found here: http://www.codeproject.com/KB/graphics/FrameGrabber.aspx I'm having issues with running it, however. When loading the file, it attempts to marshal a video format pointer into a VideoInfoHeader (I'm using DirectShow.Net). The code that does this is as follows:

        videoInfo = (VideoInfoHeader)Marshal.PtrToStructure(mediaType.formatPtr, typeof(VideoInfoHeader));

    When I run this, it immediately crashes out of the debugging environment, probably with a stack overflow. When stepping through, I can see that formatPtr always equals 93, though I do not know what to make of this, as I am fairly new to marshalling. I have checked that the video runs fine in Windows Media Player. This step is essential in finding the dimensions of the video and also the size of the header, which needs to be skipped before the frames can be read. I am running Windows 7 x64. Any help on this would be much appreciated; I must've tried fifteen different frame-grabbing techniques.

    Read the article

  • problem using base64 encoder and InputStreamReader

    - by karoberts
    I have some CLOB columns in a database that I need to put Base64-encoded binary files in. These files can be large, so I need to stream them; I can't read the whole thing in at once. I'm using org.apache.commons.codec.binary.Base64InputStream to do the encoding, and I'm running into a problem. My code is essentially this:

        FileInputStream fis = new FileInputStream(file);
        Base64InputStream b64is = new Base64InputStream(fis, true, -1, null);
        InputStreamReader reader = new InputStreamReader(b64is);
        preparedStatement.setCharacterStream(1, reader);

    When I run the above code, I get one of these during execution of the update:

        java.io.IOException: Underlying input stream returned zero bytes

    It is thrown deep in the InputStreamReader code. Why would this not work? It seems to me like the reader would attempt to read from the Base64 stream, which would read from the file stream, and everything should be happy.
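
    For comparison, here is a minimal streaming Base64 sketch in Python (the file path is hypothetical; the chunk size is a multiple of 3 bytes, so concatenating the per-block output yields a valid stream with no padding until the end):

        import base64

        def base64_chunks(path, chunk_size=3 * 1024):
            # Read the file in blocks whose size is a multiple of 3 bytes, so
            # each block encodes to Base64 with no internal padding.
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk_size)
                    if not block:
                        break
                    yield base64.b64encode(block)

        encoded = b"".join(base64_chunks("big_file.bin"))  # or feed chunks to the DB driver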

    Read the article

  • Why reading in a UTF-16LE file won't convert "\r\n" into "\n" on Windows

    - by Dbger
    I am using Perl to read UTF-16LE files on Windows 7. If I read in an ASCII file with the following code:

        open CUR_FILE, "<", $asciiFile;

    then each "\r\n" in the file is converted into a "\n" in memory. If I read in a UTF-16LE (Windows code page 1200) file with the following code:

        open CUR_FILE, "<:encoding(UTF-16LE)", $utf16leFile;

    then "\r\n" is kept unchanged. This inconsistency causes problems when I try to match lines with line breaks using regexes. My question is: is this how Unicode works in Perl and Windows, or am I using the wrong code? Thanks so much!
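
    For contrast, Python's text mode decodes first and translates newlines afterwards, so a UTF-16LE file gets the same "\r\n" to "\n" treatment as an ASCII one; a sketch, assuming a file named utf16le.txt:

        # Universal newlines are applied to the *decoded* text, so the encoding
        # doesn't matter: "\r\n" arrives in memory as "\n".
        with open("utf16le.txt", encoding="utf-16-le") as f:
            text = f.read()
        assert "\r\n" not in text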

    Read the article

  • Shadowbox.js and dailymotion videos

    - by Greenie
    Hi there. I have a site set up with some thumbs and links to videos hosted on Vimeo. I show them in an overlay using shadowbox.js, and this works perfectly. Now I want to add a video hosted on Dailymotion, but it doesn't work. The working link to the Vimeo video:

        <a rel="shadowbox;height=636;width=956" href="http://vimeo.com/moogaloop.swf?clip_id=11377863&server=vimeo.com&show_title=1&show_byline=1&show_portrait=1&color=00ADEF&fullscreen=1" class="player_text">thumbnail here</a>

    The non-working link to the Dailymotion video:

        <a rel="shadowbox;height=636;width=956" href="http://www.dailymotion.com/swf/xd6g8y?related=0&autoplay=1" class="player_text">thumbnail here</a>

    When the href is pasted into a browser, both links work fine. Both are played in an SWF as far as I can see, so I can't see why Shadowbox won't show it, unless it's a permission problem on the Dailymotion video. Anyone know what I'm doing wrong?

    Read the article

  • Handling over-long UTF-8 sequences

    - by Grant McLean
    I've just been reworking my Encoding::FixLatin Perl module to handle over-long UTF-8 byte sequences and convert them to the shortest normal form. My question is quite simply "is this a bad idea?" A number of sources (including this RFC) suggest that any over-long UTF-8 should be treated as an error and rejected. They caution against "naive implementations" and leave me with the impression that these things are inherently unsafe. Since the whole purpose of my module is to clean up messy data files with mixed encodings and convert them to nice clean UTF-8, this seems like just one more thing I can clean up so the application layer doesn't have to deal with it. My code does not concern itself with any semantic meaning the resulting characters might have; it simply converts them into a normalised form. Am I missing something? Is there a hidden danger I haven't considered?
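
    For concreteness: the two-byte sequence C0 AF is an over-long encoding of U+002F "/" (the shortest form is the single byte 2F), which is exactly the kind of thing that has been used to smuggle "/" past path-check filters. A small Python sketch of both the strict and the normalising view:

        overlong = b"\xc0\xaf"      # over-long encoding of "/"; shortest form is b"\x2f"

        # A strict decoder rejects it outright:
        try:
            overlong.decode("utf-8")
        except UnicodeDecodeError as exc:
            print(exc)              # invalid start byte

        # Extracting the payload bits by hand recovers the intended code point:
        cp = ((overlong[0] & 0x1F) << 6) | (overlong[1] & 0x3F)
        print(chr(cp))              # "/"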

    Read the article

  • Python unicode issues (2.6)

    - by ephemeralis
    I'm currently working on an IRC bot for a multi-lingual channel, and I'm encountering some issues with Unicode which are proving nearly impossible to solve. No matter what configuration of Unicode encoding I try, the list function the code below sits within either flat out does nothing (c.notice is a class function which sends a NOTICE command to the IRC server) or, when it does do something, spits out something which obviously isn't encoded correctly. The command should be sending ??, but instead it seems hellbent on sending å¤©å­ with a previous configuration of the same commands. The one I have specified below is of the 'send nothing' variety. I haven't worked with Unicode before this, and thus I am quite stuck; I'm also positive that I'm doing this completely wrong as a consequence. (compileCMD just takes a list and spits out a single string of all the elements within the list.)

        uk = self.compileCMD(self.faq.keys(), 0)
        ukeys = unicode(uk, "utf-8").encode("utf-8")
        c.notice(nick, u"Current list of faq entries: %s" % (uk))
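
    The usual pattern (a sketch for Python 2, which the bot targets; c.notice is the call from the question) is to decode to unicode once at the input boundary, keep everything unicode internally, and encode to UTF-8 only when writing to the socket:

        keys = [u"caf\u00e9", u"hello"]        # keep FAQ entries as unicode internally

        listing = u", ".join(keys)
        msg = u"Current list of faq entries: %s" % listing

        wire = msg.encode("utf-8")             # encode exactly once, at the socket boundary
        # c.notice(nick, wire)                 # send the encoded bytes to the server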

    Read the article

  • HTTP Data chunks over multiple packets?

    - by myforwik
    What is the correct way for an HTTP server to send data over multiple packets? For example, I want to transfer a file; the first packet I send is:

        HTTP/1.1 200 OK
        Content-type: application/force-download
        Content-Type: application/download
        Content-Type: application/octet-stream
        Content-Description: File Transfer
        Content-disposition: attachment; filename=test.dat
        Content-Transfer-Encoding: chunked

        400
        <first 1024 bytes here>
        400
        <next 1024 bytes here>
        400
        <next 1024 bytes here>

    Now I need to make a new packet, but if I just send:

        400
        <next 1024 bytes here>

    all the clients close their connections on me and the files are cut short. What headers do I put in a second packet to continue on with the data stream?
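
    For reference, the transfer coding is framed per chunk, not per packet: HTTP/1.1 uses the Transfer-Encoding: chunked header (Content-Transfer-Encoding is a MIME header), and every chunk is a hex length, CRLF, the data, CRLF, ended by a zero-length chunk. A Python sketch of that framing (socket handling omitted):

        def chunked(blocks):
            # Frame an iterable of byte strings as an HTTP/1.1 chunked body:
            # hex size + CRLF + data + CRLF, terminated by a zero-size chunk.
            for data in blocks:
                yield b"%x\r\n" % len(data)
                yield data
                yield b"\r\n"
            yield b"0\r\n\r\n"

        body = b"".join(chunked([b"A" * 1024, b"B" * 1024]))  # each chunk framed as "400\r\n...\r\n"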

    Read the article

  • Efficient way to ASCII encode UTF-8

    - by Andreas Gohr
    I'm looking for a simple and efficient way to store UTF-8 strings in ASCII-7. By efficient I mean the following:

        - all ASCII chars in the input should stay ASCII chars in the output
        - the resulting string should be as short as possible
        - the operation needs to be reversible without any data loss
        - there should be no restriction on the input length
        - the whole UTF-8 range should be allowed

    My first idea was to use Punycode (IDNA) as it fits the first three requirements, but it fails at the last two. Can anyone recommend an alternative encoding scheme? Even better if there's some code available to look at.
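
    One scheme that meets the reversibility and full-range requirements (at the cost of escaping backslashes, and six or ten bytes per non-ASCII character, so compact but not optimal) is Python's unicode_escape codec; a sketch:

        s = u"ascii stays ascii \u2013 \u03ba\u03cc\u03c3\u03bc\u03b5 \U0001f600"

        encoded = s.encode("unicode_escape")        # pure ASCII bytes: \u2013, \U0001f600, ...
        assert all(b < 0x80 for b in encoded)

        decoded = encoded.decode("unicode_escape")  # lossless round trip
        assert decoded == s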

    Read the article

  • AutoKey - clipboard.get_selection() function fails on certain strings

    - by LonnieBest
    I've simplified my script so you can focus on the essence my problem. In AutoKey (not AutoHotKey), I made a Hot-Key (shift-alt-T) that performs this script on any string I have highlighted (like in gedit for example -- but any other gui editor too). strSelectedText = clipboard.get_selection() keyboard.send_keys(" " + strSelectedText) The script modifies the highlighted text and adds a space to the beginning of the string. It works for most strings I highlight, but not this one: * Copyright © 2008–2012 Lonnie Best. Licensed under the MIT License. It works for this string: * Add a Space 2.0.1 but not on this one: * Add a Space 2.0.1 – At the python command prompt, it has no problem any of those strings, yet the clipboard.get_selection() function seems to get corrupted by them. I'm rather new to python scripting, so I'm not sure if this is an AutoKey bug, or if I'm missing some knowledge I should know about encoding/preparing strings in python. Please help. I'm doing this on Ubuntu 12.04: sudo apt-get install autokey-qt

    Read the article

  • Load JSON in Python using the header character set

    - by mridang
    Hi everyone, I've always found character sets and encodings complicated to understand and here I'm faced with another problem. My apologies for any inaccuracies. I'll do my best. I'm requesting data from a server which returns JSON. In the HTTP headers it also returns the character set like so: Content-Type: text/html; charset=UTF-8 I'm using the JSON library in Python to load the JSON using the json.loads method. When I pass it the returned JSON, it gives me a dictionary in Unicode. I've Googled around and I know that JSON should return Unicode as JavaScript strings are Unicode objects. How can I load the JSON as UTF-8? I would like to use the same encoding as specified in the response header. I've read this post but it didn't help. Thank you.
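
    In Python 3's standard library the two steps separate cleanly: the charset from the Content-Type header is used to decode the raw response bytes, and json.loads then yields ordinary (Unicode) strings; if UTF-8 bytes are needed afterwards, that is an explicit encode. A sketch with a hypothetical endpoint:

        import json
        import urllib.request

        resp = urllib.request.urlopen("http://example.com/api")    # hypothetical URL
        charset = resp.headers.get_content_charset() or "utf-8"    # parsed from Content-Type
        data = json.loads(resp.read().decode(charset))

        # Unicode in memory; encode explicitly wherever bytes are required:
        raw = json.dumps(data, ensure_ascii=False).encode("utf-8")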

    Read the article

  • What is "=C2=A0" in MIME encoded, quoted-printable text?

    - by TheSoftwareJedi
    This is an example raw email I am trying to parse:

        MIME-version: 1.0
        Content-type: text/html; charset=UTF-8
        Content-transfer-encoding: quoted-printable
        X-Mailer: Verizon Webmail
        X-Originating-IP: [x.x.x.x]

        =C2=A0test testing testing 123

    What is =C2=A0? I have tried a half dozen quoted-printable parsers, but none handle this correctly. Honestly, for now, I'm coding:

        //TODO WTF
        encoded = encoded.Replace("=C2=A0", "");

    because I can't figure out why that text is there, seemingly at random, within the MIME content, and isn't supposed to be rendered into anything. By just removing it, I'm getting the desired effect - but WHY?!
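
    Decoding according to the declared headers shows it isn't random: =C2=A0 is the quoted-printable escape of the bytes C2 A0, which is the UTF-8 encoding of U+00A0 NO-BREAK SPACE, an invisible but legitimate character. A quick check in Python:

        import quopri

        decoded = quopri.decodestring(b"=C2=A0test testing testing 123")
        print(decoded)                    # b'\xc2\xa0test testing testing 123'
        text = decoded.decode("utf-8")    # per the charset=UTF-8 header
        print(hex(ord(text[0])))          # 0xa0, i.e. NO-BREAK SPACE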

    Read the article

  • Stream/string/bytearray transformations in Python 3

    - by Craig McQueen
    Python 3 cleans up Python's handling of Unicode strings. I assume that as part of this effort, the codecs in Python 3 have become more restrictive, according to the Python 3 documentation compared to the Python 2 documentation. For example, codecs that conceptually convert a bytestream to a different form of bytestream have been removed:

        base64_codec
        bz2_codec
        hex_codec

    And codecs that conceptually convert Unicode to a different form of Unicode have also been removed (in Python 2 it actually went between Unicode and bytestream, but conceptually it's really Unicode to Unicode, I reckon):

        rot_13

    My main question is: what is the "right way" in Python 3 to do what these removed codecs used to do? They're not codecs in the strict sense, but "transformations", yet the interface and implementation would be very similar to codecs. I don't care about rot_13, but I'm interested to know what would be the "best way" to implement a transformation of line-ending styles (Unix line endings vs Windows line endings), which should really be a Unicode-to-Unicode transformation done before encoding to a byte stream, especially when UTF-16 is being used, as discussed in this other SO question.
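
    Later Python 3 releases brought these back as transforms reachable through codecs.encode/codecs.decode (the bytes-to-bytes ones from Python 3.2 on, if I recall correctly), and a line-ending transform really is just a str-to-str replacement done before encoding; a sketch:

        import codecs

        print(codecs.encode(b"hello", "base64"))   # b'aGVsbG8=\n'
        print(codecs.encode(b"hello", "hex"))      # b'68656c6c6f'
        print(codecs.encode("hello", "rot_13"))    # 'uryyb'

        # Unicode-to-Unicode line-ending transformation, before any byte encoding:
        text = "one\r\ntwo\r\n"
        unixified = text.replace("\r\n", "\n")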

    Read the article

  • Python BOM error in ASCII file

    - by Intosia
    I have a weird, annoying problem with Python 2.6. I'm trying to run this file (and the other) on my embedded Linux ARM board: http://svn.tuxisalive.com/software_suite_v3/smart-core/smart-server/trunk/TDSService.py I get this error:

        File "tuxhttpserver.py", line 1
        SyntaxError: encoding problem: with BOM

    I know that error is about the BOM bytes, etc. BUT there are NO BOM bytes; it's plain ASCII. I checked with a hex editor, and the Linux file command says it's ASCII. I'm freaking out here... The code worked fine on my SheevaPlug (also an ARM-based system).
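
    A quick way to double-check for an invisible BOM from Python itself (a sketch using the file named in the traceback):

        import codecs

        with open("tuxhttpserver.py", "rb") as f:
            head = f.read(3)

        print(repr(head))                 # inspect the first bytes directly
        print(head == codecs.BOM_UTF8)    # True only if the file starts with EF BB BF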

    Read the article

  • Batch convert latin-1 files to utf-8 using iconv

    - by Jasmo
    I have this one PHP project on my OS X machine which is in Latin-1 encoding. Now I need to convert the files to UTF-8. I'm not much of a shell coder and I tried something I found on the internet:

        mkdir new
        for a in `ls -R *`; do iconv -f iso-8859-1 -t utf-8 < "$a" > new/"$a"; done

    But that does not create the directory structure, and it gives me a heck of a lot of errors when run. Can anyone come up with a neat solution?
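
    An alternative that does preserve the directory structure, sketched in Python (the source and destination roots are hypothetical; it assumes every file really is Latin-1):

        import os

        def convert_tree(src_root, dst_root):
            # Walk the tree, recreating each directory and transcoding each file.
            for dirpath, _dirnames, filenames in os.walk(src_root):
                target = os.path.join(dst_root, os.path.relpath(dirpath, src_root))
                os.makedirs(target, exist_ok=True)
                for name in filenames:
                    with open(os.path.join(dirpath, name), encoding="iso-8859-1") as f:
                        text = f.read()
                    with open(os.path.join(target, name), "w", encoding="utf-8") as f:
                        f.write(text)

        convert_tree("project", "new")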

    Read the article

  • How to convert UTF-8 and Unicode to normal text?

    - by Mehdi Amrollahi
    I have a downloader program that downloads pages from the internet. The encoding of each page is different; some are in UTF-8 and some are Unicode. For example: &#97;, which shows the 'a' character; pages are full of these characters. We need to convert these encodings to normal text. I used the UnicodeEncoding class in C#, but it did not help me. How can I decode these encodings into real characters? Is there a class or method that does this conversion? Thanks.
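
    Those are HTML numeric character references rather than a text encoding, so the fix is entity decoding, not charset conversion. The asker is working in C#; purely as an illustration of the step involved, in Python:

        from html import unescape

        print(unescape("&#97;"))           # 'a'
        print(unescape("&#199;a va?"))     # 'Ça va?'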

    Read the article

  • I can't change HTTP request header Content-Type value using jQuery

    - by Matt
    Hi, I tried to override the HTTP request header content using jQuery's AJAX function. It looks like this:

        $.ajax({
            type: "POST",
            url: url,
            data: data,
            contentType: "application/x-www-form-urlencoded;charset=big5",
            beforeSend: function(xhr) {
                xhr.setRequestHeader("Accept-Charset", "big5");
                xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded;charset=big5");
            },
            success: function(rs) {
                target.html(rs);
            }
        });

    The Content-Type header defaults to "application/x-www-form-urlencoded; charset=UTF-8", but obviously I can't override its value whether I use the 'contentType' or the 'beforeSend' approach. Could anyone give me a hint as to how I can change the HTTP request's Content-Type value? Thanks a lot. By the way, is there any good documentation where I can study JavaScript's XMLHttpRequest encoding handling?

    Read the article

  • Characters other than 0x00-0x7F are not shown when converted to "UTF-8" from "ISO-8859-1"

    - by Mike.Huang
    I need to get a string from the URL request of a browser, and then create a text image from the requested text. I know the default encoding of Java net transmission is "ISO-8859-1"; it works normally with all characters defined in "ISO-8859-1". But when I request a multi-byte Unicode character (e.g. Chinese, or something like ¤?), I need to decode it as "UTF-8" from "ISO-8859-1". My code is like:

        String result = new String(requestString.getBytes("ISO-8859-1"), "UTF-8");

    Everything is fine, except I found that some characters in ISO-8859-1 are no longer shown, namely the characters 0x80-0xFF (as defined in ISO-8859-1); i.e. the characters outside 0x00-0x7F are not shown when converted to "UTF-8" from "ISO-8859-1". Is there any other method that can solve this?
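
    The getBytes/new String round trip only works when the underlying bytes form valid UTF-8; a lone 0x80-0xFF byte (i.e. genuine Latin-1 text) is not a valid UTF-8 sequence, which is why those characters vanish. The same two cases, made visible in Python as a sketch:

        # Case 1: the bytes really were UTF-8, mis-read as Latin-1; re-decoding fixes it.
        print("G\u00c3\u00a9rer".encode("iso-8859-1").decode("utf-8"))        # 'Gérer'

        # Case 2: the bytes really were Latin-1; 0xE9 alone is invalid UTF-8.
        print("G\u00e9rer".encode("iso-8859-1").decode("utf-8", "replace"))   # 'G\ufffdrer'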

    Read the article

  • Watermarking Flash Videos (server-side)

    - by Roberto Aloi
    Hi all, I have a bunch of Flash videos that I need to watermark with user-related information, to make illegal redistribution of these files harder. I'm wondering how this can be done server-side. If done client-side, it would be quite easy for the user to intercept the videos before they are watermarked. Since the watermark should contain user-specific information, I can't really watermark the videos before encoding them (unless I keep one encoded video per user - not feasible). I'm expecting this to affect streaming performance a lot, though. Any idea how this can be done (possibly in an efficient way)?

    Read the article

  • European signs in img src problem

    - by Rakoon
    Hey. I recently encountered a strange problem on my website: images with æ, ø and å in their names (Western European characters) won't display. The character encoding on all pages is ISO-8859-1, and I can print æ, ø and å on the page without problems. If I right-click the broken image and choose Properties, it displays the filename with the European characters (/admin/content/galleri/å.jpg). The code for the img looks like this:

        <img name='bilde' src='content/{$_SESSION["linkname"]}/{$row["img"]}' class='topmargin_ss leftmargin_ms rightmargin_s' width='80' height='80'>

    I made four files: z.jpg, æ.jpg, ø.jpg and å.jpg; they are the exact same JPEG, but only z.jpg shows up. The images are uploaded using PHP code, which works, uploads to the right directory, and has no problem with the European characters. Does anybody know what could be causing this?
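
    One thing worth checking (an assumption, not a diagnosis) is whether the filename bytes in the URL match the bytes on disk: å percent-encodes differently under ISO-8859-1 and UTF-8, as a quick Python sketch shows:

        from urllib.parse import quote

        print(quote("\u00e5.jpg", encoding="iso-8859-1"))   # '%E5.jpg'
        print(quote("\u00e5.jpg", encoding="utf-8"))        # '%C3%A5.jpg'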

    Read the article

  • best way to output a full precision double into a text file

    - by flevine100
    Hi, I need to use an existing text file to store some very precise values. When read back in, the numbers essentially need to be exactly equivalent to the ones that were originally written. Now, a normal person would use a binary file... but for a number of reasons, that's not possible in this case. So... do any of you have a good way of encoding a double as a string of characters (aside from just increasing the precision)? My first thought was to cast the double to a char[] and write out the chars, but I don't think that's going to work because some of the characters are not visible, produce sounds, and even terminate strings ('\0'... I'm talkin to you!). Thoughts?
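
    Hex float notation was designed for exactly this: C99's printf("%a")/strtod pair round-trips a double exactly in plain ASCII. A sketch of the same idea with Python's float.hex/float.fromhex (repr round-trips too):

        x = 0.1

        s = x.hex()                        # '0x1.999999999999ap-4': exact bits as ASCII
        assert float.fromhex(s) == x       # bit-exact round trip

        assert float(repr(x)) == x         # repr also round-trips (shortest decimal form)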

    Read the article

  • Java application failing on special characters.

    - by Scottm
    An application I am working on reads information from files to populate a database. Some of the characters in the files are non-English, for example accented French characters. The application is working fine on Windows, but on our Solaris machine it is failing to recognise the special characters and is throwing an exception. For example, when it encounters the accented e in "Gérer" it says:

        Encountered: "\u0161" (353), after : "\'G\u00c3\u00a9rer les mod\u00c3"

    (an exception which is thrown from our application). I suspect that in order to stop this from happening I need to change the file.encoding property of the JVM. I tried to do this via System.setProperty() but it has not stopped the error from occurring. Are there any suggestions for what I could do? I was thinking about setting the basic locale of the Solaris platform in /etc/default/init to UTF-8. Does anyone think this might help? Any thoughts are much appreciated.

    Read the article
