Search Results

Search found 5333 results on 214 pages for 'chunked encoding'.

Page 34/214 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • Load JSON in Python as header character set

    - by mridang
    Hi everyone, I've always found character sets and encodings complicated to understand, and here I'm faced with another problem. My apologies for any inaccuracies; I'll do my best. I'm requesting data from a server which returns JSON. In the HTTP headers it also returns the character set, like so: Content-Type: text/html; charset=UTF-8. I'm using the json library in Python to load the JSON using the json.loads method. When I pass it the returned JSON, it gives me a dictionary in Unicode. I've Googled around and I know that JSON should return Unicode, as JavaScript strings are Unicode objects. How can I load the JSON as UTF-8? I would like to use the same encoding as specified in the response header. I've read this post but it didn't help. Thank you.
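
    A minimal sketch of the usual flow, in Python 3 terms (the URL is a placeholder): the charset from the header is used to decode the raw bytes, json.loads then yields text, and UTF-8 only reappears when you encode on the way out.

        import json
        import urllib.request

        with urllib.request.urlopen("http://example.com/data.json") as resp:
            # Take the charset from the Content-Type header, defaulting to UTF-8.
            charset = resp.headers.get_content_charset() or "utf-8"
            data = json.loads(resp.read().decode(charset))

        # data holds ordinary (Unicode) strings; to get UTF-8 back, encode:
        raw = json.dumps(data, ensure_ascii=False).encode("utf-8")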

    Read the article

  • Stream/string/bytearray transformations in Python 3

    - by Craig McQueen
    Python 3 cleans up Python's handling of Unicode strings. I assume as part of this effort, the codecs in Python 3 have become more restrictive, according to the Python 3 documentation compared to the Python 2 documentation. For example, codecs that conceptually convert a bytestream to a different form of bytestream have been removed: base64_codec, bz2_codec, hex_codec. And codecs that conceptually convert Unicode to a different form of Unicode have also been removed (in Python 2 it actually went between Unicode and bytestream, but conceptually it's really Unicode to Unicode, I reckon): rot_13. My main question is, what is the "right way" in Python 3 to do what these removed codecs used to do? They're not codecs in the strict sense, but "transformations". But the interface and implementation would be very similar to codecs. I don't care about rot_13, but I'm interested to know what would be the "best way" to implement a transformation of line-ending styles (Unix line endings vs Windows line endings), which should really be a Unicode-to-Unicode transformation done before encoding to a byte stream, especially when UTF-16 is being used, as discussed in this other SO question.
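
    For the line-ending case specifically, a plain str-to-str replacement before encoding does the job; a sketch (the function name is mine):

        def to_windows_line_endings(text: str) -> str:
            # Collapse CRLF/CR to LF first so existing Windows endings
            # aren't doubled, then expand every LF to CRLF.
            return (text.replace("\r\n", "\n")
                        .replace("\r", "\n")
                        .replace("\n", "\r\n"))

        payload = to_windows_line_endings("one\ntwo\r\nthree\n").encode("utf-16")

    As for the bytes-to-bytes codecs, Python 3.2 brought base64_codec, bz2_codec and hex_codec back, reachable through codecs.encode()/codecs.decode() rather than through the str/bytes methods.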

    Read the article

  • Should I convert overlong UTF-8 strings to their shortest normal form?

    - by Grant McLean
    I've just been reworking my Encoding::FixLatin Perl module to handle overlong UTF-8 byte sequences and convert them to the shortest normal form. My question is quite simply "is this a bad idea"? A number of sources (including this RFC) suggest that any over-long UTF-8 should be treated as an error and rejected. They caution against "naive implementations" and leave me with the impression that these things are inherently unsafe. Since the whole purpose of my module is to clean up messy data files with mixed encodings and convert them to nice clean utf8, this seems like just one more thing I can clean up so the application layer doesn't have to deal with it. My code does not concern itself with any semantic meaning the resulting characters might have; it simply converts them into a normalised form. Am I missing something? Is there a hidden danger I haven't considered?
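
    For concreteness, a Python sketch (standing in for the Perl) of what "shortest normal form" means for a two-byte overlong sequence; the RFC's worry is that naive validators accept exactly this kind of thing:

        # '/' (U+002F) is 0x2F in shortest form; 0xC0 0xAF is an overlong
        # two-byte encoding of the same code point, historically used to
        # sneak '/' past path checks. Python's strict decoder rejects it:
        try:
            b"\xc0\xaf".decode("utf-8")
        except UnicodeDecodeError:
            pass

        # Normalising instead means recovering the code point and re-encoding:
        b1, b2 = 0xC0, 0xAF
        cp = ((b1 & 0x1F) << 6) | (b2 & 0x3F)      # -> 0x2F
        assert chr(cp).encode("utf-8") == b"/"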

    Read the article

  • Batch convert latin-1 files to utf-8 using iconv

    - by Jasmo
    I have this PHP project on my OS X machine which is in latin1 encoding. Now I need to convert the files to UTF-8. I'm not much of a shell coder and I tried something I found on the internet:

        mkdir new
        for a in `ls -R *`; do iconv -f iso-8859-1 -t utf-8 < "$a" > new/"$a"; done

    But that does not create the directory structure, and it gives me a heck of a lot of errors when run. Can anyone come up with a neat solution?
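
    Since the one-liner loses the directory structure, here is a sketch of the same batch conversion in Python instead of shell (encodings as in the question; "new" is the output root):

        import os

        src_root, dst_root = ".", "new"

        for dirpath, dirnames, filenames in os.walk(src_root):
            # Don't recurse into the output directory itself.
            dirnames[:] = [d for d in dirnames if d != dst_root]
            for name in filenames:
                src = os.path.join(dirpath, name)
                dst = os.path.join(dst_root, os.path.relpath(src, src_root))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                with open(src, encoding="iso-8859-1") as fin, \
                     open(dst, "w", encoding="utf-8") as fout:
                    fout.write(fin.read())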

    Read the article

  • Python BOM error in ASCII file

    - by Intosia
    I have a weird, annoying problem with Python 2.6. I'm trying to run this file (and the others) on my embedded Linux ARM board: http://svn.tuxisalive.com/software_suite_v3/smart-core/smart-server/trunk/TDSService.py. I get this error: File "tuxhttpserver.py", line 1 SyntaxError: encoding problem: with BOM. I know that error is about the BOM bytes, etc. BUT there are NO BOM bytes; it's plain ASCII. I checked with a hex editor, and the Linux file command says it's ASCII. I'm freaking out here... The code worked fine on my SheevaPlug (also an ARM-based system).
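
    When a hex editor and the interpreter disagree, it can help to print exactly what Python itself reads first; a quick check (file name from the question):

        with open("tuxhttpserver.py", "rb") as f:
            head = f.read(4)

        print(repr(head))                          # shows the raw leading bytes
        print(head.startswith(b"\xef\xbb\xbf"))    # True only if a UTF-8 BOM is present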

    Read the article

  • I can't change HTTP request header Content-Type value using jQuery

    - by Matt
    Hi, I tried to override the HTTP request header content using jQuery's AJAX function. It looks like this:

        $.ajax({
            type: "POST",
            url: url,
            data: data,
            contentType: "application/x-www-form-urlencoded;charset=big5",
            beforeSend: function(xhr) {
                xhr.setRequestHeader("Accept-Charset", "big5");
                xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded;charset=big5");
            },
            success: function(rs) {
                target.html(rs);
            }
        });

    The Content-Type header defaults to "application/x-www-form-urlencoded; charset=UTF-8", but it seems I can't override its value whether I use the 'contentType' option or the 'beforeSend' approach. Could anyone give me a hint on how (or whether) I can change the HTTP request's Content-Type value? Thanks a lot. By the way, is there any good documentation where I can study XMLHttpRequest's encoding handling?

    Read the article

  • How to convert UTF-8 and Unicode to normal text?

    - by Mehdi Amrollahi
    I have a downloader program that downloads pages from the internet. The encoding of each page is different; some are in UTF-8 and some are Unicode. For example, &#97; shows the 'a' character, and pages are full of these characters. We need to convert these encodings to normal text. I used the UnicodeEncoding class in C#, but it did not help me. How can I decode these encodings into real characters? Is there a class or method for converting this? Thanks.
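
    Those &#97;-style tokens are HTML numeric character references, not a byte encoding, so an entity decoder is what's needed (in .NET that job is done by HttpUtility.HtmlDecode); the same idea as a Python sketch:

        import html

        print(html.unescape("&#97;bc &amp; &#228;"))   # -> 'abc & ä'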

    Read the article

  • Silverlight Video Player that plays .MP4 & .FLV

    - by YeahStu
    I am currently using the Silverlight 2 Video Player to stream videos. I have been very pleased with it but it only seems to stream .WMV files. Does anyone know if there is a good Silverlight video player that will stream other types of video files, especially .MP4 & .FLV? I would be happy to use Silverlight 3 if necessary. EDIT: Because I like this player and have not found a great option, I am considering encoding files as I receive them so that they will always be streamed later as a .WMV. Unless I determine a good player (I am considering flash at this point), I will have to go down this road.

    Read the article

  • Watermarking Flash Videos (server-side)

    - by Roberto Aloi
    Hi all, I have a bunch of Flash videos that I need to watermark with user-related information, to make illegal redistribution of these files harder. I'm wondering how this can be done server-side. If done client-side, it would be quite easy for the user to intercept the videos before they are watermarked. Since the watermark should contain user-specific information, I can't really watermark the videos before encoding them (unless I keep one encoded video per user, which is not feasible). I'm expecting this to affect streaming performance a lot, though. Any idea how this can be done (possibly in an efficient way)?
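
    One common server-side approach is to burn the text in with FFmpeg's drawtext filter at delivery time (per-user, hence the performance worry); a sketch, assuming ffmpeg is installed and the file names are placeholders:

        import subprocess

        def watermark(src: str, dst: str, user_id: str) -> None:
            # Re-encodes the video with the user id drawn in the corner;
            # audio is copied untouched. Some builds also require a fontfile= option.
            subprocess.run([
                "ffmpeg", "-i", src,
                "-vf", f"drawtext=text='{user_id}':x=10:y=10:fontcolor=white",
                "-c:a", "copy",
                dst,
            ], check=True)

        watermark("movie.flv", "movie_user42.flv", "user42")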

    Read the article

  • Characters outside 0x00-0x7F are not shown when converting from "ISO-8859-1" to "UTF-8"

    - by Mike.Huang
    I need to get a string from the URL request of the browser, and then create a text image from the requested text. I know the default encoding of Java net transmission is "ISO-8859-1"; it works normally with all characters defined in "ISO-8859-1". But when I request a multi-byte Unicode character (e.g. Chinese, or something like ¤?), I need to decode it as "UTF-8" from "ISO-8859-1". My code looks like: String result = new String(requestString.getBytes("ISO-8859-1"), "UTF-8"); Everything is fine, but I found some characters in ISO-8859-1 are no longer shown, namely the characters 0x80-0xFF (as defined in "ISO-8859-1"); i.e. the characters except 0x00-0x7F are not shown when converted from "ISO-8859-1" to "UTF-8". Is there any other method that can solve this?
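
    The trap here is that bytes 0x80-0xFF are complete characters in ISO-8859-1 but only lead/continuation bytes in UTF-8, so the round-trip destroys them unless they really form UTF-8 sequences; a Python sketch of both cases:

        # 'ä' as a single ISO-8859-1 byte is not valid UTF-8 on its own:
        print(b"\xe4".decode("utf-8", errors="replace"))   # -> '\ufffd' (shown as '?')

        # ...but the two-byte UTF-8 sequence for 'ä' decodes fine:
        print(b"\xc3\xa4".decode("utf-8"))                 # -> 'ä'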

    Read the article

  • European signs in img src problem

    - by Rakoon
    Hey. I recently encountered a strange problem on my website: images with æ, ø and å in their names (Western European characters) won't display. The character encoding on all pages is ISO-8859-1, and I can print æ, ø and å on the page without problems. If I right-click the broken image and choose Properties, it displays the filename with the European characters (/admin/content/galleri/å.jpg). The img code looks like this (I wasn't allowed to post images, so the code is without the opening and closing brackets): img name='bilde' src='content/{$_SESSION["linkname"]}/{$row["img"]}' class='topmargin_ss leftmargin_ms rightmargin_s' width='80' height='80'. I made four files (z.jpg, æ.jpg, ø.jpg, å.jpg), and only z.jpg shows up; they are the exact same JPEG. The images are uploaded using PHP code, which works, uploads to the right directory, and has no problem with the European characters. Does anybody know what could be causing this?
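
    One usual suspect is the URL rather than the file: non-ASCII filenames in src need percent-encoding, in the same encoding the server expects. A Python sketch of the two candidate forms (a guess at the cause, not a confirmed diagnosis):

        from urllib.parse import quote

        print(quote("å.jpg", encoding="iso-8859-1"))   # -> '%E5.jpg'
        print(quote("å.jpg", encoding="utf-8"))        # -> '%C3%A5.jpg'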

    Read the article

  • best way to output a full precision double into a text file

    - by flevine100
    Hi, I need to use an existing text file to store some very precise values. When read back in, the numbers essentially need to be exactly equal to the ones that were originally written. Now, a normal person would use a binary file... for a number of reasons, that's not possible in this case. So... do any of you have a good way of encoding a double as a string of characters (aside from just increasing the precision)? My first thought was to cast the double to a char[] and write out the chars. I don't think that's going to work, because some of the characters are not visible, produce sounds, or even terminate strings ('\0'... I'm talkin to you!). Thoughts?
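
    Two standard lossless options: print the double as a hexadecimal float (C's "%a", Java's Double.toHexString), or print enough decimal digits (17 significant digits for IEEE-754 doubles) that the round-trip is exact. In Python terms:

        import math

        x = math.pi

        # Option 1: hexadecimal float, exact by construction.
        s = x.hex()                      # '0x1.921fb54442d18p+1'
        assert float.fromhex(s) == x

        # Option 2: 17 significant decimal digits round-trip exactly for doubles.
        s = format(x, ".17g")
        assert float(s) == x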

    Read the article

  • Java application failing on special characters.

    - by Scottm
    An application I am working on reads information from files to populate a database. Some of the characters in the files are non-English, for example accented French characters. The application works fine on Windows, but on our Solaris machine it fails to recognise the special characters and throws an exception. For example, when it encounters the accented e in "Gérer" it says: Encountered: "\u0161" (353), after : "\'G\u00c3\u00a9rer les mod\u00c3" (an exception thrown from our application). I suspect that in order to stop this from happening I need to change the file.encoding property of the JVM. I tried to do this via System.setProperty() but it has not stopped the error from occurring. Are there any suggestions for what I could do? I was thinking about setting the basic locale of the Solaris platform in /etc/default/init to UTF-8. Does anyone think this might help? Any thoughts are much appreciated.

    Read the article

  • What video codecs have most amount of content and thus popular at present/in future?

    - by goldenmean
    Hi, I want to find out if I can get some data on the percentage-wise distribution of video content across the different video codecs currently used for video encoding. I know different applications/use-case scenarios use different encoders, but I want to consider all of that and get overall usage numbers (%). My guess is (highest to lowest % of content): H.264 (AVC), DivX, MPEG-2, VP6. Where do H.263, MPEG-4, VC-1, RV, Theora, etc. fit in here? How might this look in the future? PS: I would like this to be community wiki to get a wider range of inputs; if someone with privileges can do it for me, please do. Thank you. -AD

    Read the article

  • Integrity of an HTTP POST request from iPhone to web server

    - by gotye
    Hey everyone, I am currently building a module that makes it possible to comment on a news item, and as you probably understood, I will need to insert this new comment into my web database. I know this stuff can be very tedious, so I would like to know if someone has a method that can ensure the integrity of the request by checking some of the usual important things, like: trimming the string, encoding it, escaping it, and so on... If you have some tips for achieving a good insert, do not hesitate ;) Thank you for your time, Gotye.
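
    For the database side, the standard answer is to let a parameterised query do the escaping rather than hand-sanitising; a minimal sketch using Python's sqlite3 as a stand-in for whatever database the server runs (table and names are mine):

        import sqlite3

        def save_comment(conn: sqlite3.Connection, news_id: int, text: str) -> None:
            # Trim whitespace; the ? placeholders handle quoting/escaping safely.
            conn.execute(
                "INSERT INTO comments (news_id, body) VALUES (?, ?)",
                (news_id, text.strip()),
            )
            conn.commit()

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE comments (news_id INTEGER, body TEXT)")
        save_comment(conn, 1, "  Nice article!  ")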

    Read the article

  • How to search for a string including spaces in Objective C?

    - by AlexCu
    I have a really basic command-line program, in Objective-C, that searches for user-entered information. Unfortunately, the code will only read the first word in a series of words that the user enters. For example, if the user enters "Apples are great", only "Apples" is kept (and hence searched later on), excluding the "are great" part of the sentence. Here's what I have so far:

        char enteredQuery[128];   // array to hold the scanf string
        NSString *searchQuery;    // NSString to hold and compare the user-entered data

        NSLog(@"Enter search query:");
        scanf("%s", enteredQuery);  // will read the next line
        searchQuery = [NSString stringWithCString:enteredQuery encoding:NSASCIIStringEncoding];  // converts the scanf data into an NSString

    I know it's got to do with my use of scanf or the character-encoding conversion, but I can't seem to figure it out. Any help in solving the problem is very appreciated! Thanks.

    Read the article

  • Visual Studio 2005 - strange characters rendered for ANSI text

    - by Apogee
    Hi all, has anyone seen this odd text-rendering issue in VS2005 before? The first line of using statements actually says "using System;". If I copy the line as it is displayed and paste it into Notepad, the text appears correctly, so clearly the character codes are correct. In addition, the solution compiles and runs correctly. I was thinking it might be due to ClearCase using a different character encoding, as all the solutions we're using were freshly checked out yesterday onto a new build machine, but this is only happening in 2 of our ~30 solutions. Incidentally, the same .cs files render correctly when opened in VS2008 on this machine; could this be a corruption in VS2005?

    Read the article

  • Turkish characters are not displayed correctly

    - by tfeseas
    My MySQL database uses UTF-8 encoding and the data are stored correctly. I use a SET NAMES utf8 query to make sure the data I fetch are UTF-8 encoded. All variables from the database work fine as long as the header charset is UTF-8, but the static HTML characters do not work properly. When I set the header charset to ISO-8859-9, the variables are displayed differently while the HTML characters work OK. Can anyone help me?

        <?php header('Content-Type: text/html; charset=ISO-8859-9'); ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head><title>noname</title>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Read the article

  • Why does the Solr admin query page interpret UTF-8 as ISO-8859-1?

    - by Scott Chu
    I deployed a WAR to my Tomcat 6.0.35 on 64-bit Windows 7, and when I use the full-interface query page (I mean form.jsp) in the Solr Admin to query two Chinese characters (call them C1C2), the debug info shows:

        <lst name="debug">
        <str name="rawquerystring">æ°è</str>
        <str name="querystring">æ°è</str>
        <str name="parsedquery">NEWSID:æ°è</str>
        <str name="parsedquery_toString">NEWSID:æ°è</str>
        ...

    You can see C1C2 becomes æ°è. I deployed the same WAR file to Tomcat on Linux and to another 64-bit Windows 7 machine belonging to a colleague, and there the encoding behaves correctly. Does anyone know why, and how I can avoid this problem? Thanks in advance!
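
    An 'æ'-prefixed fragment like this is the classic signature of UTF-8 bytes being decoded as a Western single-byte codepage somewhere between the form and the query parser; a Python sketch of the mechanism, using '新聞' as a stand-in for C1C2:

        s = "新聞"                      # stand-in for the two characters
        raw = s.encode("utf-8")         # six UTF-8 bytes
        print(raw.decode("cp1252", errors="replace"))   # roughly 'æ–°è?ž'

    If that matches, the usual cure on Tomcat is to set URIEncoding="UTF-8" on the HTTP connector in the affected install's server.xml, since GET parameters are decoded as ISO-8859-1 by default.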

    Read the article

  • How to get rid of "d»z" or "" characters

    - by Cassandra
    I have a website based on Umbraco 5. I have installed a contact form plugin (http://cultivjupitercontact.codeplex.com/), and on the web page, at the end of this contact form, there are always the characters "d»z". It looks like this:

        ... <input type="submit" value="Send" />
        </fieldset>
        <input name='uformpostroutevals' type='hidden' value='somevalue' /></form>d»z

    I suspect there is something wrong with the encoding. I have tried to change it (to ANSI, or to UTF-8 without BOM), but it didn't help. Perhaps I changed it in the wrong file, because I don't really know where exactly this "d»z" is coming from; all I know is that it came with this plugin. On a different server those extra characters are "". How can I get rid of those extra characters? Any help much appreciated!
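
    Three characters of junk right after template output is very often a UTF-8 byte order mark saved into one of the plugin's view or partial files and then rendered under a different codepage. Which codepage yields exactly "d»z" is a guess on my part, but the mechanism looks like this Python sketch:

        bom = b"\xef\xbb\xbf"                          # UTF-8 byte order mark

        print(bom.decode("cp1252"))                    # -> 'ï»¿' (Western-codepage junk)
        print(bom.decode("cp852", errors="replace"))   # another codepage, other junk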

    Read the article

  • Manipulating both unicode and ASCII character set in C#

    - by Murlex
    I have this mapping in my C# application: string[,] unicode2Ascii = { { "&#3001;", "\x86" } }, where &#3001; is the numeric character reference for the Tamil letter "ஹ" (the raw form in which MS Word saved the Unicode value as a byte sequence). I am trying to map these Unicode value "strings" to hex values under 255 (so as to accommodate systems without Unicode support). I'm trying to use string.Replace like this: S = S.Replace(unicode2Ascii[0,0], unicode2Ascii[0,1]); However, the resulting output has a ? instead of the actual hex 0x86 stored. Any pointer on how I could set the encoding for the second element of that array to something like windows-1252? Or is there a better way to do this conversion? Thanks in advance.
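
    The underlying job is a custom character-to-byte table: ASCII passes through, and each mapped character becomes its single assigned byte. A sketch of that idea in Python (in C# the equivalent is operating on bytes rather than strings, or attaching a custom EncoderFallback to the target Encoding):

        TABLE = {"\u0BB9": 0x86}    # Tamil letter HA -> chosen single-byte slot

        def to_custom_bytes(text: str) -> bytes:
            out = bytearray()
            for ch in text:
                if ord(ch) < 0x80:
                    out.append(ord(ch))          # plain ASCII passes through
                elif ch in TABLE:
                    out.append(TABLE[ch])        # mapped character -> its byte
                else:
                    raise ValueError(f"no mapping for {ch!r}")
            return bytes(out)

        print(to_custom_bytes("abc\u0BB9"))      # -> b'abc\x86'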

    Read the article

  • Python "string_escape" vs "unicode_escape"

    - by Mike Boers
    According to the docs, the built-in string encoding string_escape "Produce[s] a string that is suitable as string literal in Python source code", while unicode_escape "Produce[s] a string that is suitable as Unicode literal in Python source code". So they should have roughly the same behaviour. BUT, they appear to treat single quotes differently:

        >>> print """before '" \0 after""".encode('string-escape')
        before \'" \x00 after
        >>> print """before '" \0 after""".encode('unicode-escape')
        before '" \x00 after

    string_escape escapes the single quote while the Unicode one does not. Is it safe to assume that I can simply do:

        >>> escaped = my_string.encode('unicode-escape').replace("'", "\\'")

    ...and get the expected behaviour?

    Read the article

  • Calling Msbuild from Php - Wrong Codepage and Culture

    - by miasbeck
    I have a PHP script that calls MSBuild via system():

        <?php system( "msbuild umlaut.proj" ); ?>

    This is the project file:

        <?xml version="1.0" encoding="UTF-8"?>
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="EchoUmlaut" ToolsVersion="3.5">
          <Target Name="EchoUmlaut">
            <Message Text="Umlaute: Ä Ö Ü ä ö ü ß" />
          </Target>
        </Project>

    When I call MSBuild directly from the command line, the output of MSBuild is in German (as expected) and the umlauts come out OK (I chcp'd to 1252). But when I use PHP to call MSBuild, the umlauts are wrong and the output of MSBuild switches to English. I wonder what I can do to prevent this.

        C:\>chcp
        Aktive Codepage: 1252.

        C:\>msbuild umlaut.proj
        Microsoft (R)-Buildmodul, Version 3.5.30729.1
        [Microsoft .NET Framework, Version 2.0.50727.3607]
        Copyright (C) Microsoft Corporation 2007. Alle Rechte vorbehalten.

        Das Erstellen wurde am 13.04.2010 08:57:04 gestartet.
        Projekt "D:\Cvsroot\projekte\e4elaui\v1.0\umlaut.proj" auf Knoten 0 (Standardziele).
        Umlaute: Ä Ö Ü ä ö ü ß
        Die Erstellung von Projekt "D:\Cvsroot\projekte\e4elaui\v1.0\umlaut.proj" ist abgeschlossen (Standardziele).

        Das Erstellen war erfolgreich.
        0 Warnung(en)
        0 Fehler
        Vergangene Zeit 00:00:00

        C:\>php call_from_php.php
        Microsoft (R) Build Engine Version 3.5.30729.1
        [Microsoft .NET Framework, Version 2.0.50727.3607]
        Copyright (C) Microsoft Corporation 2007. All rights reserved.

        Build started 13.04.2010 08:57:11.
        Project "D:\Cvsroot\projekte\e4elaui\v1.0\umlaut.proj" on node 0 (default targets).
        Umlaute: Ž ™ š „ ” á
        Done Building Project "D:\Cvsroot\projekte\e4elaui\v1.0\umlaut.proj" (default targets).

        Build succeeded.
        0 Warning(s)
        0 Error(s)
        Time Elapsed 00:00:00
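
    The garbled line is exactly what comes out when OEM codepage-850 bytes are displayed as Windows-1252, which suggests MSBuild falls back to the OEM codepage (and to English resources) when run without an interactive console; a Python sketch reproducing the garbage from the transcript:

        s = "Ä Ö Ü ä ö ü ß"
        print(s.encode("cp850").decode("cp1252", errors="replace"))
        # -> 'Ž ™ š „ ” ? á', matching the 'Ž ™ š „ ” á' seen from PHP

    If that is the mechanism, forcing the codepage in the spawned shell (e.g. system('chcp 1252 && msbuild umlaut.proj')) is one thing to try.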

    Read the article

  • jQuery: AJAX umlauts & special characters are a mess

    - by rayne
    I've just created my first AJAX function with jQuery, which actually works, but unfortunately the character encoding (for characters like ä, ö, ü, ß, c, c, å, ø) is a nightmare. My files and my database are all UTF-8. I've tried a multitude of options in the AJAX function and the PHP function, none of which were satisfactory. This is my AJAX:

        var dataString = {
            'name': name,
            'mail': mail
            // other stuff
        }
        $.ajax({
            type: "POST",
            url: "/post.php",
            data: dataString,
            contentType: "application/x-www-form-urlencoded;charset=UTF-8",
            cache: false,
            success: function(html){
                // do stuff
            }
        });

    I've tried it without contentType: "application/x-www-form-urlencoded;charset=UTF-8", and I've tried to wrap the affected data in encodeURIComponent(), none of which worked. When I use that AJAX with htmlentities() in my PHP, my umlauts look like this in plain text: UE Ã?, AE Ã?, OE Ã?, ue ü, ae ä, oe o. And like this in the database: UE Ãœ, AE Ä, OE Ö, ue ü, ae ä, oe o. If I don't use htmlentities() but mysql_real_escape_string() instead (or neither), they look good in plain text, but they look like this in the database: AE Ä, OE Ö, UE Ãœ, ae ä, oe ö, ue ü. I've been trying tons of options for hours now, but I can't find a solution that works. So far the only option I seem to have is having them look like a total mess in the database, but that would be very counterproductive if those data sets ever need to be edited.
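
    For what it's worth, 'ü' is exactly what a correct UTF-8 'ü' looks like when decoded once more as latin1/cp1252, which points at the MySQL connection charset rather than at jQuery (AJAX posts UTF-8 by default); a Python sketch of the effect:

        s = "ü Ü"
        print(s.encode("utf-8").decode("cp1252"))   # -> 'ü Ãœ' (MySQL's latin1 is cp1252-like)

    If that matches, issuing SET NAMES utf8 (or mysql_set_charset('utf8', $link) in PHP) right after connecting usually makes the stored form come out clean.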

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >