Search Results

Search found 28425 results on 1137 pages for 'source encoding'.

  • NSString to NSData Failing in Encoding

    - by Travis
    I'm trying to use NSXMLParser to parse ISO-8859-1 data. Following Apple's own example for parsing ISO-8859-1, I have the following:

        NSString *xmlFilePath = [[NSBundle mainBundle] pathForResource:sampleFileName ofType:@"xml"];
        NSString *xmlFileContents = [NSString stringWithContentsOfFile:xmlFilePath encoding:NSISOLatin1StringEncoding error:nil];
        NSLog(@"contents: %@", xmlFileContents);

    In the console I can see that the contents of the string are accurate. However, the parser needs an NSData object, so I convert the string like this:

        NSData *xmlData = [xmlFileContents dataUsingEncoding:NSISOLatin1StringEncoding];

    But when my didStartElement delegate gets called, I see garbled characters showing up, which I think comes from an encoding discrepancy. Can NSXMLParser handle ISO-8859-1, and if so, what am I doing wrong?
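
    A quick way to see the underlying pitfall, sketched in Python since the same rule applies to any XML parser: the parser trusts the encoding declared in the byte stream, so the bytes handed to it must actually be in that encoding. If the file's XML declaration names one encoding and the NSData bytes are in another, mojibake is exactly what you get.

        import xml.etree.ElementTree as ET

        # Bytes that really are ISO-8859-1, with a matching declaration: parses cleanly.
        ok = '<?xml version="1.0" encoding="ISO-8859-1"?><r>café</r>'.encode('iso-8859-1')
        print(ET.fromstring(ok).text)   # café

        # Same declaration, but the bytes are actually UTF-8: the parser decodes
        # the UTF-8 pair 0xC3 0xA9 as two separate Latin-1 characters.
        bad = '<?xml version="1.0" encoding="ISO-8859-1"?><r>café</r>'.encode('utf-8')
        print(ET.fromstring(bad).text)  # cafÃ©

    So the thing to verify is that the declaration inside the file matches the encoding used in the dataUsingEncoding: round trip.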

  • Unable to encode to iso-8859-1 encoding for some chars using Perl Encode module

    - by ppant
    I have an HTML string in ISO-8859-1 encoding. I need to pass this string to HTML::Entities::decode_entities() to convert some of the HTML character references into the corresponding characters. To do so I am using HTML::Entities from the HTML::Parser 3.65 distribution, but after the decode_entities() call my whole string becomes a utf-8 string. This behavior seems to match the HTML::Parser documentation. Since I need the string back in ISO-8859-1 format for further processing, I have used Encode::encode("iso-8859-1", $str) to convert it back. My results are fine except for some characters, where a question mark appears instead. One example is the curly single quote (character reference &#8217;). Can anybody tell me whether this is a limitation of the Encode module? Any other pointers would also be helpful to solve the problem. Thanks
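
    The '?' is most likely Encode's default substitution for characters that simply do not exist in Latin-1: &#8217; is U+2019, the right single quotation mark, which is outside ISO-8859-1 but present in Windows-1252. A minimal Python illustration of the same behavior (Perl's Encode does the equivalent):

        ch = '\u2019'                              # RIGHT SINGLE QUOTATION MARK, i.e. &#8217;
        print(ch.encode('iso-8859-1', 'replace'))  # b'?'    -- no Latin-1 code point exists
        print(ch.encode('cp1252'))                 # b'\x92' -- Windows-1252 does have it

    So encoding to cp1252 instead of iso-8859-1, or pre-converting U+2019 to a plain apostrophe, would avoid the question marks.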

  • Cannot set Character Encoding using sun-web.xml

    - by stck777
    I am trying to send special characters, such as Spanish characters, from my page to a JSP page as a form parameter. When I try to get the parameter I sent, it shows up as "?" (question mark). After searching a java.net thread I learned that I should have the following entry in my sun-web.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE sun-web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Sun ONE Application Server 8.0 Servlet 2.4//EN" "http://www.sun.com/software/sunone/appserver/dtds/sun-web-app_2_4-0.dtd">
        <sun-web-app>
            <locale-charset-info default-locale="es">
                <locale-charset-map locale="es" charset="UTF-8"/>
                <parameter-encoding default-charset="UTF-8"/>
            </locale-charset-info>
        </sun-web-app>

    But it did not work; with this approach the character still arrives as "?".
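
    As an aside, the exact garbage you see hints at which stage is misconfigured. A small Python sketch of the two classic failure signatures (the parameter value is illustrative):

        param = 'ñ'

        # Decoded with the wrong charset: UTF-8 bytes read as Latin-1 gives "Ã±".
        print(param.encode('utf-8').decode('iso-8859-1'))   # Ã±

        # Re-encoded into a charset that lacks the character gives "?".
        print(param.encode('ascii', 'replace'))             # b'?'

    A literal "?" therefore suggests the value is being encoded somewhere into a charset that cannot represent the character, not merely decoded with the wrong one.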

  • Invalid token error while parsing an XML file with UTF-8 encoding

    - by Niranjan
    I get an invalid token error while parsing an XML file with UTF-8 encoding. The error occurs when the parser encounters the extended ASCII character 'â' (which shows up as "â" when its UTF-8 bytes are misread as Latin-1). When I change the encoding from UTF-8 to ISO-8859-1, the parsing succeeds. But my application should support UTF-8, ASCII, and extended ASCII characters. What should I do? Any ideas are welcome. Thanks in advance for your time.
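
    Here is what is most likely happening, shown with a small Python sketch: the file contains the single byte 0xE2 ('â' in ISO-8859-1), which is not valid UTF-8 on its own, because 0xE2 is a UTF-8 lead byte that must be followed by two continuation bytes. A UTF-8 parser therefore rejects it as an invalid token. The durable fix is to transcode such files to genuine UTF-8 rather than relabel them.

        raw = b'caf\xe2'                  # 0xE2 is 'â' in ISO-8859-1
        print(raw.decode('iso-8859-1'))   # 'cafâ' -- every byte is valid Latin-1
        # raw.decode('utf-8')             # UnicodeDecodeError: truncated sequence
        print(raw.decode('iso-8859-1').encode('utf-8'))   # b'caf\xc3\xa2', real UTF-8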

  • Tomcat Compression Does Not Add a Content-Encoding: gzip in the Header

    - by Julien Chastang
    I am using Tomcat to compress my HTML content like this:

        <Connector port="8080" maxHttpHeaderSize="8192"
                   maxProcessors="150" maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" redirectPort="8443" acceptCount="150"
                   connectionTimeout="20000" disableUploadTimeout="true"
                   compression="on" compressionMinSize="128"
                   noCompressionUserAgents="gozilla, traviata"
                   compressableMimeType="text/html" URIEncoding="UTF-8" />

    In the HTTP response headers (as observed via YSlow), however, I am not seeing Content-Encoding: gzip, resulting in a poor YSlow score. All I see is:

        Server: Apache-Coyote/1.1
        Content-Type: text/html;charset=ISO-8859-1
        Content-Language: en-US
        Content-Length: 5251
        Date: Sat, 14 Feb 2009 23:33:51 GMT

    I am running an Apache mod_jk Tomcat configuration. How do I compress HTML content with Tomcat, and also have it add "Content-Encoding: gzip" to the header?
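
    Two things worth checking, offered as hypotheses since the full setup is not shown: the compression attribute belongs to Tomcat's HTTP connector, so requests arriving through mod_jk's AJP connector never pass through it (in that topology compression is normally configured in Apache httpd instead, e.g. via mod_deflate), and Tomcat only compresses when the client sends Accept-Encoding: gzip. A quick Python probe of what the server actually negotiates, using the third-party requests library and a placeholder URL:

        import requests  # pip install requests

        r = requests.get('http://www.example.com/page.html',
                         headers={'Accept-Encoding': 'gzip'})
        print(r.headers.get('Content-Encoding'))  # 'gzip' only if compression happened
        print(r.headers.get('Content-Length'))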

  • Intra-Unicode "lean" Encoding Converters

    - by Mystagogue
    Windows provides encoding conversion functions ("MultiByteToWideChar" and "WideCharToMultiByte") which are capable of UTF-8 to/from UTF-16 conversions, among other things. But I've seen people offer home-grown 30- to 40-line functions that claim to perform UTF-8 / UTF-16 encoding conversions as well. My question is, how reliable are such tiny converters? Can such a tiny amount of code handle problems such as converting a UTF-16 surrogate pair into a single four-byte UTF-8 sequence (rather than making the mistake of converting it into a pair of three-byte sequences)? Can they correctly spot "unpaired" surrogate input and report an error? In short, are such tiny converters mere toys, or can they be taken seriously? For that matter, why does unicode.org seemingly offer no advice on an algorithm for accomplishing such conversions?
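
    For reference, here is what a correct converter has to get right, illustrated in Python, whose codecs implement the strict behavior the question asks about: an astral character is one surrogate pair in UTF-16 but one four-byte sequence in UTF-8 (encoding each surrogate separately as three bytes yields CESU-8, not UTF-8), and a lone surrogate must be rejected outright.

        ch = '\U0001F600'                        # astral-plane character (outside the BMP)
        print(len(ch.encode('utf-16-le')))       # 4 -- two 16-bit surrogate code units
        print(ch.encode('utf-8'))                # b'\xf0\x9f\x98\x80' -- ONE 4-byte sequence

        lone = '\ud800'                          # unpaired high surrogate
        # lone.encode('utf-8')                   # raises UnicodeEncodeError, as it must

    A 30-line converter can get this right, but many do not, so the surrogate cases above make a good acceptance test.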

  • How to include NetBeans Platform Source code into module dependencies

    - by Ben Hammond
    I am debugging a NetBeans Platform application. I have downloaded the NetBeans Platform source code .zip file, and I would really like to attach the source code in the debugger so that I can seamlessly jump into internal NB source code. Normally I would edit the Library configuration to tell NetBeans where to find the source code, and it would just work. However, this is not possible for NetBeans Modules; when I look at the Utilities API module dependency, it does not look like a normal NB library and there is nowhere to add the source code. I suspect that if I were to rebuild my project using Maven this would work automatically, but that's a terrible reason to switch to Maven. How can I get the NetBeans Platform source code included in my module dependencies?

  • Synchronize an SVN repo (svnsync) with encoding errors

    - by Hamish
    Is it possible to fix or bypass non-UTF-8-encoded svn:log records when synchronizing repositories with svnsync?

    Background: I'm in the process of taking over maintenance of an open source module that is stored within a large (well over 10,000 revisions) Subversion (1.5.5) repository. I do not have admin access to the remote repository to dump/filter/load the module. The old repository is being discontinued, and I am trying to sync the original submodule to my local (1.6+) repository with svnsync. For example:

        svnsync file://home/svn/temp-repo/ http://path.to.repo/modulename/

    The problem is that the old repository didn't enforce UTF-8 encoding, and I'm hitting errors like:

        svnsync: Cannot accept 'svn:log' property because it is not encoded in UTF-8

    I can't modify the log property in the source repository, so I need to somehow modify or ignore the property value when the encoding is unknown or invalid. Any ideas? For example, is it possible to write a pre-revprop-change script to modify the log property in transit?
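
    One caveat before reaching for a hook: a pre-revprop-change hook can only accept or reject a proposed value, not rewrite it, so the fix-up usually has to happen as a separate pass (for example, re-setting svn:log on the mirror after each revision lands). Whatever mechanism applies it, the transformation itself is small. A sketch in Python, under the assumption (which only the real data can confirm) that the legacy messages were Latin-1:

        def to_utf8(raw: bytes) -> bytes:
            """Pass valid UTF-8 through; transliterate everything else."""
            try:
                raw.decode('utf-8')
                return raw                    # already valid UTF-8
            except UnicodeDecodeError:
                # Assumption: legacy logs were ISO-8859-1. Every byte decodes
                # in Latin-1, so the result is always valid UTF-8.
                return raw.decode('iso-8859-1').encode('utf-8')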

  • Import existing source code as referenced library in Eclipse

    - by user555174
    I have some source code from a friend that I would like to use as a referenced library in my BlackBerry project, but I'm not sure how to package the source code into a .jar file. I tried exporting the source to a JAR file and importing it as an external JAR in my project, but it gives me a "missing stack map" error. I then tried to preverify the generated .jar file using the preverification tool provided with the BlackBerry JDE, but it didn't give me any output folder. In fact, I'm not sure the way I export the source is correct at all. Can anyone provide step-by-step instructions on how to package existing source code into a valid JAR file that can be imported into my project as a referenced library? Again, I'm using Eclipse. Thanks very much in advance.

  • PHP/ODBC encoding problem

    - by JohnM2
    I use ODBC to connect to SQL Server from PHP. In PHP I read some string (nvarchar column) data from SQL Server and then want to insert it into a MySQL database. When I try to insert such a value into a MySQL table I get this MySQL error:

        Incorrect string value: '\xB3\xB9ow...' for column 'name' at row 1

    For strings of all-ASCII characters everything is fine; the problem occurs when non-ASCII characters (from some European languages) are present. So, in more general terms: there is a Unicode string in the MS SQL Server database, which is retrieved by PHP through ODBC. Then it is put in a SQL INSERT query (as a value for a utf-8 varchar column) which is executed against the MySQL database. Can someone explain to me what is happening in this situation in terms of encoding? At which step might which character encoding conversions take place? I use PHP 5.2.5, MySQL 5.0.45-community-nt, and MS SQL Server 2005.
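
    A hedged reading of that error, sketched in Python: MySQL rejects the bytes because they are not valid UTF-8, which suggests the ODBC layer handed PHP the string in a single-byte ANSI code page rather than in Unicode. The bytes 0xB3 0xB9 happen to be "łą" in Windows-1250 (Central European), which is plausible for name data, though only the real data can confirm the code page:

        raw = b'\xb3\xb9'                            # the bytes MySQL rejected
        # raw.decode('utf-8')                        # UnicodeDecodeError -- MySQL's complaint
        print(raw.decode('cp1250'))                  # 'łą' -- if the source was Windows-1250
        print(raw.decode('cp1250').encode('utf-8'))  # what a utf-8 column will accept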

  • Maven - add dependency on artifact source

    - by Jacob Hansson
    I have two Maven modules: one that ends up as a jar, and one war that depends on that jar. I want the jar module to package its source code together with the compiled classes in the jar, so that the second module is able to access it. I have tried using the maven-source-plugin, but I am confused as to how to add a dependency on its output. It seems that the dependency by default goes to the compiled jar, and not to the source-code jar (ending with "-source.jar") that maven-source-plugin creates. How do I add the "-source.jar" as a dependency, while still preserving the dependency on the compiled classes?

  • Video encoding by servlet with MEncoder

    - by Andrey
    Hello. I was developing an application for video encoding on the server and ran into a problem encoding video with MEncoder. The encoder doesn't work correctly when run from a command line built with:

        Runtime.getRuntime().exec("D:\\mencoder\\mnc\\mencoder.exe video1.avi -o outvideo1.flv -of lavf -oac mp3lame -lameopts abr:br=64 -srate 22050 -ovc lavc -lavcopts vcodec=flv:vbitrate=300:mbd=2:mv0:trell:v4mv:cbp:last_pred=3 -vf scale=320:240,harddup -quiet");

    The encoder launches and works in the Windows console with the same parameters, but when it's run from a servlet it just hangs in the process list and doesn't do anything until the web server is stopped. When I use the encoder from a simple Java application, it runs correctly. Thanks for any help.
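
    One very common cause of exactly this hang, offered as a hypothesis since the servlet code is not shown: Runtime.exec() connects the child's stdout and stderr to pipes, and if nothing drains them the OS pipe buffer fills and the child blocks forever. A console, or a short-lived test app, often masks this. The same launch done safely in Python, for illustration:

        import subprocess

        # Abridged, hypothetical argument list.
        cmd = [r'D:\mencoder\mnc\mencoder.exe', 'video1.avi',
               '-o', 'outvideo1.flv', '-of', 'lavf', '-quiet']

        # capture_output drains both pipes, so the child can never block on a
        # full stdout/stderr buffer the way an undrained Runtime.exec() child can.
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.returncode)

    In Java the equivalent is to consume the process's InputStream and ErrorStream (or redirect them) while waiting for the process to finish.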

  • Loading XML with UTF-16 encoding using XDocument

    - by Sangram
    Hi, I am trying to read an XML document using the XDocument class, but I get an error when the XML has:

        <?xml version="1.0" encoding="utf-16"?>

    When I remove the encoding manually, it works perfectly. The error I get is: "There is no Unicode byte order mark. Cannot switch to Unicode." I tried searching and landed here: "Why does C# XmlDocument.LoadXml(string) fail when an XML header is included?" But that could not solve my problem. My code:

        XDocument xdoc = XDocument.Load(path);

    Any suggestions? Thank you.
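
    What the error usually means, as a hypothesis worth checking against the actual file: the declaration claims UTF-16, but the bytes on disk are not UTF-16 (there is no BOM, and most likely the file was saved as UTF-8), so the parser refuses to "switch to Unicode". A sketch of the diagnosis in Python; the C# counterpart is to read the file with its true encoding and parse from the resulting string:

        import codecs

        data = open('doc.xml', 'rb').read()      # hypothetical path
        if data.startswith((codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)):
            text = data.decode('utf-16')         # genuinely UTF-16: BOM present
        else:
            text = data.decode('utf-8')          # declared utf-16, stored as UTF-8
        # Parsing from the decoded string sidesteps the byte-level declaration.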

  • FFmpeg + iPhone - Interesting (incorrect?) video encoding results

    - by jtrim
    I'm encoding some video on the iPhone by running the PNG image data through swscale to get YUV420P data, then encoding that frame using the MSMPEG4V1 codec. According to the API docs, avcodec_encode_video should return the number of bytes used from the output buffer by that encode operation. There are 234,000 bytes going into the encoder, but the result returned by avcodec_encode_video is simply "4". The result is exactly the same over 24 frames. Something seems fishy here... any insight? Here's a pastebin link to the code: http://pastebin.com/ht94FWva (sorry for the link away from SO, I just didn't want to have the code duplicated in several places). EDIT: Also, I've set up a custom log callback for ffmpeg to use and I have the log level set to "Verbose" (libavutil/log.h), so libavcodec should be logging any goofs to the console, but avcodec is quiet throughout the whole operation. (Note: I did test to make sure my log callback was working.)

  • In DirectShow, what determines the graph source?

    - by Seva Alekseyev
    Hi all, I have two machines: A (XP SP2) and B (Win7). Machine B has trouble playing OGM files; enabling subtitles causes a crash in the player. Investigation shows that the DirectShow graphs are quite different. On A, the source is a file source, which produces a stream of subtype OGG, which goes into "Ogg Splitter". On B, the source is an instance of Haali Media Splitter, which produces video, audio, and subtitles as separate streams. Machine A has the Haali splitter installed as well, but somehow it is not invoked. Question: what determines the source filter? Is there a mapping from file type to preferred source, or does the system load all suitable filters and ask each whether it will take the file? On machine A, the merit of the Haali splitter is higher than that of the file source, so it's probably not about relative merits.

  • If a command line program is unsure of stdout's encoding, what encoding should it output?

    - by mackstann
    I have a command line program written in Python, and when I pipe it through another program on the command line, sys.stdout.encoding is None. This makes sense, I suppose: the output could be another program, or a file you're redirecting it into, or whatever, and it doesn't know what encoding is desired. But neither do I! This program will be used by many different people (humor me) in different ways. Should I play it safe and output only ASCII (replacing non-ASCII chars with question marks)? Or should I output UTF-8, since it's so widespread these days?
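
    For reference, one defensive pattern (a sketch, not the only defensible answer): fall back to UTF-8 when the runtime cannot tell, and encode explicitly so a pipe never triggers a UnicodeEncodeError. Shown here in Python 3 form:

        import sys

        text = 'héllo wörld'
        enc = sys.stdout.encoding or 'utf-8'   # None when piped, in the scenario above
        # errors='replace' guarantees output even if the codec lacks a character.
        sys.stdout.buffer.write(text.encode(enc, errors='replace') + b'\n')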

  • Access Denied Java FileWriter / FileInputStream

    - by Matt
    My program downloads a website's source code, modifies it, creates the file, and then re-uploads it through FTP. However, I receive the following error when trying to open the created file:

        java.io.FileNotFoundException: misc.html (Access is denied)
            at java.io.FileInputStream.open(Native Method)
            at java.io.FileInputStream.<init>(Unknown Source)
            at Manipulator.uploadSource(Manipulator.java:63)
            at Start.addPicture(Start.java:130)
            at Start$2.actionPerformed(Start.java:83)
            at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
            at javax.swing.AbstractButton$Handler.actionPerformed(Unknown Source)
            at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
            at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
            at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(Unknown Source)
            at java.awt.Component.processMouseEvent(Unknown Source)
            at javax.swing.JComponent.processMouseEvent(Unknown Source)
            at java.awt.Component.processEvent(Unknown Source)
            at java.awt.Container.processEvent(Unknown Source)
            at java.awt.Component.dispatchEventImpl(Unknown Source)
            at java.awt.Container.dispatchEventImpl(Unknown Source)
            at java.awt.Component.dispatchEvent(Unknown Source)
            at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
            at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
            at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
            at java.awt.Container.dispatchEventImpl(Unknown Source)
            at java.awt.Window.dispatchEventImpl(Unknown Source)
            at java.awt.Component.dispatchEvent(Unknown Source)
            at java.awt.EventQueue.dispatchEvent(Unknown Source)
            at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
            at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
            at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
            at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
            at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
            at java.awt.EventDispatchThread.run(Unknown Source)

    When I navigate to the folder and attempt to open "misc.html" with Notepad, I also get "Access is denied". My code is fairly simple:

        File f = new File(page.sourceFileName);
        try {
            FileWriter out = new FileWriter(f);
            out.write(page.source);
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        InputStream input = new FileInputStream(f);

    This is the vital excerpt from my program. I have copied it into a different test program and it works fine: it creates a misc.html file that I can reopen both with FileInputStream and manually. I would be worried about administrator rights, but the test program works fine when I run it RIGHT after the problem program. I have also checked with File methods that the file exists and is a file. Is this a result of me not closing a previous input/output stream properly? I've tried to check everything and I am fairly positive I close all streams as soon as they finish... Help! :)

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control? Read on to find out.

    At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012. This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features. I won't attempt to describe them all here, but I will applaud Microsoft for making major improvements. One of my favorite changes is the way database elements are broken down. Previously every little thing was in its own file. For example, indexes were each in their own file. I always hated that. Now, SSDT uses a pattern similar to Red-Gate's and puts the indexes and keys into the same file as the overall table definition.

    Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a "semantic-aware" search and replace. Funny, it reminds me of SQL Prompt's Smart Rename feature. But I'm not writing this just to criticize Microsoft and argue that they are late to the party with this feature set. Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use.

    First, the basics

    Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server. Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control). If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you. Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don't already have Visual Studio installed, it will install the shell for you. If you already have Visual Studio 2010 installed, then it will just add this as an available project type. On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS. Both tool sets store their database model in script files. In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly.

    For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts. How you value those two features will likely make your decision for you.

    Unified Check-In

    If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT. Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified, single check-in of all changes, whether they are application or database changes.

    This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two. You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build. Of course, the automated build that is triggered from the second check-in, which contains the "other half" of your changes, should pass, and so the amount of time that the build was broken may be very, very short. But if that is very, very important to you, then SQL Source Control just won't work; you'll have to use SSDT.

    Refactoring and Migrations

    If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can't have data suddenly disappearing from your target system, then you'll probably want to go with SQL Source Control. As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss. Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts. There is no way to insert your own script in the middle to override the default behavior of the tool. In version 3.0 of SQL Source Control (Early Access version now available) you have the ability to create your own custom migration script to take the place of the commands that the tool would have run, and ensure the preservation of your data. Or, even if the default tool behavior would have worked, but you simply know a better way, then you can take control and do things your way instead of theirs.

    You Decide

    In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night), and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked in. Therefore having a unified check-in, while handy, is not critical for us. As for migration scripts, these are critically important to us. We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database. Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can't detect on their own. Therefore, the ability to create a custom migration script to override the tool's default behavior is very important to us. And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

  • Tomcat gzip while chunked issue

    - by hoodoos
    I'm experiencing a problem with one of my data source services. Its HTTP response headers say it's running on Apache-Coyote/1.1. The server gives responses with Transfer-Encoding: chunked; here is a sample response:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Content-Encoding: gzip
        Date: Tue, 30 Mar 2010 06:13:52 GMT

    The problem is that when I request a gzipped response, the server often sends an incomplete one. I receive the response and see that the last chunk arrived, but after gunzipping I see that the body is only partial. I have never seen this behavior with gzip turned off in the request headers. So my question is: is this a common Tomcat issue? Maybe a problem in one of its compression modules? Or maybe some kind of proxy issue? I can't tell the version of Tomcat or which gzip module they use, but feel free to ask; I'll try asking my service provider. Thanks.
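
    One way to pin down where the truncation happens, as a diagnostic sketch with a placeholder URL: a gzip stream carries its own end-of-stream marker, so you can distinguish "all chunks arrived but the gzip member was cut short" from "the chunked framing itself ended early":

        import urllib.request, zlib

        req = urllib.request.Request('http://www.example.com/data.xml',
                                     headers={'Accept-Encoding': 'gzip'})
        raw = urllib.request.urlopen(req).read()     # de-chunked, still gzipped

        d = zlib.decompressobj(16 + zlib.MAX_WBITS)  # 16+ selects the gzip wrapper
        body = d.decompress(raw)
        print(d.eof)  # False means the gzip stream was cut off before its end marker

    If d.eof is False even though the chunked transfer ended cleanly, something between Tomcat and the client is cutting the compressed body short.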

  • Preserving multi-byte characters in Flex XML object

    - by Dan Petker
    I'm having an issue with the Flex XML object type mangling multi-byte characters (such as Japanese or Chinese characters). The basic setup is this: I'm getting an XML-formatted string from the server, and in that string there can be multi-byte characters. A lot of the time these characters are in attributes, for example:

        <example id="foo" name="[some multi-byte characters]"/>

    Now, when I examine the raw string, the multi-byte characters display just fine. However, as soon as I convert the string to an XML object using the top-level XML() function, all the multi-byte characters become mangled. I've tried setting the XML object's encoding by including an <?xml version="1.0" encoding="utf-8"?> element in the XML-formatted string, but this doesn't seem to have any effect on the resulting XML object. Is there a way to get the XML object to respect the encoding of the XML-formatted string and prevent the multi-byte characters from being mangled?

  • UTF-8 XML file shows gibberish

    - by Adam
    I have a UTF-8 encoded XML file, which was exported from a WordPress MySQL database. While the file is saved as UTF-8 and the encoding is UTF-8, I get gibberish instead of the Hebrew text that is supposed to be in there, which looks like this: ™×•×˜×•×ª. How can I find the original encoding or charset and convert the text into proper Hebrew? PHP's mb_detect_encoding($str); returns UTF-8. I've tried all sorts of PHP encoding functions, with different settings and input/output charsets, but they all just print different-looking blocks of gibberish, like ÃâÃËÃâ¢Ãâ¢ÃËà and ?? ××©×ž×. Any ideas how to go about this?
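
    The shape of the gibberish, lots of '×' followed by symbols, is the classic signature of Hebrew UTF-8 bytes being displayed as Windows-1252/Latin-1: the UTF-8 encoding of every Hebrew letter starts with the byte 0xD7, which renders as '×'. If that diagnosis is right, the text is mechanically recoverable. A sketch in Python; the PHP equivalent would use iconv or mb_convert_encoding:

        moji = '×©×ž'                             # mojibake as it appears in the file
        fixed = moji.encode('cp1252').decode('utf-8')
        print(fixed)                               # 'שמ' -- the original Hebrew letters

    One caveat: cp1252 has a few undefined byte positions, so if the file passed through a tool that dropped those bytes, recovery will be partial.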

  • PHP File Serving Script: Unreliable Downloads?

    - by JGB146
    This post started as a question on ServerFault (http://serverfault.com/questions/131156/user-receiving-partial-downloads) but I determined that our PHP script was the culprit. So I'm issuing an updated question here about what I believe is the actual issue.

    I am using a PHP script to verify permissions and then serve up a file for users of my website to download. Most of the time this works, but recently one user has been seeing problems with larger downloads. He is only getting ~80% of downloads for files that are 100MB in size. Also, all downloads from this script fail to report a filesize. Further, tests revealed that the same user COULD reliably download each of the failed files if given a direct link (at which point the filesize is reported).

    Here's the relevant snippet of code that we are using to serve the file:

        header("Content-type:$contenttype");
        $len = filesize($filename);
        header("Content-Length: $len");
        header("Content-Disposition: attachment; filename=".$title.".".$ext);
        readfile($filename);

    Note that $contenttype, $filename, $title, and $ext are all set correctly before we get here. These have been triple-checked. None of them are the problem. Also, $len does provide the correct filesize.

    While researching this issue, I came across this post: http://stackoverflow.com/questions/1334471/content-length-header-always-zero. It seems that I am encountering the same issue: when I use the script, I get chunked encoding on the file and no content length is set. I'm hypothesizing that something is going wrong on the large downloads, leading him to get a zero-length chunk before the end of the file.

    Here's what the headers look like for a direct request to http://www.grinderschool.com/videos/zfff5061b65ae00e8b21/KillsAids021.wmv:

        GET /videos/zfff5061b65ae00e8b21/KillsAids021.wmv HTTP/1.1
        Host: www.grinderschool.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.grinderschool.com/phpBB3/viewtopic.php?f=14&p=29468
        Cookie: (session and analytics cookies)
        Pragma: no-cache
        Cache-Control: no-cache

        HTTP/1.1 200 OK
        Date: Sun, 11 Apr 2010 12:57:41 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635
        Last-Modified: Sun, 04 Apr 2010 12:51:06 GMT
        Etag: "eb42d6-7d9b843-48368aa6dc280"
        Accept-Ranges: bytes
        Content-Length: 131708995
        Keep-Alive: timeout=10, max=30
        Connection: Keep-Alive
        Content-Type: video/x-ms-wmv

    And here's what they look like for the same file served by my script at http://www.grinderschool.com/download_video_test.php?t=KillsAids021&format=wmv:

        GET /download_video_test.php?t=KillsAids021&format=wmv HTTP/1.1
        Host: www.grinderschool.com
        (same client headers and cookies as above)

        HTTP/1.1 200 OK
        Date: Sun, 11 Apr 2010 12:58:02 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635
        X-Powered-By: PHP/5.2.11
        Content-Disposition: attachment; filename=KillsAids021.wmv
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Keep-Alive: timeout=10, max=30
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: video/x-ms-wmv

    So the question is: what can I do to make downloads from the script work properly? Again, for 99% of users it works as is (though I find it annoying now that no filesize is reported, and thus no time estimate can be computed for the download).
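
    The script's response headers point at a likely culprit: they carry Content-Encoding: gzip and Transfer-Encoding: chunked, which means an output filter (PHP's zlib.output_compression or an ob_gzhandler buffer, for example) is compressing the body. That discards the precomputed Content-Length and re-frames the response as chunks, and a connection dropped mid-stream then looks like a complete-but-short download. The invariant to restore, sketched in Python for neutrality: the length you declare must be the length of the bytes you actually send.

        import gzip

        data = open('video.wmv', 'rb').read()     # hypothetical file

        # Either send the raw bytes and declare their true length...
        headers = {'Content-Length': str(len(data))}

        # ...or, if compressing, compute the length AFTER compression.
        body = gzip.compress(data)
        headers = {'Content-Encoding': 'gzip', 'Content-Length': str(len(body))}
        # Declaring filesize(raw) while a filter gzips the stream satisfies neither.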

  • How do HTTP proxy caches decide between serving identity- vs. gzip-encoded resources?

    - by mrclay
    An HTTP server uses content negotiation to serve a single URL identity- or gzip-encoded based on the client's Accept-Encoding header. Now say we have a proxy cache like Squid between clients and the httpd. If the proxy has cached both encodings of a URL, how does it determine which to serve? The non-gzip instance (not originally served with Vary) can be served to any client, but the encoded instances (having Vary: Accept-Encoding) can only be sent to clients with an Accept-Encoding header value identical to the one used in the original request. E.g., Opera sends "deflate, gzip, x-gzip, identity, *;q=0" but IE8 sends "gzip, deflate". According to the spec, then, caches shouldn't share content-encoded responses between the two browsers. Is this true?
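
    The mechanics in miniature, as a sketch with illustrative names: a Vary-aware cache keys each stored response by the URL plus the request's values for every header named in Vary. That is exactly why Opera's and IE8's differing Accept-Encoding strings produce separate cache entries unless the cache normalizes the header first.

        def cache_key(url, request_headers, vary_headers):
            """Build the secondary cache key mandated by Vary."""
            return (url,) + tuple(request_headers.get(h, '') for h in sorted(vary_headers))

        opera = {'Accept-Encoding': 'deflate, gzip, x-gzip, identity, *;q=0'}
        ie8   = {'Accept-Encoding': 'gzip, deflate'}
        vary  = ['Accept-Encoding']

        # Different keys: a strict cache stores and serves these separately.
        print(cache_key('/page', opera, vary) == cache_key('/page', ie8, vary))  # False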
