Search Results

Search found 135 results on 6 pages for 'chunked'.

Page 3/6 | < Previous Page | 1 2 3 4 5 6  | Next Page >

  • Memory usage on Debian webserver keeps going up

    - by Steven De Groote
    My webserver is running Apache 1.3.x for a PHP application, along with MySQL on the same machine. Most of the time it runs fine, CPU usage still has a nice margin, but somehow memory usage keeps growing throughout uptime. While memory does appear to be released in chunks from time to time, I've had moments where my server went down because it was out of memory. Restarting Apache or MySQL only reduced memory usage by 100M. Attached is an overview of monthly memory usage. The 2 massive drops are server restarts after out-of-memory situations. http://imageshack.us/photo/my-images/51/memorymonth.png/ Any explanations for this behaviour, or suggestions on how I could solve it? Thanks! Steven
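
    A common way to cap this kind of slow leak in Apache 1.3's prefork children is to recycle worker processes after a fixed number of requests, so leaked memory is returned to the OS. A minimal httpd.conf sketch with illustrative values you would tune to your own load, not a guaranteed fix:

        # Recycle each child after 1000 requests so leaked memory is reclaimed
        MaxRequestsPerChild 1000
        # Cap concurrent children so total Apache memory stays bounded
        MaxClients 50
        # Trim idle children back down after traffic spikes
        MinSpareServers 5
        MaxSpareServers 10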

    Read the article

  • ISA caching with no cache-related info in response header

    - by Mike M. Lin
    From the documentation, I can't figure out what criteria an ISA server uses to decide whether a cached file is still valid when no cache-related info is in the response header. Let's say I got this header in my response on Thu, 13 Jan 2011 18:43:35 GMT:

        HTTP/1.1 200 OK
        Date: Thu, 13 Jan 2011 18:43:35 GMT
        Server: Apache/2.2.3 (Red Hat)
        Content-Language: en
        X-Powered-By: Servlet/2.5 JSP/2.1
        Keep-Alive: timeout=15
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=ISO-8859-1

    There's no cache directive, no Last-Modified field, no Expires field. How will the ISA server decide for how long to cache this response?
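
    Whatever heuristic ISA applies to responses like this, it can be sidestepped entirely by having the origin emit explicit expiration so no guessing is needed. A hedged sketch for the Apache origin shown above, assuming mod_expires is available (module path varies by distribution; the policy values are illustrative):

        # Load once in the server config
        LoadModule expires_module modules/mod_expires.so

        ExpiresActive On
        # Illustrative policy: let HTML age out after one hour
        ExpiresByType text/html "access plus 1 hour"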

    Read the article

  • Django HttpResponseRedirect acting as proxy rather than 302

    - by Trevor Burnham
    I have a Django method that returns

        return HttpResponseRedirect("/redirect-target")

    When running the server locally, if I visit the page that returns that redirect, I get the log output

        [17/Oct/2013 15:26:02] "GET /redirecter HTTP/1.1" 302 0
        [17/Oct/2013 15:26:02] "GET /redirect-target HTTP/1.1" 404 0

    as expected. But when I visit that page in Chrome, the Network tab shows the request to /redirecter with the response from /redirect-target, rather than showing the 302. cURL does the same:

        $ curl -I -X GET http://localhost/redirecter
        HTTP/1.1 404 Not Found
        date: Thu, 17 Oct 2013 19:32:30 GMT
        connection: keep-alive
        transfer-encoding: chunked

    In production, the same Django code does show a 302 redirect in Chrome and cURL. What could be going on here? Is there some kind of Django setting that might be causing it to proxy the target rather than send a redirect when HttpResponseRedirect is used (but lie about it in the log)? Or is there a quirk on my system (OS X) that might cause localhost redirects to behave this way?
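
    One way to confirm the view itself emits the 302 (and that whatever sits between curl and Django is rewriting it) is Django's test client, which talks to the view directly with no server or proxy in between. A minimal sketch, assuming the URL path from the question, run from a manage.py shell:

        from django.test import Client

        client = Client()
        response = client.get("/redirecter")
        # Expect: 302 /redirect-target. If so, the view is fine and something
        # in front of the local dev server is following the redirect itself.
        print(response.status_code, response["Location"])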

    Read the article

  • Nginx Cache-Control

    - by optixx
    I am serving my static content with nginx:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            etags on;
            etag_hash on;
            etag_hash_method md5;
            expires 1d;
            add_header Pragma "public";
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    The resulting header looks like this:

        Cache-Control: public, must-revalidate, proxy-revalidate
        Cache-Control: max-age=86400
        Connection: close
        Content-Encoding: gzip
        Content-Type: application/x-javascript; charset=utf-8
        Date: Tue, 11 Sep 2012 08:39:05 GMT
        Etag: e2266fb151337fc1996218fafcf3bcee
        Expires: Wed, 12 Sep 2012 08:39:05 GMT
        Last-Modified: Tue, 11 Sep 2012 06:22:41 GMT
        Pragma: public
        Server: nginx/1.2.2
        Transfer-Encoding: chunked
        Vary: Accept-Encoding

    Why is nginx sending 2 Cache-Control entries, and could this be a problem for the clients?
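
    The duplication comes from the directives themselves: expires 1d; emits its own Cache-Control: max-age=86400, and add_header Cache-Control ... appends a second header rather than merging into it. Duplicate Cache-Control headers are legal HTTP (clients should combine them), but some intermediaries handle them badly, so one merged header is safer. A sketch of one way to do that, dropping expires and spelling out max-age by hand (HTTP/1.1 clients ignore Expires when max-age is present anyway):

        location /static {
            alias /opt/static/blog/;
            access_log off;
            # (etag directives as before)
            add_header Pragma "public";
            # One combined header instead of expires + add_header
            add_header Cache-Control "public, max-age=86400, must-revalidate, proxy-revalidate";
        }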

    Read the article

  • YSlow says certain CSS are not gzipped

    - by rhand
    YSlow keeps telling me that files like http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2 are not gzipped, while the gzip test tool at Feed the Bot says I am all good:

        Compressed?              Yes
        Compression type         gzip
        Page size (Bytes)        32,493
        Compressed size (Bytes)  -1
        Saving (Bytes)           32,494
        Compression %            100%

    I added this to my .htaccess:

        # Gzip
        <IfModule mod_gzip.c>
            mod_gzip_on Yes
            mod_gzip_dechunk Yes
            mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
            mod_gzip_item_include handler ^cgi-script$
            mod_gzip_item_include mime ^text/.*
            mod_gzip_item_include mime ^application/x-javascript.*
            mod_gzip_item_exclude mime ^image/.*
            mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </IfModule>

        # Deflate
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </IfModule>

    The header for the file mentioned states:

        CF-Cache-Status: MISS
        CF-RAY: 13945df90a9a0c1d-AMS
        Cache-Control: public, max-age=2592000
        Connection: keep-alive
        Content-Encoding: gzip
        Content-Type: application/javascript
        Date: Thu, 12 Jun 2014 07:34:38 GMT
        Expires: Sat, 12 Jul 2014 07:34:38 GMT
        Last-Modified: Thu, 21 Feb 2013 01:29:18 GMT
        Server: cloudflare-nginx
        Transfer-Encoding: chunked
        Vary: Accept-Encoding

    Any ideas what I am missing here?
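
    Since the Server header says cloudflare-nginx, the response YSlow sees comes from CloudFlare, not directly from Apache, so the .htaccess rules may not be what decides the encoding. One way to compare the two hops is to request the file with gzip advertised, once through CloudFlare and once directly from the origin. A sketch with curl (ORIGIN_IP is a placeholder for your origin server's address):

        # What CloudFlare serves to a gzip-capable client
        curl -sI -H "Accept-Encoding: gzip" \
            "http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2" \
            | grep -i "content-encoding"

        # Bypass CloudFlare and ask the origin directly
        curl -sI -H "Accept-Encoding: gzip" --resolve www.example.com:80:ORIGIN_IP \
            "http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2" \
            | grep -i "content-encoding"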

    Read the article

  • Strange PHP output buffering

    - by radek-k
    PHP:

        header('Content-type: text/plain');
        for ($i = 0; $i < 10; $i++) {
            echo "$i\r\n";
            ob_flush();
            flush();
            sleep(1);
        }

    I tried the script above on 2 different servers. Both respond with the numbers 0...9, one per line. In the case of the first server, each number is received every second. In the case of the second server, there is no output for 10 seconds and then the entire output is displayed at once. What might be wrong in the second case? I tried various output control functions but it didn't help. The set of response headers is pretty much the same in both cases:

        HTTP/1.1 200 OK
        Date: Mon, 03 Jan 2011 19:21:21 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.14
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/plain
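
    When flush() only reaches the client after the script ends, something between PHP and the network is usually buffering: PHP's own output buffer, zlib output compression, mod_deflate's buffer, or a reverse proxy. A sketch of settings worth checking on the second server (illustrative values, not a guaranteed fix):

        ; php.ini - disable PHP-level buffering and compression
        output_buffering = Off
        zlib.output_compression = Off

        # Apache - exempt this script from mod_deflate so chunks go out as written
        SetEnv no-gzip 1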

    Read the article

  • How can I avoid a 302 for Fetch as Bot?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content. Not even a <html></html> tag.

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of my page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone suggest a good way to solve this problem?
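
    For what it's worth, the <meta name="fragment" content="!"> approach (Google's AJAX crawling scheme, as specified at the time) only kicks in if the crawler first receives a 200 page containing that tag; it then re-requests the URL with ?_escaped_fragment_= appended, and that request must also return the snapshot with a 200. If the server 302s every request, the tag is never seen. A sketch of the exchange the crawler expects:

        GET /?_escaped_fragment_= HTTP/1.1
        Host: myapp.com

        HTTP/1.1 200 OK
        Content-Type: text/html

        <html><!-- static snapshot of the landing page --></html>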

    Read the article

  • ASP.NET MVC 2 and authentication using WIF (Windows Identity Foundation)

    - by Russ Cam
    Are there any decent examples of the following available? Looking through the WIF SDK, there are examples of using WIF in conjunction with ASP.NET, using the WSFederationAuthenticationModule (FAM) to redirect to a thin ASP.NET skin on top of a Security Token Service (STS) that the user authenticates against (by supplying a username and password). If I understand WIF and claims-based access correctly, I would like my application to provide its own login screen where users enter their username and password, and to delegate to an STS for authentication, sending the login details to an endpoint via a security standard (WS-*) and expecting a SAML token to be returned. Ideally, the SessionAuthenticationModule would work as in the examples that use FAM in conjunction with SessionAuthenticationModule, i.e. it would be responsible for reconstructing the IClaimsPrincipal from the chunked session security cookie and for redirecting to my application login page when the security session expires. Is what I describe possible using FAM and SessionAuthenticationModule with appropriate web.config settings, or do I need to think about writing an HttpModule myself to handle this? Alternatively, is redirecting to a thin STS web site where users log in the de facto approach in a passive requestor scenario?
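
    For the "own login page, active WS-Trust call, then let SAM own the cookie" shape, one pattern is to keep both modules registered but disable FAM's passive redirect, authenticate against the STS yourself (e.g. with WSTrustChannelFactory), and write the session token so SessionAuthenticationModule rehydrates the IClaimsPrincipal on later requests. A hedged web.config sketch, assuming WIF 1.0's configuration schema (issuer and realm values are placeholders):

        <microsoft.identityModel>
          <service>
            <federatedAuthentication>
              <!-- Keep FAM registered but stop it from issuing passive redirects -->
              <wsFederation passiveRedirectEnabled="false"
                            issuer="https://sts.example.com/"
                            realm="https://myapp.example.com/" />
              <cookieHandler requireSsl="true" />
            </federatedAuthentication>
          </service>
        </microsoft.identityModel>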

    Read the article

  • Nginx is sending proxy-saved content in gzip format

    - by Sandeep Manne
    Hi, I used the config given in http://www.webtatic.com/blog/2008/04/page-level-caching-with-nginx/ for page-level caching of PHP content. The problem is that the cached page is saved in gzip format, and nginx returns that same gzip content to the browser. I need output like this: "12:15:37 12:15:47" (which is what comes the first time, when the page is not yet cached). After that, if the request is resent, it returns ‹??????34²26±24à23Œ¸¸?`Î9”??? (the gzip response; when I run it through zcat it decodes fine).

    Response headers:

        Server: nginx/0.8.34
        Date: Wed, 17 Mar 2010 07:04:58 GMT
        Content-Type: text/html
        Last-Modified: Wed, 17 Mar 2010 07:04:20 GMT
        Transfer-Encoding: chunked
        Connection: keep-alive
        Vary: Accept-Encoding
        Expires: Wed, 17 Mar 2010 07:04:58 GMT
        Cache-Control: max-age=0
        Content-Encoding: gzip

    Request headers:

        Host: localhost
        User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.18) Gecko/2010021501 Ubuntu/9.04 (jaunty) Firefox/3.0.18 GTB6
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
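
    A common way out of this trap is to make sure the copy that gets cached is the uncompressed one: strip Accept-Encoding on the request that goes upstream so the backend never hands nginx gzipped bytes, then let nginx's own gzip filter compress per client on the way out. A sketch, assuming a typical proxy_pass setup (the backend address is a placeholder):

        location / {
            # Cache plain bytes: never ask the backend for gzip
            proxy_set_header Accept-Encoding "";
            proxy_pass http://127.0.0.1:8080;
        }

        # Compress per client instead, after the cache
        gzip on;
        gzip_types text/css application/x-javascript;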

    Read the article

  • Seeking through a streamed MP3 file with HTML5 <audio> tag

    - by Kyle Slattery
    Hopefully someone can help me out with this. I'm playing around with a node.js server that streams audio to a client, and I want to create an HTML5 player. Right now, I'm streaming the code from node using chunked encoding, and if you go directly to the URL, it works great. What I'd like to do is embed this using the HTML5 <audio> tag, like so: <audio src="http://server/stream?file=123"> where /stream is the endpoint for the node server to stream the MP3. The HTML5 player loads fine in Safari and Chrome, but it doesn't allow me to seek, and Safari even says it's a "Live Broadcast". In the headers of /stream, I include the file size and file type, and the response gets ended properly. Any thoughts on how I could get around this? I certainly could just send the whole file at once, but then the player would wait until the whole thing is downloaded--I'd rather stream it.
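
    Browsers generally decide a source is seekable (rather than a "Live Broadcast") from whether the server honours byte-range requests: advertising Accept-Ranges: bytes, answering a Range request with 206 Partial Content plus a Content-Range header, and sending Content-Length rather than chunked encoding. A sketch of the exchange the <audio> element wants to see (sizes are illustrative):

        GET /stream?file=123 HTTP/1.1
        Range: bytes=1000000-

        HTTP/1.1 206 Partial Content
        Accept-Ranges: bytes
        Content-Range: bytes 1000000-4999999/5000000
        Content-Length: 4000000
        Content-Type: audio/mpeg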

    Read the article

  • Is there a way in CXF to disable the SoapCompressed header for debugging purposes?

    - by Don Branson
    I'm watching CXF service traffic using DonsProxy, and the CXF client sends an HTTP header "SoapCompressed":

        HttpHeadSubscriber starting...
        Sender is CLIENT at 127.0.0.1:2680
        Packet ID:0-1
        POST /yada/yada HTTP/1.1
        Content-Type: text/xml; charset=UTF-8
        SoapCompressed: true
        Accept-Encoding: gzip,gzip;q=1.0, identity; q=0.5, *;q=0
        SOAPAction: ""
        Accept: */*
        User-Agent: Apache CXF 2.2
        Cache-Control: no-cache
        Pragma: no-cache
        Host: localhost:9090
        Connection: keep-alive
        Transfer-Encoding: chunked

    I'd like to turn SoapCompressed off in my dev environment so that I can see the SOAP on the wire. I've searched Google and grepped the CXF source code, but don't see anything in the docs or code that references this. Any idea how to make the client send "SoapCompressed: off" instead, without routing it through Apache HTTPD or the like? Is there a way to configure it at the CXF client, in other words?

    Read the article

  • How to receive HTTP messages using Socket

    - by Poma
    I'm using the Socket class for my web client. I can't use HttpWebRequest since it doesn't support SOCKS proxies. So I have to parse headers and handle chunked encoding by myself. The most difficult thing is determining the length of the content, so I have to read it byte by byte. First I have to use ReadByte() to find the last header (the "\r\n\r\n" combination), then read each chunk's size, etc. But this approach has very poor performance. Can you suggest a better solution? Maybe some open source examples or libraries that handle HTTP requests through sockets (not very big and complicated though, I'm a noob).
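
    The usual cure for byte-by-byte reads is a buffered reader: pull large blocks from the socket and split headers and chunk frames out of the buffer (in .NET, a BufferedStream over the NetworkStream plays this role). The chunk framing itself is simple: a hex length line, that many bytes, a CRLF, repeated until a zero-length chunk. A compact sketch of the decoding loop, written in Python purely to illustrate the algorithm:

        def read_chunked_body(sock_file):
            """Decode a Transfer-Encoding: chunked body from a buffered socket
            file (e.g. sock.makefile("rb")). Assumes the status line and headers
            were already consumed and the server sends no trailers."""
            body = b""
            while True:
                # Chunk header: hex size, optionally followed by ";extensions"
                size_line = sock_file.readline().split(b";")[0].strip()
                size = int(size_line, 16)
                if size == 0:
                    sock_file.readline()   # final CRLF after the last chunk
                    return body
                body += sock_file.read(size)
                sock_file.readline()       # CRLF terminating this chunk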

    Read the article

  • Apache/2.2.9, mod_perl/2.0.4: status_line doesn't seem to work

    - by Eugene
    The response is prepared this way:

        my $r = Apache2::RequestUtil->request;
        $r->status_line('500 Internal Server Error');
        $r->send_cgi_header("Content-Type: text/html; charset=UTF-8\n\n");
        print 'Custom error message';

    Request:

        GET /test_page HTTP/1.1
        Host: www.xxx.xxx

    Response:

        HTTP/1.1 200 OK
        Date: XXXXXXXXXX
        Server: Apache/xxxxxxxx
        Vary: Accept-Encoding
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=UTF-8

        44
        Custom error message
        0

    Why is the response status 200 and not 500?

    Read the article

  • WCF client hangs on response

    - by JohnIdol
    I have a WCF client (running on Win7) pointing to a WebSphere service. All is good from a test harness (a little test fixture outside my web app), but when my calls to the service originate from my web project, one of the calls is extremely slow to deserialize (it takes up to 10 times longer), and not just the first time. I can see from Fiddler that the response comes back quickly, but then the WCF client hangs on the response itself for more than a minute before the next line of code is hit by the debugger, almost as if the client was having trouble deserializing. This happens only if the response contains a given pdf string, base64 encoded and chunked. If for example the service raises a fault (so this pdf string is not there), then the response is deserialized immediately. Again, if I send the exact same envelope through Soap-UI or from outside the web project, all is good. I am at a loss - what should I be looking for, and is there some config setting that might do the trick? Any help appreciated!

    Read the article

  • How can chunks be allocated in a node.js stream in object mode all at once?

    - by Quentin Engles
    I can see how buffers and strings can be sent as chunks, but I'm having a problem thinking about how streams are dealt with when working in object mode. Say I have a byte stream from an HTTP request message. I want to take that message, parse it, and then transform it into one big object. I already know how to parse the message. What I'm wondering is: if the message is big, so it arrives in many chunks, but I want to produce one object as the output, how can I make sure the data event waits for the whole thing? Is this just a matter of not calling the push method until the chunked data has finished being sent? That would restrict the stream's data output to a smaller object, which I think I'm fine with for now. As an added condition, the larger data will be reduced in size after the transform.

    Read the article

  • AJAX Problem - No response text in Firefox, but OK in IE

    - by Taiba
    Hi, I am making a simple AJAX call to an external site. It works OK in IE, but in Firefox no response text is returned. I think it might have something to do with the response being "chunked", but I'm not sure. Any ideas? Thanks.

        function loadXMLDoc() {
            var xmlhttp;
            var urlString = "http://drc.edeliver.com.au/ratecalc.asp?Pickup_Postcode=6025&Destination_Postcode=6055&Country=AU&Weight=100&Service_Type=STANDARD&Length=100&Width=100&Height=100&Quantity=2";
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4) {
                    window.alert(xmlhttp.responseText);
                }
            }
            xmlhttp.open("GET", urlString, true);
            xmlhttp.send();
        }

    Read the article

  • How should I best store these files?

    - by Triton Man
    I have a set of image files; they are generally very small, between 5k and 100k. They can be any size, though, upwards of 50mb, but this is very rare. Once these images are put into the system they are never modified. There is about 50 TB of these images in total. They are currently chunked and stored in BLOBs in Oracle, but we want to change this since it requires special software to extract them. These images are sometimes accessed at a rate of over 100 requests per second, spread across about 10 servers. I'm thinking about Hadoop or Cassandra, but I really don't know which would be best or how best to index them.

    Read the article

  • Access denied 403 errors after migrating my site

    - by AgA
    I've recently migrated my Joomla site from one shared hosting to another with Hostgator. GWT notified me about many 403 access denied pages. I've checked with Firebug too, and even though the browser displays the full page correctly, the HTTP return code is 403. I've checked the home page and it's correctly returning a 200 response, as shown by Fetch as Google in GWT (pasted below). The site is 3 years old and I regularly do such migrations. I've copied the files and database "AS IS". I've even cleared all the caches, but no luck. There is only one change: previously the site was the primary domain but now it's an add-on one. What could be the issue? This is how Googlebot fetched the page.

    Fetch as Google:

        URL: http://MYSITE.COM/-----------------REMOVED.html
        Date: Thursday, June 20, 2013 at 10:32:14 PM PDT
        Googlebot Type: Web
        Download Time (in milliseconds): 3899

        HTTP/1.1 403 Forbidden
        Date: Fri, 21 Jun 2013 05:32:15 GMT
        Server: Apache
        P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
        Expires: Mon, 1 Jan 2001 00:00:00 GMT
        Cache-Control: post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: 0e4f6b53991c80cf39d57a6db58bb58d=ee2d880e8db0f1fc03c5612ea5a77004; path=/
        Last-Modified: Fri, 21 Jun 2013 05:32:19 GMT
        Keep-Alive: timeout=5, max=75
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=utf-8

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-gb" lang="en-gb" >
        <head>
        <base href="http://www.mysite.com/-----------------rajiv-yuva-shakthi-programme-finance-planning.html" />
        <meta http-equiv="content-type" content="text/html; charset=utf-8" />
        <meta name="robots" content="index, follow" />
        <meta name="keywords" content="" />
        <<<<<<TRIMMED>>>>>>>>>>>>>>
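
    Since the full page body comes through but the status is 403, one way to narrow down which component sets the status is to reproduce the request outside GWT and compare User-Agents (some hosts' mod_security rules and some Joomla extensions treat bots differently from browsers). A sketch with curl, using a placeholder URL in place of the real page:

        # As a plain client
        curl -sI "http://MYSITE.COM/page.html" | head -1

        # As Googlebot
        curl -sI -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
            "http://MYSITE.COM/page.html" | head -1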

    Read the article

  • Have set Expiration time: Still getting "Query string present but no explicit expiration time"

    - by oligofren
    I have one local Apache instance running with mod_cache (+ disk & mem) enabled, and it seems to cache content from my appserver fine. My app server sets Expires and Last-Modified headers. Yet, when deploying on a production server with the same modules enabled, I am getting the following error in my logs:

        blablabla not cached. Reason: Query string present but no explicit expiration time

    Any clues on why Apache is not caching content? The only difference is the Apache version. Locally I am running 2.2. This is from my config:

        CacheRoot "/var/cache/apache2/"
        CacheEnable disk /

    This is example output:

        < HTTP/1.1 200 OK
        < Date: Mon, 19 Nov 2012 16:09:13 GMT
        < Server: Sun GlassFish Enterprise Server v2.1.1
        < X-Powered-By: Servlet/2.5
        < Expires: Tue Nov 20 05:00:00 CET 2012
        < Last-Modified: Mon Nov 19 17:09:13 CET 2012
        < Cache-Control: no-transform
        < Content-Type: application/x-javascript
        < Transfer-Encoding: chunked
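
    One detail worth checking in that output: "Tue Nov 20 05:00:00 CET 2012" is not the HTTP-date format HTTP/1.1 requires (RFC 1123, always expressed in GMT), so a strict cache may treat the Expires header as unparseable - that is, as no explicit expiration time at all, which matches the log message. The same response with correctly formatted validators would look like this (times converted from CET as an illustration):

        Expires: Tue, 20 Nov 2012 04:00:00 GMT
        Last-Modified: Mon, 19 Nov 2012 16:09:13 GMT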

    Read the article

  • How to enable IIS 7 dynamic content compression?

    - by davidcl
    I've turned on dynamic content compression in IIS 7, but Fiddler is showing that my dynamic pages are still being served without Content-Encoding: gzip. Static content compression is working fine on the same servers. Not sure if it matters, but most of the dynamic pages are ColdFusion pages, and we're also using the IIS URL Rewrite module. This is from my applicationHost.config:

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
            <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
            <dynamicTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/javascript" enabled="true" />
                <add mimeType="*/*" enabled="false" />
            </dynamicTypes>
            <staticTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/javascript" enabled="true" />
                <add mimeType="*/*" enabled="false" />
            </staticTypes>
        </httpCompression>
        ...
        <urlCompression doDynamicCompression="true" />

    Here's a sample request:

        GET / HTTP/1.1
        Host: web5.example.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive

    and response header:

        HTTP/1.1 200 OK
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=UTF-8
        Server: Microsoft-IIS/7.0
        ...
        Date: Mon, 22 Feb 2010 20:59:36 GMT
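
    One quick sanity check is whether the Dynamic Content Compression module is actually installed and registered at the server level; the <urlCompression> switch has no effect without it. A sketch using appcmd, run from %windir%\system32\inetsrv:

        rem Is the dynamic compression module present?
        appcmd list modules | findstr /i "DynamicCompressionModule"

        rem Turn dynamic compression on server-wide
        appcmd set config -section:urlCompression /doDynamicCompression:true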

    Read the article

  • Apache mod-pagespeed installation affects mod-spdy?

    - by tim peterson
    Recently my site (an https connection, running on an Amazon EC2 Ubuntu Apache 2.2) has this issue where I need to load the page several times (3-4) before it will load normally without issue. It will then load normally as long as I keep loading pages regularly (every couple of seconds). It will stall again if I don't load pages for a few minutes. It has nothing to do with my application, because I don't have this problem with the exact same app codebase on the Apache installation on my laptop. The only things to my knowledge that I've changed are that I recently installed mod_spdy and then, a few weeks later, mod_pagespeed (https://developers.google.com/speed/pagespeed/mod). However, I have since turned mod_pagespeed off by setting mod_pagespeed off in its pagespeed.conf. Unfortunately, that didn't solve the problem. Every one of the last 10 lines of my error.log looks like the lines below:

        # tail -f /var/log/apache2/error.log
        ...
        [32728:32729:ERROR:mod_spdy.cc(162)] request->chunked == 1 in request GET / HTTP/1.1
        [Sat Jun 02 04:50:08 2012] [warn] [client 50.136.93.153] [stream 5] [32728:32729:WARNING:http_to_spdy_filter.cc(113)] HttpToSpdyFilter is not the last filter in the chain: chunk

    any thoughts? thank you, tim

    Read the article

  • cURL hangs trying to upload file from stdin

    - by SidneySM
    I'm trying to PUT a file with cURL. This hangs:

        curl -vvv --digest -u user -T - https://example.com/file.txt < file

    This does not:

        curl -vvv --digest -u user -T file https://example.com/file.txt

    What's going on?

        * About to connect() to example.com port 443 (#0)
        *   Trying 0.0.0.0... connected
        * Connected to example.com (0.0.0.0) port 443 (#0)
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server key exchange (12):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DHE-RSA-AES256-SHA
        * Server certificate:
        *   subject: serialNumber=jJakwdOewDicmqzIorLkKSiwuqfnzxF/, C=US, O=*.example.com, OU=GT01234567, OU=See www.example.com/resources/cps (c)10, OU=Domain Control Validated - ExampleSSL(R), CN=*.example.com
        *   start date: 2010-01-26 07:06:33 GMT
        *   expire date: 2011-01-28 11:22:07 GMT
        *   common name: *.example.com (matched)
        *   issuer: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
        * SSL certificate verify ok.
        * Server auth using Digest with user 'user'
        > PUT /file.txt HTTP/1.1
        > User-Agent: curl/7.19.4 (universal-apple-darwin10.0) libcurl/7.19.4 OpenSSL/0.9.8l zlib/1.2.3
        > Host: example.com
        > Accept: */*
        > Transfer-Encoding: chunked
        > Expect: 100-continue
        >
        < HTTP/1.1 100 Continue
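
    A plausible culprit: with -T -, curl cannot know the upload size in advance, so it falls back to Transfer-Encoding: chunked plus Expect: 100-continue (visible in the verbose output above), and a server that does not support chunked PUT bodies may simply wait for a Content-Length that never comes. One hedged workaround is to let curl read all of stdin first so it can send a length:

        # Reads all of stdin into memory, then PUTs it with a Content-Length
        curl -vvv --digest -u user -X PUT --data-binary @- \
            -H "Content-Type: text/plain" https://example.com/file.txt < file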

    Read the article

  • LiteSpeed issue with content-type

    - by sandeep.s85
    I am running Magento with LiteSpeed. The problem I am facing is that an AJAX call is made whose response header is set as x-json, but LiteSpeed is setting another header with a text/html content type. I've checked that page with Apache and everything works fine. I checked the response headers with Apache and LiteSpeed, and here they are.

    With Apache:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 05:58:47 GMT
        Server: Apache
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 06:58:48 GMT; path=/; domain=mydomain.com; httponly
        Connection: close
        Transfer-Encoding: chunked
        Content-Type: application/x-json

    With LiteSpeed:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 06:10:55 GMT
        Server: LiteSpeed
        Connection: close
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 07:10:55 GMT; path=/; domain=mydomain.com; httponly
        Content-Type: text/html; charset=UTF-8
        Content-Length: 474
        Vary: User-Agent

    I've also added application/json to LiteSpeed's mime.properties and restarted it, but that did not work. Here is the screenshot.

    Read the article

  • What's wrong with this HTTP POST request?

    - by bigboy
    I'm trying to fuzz a server using the Sulley fuzzing framework. I observe the following stream in Wireshark. The error talks about a problem with JSON parsing; however, when I try the same HTTP POST request using Google Chrome's Postman extension, it succeeds. Can anyone please explain what could be wrong with this HTTP POST request? The JSON seems valid.

        POST /restconf/config HTTP/1.1
        Host: 127.0.0.1:8080
        Accept: */*
        Content-Type: application/yang.data+json

        { "toaster:toaster" : { "toaster:toasterManufacturer" : "Geqq", "toaster:toasterModelNumber" : "asaxc", "toaster:toasterStatus" : "_." }}

        HTTP/1.1 400 Bad Request
        Server: Apache-Coyote/1.1
        Content-Type: */*
        Transfer-Encoding: chunked
        Date: Sat, 07 Jun 2014 05:26:35 GMT
        Connection: close

        152
        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <errors xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf">
          <error>
            <error-type>protocol</error-type>
            <error-tag>malformed-message</error-tag>
            <error-message>Error parsing input: Root element of Json has to be Object</error-message>
          </error>
        </errors>
        0
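
    One thing stands out in the captured request: it carries a body but neither a Content-Length header nor Transfer-Encoding: chunked, so an HTTP/1.1 server has no framing to know how long the body is. It may well parse an empty body, which would explain "Root element of Json has to be Object" while Postman (which adds Content-Length automatically) succeeds. The request with framing added would look like this (the length value here is illustrative; it must match the exact byte count of the body):

        POST /restconf/config HTTP/1.1
        Host: 127.0.0.1:8080
        Accept: */*
        Content-Type: application/yang.data+json
        Content-Length: 137

        { "toaster:toaster" : { "toaster:toasterManufacturer" : "Geqq", "toaster:toasterModelNumber" : "asaxc", "toaster:toasterStatus" : "_." }}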

    Read the article
