Search Results

Search found 51988 results on 2080 pages for 'http headers'.


  • Why is Apache + Rails spitting out two status headers for code 500?

    - by Daniel Beardsley
    I have a Rails app that is working fine except for one thing. When I request something that doesn't exist (i.e. /not_a_controller_or_file.txt) and Rails throws a "No route matches..." exception, the response is this (blank line intentional):

        HTTP/1.1 200 OK
        Date: Thu, 02 Oct 2008 10:28:02 GMT
        Content-Type: text/html
        Content-Length: 122
        Vary: Accept-Encoding
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Status: 500 Internal Server Error
        Content-Type: text/html

        <html><body><h1>500 Internal Server Error</h1></body></html>

    I have the ExceptionLogger plugin in /vendor, though that doesn't seem to be the problem. I haven't added any error handling beyond the custom 500.html in public (though the response doesn't contain that HTML), and I have no idea where this bit of HTML is coming from. So something, somewhere, is adding that HTTP/1.1 200 status code too early, or the Status: 500 too late. I suspect it's Apache, because I get the appropriate HTTP/1.1 500 header (at the top) when I use WEBrick. My production stack: Apache 2, Mongrel (5 instances), Ruby on Rails 2.1.1 (the error happens in both 1.2 and 2.1.1).

    Read the article

  • RESTful issue with data access when using HTTP DELETE method ...

    - by Wilhelm Murdoch
    I'm having an issue accessing raw request information from PHP when accessing a script using the HTTP DELETE method. I'm using a JS front end which accesses the script via Ajax. The script is part of a RESTful API which I am developing. The endpoint in this example is: http://api.site.com/session

    This endpoint is used to generate an authentication token which can be used for subsequent API requests. Using the GET method on this URL along with a modified version of HTTP Basic Authentication will provide an access token for the client. This token must then be included in all other interactions with the service until it expires. Once a token is generated, it is passed back to the client in a format specified by an 'Accept' header which the client sends the service; in this case 'application/json'. Upon success it responds with an HTTP 200 OK status code. Upon failure, it throws an exception with the HTTP 401 Authorization Required code.

    Now, when you want to delete a session, or 'log out', you hit the same URL, but with the HTTP DELETE method. To verify access to this endpoint, the client must prove they were previously authenticated by providing the token they want to terminate. If they are 'logged in', the token and session are terminated and the service responds with the HTTP 204 No Content status code; otherwise, they are greeted with the 401 exception again.

    The problem I'm having is with removing sessions. With the DELETE method, using Ajax, I can't seem to access any parameters I've set once the request hits the service. In this case, I'm looking for the parameter entitled 'token'. Looking at the raw request headers with Firebug, I notice the 'Content-Length' header changes with the size of the token being sent, so the data is indeed being sent to the server. The question is: using PHP, how do I access this parameter information? It's not a POST or GET request, so I can't access it as you normally would in PHP. The parameters are within the content portion of the request. I've tried looking in $_SERVER, but that shows me a limited set of headers. I tried apache_request_headers(), which gives me more detailed information, but still only headers. I even tried file_get_contents('php://stdin') and I get nothing. How can I access the content portion of a raw HTTP request? Sorry for the lengthy post, but I figured too much information is better than too little. :)
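
    The body of a DELETE request is not parsed into $_GET or $_POST, but PHP does expose it through the php://input stream. A minimal sketch (the parameter name is taken from the question; the form-encoded content type is an assumption):

        <?php
        // Read the raw request body; this works for DELETE (and PUT) requests,
        // unlike php://stdin, which is not the request body under web SAPIs.
        $raw = file_get_contents('php://input');   // e.g. "token=abc123"

        // If the client sends application/x-www-form-urlencoded data, parse it manually.
        parse_str($raw, $params);
        $token = isset($params['token']) ? $params['token'] : null;

        // If the client sends JSON instead, decode it:
        // $params = json_decode($raw, true);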

    Read the article

  • wget hangs at 'HTTP request sent, awaiting response' on some sites

    - by gkr
    Using Ubuntu 12.04, wget hangs at "HTTP request sent, awaiting response..." on some sites. Browsers are also unable to open the sites that fail in wget, but on WinXP everything works.

    This works:

        gkr@gkr-desktop:~/Documents/curl$ wget google.com
        --2012-06-12 21:29:37--  http://google.com/
        Resolving google.com (google.com)... 74.125.236.174, 74.125.236.160, 74.125.236.161, ...
        Connecting to google.com (google.com)|74.125.236.174|:80... connected.
        HTTP request sent, awaiting response... 301 Moved Permanently
        Location: http://www.google.com/ [following]
        --2012-06-12 21:29:38--  http://www.google.com/
        Resolving www.google.com (www.google.com)... 74.125.236.179, 74.125.236.180, 74.125.236.176, ...
        Connecting to www.google.com (www.google.com)|74.125.236.179|:80... connected.
        HTTP request sent, awaiting response... 302 Found
        Location: http://www.google.co.in/ [following]
        --2012-06-12 21:29:38--  http://www.google.co.in/
        Resolving www.google.co.in (www.google.co.in)... 74.125.236.184, 74.125.236.191, 74.125.236.183, ...
        Connecting to www.google.co.in (www.google.co.in)|74.125.236.184|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: `index.html.3'
        [ ] 13,383  --.-K/s   in 0.04s
        2012-06-12 21:29:39 (308 KB/s) - `index.html.3' saved [13383]

    This site just stops/hangs at "awaiting response":

        gkr@gkr-desktop:~/Documents/curl$ wget grooveshark.com
        --2012-06-12 21:27:29--  http://grooveshark.com/
        Resolving grooveshark.com (grooveshark.com)... 8.20.213.76
        Connecting to grooveshark.com (grooveshark.com)|8.20.213.76|:80... connected.
        HTTP request sent, awaiting response... ^C

    Thanks
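
    The root cause isn't established from the question, but a few standard flags help narrow down where the hang happens (all of these exist in stock wget and curl):

        wget -d http://grooveshark.com                   # --debug: dump the exact request and any partial response
        wget -U "Mozilla/5.0" http://grooveshark.com     # some servers stall on wget's default User-Agent
        curl -v http://grooveshark.com                   # compare the behaviour of a different client from the same box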

    Read the article

  • Gmail sends bulk messages sent by postfix to spam - spf, rDNS are set up (headers inside)

    - by snitko
    Here are the headers of the blocked messages (actual domain replaced with domain.com, IP address with n.n.n.n and Gmail account name with person.account):

        Delivered-To: [email protected]
        Received: by 10.216.89.137 with SMTP id c9cs247685wef; Tue, 6 Dec 2011 16:06:37 -0800 (PST)
        Received: by 10.224.199.134 with SMTP id es6mr14447757qab.2.1323216395590; Tue, 06 Dec 2011 16:06:35 -0800 (PST)
        Return-Path: <[email protected]>
        Received: from mail.domain.com (domain.com. [n.n.n.n]) by mx.google.com with ESMTP id b16si7471407qcv.131.2011.12.06.16.06.35; Tue, 06 Dec 2011 16:06:35 -0800 (PST)
        Received-SPF: pass (google.com: domain of [email protected] designates n.n.n.n as permitted sender) client-ip=n.n.n.n;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates n.n.n.n as permitted sender) [email protected]
        Received: by mail.domain.com (Postfix, from userid 5001) id 26ADE381E3; Tue, 6 Dec 2011 19:06:35 -0500 (EST)
        Received: from domain.com (domain.com [127.0.0.1]) by mail.domain.com (Postfix) with ESMTP id 0148638030 for <[email protected]>; Tue, 6 Dec 2011 19:06:31 -0500 (EST)
        Date: Tue, 06 Dec 2011 19:06:31 -0500
        From: DomainApp <[email protected]>
        Reply-To: [email protected]
        To: [email protected]
        Message-ID: <[email protected]>
        Subject: Roman Snitko says hi
        Mime-Version: 1.0
        Content-Type: text/plain; charset=UTF-8
        Content-Transfer-Encoding: 7bit
        X-No-Spam: True
        Precedence: bulk
        List-Unsubscribe: [email protected]

    Messages go to the Spam folder on various Gmail accounts, so it's not a coincidence. I followed all the Gmail guides on sending bulk emails from https://mail.google.com/support/bin/answer.py?hl=en&answer=81126. I also checked my IP address at http://www.dnsblcheck.co.uk/ and it's NOT on the blacklists. Thus I have two questions:

    1. What may be the possible reason for the messages to go to the Spam folder?
    2. Is there any way to contact Google and ask them what causes this?

    Update: I have set up OpenDKIM on my server, everything works, and Gmail message headers say dkim=pass, which means everything is set up correctly. Messages still end up in the Spam folder.

    Read the article

  • Windows 2008 IIS 7.0 HTTP to HTTPS Redirect -- Versus IIS 6.0 Mechanism

    - by Dan7el
    Creating a mechanism for redirection from HTTP to HTTPS on a Windows 2008 server running IIS 7.0 is a much written-about topic on the Internet. How this is done is not really my issue. My issue is more about explaining why this can't be done with the standard HTTP Redirect module that ships with Windows 2008 IIS 7.0, and why other, more arduous methods are needed instead.

    First, the IIS 6.0 method requires no externally available modules, nor does it require any modifications to web.config or any other development effort. It's outlined here: http://blogs.microsoft.co.il/blogs/dorr/archive/2009/01/13/how-to-force-redirection-from-http-to-https-on-iis-6-0.aspx The basic steps are to run the snap-in, get the properties on the site, and make some modifications. Presto, you have the HTTP-to-HTTPS redirect set up.

    On the IIS 7.0 platform, it doesn't seem this simple. An initial search found the following page: http://www.sslshopper.com/iis7-redirect-http-to-https.html It describes two separate approaches: (1) installing a separately available Microsoft module, the URL Rewrite Module, and then adding XML to web.config; or (2) a custom error page. There might be other methods, but these are the basic ones, and the first is listed as the primary method.

    But wait... IIS 7.0 ships with an HTTP Redirect module. So why can't I use the HTTP Redirect module to do this very thing? That is really my big question. I need to know because my management is going to insist I use the HTTP Redirect module and set up the HTTP to HTTPS redirect in a similar fashion to how we do it in IIS 6.0. Can someone please explain, in clean, simple, easy-to-understand terms that both I and my management can follow, why I need to get the URL Rewrite Module, install it on the server, and make the web.config changes suggested by the article, instead of simply using the HTTP Redirect module that's already installed on the site? Thanks a bunch.
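
    For reference, a hedged sketch of the URL Rewrite approach described in the article (rule name and details are illustrative). The condition on the {HTTPS} server variable makes the rule fire only for plain-HTTP requests, which is the part commonly given as the reason the simple HTTP Redirect module falls short: it redirects every request for the site regardless of scheme, so pointing it at the HTTPS URL of the same site loops.

        <!-- goes inside <system.webServer> in web.config; requires the URL Rewrite module -->
        <rewrite>
          <rules>
            <rule name="HTTP to HTTPS" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="off" ignoreCase="true" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>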

    Read the article

  • using tcpdump to display XML API requests without headers or ack packets

    - by Carmageddon
    I need assistance: I am trying to use tcpdump to capture API requests and responses between two servers. So far I have the following command:

        tcpdump -iany -tpnAXs0 host xxx.xxx.xxx.xxx and port 6666

    My problem is that the output is still hard to read, because it includes the headers and the ACK packets. I would like to remove those and only see the XML bodies. I tried to use grep -v, but apparently this is all one request, so it filters the entire thing... Thanks!
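
    Two hedged alternatives that filter at capture/decode time instead of post-processing with grep (the host and port values are the placeholders from the question):

        # tshark can reassemble HTTP messages and print only their bodies:
        tshark -i any -f "host xxx.xxx.xxx.xxx and port 6666" -Y http -T fields -e http.file_data

        # ngrep prints matching payloads; pure ACK packets carry no payload and only show up as markers:
        ngrep -W byline -d any '' 'host xxx.xxx.xxx.xxx and port 6666'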

    Read the article

  • apt-get commands pausing at 'Waiting for headers'

    - by Matt
    I have a VM running Ubuntu Server 9.10 running a basic web server setup. Whenever I run an apt function it will pause for around 1 minute at 'Waiting for headers...'. It will eventually clear through and continue as normal but it is a bit of an annoyance. Everything else on the server seems to run fine. Any ideas?
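
    The cause isn't established from the question, but the usual suspects are stale package lists, a slow mirror, or an apt proxy setting. A hedged checklist using standard commands:

        sudo rm -rf /var/lib/apt/lists/*     # discard possibly stale/corrupt index files
        sudo apt-get update                  # re-download the package lists
        grep -r Proxy /etc/apt/              # look for Acquire::http::Proxy entries that might be timing out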

    Read the article

  • Finding BCC in Internet mail headers

    - by dangowans
    I am running Outlook 2010 connected to an Exchange 2003 server. Often, the spam I receive is addressed to "undisclosed-recipients". I'm guessing that's because my email address (or an email address for a group I am part of) is in the BCC field. Is there a way to find out what BCC address was used to reach me? I looked at the Internet headers for the message, but am not seeing the "Envelope-to" header described in a similar question.

    Read the article

  • Idempotent PowerShell Word search/replace across documents with headers, change tracking, etc.

    - by user61633
    I've found one or two guides to doing a word search and replace across multiple documents with powershell. They work well on simple documents. However, the script ignores text in headers and footers; and if "track changes" is enabled, it replaces text which has already been replaced, resulting in multiple copies of the new text if I run the script more than once on the same file. Any clues as to how I can avoid these undesirable behaviors and make this script robust?
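
    A minimal sketch of one way to cover headers and footers and stay idempotent: walk every story in the document via StoryRanges/NextStoryRange (headers and footers are separate stories), and turn off change tracking (or accept revisions first) so repeated runs don't stack tracked replacements. The path and the search/replace strings are placeholders, not from the question.

        $old = 'OldText'; $new = 'NewText'
        $wdReplaceAll = 2; $wdFindContinue = 1

        $word = New-Object -ComObject Word.Application
        $word.Visible = $false
        $doc = $word.Documents.Open('C:\docs\example.docx')

        $doc.AcceptAllRevisions()        # fold in any earlier tracked changes
        $doc.TrackRevisions = $false     # so this run's replacements aren't tracked

        foreach ($story in $doc.StoryRanges) {
            $range = $story
            while ($range) {
                [void]$range.Find.Execute($old, $false, $false, $false, $false, $false,
                                          $true, $wdFindContinue, $false, $new, $wdReplaceAll)
                $range = $range.NextStoryRange   # follows linked header/footer stories
            }
        }

        $doc.Save(); $doc.Close(); $word.Quit()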

    Read the article

  • Is there a simple LDAP-to-HTTP gateway out there?

    - by larsks
    We have a local LDAP directory that provides basic contact information about our user community. We would like to integrate this into some third-party hosted services that allow us to implement widgets that run arbitrary Javascript. In order to connect Javascript to our LDAP directory, I would like to set up a simple LDAP-to-HTTP proxy that would accept HTTP GET requests, translate them into an appropriate LDAP query, and respond with directory information as JSON-encoded data. In an ideal world, something like this:

        GET /[email protected]

    would get me something like this:

        {
          "cn": "Bob Person",
          "title": "System Administrator",
          "sn": "Person",
          "mail": "[email protected]",
          "telephoneNumber": "617-555-1212",
          "givenName": "Bob"
        }

    (This obviously assumes that the web application has locally configured information about what base DN to use, how to authenticate, etc.) I guess I could write one... but surely something like this already exists?

    UPDATE: The consensus seems to be that there isn't a pre-existing solution out there and that I should just get off my lazy derriere and write one. So I did, and it's here. It's not especially pretty, but it works for my prototyping and I figure maybe someone else will find it useful someday.
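
    For comparison (this is not the poster's implementation; the hostnames, base DN and route are placeholders), a small sketch of such a gateway in Python using Flask and ldap3:

        from flask import Flask, abort, jsonify, request
        from ldap3 import ALL, Connection, Server
        from ldap3.utils.conv import escape_filter_chars

        app = Flask(__name__)
        BASE_DN = "ou=people,dc=example,dc=com"      # locally configured, as the question assumes

        @app.route("/user")
        def user():
            mail = request.args.get("mail")
            if not mail:
                abort(400)
            # Anonymous bind for simplicity; credentials would normally be configured locally too.
            conn = Connection(Server("ldap.example.com", get_info=ALL), auto_bind=True)
            conn.search(BASE_DN, "(mail=%s)" % escape_filter_chars(mail),
                        attributes=["cn", "sn", "givenName", "title", "mail", "telephoneNumber"])
            if not conn.entries:
                abort(404)
            entry = conn.entries[0]
            return jsonify({name: entry[name].value for name in entry.entry_attributes})

        if __name__ == "__main__":
            app.run(port=8080)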

    Read the article

  • Simple Linux program that takes any HTTP/HTTPS request and returns a single page?

    - by ultrasawblade
    I have a Linux box operating as a router. There's a NIC connected to the internet (WAN), a NIC connected to an 8-port GbE switch (LAN), and a NIC connected to a Linksys wireless N router (WLAN). Routing between everything is working perfectly.

    I have security completely disabled on the wireless router, but the WLAN NIC is firewalled such that it will only accept DNS queries and PPTP VPN connections. Currently HTTP/HTTPS traffic and everything else is blocked. I would like to run something that listens on ports 80/443 of the WLAN NIC and, for non-VPN'ed connections, answers any HTTP/HTTPS request with a single webpage saying "Unauthenticated" and explaining how to sign into the VPN.

    A transparent proxy seems to be what I need, but my searches all seem to direct me to Squid, which is already running on my server and seems overkill for this simple task. Is there a simpler, lightweight program out there that does just this, or should I just suck it up and run two instances of Squid (or figure out how to configure it)? Or is this entire VPN thing I'm doing complete nonsense, and should I just enable encryption on the wireless router?
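
    If rolling your own is acceptable, a "one answer for everything" responder is only a few lines; a minimal sketch using the Python standard library (the bind address and page text are placeholders, and port 443 would additionally need TLS):

        from http.server import BaseHTTPRequestHandler, HTTPServer

        PAGE = b"<html><body><h1>Unauthenticated</h1><p>Connect to the VPN first.</p></body></html>"

        class OnePageHandler(BaseHTTPRequestHandler):
            def _respond(self):
                # Same page for every path and every method we handle.
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.send_header("Content-Length", str(len(PAGE)))
                self.end_headers()
                self.wfile.write(PAGE)

            def do_GET(self):
                self._respond()

            def do_POST(self):
                self._respond()

        if __name__ == "__main__":
            # Bind to the WLAN-facing address; binding to port 80 requires root.
            HTTPServer(("10.0.0.1", 80), OnePageHandler).serve_forever()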

    Read the article

  • Passing Custom Headers to Ajax request on Select2

    - by Sutikshan Dubey
    We are trying to implement Ajax remote data loading in Select2:

        $scope.configPartSelect2 = {
            minimumInputLength: 3,
            ajax: {
                url: "/api/Part",
                // beforeSend: function (xhr) { xhr.setRequestHeader('Authorization-Token', http.defaults.headers.common['Authorization-Token']); },
                // headers: {'Authorization-Token': http.defaults.headers.common['Authorization-Token']},
                data: function (term, page) {
                    return {isStockable: true};
                },
                results: function (data, page) {
                    // parse the results into the format expected by Select2.
                    // since we are using custom formatting functions we do not need to alter remote JSON data
                    return { results: data };
                }
            }
        };

    We are using AngularJS. With each HTTP request we have set its defaults to include our auth token as a header, but somehow it is not working in conjunction with the Select2 Ajax request. In the code above, the commented-out lines are my failed attempts.
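
    A hedged sketch of one way through: Select2 hands its ajax options to jQuery's $.ajax, so the header can be attached there. In Select2 3.x this is done via the ajax.params object (extra options merged into the $.ajax call); in 4.x the ajax object itself is forwarded, so a top-level headers key works. The header name is taken from the question; the rest is illustrative.

        $scope.configPartSelect2 = {
          minimumInputLength: 3,
          ajax: {
            url: "/api/Part",
            params: {        // Select2 3.x: merged into the underlying $.ajax settings
              headers: {
                'Authorization-Token': http.defaults.headers.common['Authorization-Token']
              }
            },
            data: function (term, page) { return { isStockable: true }; },
            results: function (data, page) { return { results: data }; }
          }
        };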

    Read the article

  • Passing Custom headers from .Net 1.1 client to a WCF service

    - by sreejith
    I have a simple WCF service which uses basicHttpBinding. I want to pass some information from the client to this service via a custom SOAP header. My client is a .NET application targeting .NET 1.1; using Visual Studio I have created the proxy (added a new web reference pointing to my WCF service). I am able to call methods in the WCF service but not able to pass the data in the message header.

    I tried to override "GetWebRequest" and added custom headers in the proxy, but for some reason when I try to access the header using "OperationContext.Current.IncomingMessageHeaders.FindHeader" it is not there. Any idea how to solve this problem? This is how I added the headers:

        protected override System.Net.WebRequest GetWebRequest(Uri uri)
        {
            HttpWebRequest request;
            request = (HttpWebRequest)base.GetWebRequest(uri);
            request.Headers.Add("tesData", "test");
            return request;
        }
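
    For FindHeader to see anything, the data has to travel as a SOAP header rather than an HTTP header. A hedged sketch of the classic .NET 1.1 approach (names are illustrative): define a SoapHeader subclass, expose it as a public field on the generated proxy, and mark the web method with [SoapHeader]:

        using System;
        using System.Web.Services.Protocols;

        public class TestDataHeader : SoapHeader
        {
            public string TestData;
        }

        public class MyServiceProxy : SoapHttpClientProtocol
        {
            public TestDataHeader TestDataHeaderValue;

            [SoapDocumentMethod("http://tempuri.org/DoWork")]
            [SoapHeader("TestDataHeaderValue")]
            public void DoWork()
            {
                // generated code would call: this.Invoke("DoWork", new object[0]);
            }
        }

        // On the WCF side the header should then be visible as (element name/namespace assumed to be the defaults):
        // OperationContext.Current.IncomingMessageHeaders.FindHeader("TestDataHeader", "http://tempuri.org/")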

    Read the article

  • What is the recommended method of HTTP Redirection from multiple URLs to one URL?

    - by ChrisHDog
    I have a website that has a number of URLs that people use to connect to the site (these are set up as bindings on the IIS website and everything works as intended):

        http://www.sample.com
        http://sample.com
        https://www.sample.com
        http://xyz.sample.com
        http://oldurl.com

    Now what I want to do is have all of these URLs go to https://www.sample.com - so if you type in "http://xyz.sample.com" or "sample.com" you should end up at https://www.sample.com The question is: what is the best mechanism to do this? I have one possible solution (which I will put as an answer to this question), but I get the feeling that there might be another, better solution available.
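
    One commonly used mechanism on IIS 7+ is a single URL Rewrite rule that 301s anything that is not already the canonical https://www.sample.com. A hedged sketch (it assumes the URL Rewrite module is installed and that all the hostnames remain bound to the site):

        <rewrite>
          <rules>
            <rule name="Canonical https://www.sample.com" stopProcessing="true">
              <match url="(.*)" />
              <conditions logicalGrouping="MatchAny">
                <add input="{HTTP_HOST}" pattern="^www\.sample\.com$" negate="true" />
                <add input="{HTTPS}" pattern="off" />
              </conditions>
              <action type="Redirect" url="https://www.sample.com/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>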

    Read the article

  • X-Domain and P3P Headers

    - by Jackson
    Hi, I have a website at A.com and a domain at B.com with a widget inside an iframe getting data from A.com. I want to allow cross-domain cookies to be passed from A.com to inside the iframe using ASP.NET.

    My understanding is that I can do this in IE using P3P headers, such that the A.com cookie is passed to the iframe and session/cookie data is preserved, and that the P3P headers have to be sent from A.com and from the iframe. Is this correct?

    In dev, my understanding is that if I "accept all cookies" in IE then the P3P headers won't matter anyway, so this should all just work; with Medium security, P3P is required.
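
    A minimal sketch of how A.com (and the page framed in the iframe) could emit the compact policy from ASP.NET; the CP token set is only an example and has to reflect your actual privacy practices:

        // In Global.asax.cs
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            HttpContext.Current.Response.AddHeader("P3P", "CP=\"CAO PSA OUR\"");
        }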

    Read the article

  • Errors when connecting to HTTPS using Net::HTTP routines (Ruby on Rails)

    - by jaycode
    Hi all, the code below explains the problem in detail:

        # this returns error Net::HTTPBadResponse
        url = URI.parse('https://sitename.com')
        response = Net::HTTP.start(url.host, url.port) {|http| http.get('/remote/register_device') }

        # this works
        url = URI.parse('http://sitename.com')
        response = Net::HTTP.start(url.host, url.port) {|http| http.get('/remote/register_device') }

        # this returns error Net::HTTPBadResponse
        response = Net::HTTP.post_form(URI.parse('https://sitename.com/remote/register_device'), {:text => 'hello world'})

        # this returns error Errno::ECONNRESET (Connection reset by peer)
        response = Net::HTTP.post_form(URI.parse('https://sandbox.itunes.apple.com/verifyReceipt'), {:text => 'hello world'})

        # this works
        response = Net::HTTP.post_form(URI.parse('http://sitename.com/remote/register_device'), {:text => 'hello world'})

    So... how do I send POST parameters to https://sitename.com or https://sandbox.itunes.apple.com/verifyReceipt in this example? Further information: I am trying to get this working in Rails: http://developer.apple.com/iphone/library/documentation/NetworkingInternet/Conceptual/StoreKitGuide/VerifyingStoreReceipts/VerifyingStoreReceipts.html#//apple_ref/doc/uid/TP40008267-CH104-SW1
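
    Net::HTTP only speaks TLS when use_ssl is enabled, which is why the plain start/post_form calls above fail against https:// URLs. A minimal sketch for the Ruby of that era (it mirrors the question's form post; the real verifyReceipt endpoint expects a JSON body):

        require 'net/https'
        require 'uri'

        url = URI.parse('https://sandbox.itunes.apple.com/verifyReceipt')

        http = Net::HTTP.new(url.host, url.port)
        http.use_ssl = (url.scheme == 'https')
        http.verify_mode = OpenSSL::SSL::VERIFY_NONE   # or configure a CA file and verify the peer

        request = Net::HTTP::Post.new(url.path)
        request.set_form_data('text' => 'hello world')
        response = http.request(request)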

    Read the article

  • Using HTTP status codes to reflect success/failure of Web service request?

    - by jgarbers
    I'm implementing a Web service that returns a JSON-encoded payload. If the service call fails -- say, due to invalid parameters -- a JSON-encoded error is returned. I'm unsure, however, what HTTP status code should be returned in that situation. On one hand, it seems like HTTP status codes are for HTTP: even though an application error is being returned, the HTTP transfer itself was successful, suggesting a 200 OK response. On the other hand, a RESTful approach would seem to suggest that if the caller is attempting to post to a resource, and the JSON parameters of the request are invalid somehow, that a 400 Bad Request is appropriate. I'm using Prototype on the client side, which has a nice mechanism for automatically dispatching to different callbacks based on HTTP status code (onSuccess and onFailure), so I'm tempted to use status codes to indicate service success or failure, but I'd be interested to hear if anyone has opinions or experience with common practice in this matter. Thanks!

    Read the article

  • C++/MFC: Handling multiple CListCtrl's headers HDN_ITEMCLICK events

    - by raph.amiard
    I'm coding an MFC application in which I have a dialog box with multiple CListCtrls in report view. I want one of them to be sortable, so I handled the HDN_ITEMCLICK notification, and everything works just fine... except that if I click on the headers of another CListCtrl, it sorts the OTHER CListCtrl, which looks kind of dumb. This is apparently due to the fact that header controls have an ID of 0, which makes the entry in the message map look like this:

        ON_NOTIFY(HDN_ITEMCLICK, 0, &Ccreationprogramme::OnHdnItemclickList5)

    But since all the headers have an ID of zero, apparently every header in my dialog sends the message to this handler. Is there an easy way around this problem?
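
    One way around it is to keep the single message-map entry but filter on the window that raised the notification, since NMHDR carries the source HWND. A minimal sketch (m_listSortable and SortByColumn are illustrative names, not from the question):

        void Ccreationprogramme::OnHdnItemclickList5(NMHDR* pNMHDR, LRESULT* pResult)
        {
            LPNMHEADER phdr = reinterpret_cast<LPNMHEADER>(pNMHDR);

            // React only when the click came from the header of the list we actually sort.
            if (pNMHDR->hwndFrom == m_listSortable.GetHeaderCtrl()->GetSafeHwnd())
            {
                SortByColumn(phdr->iItem);   // hypothetical sort helper
            }
            *pResult = 0;
        }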

    Read the article

  • Building a webserver, client doesn't acknowledge HTTP 200 OK frame.

    - by Evert
    Hi there, I'm building my own webserver based on a tutorial. I have found a simple way to initiate a TCP connection and send one segment of HTTP data (the webserver will run on a microcontroller, so it will be very small). Anyway, the following is the sequence I need to go through:

        1. receive SYN
        2. send SYN,ACK
        3. receive ACK (the connection is now established)
        4. receive ACK with HTTP GET command
        5. send ACK
        6. send FIN,ACK with HTTP data (e.g. 200 OK)
        7. receive FIN,ACK   <- I don't receive this packet!
        8. send ACK

    Everything works fine until I send my acknowledgement and HTTP 200 OK message. The client won't send an acknowledgement to those two packets, and thus no webpage is displayed. I've added a pcap file of the sequence as I recorded it with Wireshark. Pcap file: http://cl.ly/5f5 (now it's the right data) All sequence and acknowledgement numbers are correct, checksums are OK, and the flags are also right. I have no idea what is going wrong.

    Read the article

  • Custom http service responds fine to local IP address but NOT to localhost or 127.0.0.1

    - by Scrappydog
    I'm trying to connect to a custom HTTP service written by another developer. The service responds fine on a local IP address and port number, such as http://10.1.1.1:1234, but it does NOT respond to http://localhost:1234 or http://127.0.0.1:1234.

    The service is a simple single-function application written in VC++ that takes an HTTP POST string and returns another string. I'm trying to call it from C# using HttpWebRequest.GetResponse, but I can reproduce the same problem manually from a web browser. The test environment is Windows 2008 Server. Bottom line: I'm looking for some troubleshooting tips to help the other developer fix his code.
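
    A likely suspect (not verifiable from the question alone) is that the service binds its listening socket to 10.1.1.1 explicitly, so connections to 127.0.0.1 never reach it. Binding to INADDR_ANY listens on every local interface, including loopback; a minimal Winsock sketch with error handling omitted:

        #include <winsock2.h>
        #include <string.h>
        #pragma comment(lib, "ws2_32.lib")

        int main(void)
        {
            WSADATA wsa;
            WSAStartup(MAKEWORD(2, 2), &wsa);

            SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family      = AF_INET;
            addr.sin_port        = htons(1234);
            addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* all local interfaces, incl. 127.0.0.1 */

            bind(s, (struct sockaddr *)&addr, sizeof(addr));
            listen(s, SOMAXCONN);
            /* ... accept()/recv()/send() loop ... */
            return 0;
        }

    Running netstat -an | findstr 1234 on the server shows whether the existing service is listening on 0.0.0.0:1234 or only on 10.1.1.1:1234, which would confirm or rule this out.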

    Read the article

  • What HTTP headers are required to refresh a page on the back button?

    - by cantabilesoftware
    I'm trying to get a page to refresh when navigated to from the back button. From what I understand after reading around a bit, I should just need to mark the page as uncacheable, but I can't get any browsers to refresh the page. These are the headers I've currently got:

        Cache-Control: no-cache
        Connection: keep-alive
        Content-Encoding: gzip
        Content-Length: 1832
        Content-Type: text/html; charset=utf-8
        Date: Mon, 07 Jun 2010 14:05:39 GMT
        Expires: -1
        Pragma: no-cache
        Server: Microsoft-IIS/7.5
        Vary: Accept-Encoding
        Via: 1.1 smoothwall:800 (squid/2.7.STABLE6)
        X-AspNet-Version: 2.0.50727
        X-AspNetMvc-Version: 2.0
        X-Cache: MISS from smoothwall
        X-Powered-By: ASP.NET

    Why would the browser pull this page from its browser history and not refresh it?
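
    In practice the missing piece is usually Cache-Control: no-store; with only no-cache/Expires/Pragma, several browsers still satisfy a back-button navigation from the history cache. A hedged ASP.NET MVC sketch (matching the X-AspNetMvc-Version header above; the attribute name is illustrative):

        using System;
        using System.Web;
        using System.Web.Mvc;

        public class NoBackCacheAttribute : ActionFilterAttribute
        {
            public override void OnResultExecuting(ResultExecutingContext filterContext)
            {
                HttpCachePolicyBase cache = filterContext.HttpContext.Response.Cache;
                cache.SetCacheability(HttpCacheability.NoCache);
                cache.SetNoStore();                            // adds "no-store" to Cache-Control
                cache.SetExpires(DateTime.UtcNow.AddYears(-1));
                base.OnResultExecuting(filterContext);
            }
        }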

    Read the article

  • Web.config WordPress rewrite rules next to Magento

    - by Flo
    I've installed Magento on IIS in folder E:\mydomain\wwwroot (I already have it all running correctly). I have no deeper "magento" folder; I placed all files directly in the wwwroot folder, so:

        wwwroot\app
        wwwroot\downloader
        wwwroot\errors
        wwwroot\includes
        etc...

    UPDATE: since I'm on IIS, my .htaccess is ignored completely and my web.config rules are used instead. Here's my web.config in folder e:\mydomain\wwwroot:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Magento SEO: remove index.php from URL">
                  <match url="^(?!index.php)([^?#]*)(\\?([^#]*))?(#(.*))?" />
                  <conditions>
                    <add input="{URL}" pattern="^/(media|skin|js)/" ignoreCase="false" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" ignoreCase="false" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" ignoreCase="false" negate="true" />
                  </conditions>
                  <action type="Rewrite" url="index.php/{R:0}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Next, I wanted to install WordPress. I unzipped all files into folder e:\mydomain\wwwroot\wordpress, browsed to www.mydomain.com/wordpress/wp-admin/install.php, and configured everything for my database. Everything was installed correctly. I then navigate to http://www.mydomain.com/wordpress/wp-login.php where I type my credentials. I seem to be logged in and am redirected to http://www.mydomain.com/wordpress/wp-admin/ - but there I receive an empty page.

    I enabled detailed error messages in IIS following this article: http://www.iis.net/learn/troubleshoot/diagnosing-http-errors/how-to-use-http-detailed-errors-in-iis I also checked with Fiddler and see that I receive a 500 error:

        GET /wordpress/wp-admin/ HTTP/1.1
        Host: www.mydomain.com
        Connection: keep-alive
        Cache-Control: max-age=0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.76 Safari/537.36
        Referer: http://www.mydomain.com/wordpress/wp-login.php
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8,nl;q=0.6
        Cookie: wordpress_fabec4083cf12d8de89c98e8aef4b7e3=floran%7C1381236774%7C2d8edb4fc6618f290fadb49b035cad31; wordpress_test_cookie=WP+Cookie+check; wordpress_logged_in_fabec4083cf12d8de89c98e8aef4b7e3=floran%7C1381236774%7Cbf822163926b8b8df16d0f1fefb6e02e

        HTTP/1.1 500 Internal Server Error
        Content-Type: text/html
        Server: Microsoft-IIS/7.5
        X-Powered-By: PHP/5.4.14
        X-Powered-By: ASP.NET
        Date: Sun, 06 Oct 2013 12:56:03 GMT
        Content-Length: 0

    My WordPress web.config in folder e:\mydomain\wwwroot\wordpress contains:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="wordpress" patternSyntax="Wildcard">
                  <match url="*"/>
                  <conditions>
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true"/>
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true"/>
                  </conditions>
                  <action type="Rewrite" url="index.php"/>
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    I also want my WordPress articles to be available at www.mydomain.com/blog instead of www.mydomain.com/wordpress. Of course my admin links for Magento and WordPress should also keep working. How can I configure my web.config files to achieve the above?

    Read the article

  • What does it mean when a User-Agent has another User-Agent inside it?

    - by Erx_VB.NExT.Coder
    Basically, sometimes a user-agent will have its normal user-agent string displayed, then at the end it will have the "User-Agent: " tag displayed, and right after it another user-agent is shown. Sometimes, the second user-agent is just appended to the first one without the "User-Agent: " tag. Here are some samples I've seen. The first few contain the "User-Agent: " tag in the middle somewhere:

        Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Trident/4.0; GTB6; User-agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); SLCC1; .NET CLR 2.0.50727; .NET CLR 3.0.04506)
        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6; MRA 5.10 (build 5339); User-agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); .NET CLR 1.1.4322; .NET CLR 2.0.50727)
        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; User-agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; User-agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152)

    Here are some without the "User-Agent: " tag in the middle, just two user agents that seem stitched together:

        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); .NET CLR 3.5.30729)
        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6; IPMS/6568080A-04A5AD839A9; TCO_20090713170733; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); InfoPath.2)

    Now, a few notes. I understand that "User-Agent: " is normally a header name, and what follows it is the actual user agent sent to servers, but normally the "User-Agent: " string should not be part of the user agent itself; it is more like a prefix or tag indicating that what follows is the actual user agent.

    Additionally, I may have thought these are just two user agents pasted together, but on closer inspection you realize they are not. In all of these dual-user-agent listings, if you look at the opening bracket "(" just before the "compatible" keyword, its matching closing bracket ")" is actually at the very end, at the end of the second user agent. The first user agent's closing bracket ")" never occurs before the second user agent begins; it's always right at the end, and therefore the second user agent reads more like one of the features of the first user agent, like "Trident/4.0" or "GTB6" etc.

    The other thing to note is that the second user agent is always MSIE 6.0 (Internet Explorer 6.0) - interesting. What I had initially thought was that it's some sort of virtual machine displaying both the browser in use and the browser that is installed, but then I thought, what would be the point of that? Finally, right now I'm thinking it's probably some sort of "Compatibility View" type thing: even if MSIE 7.0 or 8.0 is installed, when a hypothetical "Display in Internet Explorer 6.0" mode is turned on, the user agent changes to something like this. That is, IE 8.0 is installed but is rendering everything as IE 6.0 would. Is there, or was there, such a feature in Internet Explorer? Am I on to something here? What are your thoughts? If you have any other ideas, please feel free to let us know.

    At the moment, I'm just trying to understand whether these are valid user agents or invalid ones. In a list of about 44,000 user agents, I've seen this type of dual user agent about 400 times. I've closely inspected 40 of them, and every single one had MSIE 6.0 as the "second" user agent (and the first user agent was a higher version of MSIE, such as 7 or 8). This was true for all except one, where both user agents were MSIE 8.0; here it is:

        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; Mozilla/4.0 (compatible; MSIE 8.0; Win32; GMX); GTB0.0)

    This occurred once in my 40 "close" inspections. I estimated the 400 in 44,000 by taking a sample of the first 4,400 user agents, finding 40 of these among the MSIE/Windows user agents, and extrapolating. There were also similar things occurring for non-MSIE user agents where there were two Mozillas in one user agent; those would probably add another 30% on top of the ones I've noted. I can show samples of them if anyone would like. There we have it - this is where I'm at. What do you guys think?

    Read the article

  • What is the recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends:

        Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger.

    We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: "Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes."

    My reply: what is the reason why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are under 860 bytes.

    Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) the overhead of compressing an object under 860 bytes outweighs the performance gain; (2) objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them."

    So I'm here for some fact checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this down to the 150-byte limit - just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?

    Read the article
