Search Results

Search found 5793 results on 232 pages for 'requests'.


  • My Apache access log contains weird GET and POST requests, what can I do?

    - by Konstantin
    My Apache access log contains weird GET and POST requests, is it possible to examine which of these are harmful? For example:

        114.232.151.185 - - [11/Jun/2014:20:11:33 +0200] "GET http://hotel.qunar.com/render/hoteldiv.jsp?&__jscallback=XQScript_4 HTTP/1.1" 404 1167
        103.30.175.10 - - [12/Jun/2014:08:35:17 +0200] "GET /vtigercrm/ HTTP/1.1" 404 1034
        69.174.245.163 - - [14/Jun/2014:01:22:38 +0200] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 1034
        69.174.245.163 - - [14/Jun/2014:01:22:38 +0200] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 1034
        94.74.229.110 - - [16/Jun/2014:18:46:43 +0200] "GET http://www.msftncsi.com/ncsi.txt HTTP/1.1" 404 1037
        80.73.11.164 - - [20/Jun/2014:01:52:14 +0200] "POST /cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%6E HTTP/1.1" 404 1034
        162.253.66.76 - - [24/Jun/2014:23:54:30 +0200] "GET /rutorrent HTTP/1.1" 400 226
        122.226.223.69 - - [25/Jun/2014:01:14:27 +0200] "GET http://todd0738.gotoip4.com//hello.html HTTP/1.1" 404 1041

    My full Apache access log file: http://pastebin.com/2x0naQBK
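    The URL-encoded POST line is much easier to judge once decoded: it spells out PHP CLI switches (-d allow_url_include=on -d safe_mode=off ... -d auto_prepend_file=php://input ...), the signature of the php-cgi argument-injection attack (CVE-2012-1823). A minimal sketch (Python 3; the payload string is truncated here) for decoding such lines so they can be read:

        # decode percent-encoded request lines for inspection;
        # unquote_plus also turns '+' back into spaces
        from urllib.parse import unquote_plus

        line = "/cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E"
        print(unquote_plus(line))  # -> /cgi-bin/php?-d allow_url_include=on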

    Read the article

  • If I implement a web-service, how do I respond to POST requests with JSON?

    - by Vova Stajilov
    I have to build a rather complex system for my diploma work. Logically it consists of the following components:

        - Database
        - Web service
        - Management application with a web interface
        - Client iOS application that consumes the web service

    I decided to implement the first three components on .NET. First I will design the database around the expected information load - that part is clear. Then I need a web service that returns data in JSON format for the iOS clients to consume - obvious, and not that hard to implement. For this I will use WCF. Now the question: when I implement the web service, how do I respond to POST requests with JSON? It presumably involves WCF's JSON support or something related. I also need some web pages for the admin part - can that web application consume my centralized web services as well, or should I develop it separately? I just want my web service to act like a set of controllers. There is a related question here, but it doesn't quite reflect mine.
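    For the POST-with-JSON part, WCF's webHttpBinding plus the WebInvoke attribute covers this case. A minimal sketch (service and data contract names are made up for illustration):

        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [DataContract]
        public class OrderDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Item { get; set; }
        }

        [ServiceContract]
        public interface IApiService
        {
            [OperationContract]
            [WebInvoke(Method = "POST",
                       UriTemplate = "orders",
                       RequestFormat = WebMessageFormat.Json,
                       ResponseFormat = WebMessageFormat.Json)]
            OrderDto CreateOrder(OrderDto order);  // DataContract types serialized to/from JSON
        }

    With the endpoint bound to webHttpBinding and the webHttp endpoint behavior enabled in config, WCF handles the JSON serialization itself; the admin web application could then consume the same HTTP endpoints rather than needing a separate service layer.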

    Read the article

  • Solaris 10 invalid ARP requests from 0.0.0.0? Link up/down every hour or 2

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

        [mymacaddress]/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp])

    It's being logged every hour. I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in the SCTP section of netstat -a:

        SCTP:
        Local Address    Remote Address   Swind  Send-Q  Rwind   Recv-Q  StrsI/O  State
        ---------------- ---------------- ------ ------- ------- ------- -------- -------
        0.0.0.0          0.0.0.0          0      0       102400  0       32/32    CLOSED

    But I'm not really sure what that means. It doesn't seem like I can disable SCTP. There are some tunable SCTP parameters, but it's not something I'm familiar with. Do I have to add changes to /etc/system? It looks like sctp_heartbeat_interval might be what I need to change? If it makes any difference, I have a few Solaris zones running on this server, each with its own IP address on a virtual interface (eth0:0, eth0:1, etc.).

    Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?

    Update: This was happening more frequently, but now it seems to happen roughly every hour or every two hours; it's not consistent. I tried setting the link speed and duplex to match the switch port, and that seemed to stop it for a few hours, but then it started again.
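    One way to see exactly which frames go out (before the data center's logs anonymize them) is Solaris's own snoop. A sketch, using the interface name from the question - substitute your real interface:

        # capture ARP traffic on the interface, saving raw frames for later review
        snoop -d eth0 -o /tmp/arp.snoop arp
        # replay with verbose decode to see the sender MAC/IP fields in each ARP packet
        snoop -i /tmp/arp.snoop -v arp

    Matching the timestamp of a captured 0.0.0.0 ARP against zone activity may show which zone or virtual interface is the sender.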

    Read the article

  • How can I get Haproxy to not log local requests?

    - by coneybeare
    I am trying to clean out some of the log clutter from my machines and am starting by removing requests that are generated by the servers themselves. I have cache warmers running around the clock and I don't want these polluting the logs. I was able to get Apache to stop logging local requests by adding a dontlog env for the local IP:

        SetEnvIf Remote_Addr "RE\.DA\.CT\.ED" dontlog
        CustomLog "|logger -p local3.info -t http" combined env=!dontlog

    and now I am looking for something similar to put in the configuration for the HAProxy log. How can I prevent 127.0.0.1 requests from writing to the HAProxy log?

    UPDATE 2/15/11: I use the excellent Loggly service to collect our logs in the cloud, but I am seeing tons of lines like this:

        2011 Feb 15 06:09:42.000 ip-10-251-194-96 http: RE.DA.CT.ED - - [15/Feb/2011:06:09:42 -0500] "HEAD /search/Nevad/predictive/txt HTTP/1.0" 200 - "-" "Wget/1.10.2 (Red Hat modified)"
        2011 Feb 15 06:09:42.000 127.0.0.1 haproxy[10390]: 127.0.0.1:58408 [15/Feb/2011:06:09:42] www i-5dd7a331.0 0/0/0/8/8 200 210 - - --NI 0/0/0 0/0 "HEAD /search/Nevad/predictive/txt HTTP/1.1"

    and I want them gone. This question focuses on how to keep that haproxy log line from being written to the server-side log in the first place.
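    A possible approach, assuming HAProxy 1.5 or later (where the set-log-level action exists; LOCALHOST is one of HAProxy's predefined ACLs, matching src 127.0.0.1/8) - a sketch, not tested against the version in question:

        frontend www
            bind *:80
            # suppress log emission for requests originating on the loopback interface
            http-request set-log-level silent if LOCALHOST
            default_backend app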

    Read the article

  • Why is IIS 7.5 seeing some requests as HTTP/1.0?

    - by Zhaph - Ben Duguid
    While trying to work out why Static File Compression wasn't working on one of our IIS servers, the error was coming back as "NO_COMPRESSION_10", which translates to:

        Server not configured to compress 1.0 requests

    Looking at the requests in Fiddler, I can see that I'm requesting HTTP 1.1, but everything is being sent back as HTTP 1.0.

    Request (from Chrome, captured via Fiddler):

        GET /css/reset.css HTTP/1.1
        Host: [-----].com
        Connection: keep-alive
        Cache-Control: max-age=0
        If-Modified-Since: Tue, 16 Oct 2012 15:04:34 GMT
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11
        Accept: text/css,*/*;q=0.1
        Referer: http://[-----].com/
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-GB,en;q=0.8,en-US;q=0.6
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Response from IIS:

        HTTP/1.0 200 OK
        Cache-Control: no-cache, no-store
        Pragma: no-cache
        Content-Type: text/html; charset=utf-8
        Expires: -1
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Tue, 11 Dec 2012 11:57:03 GMT
        Connection: close
        Content-Length: 108837

    Other servers with the same host that I'm running this site on all respond with HTTP/1.1. How can I persuade IIS to respond with HTTP/1.1 rather than HTTP/1.0?

    Edit to add: Digging deeper, I can see that some responses from the server are indeed being returned compressed, so I guess really I'm trying to work out why talking to this particular server from our office seems to result in it seeing 1.0 requests, while other servers at the same co-loc don't?

    Read the article

  • Requests are making it to my app server, but not into node.js -- why?

    - by Zane Claes
    I detailed in this question on StackOverflow how some random requests are not making it from the client to my Node.js app server, resulting in a gateway timeout. In summary, identical requests are, at random, not even making it far enough to trigger a console.log() in my first line of Express middleware. I need to narrow down the problem to find out WHERE the traffic is being lost, and it was suggested that I try a packet sniffer on my app servers.

    Here's my setup:

        2x load balancers (m1.large)
        2x Node.js servers (also m1.large)

    Here's what's interesting/unusual: the Node.js servers started as PHP servers with an Apache stack and continue to serve PHP files for my domain (streamified.me). However, I use a little httpd.conf magic on the app servers so that requests to api.streamified.me get routed over port 8888 to the Node.js server:

        RewriteCond %{HTTP_HOST} ^api.streamified.me
        RewriteRule ^(.*) http://localhost:8888$1 [P]

    So the request hits the load balancer, goes to an app server, gets routed to port 8888 if it's intended for the API, and gets handled by Node.js.

    In the same httpd.conf file, I turned on RewriteLogLevel 5 and then created a simple PHP+cURL script on my localhost to hit api.streamified.me with a random URL (which should cause Node.js to trigger a simple "not found" response) until it resulted in a gateway timeout. Here, you can see that it has happened - and the rewrite log shows that the request was definitely received by the app server and forwarded to port 8888... but it was never received by Node.js (or, at least, the first line of code in the first line of middleware never gets it...).

    Image link: http://i.stack.imgur.com/3OQxS.png
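    To see whether the proxied request ever reaches the Node.js listener, a packet capture on the loopback interface is a low-effort first step - a sketch (run as root; interface and port taken from the setup above):

        # watch traffic to/from the node.js port on loopback; -A prints payloads
        # so individual HTTP request lines can be matched against the rewrite log
        tcpdump -i lo -nn -A 'tcp port 8888'

    If the request line shows up here but never reaches the middleware, the loss is inside Node.js (e.g. the listen backlog); if it never shows up at all, mod_proxy is dropping it.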

    Read the article

  • Difference in performance: local machine vs. Amazon medium instance

    - by user644745
    I see a drastic difference in the performance numbers when I run apache benchmark (ab) against my local machine vs. production hosted on an Amazon medium instance. The same concurrency level (5) and the same total number of requests (111) were run against both. Amazon has better memory than my local machine, but there are 2 CPUs in my local machine vs. 1 CPU in m1.medium. My internet speed is very low at the moment; I am getting a transfer rate of 25.29 KB/s. How can I improve the performance? I also don't know how to interpret Connect, Processing, Waiting and Total in the ab output.

    Here is localhost:

        Server Hostname:        localhost
        Server Port:            9999
        Document Path:          /
        Document Length:        7631 bytes
        Concurrency Level:      5
        Time taken for tests:   1.424 seconds
        Complete requests:      111
        Failed requests:        102
           (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
        Write errors:           0
        Total transferred:      860808 bytes
        HTML transferred:       847155 bytes
        Requests per second:    77.95 [#/sec] (mean)
        Time per request:       64.148 [ms] (mean)
        Time per request:       12.830 [ms] (mean, across all concurrent requests)
        Transfer rate:          590.30 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.5      0       1
        Processing:    14   63  99.9     43     562
        Waiting:       14   60  96.7     39     560
        Total:         14   63  99.9     43     563

    And this is production:

        Document Path:          /
        Document Length:        7783 bytes
        Concurrency Level:      5
        Time taken for tests:   33.883 seconds
        Complete requests:      111
        Failed requests:        0
        Write errors:           0
        Total transferred:      877566 bytes
        HTML transferred:       863913 bytes
        Requests per second:    3.28 [#/sec] (mean)
        Time per request:       1526.258 [ms] (mean)
        Time per request:       305.252 [ms] (mean, across all concurrent requests)
        Transfer rate:          25.29 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      290  297  14.0    293     413
        Processing:   897 1178  63.4   1176    1391
        Waiting:      296  606 135.6    588    1171
        Total:       1191 1475  66.0   1471    1684

    Read the article

  • Nginx Slower than Apache??

    - by ichilton
    Hi,

    I've just set up 2x identical Rackspace Cloud instances and am doing some comparisons and benchmarks to compare Apache and Nginx. I'm testing with a 3.4k PNG file and initially 512MB server instances, but have now moved to 1024MB server instances. I'm very surprised to see that whatever I try, Apache seems to consistently outperform Nginx... what am I doing wrong?

    Nginx:

        Server Software:        nginx/0.8.54
        Server Port:            80
        Document Length:        3400 bytes
        Concurrency Level:      100
        Time taken for tests:   2.320 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      3612000 bytes
        HTML transferred:       3400000 bytes
        Requests per second:    431.01 [#/sec] (mean)
        Time per request:       232.014 [ms] (mean)
        Time per request:       2.320 [ms] (mean, across all concurrent requests)
        Transfer rate:          1520.31 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0   11  15.7      3     120
        Processing:     1   35  76.9     20    1674
        Waiting:        1   31  73.0     19    1674
        Total:          1   46  79.1     21    1693

        Percentage of the requests served within a certain time (ms)
          50%     21
          66%     39
          75%     40
          80%     40
          90%     98
          95%    136
          98%    269
          99%    334
         100%   1693 (longest request)

    And Apache:

        Server Software:        Apache/2.2.16
        Server Port:            80
        Document Length:        3400 bytes
        Concurrency Level:      100
        Time taken for tests:   1.346 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      3647000 bytes
        HTML transferred:       3400000 bytes
        Requests per second:    742.90 [#/sec] (mean)
        Time per request:       134.608 [ms] (mean)
        Time per request:       1.346 [ms] (mean, across all concurrent requests)
        Transfer rate:          2645.85 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    1   3.7      0      27
        Processing:     0    3   6.2      1      29
        Waiting:        0    2   5.0      1      29
        Total:          1    4   7.0      1      29

        Percentage of the requests served within a certain time (ms)
          50%      1
          66%      1
          75%      1
          80%      1
          90%     17
          95%     19
          98%     26
          99%     27
         100%     29 (longest request)

    I'm currently using worker_processes 4; and worker_connections 1024; but I've tried and benchmarked different values and see the same behaviour with all of them - I just can't get it to perform as well as Apache, and from what I've read previously, I'm shocked about this! Can anyone give any advice?

    Thanks,
    Ian

    Read the article

  • What is the main cause of a 500 Internal Server Error? [closed]

    - by Usman
    My website is hosted with a hosting company, and it returns a "500 Internal Server Error" many times. I have the following web server statistics:

        Successful requests: 127,310 (7,504)
        Average successful requests per day: 814 (1,071)
        Successful requests for pages: 24,949 (1,309)
        Average successful requests for pages per day: 159 (186)
        Failed requests: 3,499 (58)
        Redirected requests: 10,091 (114)
        Distinct files requested: 5,791 (556)
        Distinct hosts served: 5,107 (330)
        Data transferred: 4.28 gigabytes (190.56 megabytes)
        Average data transferred per day: 28.03 megabytes (27.22 megabytes)

    Can you tell my server's condition from this, or do I need to give other details? Thanks in advance.

    Read the article

  • How can I transfer files to a Kindle Fire with a Micro-USB cable?

    - by Jeff
    I'm running Ubuntu 11.10, and when I connect my Kindle Fire to my computer via micro USB, it is not recognized automatically. Other USB devices, such as my iPod and digital camera, are recognized just fine. It does not appear to be a USB power issue, since the Kindle Fire wakes up from sleeping when it is plugged in. I never get the message on the Kindle telling me it is ready to accept files from the computer, though.

    Here are the last 15 lines of dmesg after plugging the Kindle in:

        jeff@prime:~$ dmesg | tail -n 15
        [45918.269671] ieee80211 phy0: wl_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)
        [45929.072149] wlan0: no IPv6 routers present
        [46743.224217] usb 1-1: new high speed USB device number 5 using ehci_hcd
        [46743.364623] scsi8 : usb-storage 1-1:1.0
        [46744.366102] scsi 8:0:0:0: Direct-Access     Amazon   Kindle           0001 PQ: 0 ANSI: 2
        [46744.366356] scsi: killing requests for dead queue
        [46744.372494] scsi: killing requests for dead queue
        [46744.384510] scsi: killing requests for dead queue
        [46744.392348] scsi: killing requests for dead queue
        [46744.392731] scsi: killing requests for dead queue
        [46744.396853] scsi: killing requests for dead queue
        [46744.397214] scsi: killing requests for dead queue
        [46744.400795] scsi: killing requests for dead queue
        [46744.401589] sd 8:0:0:0: Attached scsi generic sg2 type 0
        [46744.407520] sd 8:0:0:0: [sdb] Attached SCSI removable disk

    And here are my mounted filesystems:

        jeff@prime:~$ df
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/sda1            298594984 174663712 108763480  62% /
        udev                   1407684         4   1407680   1% /dev
        tmpfs                   566924       896    566028   1% /run
        none                      5120         0      5120   0% /run/lock
        none                   1417308       300   1417008   1% /run/shm
        /home/jeff/.Private  298594984 174663712 108763480  62% /home/jeff

    I should note that, since I got Dropbox working on my Kindle, the USB connection is no longer strictly necessary, but as a matter of principle I'd love to get it working.

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting here.

    My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed as a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation.

    I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved.

    The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP).

    .htaccess in the root of domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC]
        RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301]

    .htaccess in the staging subdomain on domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} ^staging.revolver.coop$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration, as the visitor is potentially redirected twice; however, I find it to be a more granular method of control, as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!).

    If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn; this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes), and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google).

    You can also extend the IP address detection - firstly by using wildcards (^10\.2\.2\..* - the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .*), so you can specify specific ranges of IPs in a subnet, or entire subnets if you wish. You can also use square brackets: ^10\.2\.[1-255]\.[120-140]; ^10\.2\.[1-9]?[0-9]\.; ^10\.2\.1[0-1][0-9]\. etc. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1.1.1.1|2.2.2.2|3.3.3.3)$, and you can of course use square brackets to substitute octets or single digits again.

    NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one, otherwise mod_rewrite will interpret them as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2"... which is of course impossible! However, as far as I'm aware, this isn't necessary if you're using the ! negator to specify "and is not...".

    Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet is useful to pin up on your wall.

    Other referenced URLs:

        internetofficer.com/seo-tool/regex-tester/
        fantomaster.com/faarticles/rewritingurls.txt
        addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/

    Read the article

  • Is it possible to load balance requests from a single source?

    - by Shawn
    In our application, Server A establishes a TCP connection with Server B, then sends a large number of requests to Server B over that TCP connection. The request messages are XML-based. Server B needs to respond within a very short period, and it takes time to process the requests. So we hope a load balancer can be introduced so we can speed up processing by using multiple Server B's. This is not a web application. I did some research but failed to find a similar application of a load balancer. Can anyone tell me if there's a load balancer that can help in our application?
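    Generic TCP load balancers do exist; HAProxy in TCP mode is one example. A minimal sketch (addresses and ports invented for illustration) - with the caveat that a TCP balancer distributes connections, not the requests inside one connection, so Server A would need to open several connections to the balancer for the load to actually spread:

        listen serverb_pool
            bind *:5000
            mode tcp
            balance roundrobin
            # each *connection* from Server A sticks to one backend; requests
            # multiplexed on a single connection all reach the same Server B
            server b1 10.0.0.11:5000 check
            server b2 10.0.0.12:5000 check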

    Read the article

  • Why would the IIS UrlRewrite module continue redirecting requests after the rule is removed?

    - by Jon Norton
    Our application uses the IIS UrlRewrite module to temporarily redirect requests to a maintenance site during upgrades. We have seen a few instances where, even though the redirect rule has been removed, the server continues to redirect all requests according to the removed rule. There does not seem to be any pattern to this, and it has only occurred once or twice. We have taken the following steps to try to determine the cause of this behavior:

        - Verified that the original rule was a 307 temporary redirect
        - Requested the application from machines that had never requested it before
        - Used a different browser
        - Added and removed a dummy rule from the IIS management console
        - Checked the HTTP kernel cache using netsh http show cachestate
        - Modified the applicationHost.config file by hand (the rule was not still in the file; we just added a superfluous space)

    In the past when this has happened, we have been able to restart IIS and it solves the problem, but that is not always an option and we really want to figure out the root cause. How or why would UrlRewrite be caching the response and not responding to configuration changes?

    Read the article

  • Redirect HTTP requests based on subdomain address without changing accessed URL?

    - by tputkonen
    Let's say I have a domain, www.mydomain.com, and I ordered a new domain, abc.newdomain.com. Both domains are hosted at the same ISP, so currently requests to either address result in the same page being shown. I want to redirect all requests to abc.newdomain.com to the folder /wp, so that when users access abc.newdomain.com they see whatever is inside /wp without seeing the URL change.

    Questions:

        1) How can I achieve this using .htaccess?
        2) How can I prevent users from directly accessing the /wp directory (meaning that www.mydomain.com/wp would be blocked)?
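    A possible starting point covering both questions in a single .htaccess at the shared web root - a sketch (untested; domain names taken from the question):

        RewriteEngine On

        # 1) serve abc.newdomain.com from /wp via an internal rewrite
        #    (no R flag), so the address bar never changes
        RewriteCond %{HTTP_HOST} ^abc\.newdomain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/wp/
        RewriteRule ^(.*)$ /wp/$1 [L]

        # 2) refuse direct requests to /wp made via the old domain
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
        RewriteRule ^wp(/.*)?$ - [F]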

    Read the article

  • How do I find out what's preventing DELETE requests from working in IIS 7.5 and IIS 8?

    - by Simon
    Our site has an MVC REST API. Recently, both the live servers and my development machine stopped accepting DELETE requests, instead returning a 501 Not Implemented response. On my development machine, which is Windows 7 running IIS 7.5, the solution was to add these lines to our Web.config, under system.webServer / handlers:

        <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
        ...
        <add name="ExtensionlessUrlHandler-Integrated-4.0"
             path="*."
             verb="GET,HEAD,POST,DEBUG,PUT,DELETE"
             type="System.Web.Handlers.TransferRequestHandler"
             resourceType="Unspecified"
             requireAccess="Script"
             preCondition="integratedMode,runtimeVersionv4.0" />

    However, this didn't work on any of our live servers; not on Server 2008 + IIS 7.5 and not on Server 2012 + IIS 8. There are no verbs set up in Request Filtering, and WebDAV is not installed on any of our live servers. The error page gives no further information, and nothing gets recorded in the logs. How do I find out what's preventing DELETE requests from working in IIS 7.5 and IIS 8?

    Read the article

  • How do I get Outlook (via Exchange) to accept Thunderbird/Lightning meeting requests?

    - by user39646
    The Lightning/1.0b1 add-on to Thunderbird/3.0.4 has no problem accepting meeting requests sent from my network Outlook session. However, meeting requests sent from an email address hosted on a POP server, to be delivered to my Outlook mailbox, never seem to arrive in any fashion: nothing in my Outlook Inbox or messages, and nothing on my calendar. I was expecting at least a standard email, perhaps with an .ics attachment, to arrive just like regular Thunderbird-originated email does, but no dice. Any ideas on what I'm doing wrong?

    Read the article

  • Simple HTTP server that will send the same file for all requests?

    - by Rory McCann
    I need to debug an XML-RPC application, which sends XML replies over HTTP. I have a sample XML reply (i.e. data from the server, sent to the client that isn't working), and I'd like to debug my application. Ideally I'd like a simple HTTP server that will serve one file in reply to all requests. Someone requests /? Send them this file. Someone makes a POST to /server/page.php with a certain cookie? Just send them this file. I don't care about multithreading or security; I will only need to use this for a few hours to debug, and I have root on the machine. I'm hoping there's something as easy to use as this:

        simple_http_server -p 12445 -f my_test_file

    I'm aware of Python's SimpleHTTPServer module, but I'm not sure how to make it work in this case.
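    SimpleHTTPServer itself maps URLs to files on disk, but its base classes make the one-file-for-everything case short. A minimal sketch (Python 2, matching the era of the module named above; script name, port and file path are placeholders):

        #!/usr/bin/env python
        # serve_one_file.py -- answer every GET/POST with the same file
        # usage: python serve_one_file.py 12445 my_test_file
        import sys
        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

        PORT, PATH = int(sys.argv[1]), sys.argv[2]

        class OneFileHandler(BaseHTTPRequestHandler):
            def _reply(self):
                body = open(PATH, 'rb').read()
                self.send_response(200)
                self.send_header('Content-Type', 'text/xml')
                self.send_header('Content-Length', str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            do_GET = _reply   # same handler regardless of method or path
            do_POST = _reply

        HTTPServer(('', PORT), OneFileHandler).serve_forever()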

    Read the article

  • What PowerShell/WSMan clients or queries are consuming more than 1000 requests per 2 seconds?

    - by makerofthings7
    The Exchange 2010 remote administration tools are complaining with the following error:

        [txexmb02.ibm.com] Connecting to remote server failed with the following error message: The WS-Management service cannot process the request. The system load quota of 1000 requests per 2 seconds has been exceeded. Send future requests at a slower rate or raise the system quota. The next request from this user will not be approved for at least 558475776 milliseconds. For more information, see the about_Remote_Troubleshooting Help topic.
            + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException
            + FullyQualifiedErrorId : PSSessionOpenFailed
        VERBOSE: Connecting to TXEXHC02.ibm.com

    The help document this error refers to says this is a WS-Man error. We're running SCOM 2007 R2 and I'm thinking that is increasing the query count, but I need to prove it.
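    One way to see who currently holds WS-Man sessions on the throttled box is to enumerate the active shells - a hedged suggestion (run in an elevated PowerShell on the server; the exact property names can vary by OS version):

        # list active WS-Man shells with their owners and client IPs
        Get-WSManInstance -ResourceURI shell -Enumerate |
            Select-Object Owner, ClientIP, ShellRunTime, State

    If SCOM's action account shows up repeatedly in the Owner column, that would support the theory that it is driving the query count.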

    Read the article

  • How does Apache process several requests at a time?

    - by Vicenç Gascó
    A short question: if 10 requests hit Apache, does it process them one by one (so when R3 finishes, it starts to run R4), or does it fire 10 processes/threads/whatever and resolve them simultaneously?

    Now some background: I have a PHP script that takes up to two minutes to do some processing. My question is: while a client is waiting for those 2 minutes, are all the other clients' requests being processed, or are they also waiting for this one to end?

    By the way, if there are simultaneous requests, how can I handle them? Let's say, put a limit on them, or a limit on resources consumed. For instance, I want the server to spend 80% of its capacity serving the webapp, and just 20% on those long operations... because I'm in no hurry to end them. Don't know if it matters, but it's all in PHP. Thanks in advance!!
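    For reference, Apache does handle requests concurrently: the prefork MPM (the usual choice with mod_php) keeps a pool of worker processes, one request each, up to a configurable ceiling. A sketch of the relevant httpd.conf knobs for Apache 2.2 (values are illustrative, not recommendations):

        <IfModule mpm_prefork_module>
            StartServers          5     # processes started at boot
            MinSpareServers       5     # idle processes kept ready
            MaxSpareServers      10
            MaxClients          150     # hard cap on simultaneous requests
            MaxRequestsPerChild   0     # 0 = never recycle a process by request count
        </IfModule>

    A slow PHP script therefore only ties up its own process; requests beyond MaxClients queue in the listen backlog rather than failing outright.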

    Read the article

  • g-wan - reproducing the performance claims

    - by user2603628
    Using gwan_linux64-bit.tar.bz2 under Ubuntu 12.04 LTS, unpacking and running gwan, then pointing wrk at it (using a null file null.html):

        wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html
        Running 20s test @ http://127.0.0.1:8080/null.html
          2 threads and 100 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency    11.65s     5.10s   13.89s   83.91%
            Req/Sec     3.33k     3.65k   12.33k   75.19%
          125067 requests in 20.01s, 32.08MB read
          Socket errors: connect 0, read 37, write 0, timeout 49
        Requests/sec:   6251.46
        Transfer/sec:      1.60MB

    ... very poor performance; in fact there seems to be some kind of huge latency issue. During the test gwan is 200% busy and wrk is 67% busy.

    Pointing at nginx, wrk is 200% busy and nginx is 45% busy:

        wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1/null.html
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency   371.81us  134.05us  24.04ms   91.26%
            Req/Sec    72.75k     7.38k  109.22k    68.21%
          2740883 requests in 20.00s, 540.95MB read
        Requests/sec: 137046.70
        Transfer/sec:     27.05MB

    Pointing weighttp at nginx gives even faster results:

        /usr/local/bin/weighttp -k -n 2000000 -c 500 -t 3 http://127.0.0.1/null.html
        weighttp - a lightweight and simple webserver benchmarking tool

        starting benchmark...
        spawning thread #1: 167 concurrent requests, 666667 total requests
        spawning thread #2: 167 concurrent requests, 666667 total requests
        spawning thread #3: 166 concurrent requests, 666666 total requests
        progress:  9% done
        progress: 19% done
        progress: 29% done
        progress: 39% done
        progress: 49% done
        progress: 59% done
        progress: 69% done
        progress: 79% done
        progress: 89% done
        progress: 99% done

        finished in 7 sec, 13 millisec and 293 microsec, 285172 req/s, 57633 kbyte/s
        requests: 2000000 total, 2000000 started, 2000000 done, 2000000 succeeded, 0 failed, 0 errored
        status codes: 2000000 2xx, 0 3xx, 0 4xx, 0 5xx
        traffic: 413901205 bytes total, 413901205 bytes http, 0 bytes data

    The server is a virtual 8-core dedicated server (bare metal) under KVM. Where do I start looking to identify the problem gwan is having on this platform? I have tested lighttpd, nginx and node.js on this same OS, and the results are all as one would expect. The server has been tuned in the usual way: expanded ephemeral ports, increased ulimits, adjusted TIME_WAIT recycling, etc.

    Read the article

  • How to reduce the number of cert validation requests... (IE is killing me slowly)

    - by scooterhanson
    On a customer's internal network, I can make a request to my SSL site using IE6 SP1 (on Windows 2000) and see one cert validation request, but if I use IE6 SP2 (on XP), 13 separate cert validation requests get fired off. Needless to say, this slows down my page load a lot. Firefox loads the page just fine with no unnecessary cert validation requests. The server is Apache running a pretty new LAMPP stack. All the server certificate / CA chain configurations seem to be fine (users can authenticate with trusted certs, the system can communicate with other systems using that server cert, etc.). Is there anything I can do from a configuration standpoint? Any other ideas at all?

    Read the article

  • Is it possible to tell if there are any ongoing 'GET' requests with javascript?

    - by Lavabeams
    Is it possible to tell if there are any ongoing 'GET' requests with JavaScript? I have a feeling that it is not. Basically, I don't want to make a separate request while the other, "more important" requests are in progress, as this one is fairly heavy. So I was curious whether it is possible to tell if there are currently 'GET' requests going; if so, I can tell my function to hold off on this update and try again in 10-15 secs. Any information would be appreciated.
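    There is no built-in global list of in-flight requests, but if your code issues its GETs through one helper, a simple counter gives you the same signal. A sketch (plain browser JavaScript; the function names are made up):

        // count XHRs currently in flight through this helper
        var pending = 0;

        function trackedGet(url, onDone) {
            var xhr = new XMLHttpRequest();
            pending += 1;
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4) {   // finished, success or error
                    pending -= 1;
                    onDone(xhr);
                }
            };
            xhr.open('GET', url, true);
            xhr.send(null);
        }

        // run the heavy update only when nothing else is in flight,
        // otherwise retry after a delay (e.g. 10-15 seconds)
        function runWhenIdle(heavyUpdate, retryMs) {
            if (pending === 0) { heavyUpdate(); }
            else { setTimeout(function () { runWhenIdle(heavyUpdate, retryMs); }, retryMs); }
        }

    (If everything already goes through jQuery, its jQuery.active counter - the one driving ajaxStart/ajaxStop - exposes the same information.)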

    Read the article

  • Does @import in CSS result in additional http requests?

    - by Mo Boho
    I have an ecommerce site that has about 8 CSS files linked from the header, resulting in 8 separate HTTP requests to the server. I consolidated all the CSS files into one big one - a 67kb (!) file - to cut the HTTP requests for CSS down to 1. I'm finding a CSS file of this size a little unmanageable, given that I'm constantly performing updates on the site. My concern is that users may catch me in the middle of an update and see a non-styled page when moving from page to page, because 67kb still takes a good 2-3 seconds before it is successfully placed on the remote server via FTP. My question is: does using @import within this large CSS file, to break it up into smaller, more manageable files, take us back to the original 8 HTTP requests when the page is loaded? Or are @imports in CSS handled differently somehow?
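    For reference, each @import does trigger its own HTTP request - the browser fetches every imported sheet separately, and historically some browsers (notably older IE) serialized those fetches rather than downloading them in parallel. So a master file like the sketch below (hypothetical file names) still costs one request per imported file, plus one for the master itself:

        /* master.css - each line below is a separate HTTP request */
        @import url("reset.css");
        @import url("layout.css");
        @import url("typography.css");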

    Read the article
