Search Results

Search found 55652 results on 2227 pages for 'http response'.


  • Default http/admin port in dropwizard project

    - by mithrandir
    I have a Dropwizard project with a config.yml file at the ROOT of the project (at the same level as pom.xml). In it I have specified the HTTP ports to be used, as follows:

        http:
          port: 9090
          adminPort: 9091

    I have the following code in my TestService.java file:

        public class TestService extends Service<TestConfiguration> {
            @Override
            public void initialize(Bootstrap<TestConfiguration> bootstrap) {
                bootstrap.setName("test");
            }

            @Override
            public void run(TestConfiguration config, Environment env) throws Exception {
                // initialize some resources here..
            }

            public static void main(String[] args) throws Exception {
                new TestService().run(new String[] { "server" });
            }
        }

    I expect the config.yml file to be used to determine the HTTP port; however, the app always starts with the default ports 8080 and 8081. Also note that I am running this from Eclipse. Any insight into what I am doing wrong here?
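
    A minimal sketch of the likely fix (assuming the 0.6.x-era Service API shown above): Dropwizard only reads a YAML file that is passed as the argument after the "server" command; with "server" alone it falls back to the built-in 8080/8081 defaults. When launching from Eclipse, a relative path is resolved against the working directory, which for a Maven project defaults to the project root where config.yml lives:

        public static void main(String[] args) throws Exception {
            // pass the config file explicitly; without it Dropwizard uses its default ports
            new TestService().run(new String[] { "server", "config.yml" });
        }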

  • DNS lookup failures while accessing my website some proxy error

    - by Bond
    Here is the situation: until this morning, everything had been working perfectly. For the past 6 months many of my domains were accessible as:

        http://site1.myserver.com
        http://site2.myserver.com
        http://site3.myserver.com
        http://site4.myserver.com

    All of these are reverse proxy configurations, each with some applications behind it. This morning people reported that http://site1.myserver.com/app1 is not working (http://site1.myserver.com itself is accessible), that http://site2.myserver.com and http://site3.myserver.com are accessible, and that http://site4.myserver.com is not accessible at all. In the past 6 months I have not changed any of these Apache configurations (things were working perfectly as they were). The error shown in the browser while accessing http://site1.myserver.com/app1 is:

        Proxy Error
        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /app1.
        Reason: DNS lookup failure for: myserver.com

    and the same error appears for http://site4.myserver.com. So what should I check? I have checked all the Apache logs to the extent I could, and I see:

        192.168.1.25 - - [10/Jan/2011:14:50:48 +0530] "GET /app1 HTTP/1.1" 502 531 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"
        [Mon Jan 10 14:27:42 2011] [error] (113)No route to host: proxy: HTTP: attempt to connect to 192.168.1.3:80 (192.168.1.3) failed
        [Mon Jan 10 14:27:42 2011] [error] ap_proxy_connect_backend disabling worker for (192.168.1.3)
        [Mon Jan 10 14:27:44 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
            (the "disabled connection" line repeats a dozen times between 14:27:44 and 14:27:48)
        [Mon Jan 10 14:35:29 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1
            (repeated twice at 14:35:30, then at 14:50:30 and 14:50:48)

    and for site4.myserver.com I get:

        [Mon Jan 10 14:57:40 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico
            (twice at 14:57:40, again at 14:57:43)
        [Mon Jan 10 15:02:38 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /
        [Mon Jan 10 15:03:04 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/
            (similar entries for / and /favicon.ico continue through 15:26:03, from both the external IP and 192.168.1.25)
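
    A hedged debugging sketch (generic commands, nothing project-specific): mod_proxy resolves the ProxyPass backend hostname through the system resolver, so "DNS lookup failure" usually means the resolver on the proxy host stopped answering for that name, rather than anything changing inside the Apache config:

        # on the proxy host
        host myserver.com            # can the system resolver still answer?
        host site4.myserver.com
        cat /etc/resolv.conf         # did the nameserver list change overnight?
        getent hosts 192.168.1.3     # check the backend by IP as well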

  • cURL hangs trying to upload file from stdin

    - by SidneySM
    I'm trying to PUT a file with cURL. This hangs:

        curl -vvv --digest -u user -T - https://example.com/file.txt < file

    This does not:

        curl -vvv --digest -u user -T file https://example.com/file.txt

    What's going on?

        * About to connect() to example.com port 443 (#0)
        *   Trying 0.0.0.0... connected
        * Connected to example.com (0.0.0.0) port 443 (#0)
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server key exchange (12):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DHE-RSA-AES256-SHA
        * Server certificate:
        *   subject: serialNumber=jJakwdOewDicmqzIorLkKSiwuqfnzxF/, C=US, O=*.example.com, OU=GT01234567, OU=See www.example.com/resources/cps (c)10, OU=Domain Control Validated - ExampleSSL(R), CN=*.example.com
        *   start date: 2010-01-26 07:06:33 GMT
        *   expire date: 2011-01-28 11:22:07 GMT
        *   common name: *.example.com (matched)
        *   issuer: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
        * SSL certificate verify ok.
        * Server auth using Digest with user 'user'
        > PUT /file.txt HTTP/1.1
        > User-Agent: curl/7.19.4 (universal-apple-darwin10.0) libcurl/7.19.4 OpenSSL/0.9.8l zlib/1.2.3
        > Host: example.com
        > Accept: */*
        > Transfer-Encoding: chunked
        > Expect: 100-continue
        >
        < HTTP/1.1 100 Continue
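
    A hedged reading of that trace, plus an untested workaround sketch: with -T - curl cannot know the payload size, so it uploads with Transfer-Encoding: chunked (visible in the request headers above), and Digest auth normally needs to resend the whole request once the 401 challenge arrives, which it cannot do when the body came from an already-consumed, non-seekable stdin. Staging the pipe in a temporary file restores both a known Content-Length and a rewindable body:

        tmp=$(mktemp)
        cat > "$tmp"                 # consume stdin once
        curl -vvv --digest -u user -T "$tmp" https://example.com/file.txt
        rm -f "$tmp"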

  • How is a subdomain passed to the webserver?

    - by Joshua Frank
    I know that DNS resolves an address like example.com to an IP address like 11.22.33.44, but I'm a little confused about how subdomains are resolved. When you type http://subdomain.example.com, what actually gets passed to the server at 11.22.33.44? In other words, example.com = 11.22.33.44, but subdomain.example.com/path = ??? Are "subdomain" and "path" passed as HTTP headers, or mapped in the URL in some way, or what? Thanks in advance.

    Edit: If I'm understanding correctly, BloodPhilia says that subdomain.example.com actually is a different domain that in principle could resolve to a totally different IP. But if that's so, then what about hosts that have huge numbers of (what look like) subdomains which actually map to some path on the site? For instance, Blogspot hosts millions of blogs, and they all look like this:

        aaa.blogspot.com
        bbb.blogspot.com
        ...millions more...
        yyy.blogspot.com
        zzz.blogspot.com

    Those are clearly not subdomains with their own IPs, but rather some mapping like aaa.blogspot.com -> www.blogspot.com/aaa. How is this accomplished? What actually gets passed to the web server at blogspot.com?
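
    For what it's worth, a sketch of what goes over the wire (hostnames assumed from the question): a wildcard DNS record such as *.blogspot.com can point every subdomain at the same IP, and the client then names the exact host it wants in the HTTP/1.1 Host header. The path stays in the request line; the subdomain appears only in Host, which is what lets one server route millions of "subdomains" internally:

        GET /2011/01/some-post.html HTTP/1.1
        Host: aaa.blogspot.com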

  • add_header directives in location overwriting add_header directives in server

    - by user64204
    Using nginx 1.2.1 I am able to add multiple headers using add_header, as follows:

        server {
            listen 80;
            server_name localhost;
            root /var/www;
            add_header Name1 Value1;   # <=== HERE
            add_header Name2 Value2;   # <=== HERE
            location / {
                echo "Nginx localhost site";
            }
        }

        GET / HTTP/1.1
        200 OK
        Name1: Value1
        Name2: Value2

    However, as soon as I use the add_header directive inside location, the add_header directives under server are ignored:

        server {
            listen 80;
            server_name localhost;
            root /var/www;
            add_header Name1 Value1;   # <=== HERE
            add_header Name2 Value2;   # <=== HERE
            location / {
                add_header Name3 Value3;   # <=== HERE
                add_header Name4 Value4;   # <=== HERE
                echo "Nginx localhost site";
            }
        }

        GET / HTTP/1.1
        200 OK
        Name3: Value3
        Name4: Value4

    The documentation says that both server and location are valid contexts, and it doesn't state that using add_header in one prevents using it in the other. Q1: Do you know if this is a bug or the intended behaviour, and why? Q2: Do you see other options to get this fixed, other than using the HttpHeadersMoreModule module?
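
    A sketch of the standard workaround (same header names as above): this is nginx's intended inheritance model for array-type directives; a block that declares any add_header of its own replaces, rather than extends, the inherited set, so the server-level headers have to be repeated in the location:

        location / {
            add_header Name1 Value1;
            add_header Name2 Value2;
            add_header Name3 Value3;
            add_header Name4 Value4;
            echo "Nginx localhost site";
        }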

  • Enable gzip on Nginx

    - by Rob Wilkerson
    Yes, I know there are a lot of other questions out there that seem exactly like this. I think I must have looked at all of them. Twice. In desperation, I'm adding another in case my specific configuration is the issue. Bear with me. First, the question: what do I need to do to get gzip compression to work? I have an Ubuntu 12.04 server running nginx 1.1.19, installed from the following packages: nginx, nginx-common, nginx-full. The http block of my nginx.conf looks like this:

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_disable "msie6";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Both PageSpeed and YSlow report that I need to enable compression. I can see that the request headers include Accept-Encoding: gzip,deflate,sdch, but the response headers lack the corresponding Content-Encoding header. I've tried various other config values (gzip_vary on, gzip_http_version 1.0, etc.), but no joy. As far as I know, I can only assume that nginx was compiled with compression support, but if there's any way to verify that, I'd love to know. If anyone sees anything I'm missing or can suggest further debugging, please let me know. I'm no sysadmin and I'm new to nginx, so I've exhausted everything I can think of or have read. Thanks.
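
    Two hedged checks worth adding here (a sketch, not verified on this box): the gzip module is compiled in unless the build used --without-http_gzip_module, which nginx -V will reveal, and by default nginx only compresses text/html, so CSS and JavaScript have to be listed explicitly:

        # verify the build: no output here means the gzip module is compiled in
        nginx -V 2>&1 | grep -o without-http_gzip_module

        # in the http block: compress more than the text/html default
        gzip_types text/css text/javascript application/javascript application/x-javascript application/json;
        gzip_vary on;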

  • How to use basic auth for single file in otherwise forbidden Apache directory?

    - by mit
    I want to allow access to a single file in a directory that is otherwise forbidden. This did not work:

        <VirtualHost 10.10.10.10:80>
            ServerName example.com
            DocumentRoot /var/www/html
            <Directory /var/www/html>
                Options FollowSymLinks
                AllowOverride None
                order allow,deny
                allow from all
            </Directory>
            # disallow the admin directory:
            <Directory /var/www/html/admin>
                order allow,deny
                deny from all
            </Directory>
            # but allow this single file:
            <Files /var/www/html/admin/allowed.php>
                AuthType basic
                AuthName "private area"
                AuthUserFile /home/webroot/.htusers
                Require user admin1
            </Files>
            ...
        </VirtualHost>

    When I visit http://example.com/admin/allowed.php I get the Forbidden message of the http://example.com/admin/ directory. How can I make an exception for allowed.php? If that is not possible, maybe I could enumerate all forbidden files in another Files directive? Let's say admin/ also contains user.php and admin.php, which should be forbidden in this virtual host.
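
    A sketch of the usual fix (same paths and users as above, untested): <Files> sections match bare filenames, not full paths, so the path-qualified <Files> above never matches anything. Nesting a <Files> inside the admin <Directory> block and re-opening access there should work:

        <Directory /var/www/html/admin>
            order allow,deny
            deny from all
            <Files "allowed.php">
                order deny,allow
                allow from all
                AuthType basic
                AuthName "private area"
                AuthUserFile /home/webroot/.htusers
                Require user admin1
            </Files>
        </Directory>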

  • nginx with SSL: I get a 403 and a "directory index of '...dir...' is forbidden" log message; works fine with an unencrypted connection

    - by user72464
    As mentioned in the title, nginx worked fine with my Rails app until I tried to add the SSL server. The unencrypted connection still works, but SSL always returns a 403 page, with the following line in the error log:

        directory index of "/home/user/rails/" is forbidden, client: [my ip], server: _, request: "GET / HTTP/1.1", host: "[server ip]"

    Below is my nginx.conf server block:

        server {
            listen 80;
            listen 443 ssl;
            ssl_certificate /etc/ssl/server.crt;
            ssl_certificate_key /etc/ssl/server.key;
            client_max_body_size 4G;
            keepalive_timeout 5;
            root /home/user/rails;
            try_files $uri/index.html $uri.html $uri @app;
            location @app {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://0.0.0.0:8080;
            }
            error_page 500 502 503 504 /500.html;
            location = /500.html {
                root /home/user/rails;
            }
        }

    The /home/user/rails directory and its parent are readable by everyone and belong to the user nginx. The certificate and key files have the following permissions:

        -rw-r--r-- 1 nginx root 830 Nov  8 09:09 server.crt
        -rw--w---- 1 nginx root 887 Nov  8 09:09 server.key

    Any clue?
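
    A hedged place to look (generic commands, nothing verified): since both listeners share this server block, a 403 that appears only over SSL often means the HTTPS request is being answered by a different server block entirely, such as a leftover default site that also listens on 443 (note the server: _ in the log line). Comparing the configs and the two schemes directly may narrow it down:

        grep -rn "listen\|server_name" /etc/nginx/sites-enabled/
        curl -i  http://<server-ip>/
        curl -ik https://<server-ip>/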

  • Trouble with port 80 nating (XenServer to WebServer VM)

    - by Lain92
    I have a rented server running XenServer 6.2. I only have one public IP, so I set up NAT to redirect ports 22 and 80 to my WebServer VM. I have a problem with the port 80 redirection: when I use it, I can reach the WebServer's Apache from outside, but the WebServer itself loses web access. I get this kind of error:

        W: Failed to fetch http://http.debian.net/debian/dists/wheezy/main/source/Sources 404 Not Found [IP: 46.4.205.44 80]

    although I can ping anywhere. XenserverIP:80 is redirected to 10.0.0.2:80 (the WebServer). This is the port 80 redirection part of my XenServer iptables:

        -A PREROUTING -i xenbr1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2:80
        -A INPUT -i xenbr1 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        COMMIT

    What is wrong in my configuration? Is there a problem with XenServer? Thanks for your help!

    Edit: Here is my iptables full content:

        *nat
        :PREROUTING ACCEPT [51:4060]
        :POSTROUTING ACCEPT [9:588]
        :OUTPUT ACCEPT [9:588]
        -A PREROUTING -p tcp -m tcp --dport 1234 -j DNAT --to-destination 10.0.0.2:22
        -A PREROUTING -i xenbr1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2:80
        -A POSTROUTING -s 10.0.0.0/255.255.255.0 -j MASQUERADE
        COMMIT
        *filter
        :INPUT ACCEPT [5434:4284996]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [5014:6004729]
        -A INPUT -i xenbr1 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        COMMIT

    Update: I have a second server with IP 10.0.0.3, and it has the same problem that 10.0.0.2 has.
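
    A hedged reading of the rules above (the corrected rule is a sketch, with a placeholder public IP): if xenbr1 is the bridge the VMs sit on, then the VMs' own outbound HTTP requests also arrive on xenbr1 with destination port 80, get DNATed back to 10.0.0.2:80, and every outbound fetch (apt, etc.) lands on the local Apache instead, which would explain both the 404s and why both VMs are affected. Restricting the DNAT to traffic addressed to the host's public IP should leave outbound traffic alone:

        # replace the interface match with a destination match (203.0.113.10 is a placeholder)
        -A PREROUTING -d 203.0.113.10 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2:80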

  • .htaccess issue on Apache Web Server in Ubuntu VM

    - by Neon Flash
    I just installed the Apache web server on Ubuntu 11.04 in VMware Workstation. I created a basic HTML page, named it index.html, and placed it in the /var/www directory (the document root). I am able to access this page from my host OS (Windows 7) by pointing the browser to http://192.168.2.2/index.html, where 192.168.2.2 is the IP address of the Ubuntu VM. Next, to test various .htaccess configurations, I created a new directory in /var/www called members. Inside this directory, I created and placed a .htaccess file with the following configuration:

        AuthUserFile /www/Neon/auth/.htpasswd
        AuthName "neon's home"
        AuthType Basic
        require valid-user
        IndexIgnore */*

    I created the directory path /var/www/Neon/auth/ and placed a .htpasswd file inside it. To fill in the username and hash, I created a username "neon", calculated the DES hash of a password, and put them in the .htpasswd file in the format username:hash. Now, when I try to access http://192.168.2.2/members/ it does not prompt me for a username and password with a popup box. Instead it just displays the index.html placed inside the members directory. I would like to get this configuration working :)
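
    Two hedged observations (the snippet below is a sketch based on the paths in the question): Ubuntu's stock Apache config ships AllowOverride None for /var/www, which makes Apache ignore .htaccess files entirely, and AuthUserFile should point at the real filesystem path, which here is /var/www/Neon/auth/.htpasswd rather than /www/Neon/auth/.htpasswd:

        # in the site config (e.g. /etc/apache2/sites-available/default)
        <Directory /var/www/members>
            AllowOverride AuthConfig
        </Directory>

        # /var/www/members/.htaccess
        AuthUserFile /var/www/Neon/auth/.htpasswd
        AuthName "neon's home"
        AuthType Basic
        Require valid-user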

  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the HTTP server (I don't know much about reverse proxies, I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the HTTP headers I send. I tried this config (taken from the docs), but it didn't work:

        # /etc/nginx/nginx.conf
        fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

        # /etc/nginx/sites-available/website
        server {
            fastcgi_cache website;
            #fastcgi_cache_valid 200 302 1h;
            #fastcgi_cache_valid 301 1d;
            #fastcgi_cache_valid any 1m;
            #fastcgi_cache_min_uses 1;
            #fastcgi_cache_use_stale error timeout invalid_header http_503;
            add_header X-Cache $upstream_cache_status;
        }

    I always get "MISS" and the cache dir is empty. If I uncomment the other directives I get a HIT, but I don't want those "dumb" settings; I need to control caching from my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 secs. Instead, nginx will store it for 1h because of those directives. I was also wondering whether I should try proxy_cache instead; I'm not sure what the difference is. The docs for both the fastcgi and proxy modules say this:

        The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48,
        Cache-Control: private and no-store only since 0.7.66, though. Vary handling is not implemented.

    nginx version: nginx/1.1.19. Any thoughts? PS: I also have the reverse proxy offered by Symfony2 (which I turn off to use nginx's). It interprets the headers correctly, so I think I'm doing it right.
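
    One hedged thing to rule out (a sketch, not verified against this backend): nginx refuses to cache an upstream response that carries a Set-Cookie header, and PHP sessions add one automatically, which would produce exactly this always-MISS behaviour even with a cacheable Cache-Control. Either stop starting a session on cacheable pages, or, if your nginx build accepts it and losing the cookie is acceptable, tell nginx to disregard the header:

        server {
            fastcgi_cache website;
            # assumption: these responses never need to set cookies
            fastcgi_ignore_headers Set-Cookie;
            add_header X-Cache $upstream_cache_status;
        }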

  • Apache intermittently aborting requests

    - by Adam Phillips
    I have just been dealing with a problem whereby HTTP requests were being aborted, seemingly at random. On any given page of the website, some of the assets (img, css, etc.) failed to load. On a refresh, the page might work fine, the same set of assets might fail to load, or different assets might fail. The net tab in Firefox showed 'Aborted' in the HTTP status code column for the failed assets, even though, in the case of images, the image previews still worked. There was nothing in any of the Apache logs about the requests that failed, but since it seemed to point to an Apache issue, we restarted Apache. The first time we tried, it made no difference, but about 10 minutes later, in the absence of a better solution, we tried again. Bizarrely, the problem disappeared immediately. So now the site seems to be running fine again, but it's rather unsettling, both the intermittent nature of the problem and the lack of an explanation for its resolution. Has anyone seen anything like this before, and if so, did you find the reason behind it? Many thanks.

  • I need help with 2D collision response (of stacking rotating polygons, with friction and gravity, for a game)

    - by Register Sole
    Hi, I am looking for suggestions on how to write a collision response for game programming purposes (so not a scientific simulation). I am dealing with 2D polygons that are rotating, and I want them to be able to stack. I also want friction and gravity. I have a detection mechanism that returns the separating axis, how far the polygons overlap, and up to two points of contact.

    For the response, I am currently using an impulse-based approach, whose main idea is:

    1. Find the separating axis, the length of the overlap, and the point of contact (if there are two, pick a random point between them to simulate an averaged force; I believe there are better ways than this).
    2. Separate the objects (modifying their positions, taking their masses into account; I do not separate them completely, though, so I can keep track that they are colliding, to reduce jitter).
    3. Calculate the normal force based on the coefficient of restitution, as if there were no friction.
    4. Calculate friction, as if there were no normal force. I also assume that the direction of the friction is the same throughout the collision.
    5. Apply the two forces (which gives a rather inaccurate result, since each force is calculated as if the other were not present; for non-rotating bodies, though, this method is exact).

    I am aware that this method requires the coefficient of friction to be sufficiently small, due to the assumption that the direction of friction stays the same during a collision. The result is visually satisfying as long as gravity is not present. However, when there is gravity, objects on the ground jitter and drift (even with a zero coefficient of restitution)! The same happens with stacking objects, and a larger coefficient of restitution and gravity increase the jittering.

    I hope you can help me with this. Some things I would like to know more about: how to handle a collision with two points of contact (how do I end up with an object sitting still on the ground?), how to reduce, and if possible prevent, jitter and drift (do people use the most accurate method possible, or is there a trick to overcome this?), and how to handle collisions between multiple objects (for example, in the case of stacking objects, how do I check collisions between all of them and keep them all stable at every frame so they don't jitter?). A total reformulation of my algorithm is also welcome, as long as it works. Another thing to note is that I am not making a physics game, so I only need a visually satisfying response (though a realistic response is preferable, if it is not performance-heavy). But jittering and drifting objects on flat ground are not at all acceptable. In addition, I am a physics student, so feel free to talk about impulse and whatever else is needed. Finally, I'm sorry for the long post. I tried to be as concise as I can. Thank you for reading it!

    EDIT: It seems what I didn't manage to come up with all this time was to treat resting contact as a class of its own, and how to solve it. Currently reading the paper suggested by Jedediah. More suggestions on the topic are welcome :)

    CASE CLOSED: After reading various papers referenced in that paper, I successfully implemented the simultaneous impulse method (referring to the original paper by Erin Catto, [Catt05]). Thanks maaaan!! The paper is wonderful. The current system is visibly much better than the previous one. I still haven't separated resting contact into a class of its own, though, which brings me to my next question. Love you all! Haha (sorry, I'm just so happy, thanks to you).
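
    Since the poster landed on Catto's sequential/simultaneous impulse method, here is a minimal sketch of its core normal-impulse step (names are assumed; rotation and the position-correction bias term are omitted for brevity): run this for every contact, several times per frame, and clamp the accumulated impulse rather than each iteration's impulse, which is the detail that lets resting stacks settle without jitter.

        final class Body {
            double vx, vy;    // linear velocity
            double invMass;   // 1/mass; 0 for static bodies such as the ground
        }

        final class Contact {
            Body a, b;
            double nx, ny;       // unit contact normal, pointing from a to b
            double accumulated;  // accumulated normal impulse, kept >= 0
        }

        final class Solver {
            // one solver iteration for one contact
            static void solveNormal(Contact c, double restitution) {
                // relative velocity along the contact normal
                double vn = (c.b.vx - c.a.vx) * c.nx + (c.b.vy - c.a.vy) * c.ny;
                double invMassSum = c.a.invMass + c.b.invMass;
                if (invMassSum == 0.0) return;   // two static bodies
                double j = -(1.0 + restitution) * vn / invMassSum;

                // clamp the accumulated impulse, not the incremental one
                double old = c.accumulated;
                c.accumulated = Math.max(old + j, 0.0);
                j = c.accumulated - old;

                c.a.vx -= j * c.nx * c.a.invMass;
                c.a.vy -= j * c.ny * c.a.invMass;
                c.b.vx += j * c.nx * c.b.invMass;
                c.b.vy += j * c.ny * c.b.invMass;
            }
            // per frame: for (int i = 0; i < iterations; i++)
            //                for each contact c: solveNormal(c, e);
        }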

  • Redirect as a backup trick? w/o modifying DNS?

    - by acidzombie24
    I specifically looked up how to do something like this (see "Can you set a backup ip for your server in DNS?"), and the answer was basically that you can't. If I specify two IP addresses, could I somehow use an HTTP response header to make clients ignore one temporarily (say, for 5 minutes) and go to the other IP address? Or maybe I could play dead, though I'm unsure how to play dead using nginx. I would then like the backup to notice when the main box is down and become available as some kind of read-only server. I'm sure something like this has been implemented; I am just wondering how I might implement it with two boxes. I'm sure it isn't very difficult? How might I redirect traffic from a backup box to my main server without modifying the DNS?
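
    A sketch of one common shape for this (nginx on the backup box, with a placeholder IP and path): publish both A records, and have the backup box proxy everything to the primary, falling back to locally served read-only content when the primary stops answering:

        upstream primary {
            server 198.51.100.1;   # placeholder: the main box
        }
        server {
            listen 80;
            location / {
                proxy_pass http://primary;
                proxy_connect_timeout 2s;
                error_page 502 504 = @readonly;
            }
            location @readonly {
                root /var/www/readonly;   # placeholder: a static read-only mirror
            }
        }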

  • Virtualbox two networks slow

    - by Petr Marek
    I am running an Ubuntu Server guest on a Windows 7 host and am running a WEBrick server (RoR dev). If I have just a host-only network, everything works fine and the browser response is instant. However, if I add a second network adapter (NAT) so that the server can connect to the internet (for various updates etc.), host-to-guest access gets really slow. I can't use a bridged connection. I am using port 3000 (the RoR WEBrick server) and connecting to the guest via an internet browser on this port (e.g. http://192.168.56.102:3000). Any idea what could be causing this? If I ping the IP from the host console, I get < 0ms. Here are the settings (relevant info is in English; "Povoleno vše" means "Everything is allowed"): [screenshot]

  • IIS7 rejecting POST requests with 400 error.

    - by Eli
    I have a web application that is supposed to handle POST requests from SAP. This has been working fine at other customer sites on Windows Server 2003 (IIS6) and Windows Server 2008 (IIS7) systems. However, at this specific customer's site, IIS responds with a 400 response without ever calling my aspx page. In fact, the request doesn't even appear in the W3C log for the virtual directory. I do see the request using Network Monitor, so I know no firewalls or the like are eating it, and as far as I can tell, all the fields of the request are valid (there is a Content-Length, and it looks correct; this is a send of a 28K TIFF file, which isn't MIME encoded, curiously enough, now that I think of it...). Ideas?
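
    A hedged pointer (the path is the standard Windows location, but whether it applies here is a guess): a 400 returned before the request ever shows up in the site's W3C log is typically http.sys rejecting the request before IIS sees it, and http.sys keeps its own log, with a reason code in the last column:

        type %SystemRoot%\System32\LogFiles\HTTPERR\httperr1.log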

  • Lots of 408 Request Timed Out from same IPs

    - by GreatFire
    Web server: nginx. Checking our log files, there are many log entries for connections that:

    - take 59-61 seconds
    - send an empty request (or at least none is logged)
    - result in a 408 response (request timed out)
    - do not contain any http_user_agent
    - originate from a limited number of IPs

    We are monitoring average times to serve responses, and this obviously inflates our statistics. Apart from that, though, is this a problem? Any idea why it is occurring? Does it suggest that somebody is intentionally messing with us? What should we do?
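
    A hedged explanation that fits every symptom above (the config line is a sketch): browsers such as Chrome pre-open spare TCP connections to speed up future requests, and when no request is ever written on one, nginx closes it after client_header_timeout (60 seconds by default) and logs a 408 with an empty request and no user agent. If that is what is happening, it is harmless; the main options are shortening the timeout or excluding empty 408s from the timing statistics:

        # in the http or server block
        client_header_timeout 20s;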

  • Set maximum requests per IP in IIS7

    - by Maxim V. Pavlov
    I have a web site deployed to IIS 7. One of its pages has 15+ .js files linked to it. The last two files referenced in the <head> tag (loaded last) get a 403 Forbidden response from the server. I have enabled Failed Request Tracing and have been able to see a detailed error code, which is 403.502. I suppose that over a very short period of time I am just pulling too much and IIS blocks me. Is there a way I can configure the limit to allow a larger number of requests and get rid of the 403.502 error?
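
    For reference, 403.502 is the substatus used by the Dynamic IP Restrictions module when a client exceeds its request-rate limit. A sketch of where the knobs live, assuming that module is what is doing the blocking here (the values are illustrative, and the section requires the module to be installed):

        <system.webServer>
          <security>
            <dynamicIpSecurity>
              <denyByConcurrentRequests enabled="true" maxConcurrentRequests="30" />
              <denyByRequestRate enabled="true" maxRequests="100"
                                 requestIntervalInMilliseconds="1000" />
            </dynamicIpSecurity>
          </security>
        </system.webServer>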

  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache mirror to sit in front of a file server, but it seems to be endlessly re-downloading data from my file server. There is about 100GB of data in total, but so far Varnish has downloaded 800GB from my file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:

        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592

    I'm starting varnishd with the following options:

        varnish/sbin/varnishd -a 0.0.0.0:80 \
            -f varnish/etc/varnish/default.vcl \
            -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
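
    One hedged thing to check (the VCL below is a sketch): the stock vcl_recv passes any request that carries a Cookie header straight to the backend without caching, so if clients send cookies (an analytics cookie on the same domain is enough), every request misses regardless of these response headers. varnishlog will show the passes; stripping cookies for static files is the usual cure:

        sub vcl_recv {
            if (req.url ~ "\.(jpg|jpeg|png|gif)$") {
                unset req.http.Cookie;   # assumption: these files never need cookies
            }
        }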

  • Circumventing a manual HTML login page for "unclassified" websites

    - by auramo
    The IT department just made my life a little bit harder again: they introduced a manual HTML login page for all websites they have not "classified". This means that applications which try to access unclassified websites, e.g. to download plugins, no longer work. Examples: Eclipse plugin installation, Maven builds, etc. What would be the easiest workaround for this? The best I've come up with is to try to extend/customize Ruby's httpproxy.rb that comes with WEBrick, automating the manual login process whenever that login response page is detected. This sounds quite painful, and I think there might (or should) be simpler options?

  • How to set up custom 401 error page or redirect in WSS3 SP2

    - by Stacy Vicknair
    I've got a WSS3 SharePoint site that requires Windows authentication both in IIS and on the SharePoint site. What I would like is that, in the case a user does not provide valid AD credentials, they are redirected to a custom error page. Currently, if the user immediately hits cancel when prompted, they see a plain text response of "401 UNAUTHORIZED". If they make an attempt and then hit cancel, they instead see a blank page. I have looked into several options, such as customErrors, httpModule interception (I only found examples of this for after the user is authenticated), and IIS URL rewrites (I didn't see how these could help). Is there a good way to do this?
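
    If the site runs on IIS7 in integrated mode (an assumption; WSS3 also runs on IIS6, where this does not apply), the httpErrors section can substitute a friendly body for the 401 while leaving the authentication challenge itself intact. A sketch, with a hypothetical error-page path:

        <system.webServer>
          <httpErrors existingResponse="Replace">
            <remove statusCode="401" />
            <error statusCode="401" path="/_layouts/custom401.html"
                   responseMode="ExecuteURL" />
          </httpErrors>
        </system.webServer>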

  • Broadband Traffic Question

    - by rutherford
    I have a broadband ADSL line with plus.net in the UK. Having checked the modem, there is no firewall or any unusual feature enabled. But since I arrived at the apartment (the broadband was already installed), I cannot log into Twitter, nor can I update any of my WordPress blogs (I can browse them and log in, but I cannot save any edits or new posts). It only seems to affect these two sites, each in its own way. If I take the netbook I use in this place out to, say, a McDonald's or some other wifi access point, then these sites work fine again. Does anyone know what could possibly be preventing access to the pages in question? The only thing common to these pages is the POST response they are expecting. But POST form submission works fine on other sites...

  • Chrome sending of packets to random destinations upon reconnect/disconnect

    - by f0x
    I noticed an interesting thing whilst debugging one of my websocket applications: Google Chrome pushes out three HTTP requests when the network connection status changes. Quite disconcerting, and it looks almost as if some malware were checking in with a random server. I don't quite understand why, though, since the requests all return a 502, or no response code at all, because the destinations do not exist. On disconnect: [screenshot]. On reconnect: [screenshot]. I guess the main question is whether this is normal and what the use is; how come they don't do a DNS lookup for something that actually exists?

  • Differences when Running with OutputCache managed module under ASP.NET IIS7.x with Cache-control header

    - by Shawn Cicoria
    This post reports some differences when using MVC or IHttpHandlers if you're attempting to set the Cache-Control max-age or s-maxage value under IIS 7.x using the HttpResponse.Cache methods.

    [UPDATE 2011-3-14]: The missing piece was calling Response.Cache.SetSlidingExpiration(true), as follows:

        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
        context.Response.ContentType = "image/jpeg";
        context.Response.Cache.SetSlidingExpiration(true);

    Under IIS 7.x, if you use one of the following two methods, you will only get a cacheability of "public".

    Method 1:

        public ActionResult Image2()
        {
            MemoryStream oStream = new MemoryStream();
            using (Bitmap obmp = ImageUtil.RenderImage("Respone.Cache.Setxx calls", 5, DateTime.Now))
            {
                obmp.Save(oStream, ImageFormat.Jpeg);
                oStream.Position = 0;
                Response.Cache.SetCacheability(HttpCacheability.Public);
                Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
                return new FileStreamResult(oStream, "image/jpeg");
            }
        }

    Method 2, which is just a plain old HttpHandler and really isn't MVC3, but shows the same result under the same MVC ASP.NET application:

        public class image : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                using (var image = ImageUtil.RenderImage("called from IHttpHandler direct", 5, DateTime.Now))
                {
                    context.Response.Cache.SetCacheability(HttpCacheability.Public);
                    context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
                    context.Response.ContentType = "image/jpeg";
                    image.Save(context.Response.OutputStream, ImageFormat.Jpeg);
                }
            }

            // required by IHttpHandler (omitted in the original excerpt)
            public bool IsReusable { get { return false; } }
        }

    Using the following under MVC3 (I haven't tried under earlier versions) will work, by applying the OutputCacheAttribute to your action:

        [OutputCache(Location = OutputCacheLocation.Any, Duration = 300)]
        public ActionResult Image1()
        {
            MemoryStream oStream = new MemoryStream();
            using (Bitmap obmp = ImageUtil.RenderImage("called with OutputCacheAttribute", 5, DateTime.Now))
            {
                obmp.Save(oStream, ImageFormat.Jpeg);
                oStream.Position = 0;
                return new FileStreamResult(oStream, "image/jpeg");
            }
        }

    To remove the OutputCache managed module, use the following in your web.config:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false"/>
          <modules runAllManagedModulesForAllRequests="true">
            <!--<remove name="OutputCache"/>-->
          </modules>
        </system.webServer>
