Search Results

Search found 50839 results on 2034 pages for 'http 404'.

Page 100/2034

  • zero-config CGI enabled web server

    - by halp
    To serve the static content of a directory over HTTP, one can simply navigate to that directory and type: python -m SimpleHTTPServer 11111 which will start an HTTP server on port 11111. This hack is nice because it requires zero config: no stand-alone web server and no config files at all. Is it possible to extend this example, or is there an alternate way to achieve this goal, that also has CGI support? The final goal is to have a quick and lazy way of serving a web site from a certain directory. The site has static content (HTML pages, images), but also a CGI script. The CGI script must work properly when accessed via a browser. Of course I could set up a virtual host in Apache, allow CGI inside it, etc. But that's not a zero-config approach.
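    A minimal sketch of one zero-config answer, using only the standard library: SimpleHTTPServer has a CGI-enabled sibling that serves the current directory and executes scripts placed under ./cgi-bin (the port is the asker's example value):

        # Python 2: static files from ., CGI scripts from ./cgi-bin or ./htbin
        python -m CGIHTTPServer 11111

        # Python 3 equivalent
        python3 -m http.server --cgi 11111

    The one constraint versus the plain static server is the fixed cgi-bin directory convention, so the CGI script has to live there.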

    Read the article

  • archiva/jetty with nginx ssl proxy: getting http responses

    - by numb3rs1x
    I've been banging my head against this for a while now. I have an Archiva repository server I'm trying to proxy through nginx with SSL offloading. Archiva has a built-in Jetty server that is listening on port 8008 of the localhost. I'm able to get to the Archiva server through the proxy, but it wants to return http responses and not https responses. I thought that setting the following headers was supposed to tell the server to respond with https: proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_redirect off; I also tried "proxy_redirect default;". It seems that the Jetty/Archiva server is not recognizing these, or there needs to be something more. I've been scouring forums and as far as I can tell, everything is set as it should be. I'm not sure where else to check at this point. Has anyone had any success with this?
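    For comparison, a minimal shape of the nginx side as described (server name and certificate paths are placeholders, not taken from the question); whether Jetty honors X-Forwarded-Proto still depends on the Jetty side being configured to trust forwarded headers:

        server {
            listen 443 ssl;
            server_name archiva.example.com;                 # placeholder
            ssl_certificate     /etc/nginx/ssl/archiva.crt;  # placeholder
            ssl_certificate_key /etc/nginx/ssl/archiva.key;  # placeholder

            location / {
                proxy_pass http://127.0.0.1:8008;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_redirect off;
            }
        }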

    Read the article

  • http, https and ftp are not working but smtp and imap are working

    - by Unicron
    Hi all, yesterday a strange thing happened on a friend's computer. After booting, the ports for HTTP, HTTPS and FTP are closed, but e-mail is still working. In the Control Panel the Windows Firewall seems active, even if he tries to deactivate it. I have a suspicion that Norton Internet Security 2010 is at fault. We have tried to uninstall it, but the uninstallation did not work. When using the removal tool from Symantec, it just goes to 23% and then crashes. The process ccSvcHst.exe is still running. How can I safely remove the rest of Norton Internet Security? Thanks in advance. [edit] Norton Internet Security 2010 is successfully removed, but still no connectivity.

    Read the article

  • Ignoring GET parameters in Varnish VCL

    - by JamesHarrison
    Okay: I've got a site set up which has some APIs we expose to developers, in the format /api/item.xml?type_ids=34,35,37&region_ids=1000002,1000003&key=SOMERANDOMALPHANUM In this URI, type_ids is always set; region_ids and key are optional. The important thing to note is that the key variable does not affect the content of the response. It is used for internal tracking of requests so we can identify people who make slow or otherwise unwanted requests. In Varnish, we have a VCL like this: if (req.http.host ~ "the-site-in-question.com") { if (req.url ~ "^/api/.+\.xml") { unset req.http.cookie; } } We just strip cookies out and let the backend do the rest as far as cache times are concerned (this is a hackaround, since Rails/authlogic sends session cookies with API responses). At present, though, distinct developers are basically hitting different cache objects, since &key=SOMEALPHANUM is considered part of the Varnish hash for storage. This is obviously not a great solution and I'm trying to work out how to tell Varnish to ignore that part of the URI.
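    One approach, sketched against the same VCL: rewrite req.url in vcl_recv before Varnish computes the hash, so requests that differ only in key share a cache object (the backend then never sees the key either, which matters if it is still wanted in the logs):

        sub vcl_recv {
            if (req.http.host ~ "the-site-in-question.com" && req.url ~ "^/api/.+\.xml") {
                unset req.http.cookie;
                # drop key=... wherever it appears in the query string
                set req.url = regsuball(req.url, "([?&])key=[^&]*", "\1");
                # tidy up any leftover "?&" or trailing "?"/"&"
                set req.url = regsub(req.url, "\?&", "?");
                set req.url = regsub(req.url, "[?&]$", "");
            }
        }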

    Read the article

  • Why does Django's dev server use port 8000 by default?

    - by kojiro
    (My question isn't really about Django. It's about alternative HTTP ports. I just happen to know Django is a relatively famous application that uses 8000 by default, so it's illustrative.) I have a dev server in the wild on which we occasionally need to run multiple httpd services on different ports. When I needed to stand a third service up and we were already using ports 80 and 8080, I discovered our security team has locked port 8000 access from the Internet. I recognize that port 80 is the standard HTTP port, and 8080 is commonly http_alt, but I'd like to make the case to our security team to open 8000 as well. In order to make that case, I hope the answer to this question can provide me with a reasonable argument for using port 8000 over 8080 in some case. Or was it just a random choice with no meaning?

    Read the article

  • Apache Slow Over http, Fast Over https

    - by Josh Pennington
    I have an Apache server running on Debian. I am having this very strange situation where a page takes about 2 to 3 times longer to load over HTTP than over HTTPS. The primary use of the website is Magento, but I am seeing similar results with other things we have loaded on the website. I don't have the first clue where to even look on our server or what the problem could be. Does anyone have any insight as to what could be going on or where to look? Josh
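    A hedged first diagnostic (URL is a placeholder): timing the same page over both schemes from the server itself separates network effects from whatever Apache is doing:

        curl -so /dev/null -w 'connect %{time_connect}s  first byte %{time_starttransfer}s  total %{time_total}s\n' http://example.com/
        curl -kso /dev/null -w 'connect %{time_connect}s  first byte %{time_starttransfer}s  total %{time_total}s\n' https://example.com/

    If HTTP is slow even from localhost, the problem is in the server stack; if it is only slow remotely, look at the path instead (including anything, like an ISP or transparent proxy, that can see port 80 traffic but not port 443).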

    Read the article

  • Is it safe to use S3 over HTTP from EC2, as opposed to HTTPS

    - by Marc
    I found that there is a good deal of overhead when uploading a lot of small files to S3. Some of this overhead comes from SSL itself. How safe is it to talk to S3 without SSL when running in EC2? From the awesome comments below, here are some clarifications: this is NOT a question about HTTPS versus HTTP or the sensitivity of my data. I'm trying to get a feeling for the networking and protocol particularities of EC2 and S3. For example:
    - Are we guaranteed to be passing through only the AWS network when communicating from EC2 to S3?
    - Can other AWS users (apart from staff) sniff my communications between EC2 and S3?
    - Is authentication on their API done on every call, and thus credentials are passed on every call? Or is there some kind of authenticated session?
    I am using the jets3t lib. Feedback from people with some AWS experience would be appreciated. Thanks, Marc

    Read the article

  • SetEnvIf regex for setting Content-Disposition HTTP header

    - by Erik Sorensen
    I am attempting to use the IHS 7.0/Apache 2.2 SetEnvIf directive to set the filename of a downloaded file based on a URL parameter. I think I am pretty close; however, if there is a space (encoded or otherwise) in the filename, it fails. Example URL: http://site.com/path/to/filename.ext/file-title=Nice File Name.ext?file-type=foo Apache config:
        SetEnvIf Request_URI "^.*file-title\=(.*)\??.*$" FILENAME=$1
        Header unset "Content-Disposition"
        Header add "Content-Disposition" "attachment; filename=%{FILENAME}e"
        UnsetEnv FILENAME
    An application will specify what is now showing up as "Nice File Name.ext" in the example. This all works great if there are no spaces; however, if there is a space, the filename to download will just show up as "Nice". There may or may not be a second set of parameters in the query string (?file-type, etc.)
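    A likely culprit: the filename value is sent unquoted, and a Content-Disposition value containing spaces has to be a quoted-string, so browsers stop reading at "Nice". A sketch of the same config with the value quoted (the capture is also narrowed to stop at the query string, an adjustment not in the original):

        SetEnvIf Request_URI "^.*file-title=([^?]*)" FILENAME=$1
        Header unset "Content-Disposition"
        Header add "Content-Disposition" "attachment; filename=\"%{FILENAME}e\""
        UnsetEnv FILENAME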

    Read the article

  • Attempting to ping RPC endpoint 6001/6004 (Exchange Information Store) on Exchange 2010 server

    - by MadBoy
    I have Exchange 2010 in a hosting setup like this:
    - TMG 2010 as load balancer
    - Exchange 2010 x 2 (CAS, MAILBOX, HUB on each server)
    - AD1, AD2 machines
    - File witness
    All people currently connect through OWA or POP3/SMTP and that works fine. The problem is autodiscovery doesn't work, and RPC in terms of setting up Outlook doesn't work either. It doesn't work whether I am connected with VPN or not. The thing is, it used to work. Before a reinstall of my machine 2 days ago I was able to get mail successfully through an Outlook that was set up using autodiscovery (but I was getting reports that setting up new clients wasn't working - so I'm not sure why my Outlook continued to work). I used https://www.testexchangeconnectivity.com to track it down, and basically the message is more or less this: Attempting to ping RPC endpoint 6004 (NSPI Proxy Interface) on server autodiscover.domain.pl. The attempt to ping the endpoint failed. Additional Details: The RPC_S_SERVER_UNAVAILABLE error (0x6ba) was thrown by the RPC Runtime process. I tried different solutions like disabling IPv6, followed a couple of links and did all they proposed, and it's still at the very same point:
        C:\Users\admin>netstat -a | find "6001"
        TCP 0.0.0.0:6001 EXCHANGE2:0 LISTENING
        TCP [::]:6001 EXCHANGE2:0 LISTENING
        C:\Users\admin>netstat -a | find "6002"
        C:\Users\admin>netstat -a | find "6003"
        C:\Users\admin>netstat -a | find "6004"
    I followed these (and a few others):
    - http://helewix.com/blog/index.php/Microsoft-Solutions/2011/02/10/exchange-2010-how-to-open-ports-6001-6002-and-6004-on-your-server-for-telnet-to-work-and-rpc-to-be-able-to-connect-2
    - http://blogs.technet.com/b/exchange/archive/2008/06/20/3405633.aspx
    - http://messagexchange.blogspot.com/2008/12/outlook-anywhere-failing-rpc-end-points.html
    Most relate to Exchange 2007 and I have Exchange 2010, but there's not much I can find on Exchange 2010 for the current problem. After applying all of those solutions, error 6004 changed into error 6001, which doesn't bring me any closer to solving the problem. At this point, even though the error now named 6001 and 6004 was no longer mentioned, port 6004 was still closed while 6001 stayed open. Attempting to ping RPC endpoint 6001 (Exchange Information Store) on server autodiscover.domain.pl. The attempt to ping the endpoint failed. Additional Details: The RPC_S_SERVER_UNAVAILABLE error (0x6ba) was thrown by the RPC Runtime process.
        C:\Users\admin>netstat -a | find "6001"
        TCP 0.0.0.0:6001 EXCHANGE2:0 LISTENING
        TCP [::]:6001 EXCHANGE2:0 LISTENING
        C:\Users\admin>netstat -a | find "6002"
        C:\Users\admin>netstat -a | find "6003"
        C:\Users\admin>netstat -a | find "6004"
    So I reverted back to square one. I suspect it's a problem with TMG but really can't be sure. I tried multiple combinations but all fail.

    Read the article

  • serving static assets via http is really slow compared to sshfs (apache2/nginx)

    - by s1lv3r
    After migrating to a new VPS I had some users complaining about slow-loading images on their sites. After creating some test files with dd, I realized that I can download all files via sshfs at full speed, while downloads via the web are painfully slow. The larger the file is and the longer the transfer takes, the slower the transfer speed gets. I thought I had some problem with Apache and just spent the whole evening replacing Apache2 with nginx for static file serving - with no effect at all. No I/O wait states in top. Tons of RAM free, no high CPU utilization, and hdparm shows decent I/O performance at all times. I just have no idea anymore what's happening on this server. This is a link to a demo file: http://master.dealux.de/file.tgz Does anybody have an idea what I can check?

    Read the article

  • Logging all Firefox HTTP Request Headers?

    - by Hayek
    I'm using Ruby+Watir to request pages through Firefox. I would like to record the headers and content of every HTTP request made through the browser. Would it be possible to configure a proxy solution to store this information, either in a file or by piping it into an application? I'm running Ubuntu x64. // Edit: I want to store the data in logs so I can view it later. Preferably, I am looking for a solution that runs quietly in the background and stores the headers/content in files.
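    One proxy that fits the quiet-background requirement (assuming installing mitmproxy is acceptable): its non-interactive mitmdump front end listens as an HTTP proxy and appends every flow, headers and bodies, to a file for later inspection. Firefox then just needs its proxy settings pointed at localhost:8080:

        # record all flows to a capture file; inspect later with: mitmproxy -r /tmp/firefox.flows
        mitmdump -w /tmp/firefox.flows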

    Read the article

  • Problem Activating “Windows Communication Foundation Non-HTTP Activation” feature in Windows 7

    - by Escobar5
    Hello, I'm having the following problem. I'm installing SharePoint 2010 Beta, so I need to activate the Windows feature "Windows Communication Foundation Non-HTTP Activation". The problem is I cannot activate it; I get the message: "An error has occurred. Not all features were successfully changed" When I look at the log (C:\Windows\Logs\CBS\CBS.log) I find this error: Process output: [l:186 [186]"SMConfigInstaller[Error]: Failed in calling 'StartService' for service 'NetTcpActivator'. Error code: 0x8007042c Can anyone give me a clue about what is happening here?
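    Error code 0x8007042c wraps Win32 error 1068, ERROR_SERVICE_DEPENDENCY_FAIL, so one hedged place to start is the dependency chain of the service the installer tried to start (service names as reported in the log; the listener adapter normally depends on the Net.Tcp Port Sharing Service):

        rem show the services NetTcpActivator depends on
        sc qc NetTcpActivator
        rem check whether the port sharing service can start at all
        sc query NetTcpPortSharing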

    Read the article

  • How to optimize a mysql database for serving 1000s of requests at a time?

    - by Bilal
    How do you optimize a MySQL database for serving 1000s of requests at a time, for a site like linksnappy.com? Is it possible to configure 2 separate MySQL servers into one load-balancing setup, so that if one of them is overloaded it switches to the next one? Same question for the server handling the HTTP requests. Another question: what kind of server do I need to serve 1000s of requests at a time (HTTP server)? As you can see, the kind of site I'm talking about is a download site. The server just dies when we have too many download requests. We currently have an Intel Xeon quad core 2.4GHz with 4GB of RAM.
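    There is no single answer without profiling the workload, but as a hedged starting point for a 4 GB machine serving mostly reads, these are the my.cnf knobs usually tuned first (all values illustrative, not prescriptive):

        [mysqld]
        max_connections         = 500
        innodb_buffer_pool_size = 2G
        tmp_table_size          = 64M
        max_heap_table_size     = 64M
        slow_query_log          = 1    # find the queries that die under load

    For the two-server question, the usual pattern is MySQL replication with a TCP load balancer such as HAProxy in front, rather than MySQL itself doing the failover.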

    Read the article

  • Windows HTTP proxy client to pass service requests to VPN

    - by Chris
    I've got access to a network via CheckPoint VPN (Windows client). Problem is, I have a Linux box that needs to talk to its web services, and the target web servers are inside the VPN. So far, we have been unable to connect Linux to the VPN (and I'm not trying to solve that problem at the moment). I'm wondering if (temporarily) I can set up a proxy server on a Windows (XP) box to shuttle HTTP requests back and forth? If so, what would be a good application to do this? (hopefully free/open-source) TIA

    Read the article

  • Supervisor HTTP Server Port Issue

    - by Catalina
    I have supervisor set up to manage a few processes. It works perfectly fine when I boot my server; however, when I stop it and try to start it again, it fails and gives me this error message:
        * Starting Supervisor daemon manager...
        Error: Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.
        For help, use /usr/bin/supervisord -h
        ...fail!
    I'm running nginx on port 80 and 4 web servers on ports 8000, 8001, 8002, 8003. Does anyone have any idea what is going on? When I reboot, everything works fine.
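    Two hedged things to check when that error appears (port and paths depend on the inet_http_server/unix_http_server sections of supervisord.conf): whether something still owns the configured port, and whether an unclean shutdown left a stale socket behind:

        # who owns supervisord's HTTP port, if an inet server is configured (9001 is the usual default)
        sudo netstat -tlnp | grep 9001
        # a stale unix socket from the previous instance also triggers this exact error
        sudo unlink /var/run/supervisor.sock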

    Read the article

  • Nagios3 check_httpname gives 503 response; from command line I get a 200 response

    - by Michael T. Smith
    We're using Nagios to monitor our site (and a bunch of other stuff). For some odd reason, when I test out the command /usr/lib/nagios/plugins/check_http -H 'domainname.com' the response that comes back is HTTP/1.1 200 OK, but when I set up the service to do it:
        # Check that domain is running
        define service {
            hostgroup_name          hostgroup
            service_description     host site
            check_command           check_httpname!domainname.com
            use                     generic-service
            notification_interval   1    ; set > 0 if you want to be renotified
        }
    the response that comes back is HTTP/1.1 503 Service Unavailable. Does anyone know why this would be happening?
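    The mismatch between the two tests usually lives in the command definition: if check_httpname builds its command line with $HOSTADDRESS$, the plugin sends an IP (or the wrong name) as the Host header, and a name-based virtual host answers 503. A hedged sketch of a definition that matches the manual invocation:

        define command {
            command_name    check_httpname
            command_line    /usr/lib/nagios/plugins/check_http -H $ARG1$
        }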

    Read the article

  • Information about recent code injection from http://superiot.ru

    - by klennepette
    Hello, I manage the hosting for a few dozen websites. For about a week I've been finding this code in 12 different websites, in the index.php files: <script type="text/javascript" src="http://superiot.ru/**.js"></script> // The name of the actual javascript file differs <!-- some hash here --> Some of the websites are on different servers, some aren't. I'm just wondering if anyone else has been seeing this too. Edit with some more information:
    - All servers are CentOS 5.3
    - PHP versions are either 5.2.9 or 5.2.4
    - Apache versions are either 2.2.3 or 1.3.39
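    For anyone auditing their own hosts for the same injection, a quick sweep of the docroots (path is a placeholder):

        grep -rl 'superiot\.ru' /var/www/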

    Read the article

  • Exchange 2003 HTTP Account Error

    - by Ryaner
    We are trying to get one of our users connected to our Exchange 2003 server using the HTTP method, as they already have an existing Exchange account on another server. The setup goes through and they appear to get connected fine; however, none of the subfolders are listed. Instead we get one folder named "Error-Pls file a Bug". The usual Google search turns up nothing useful. Does anyone know how to fix this? Or has anyone actually gotten Outlook (2003 or 2007) to connect to an Exchange 2003 server this way?

    Read the article

  • mod_access for lighttpd causes a 403 error for all POST requests

    - by Sam
    I have found on my Debian server that running the lighttpd module mod_access causes the server to respond with a 403 to all POST requests. It's very odd, as I have two servers: one is running as I'd expect and the other keeps returning these 403s. They are running identical configs for lighttpd and PHP. My lighttpd.conf is: https://gist.github.com/4269500 There is also one other custom conf: https://gist.github.com/4269508 I've opened up the servers for requests until I get this fixed; the server that works is http://mercury.isitup.org/ and the one that fails is http://venus.isitup.org/. After working out that disabling mod_access resolves the problem, I grepped all my lighttpd configs for uses of it (docs). Disabling each line I found didn't help, leading me to think this is perhaps some default behaviour (or bug?)... Has anyone come across this before, or does anyone know what configuration value I've got wrong?
    Versions:
    - Debian: Debian GNU/Linux 6.0.6 (squeeze)
    - Lighttpd: lighttpd/1.4.28 (ssl)
    - PHP: PHP 5.3.19-1~dotdeb.0 with Suhosin-Patch (cli)
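    One way to see which module is short-circuiting the request, using lighttpd's built-in request tracing (set in lighttpd.conf, then watch error.log while sending a test POST with curl):

        debug.log-request-handling = "enable"

    The trace logs each handler as the request passes through it, so the 403 gets attributed to a specific module rather than guessed at.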

    Read the article

  • IIS7 response size thresholds

    - by DanielM
    I have a customer who is attempting video playback via HTTP progressive download of very large files (> 1 GB). There is no problem once a file is cached at the edge via my CDN, but hits to my origin (first hits, prior to edge-cache population) experience stalling and loss of sync between audio and video about an hour and a half into playback. This occurs pretty reliably at that point, suggesting that some threshold somewhere is getting hit. Are there IIS configuration knobs governing HTTP response size? Other data points: I am unable to replicate this problem. I am looking at client bandwidth and last-mile issues. I am looking at possible encoding recipe dependencies. But this problem never came up when we were using a "push" cache configuration (CDN-hosted origin), so something funky server-side at my origin seems like a likely culprit. Thanks ...
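    One knob worth ruling out, offered as a guess rather than a diagnosis: HTTP.sys enforces a minimum transfer rate (minBytesPerSecond, default 240) and tears down connections that fall below it, which a throttled or buffering client could trigger deep into a long progressive download. It can be inspected and, as a test, disabled:

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.applicationHost/webLimits /minBytesPerSecond:0 /commit:apphost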

    Read the article

  • On Ubuntu, installed NginX and can't find where the new WWW root is

    - by Nick Not
    On Ubuntu 13.04 the original WWW root was /var/www/. Then I installed NginX; it installed correctly, but I can't find the actual folder accessible over HTTP (I looked in /etc/nginx/). I searched for index.htm, index.html and index.php, but there are hundreds of results. Is there a command I can run to tell me what folder HTTP is pointed to? I tried searching for this but I am not sure what keywords to use. Places I looked in:
    - /usr/share/nginx/www
    - /usr/share/www
    - /usr/share/html
    - /var/www
    - /etc/nginx/
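    The nginx packages of that era predate the nginx -T config dump, but the same answer comes from confirming which config file nginx loads and grepping that tree for root directives:

        # prints the config file nginx actually loads (and validates it)
        sudo nginx -t
        # list every root directive in the usual include locations
        grep -R "root" /etc/nginx/nginx.conf /etc/nginx/conf.d/ /etc/nginx/sites-enabled/ 2>/dev/null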

    Read the article

  • HTTP traffic through PIX VPN from outside site

    - by fwrawx
    I have a remote site with a website that only allows access from the outside IP assigned to our local PIX. I have users connecting to the local network using a VPN who need to be able to view this remote site. I don't think this works because the packets want to come in and go out over the same (external) interface. So I'm looking for a way to make this work using the PIX, or by setting up a service on a server on the local network to act as a middle-man for the HTTP requests. The remote site doesn't support setting up a VPN to our PIX. The remote website is dishing out pages over a non-standard port. Can I use squid or something similar to proxy just one site?
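    Squid can act as exactly that middle-man. A minimal sketch for a box on the local network (addresses and hostname are placeholders): restrict it to the one destination, and note that squid's default Safe_ports ACL will deny the site's non-standard port until that port is added there too:

        http_port 3128
        acl vpn_clients src 10.0.0.0/8                  # placeholder: VPN user range
        acl remote_site dstdomain .remote.example.com   # placeholder
        http_access allow vpn_clients remote_site
        http_access deny all

    VPN users then point their browser's proxy setting at this host on port 3128, and the remote site sees the PIX's outside IP as before.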

    Read the article

  • Making nginx withstand flood attacks

    - by Tiffany Walker
    How can I make it stand up against attacks better? Are there plugins? I'm looking for a way to RATE LIMIT and remain up and not slow down. My setup:
        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        worker_priority -2;
        error_log /var/log/nginx/error.log info;
        worker_rlimit_nofile 40480;
        events {
            worker_connections 5120; # increase for busier servers
            use epoll; # you should use epoll here for Linux kernels 2.6.x
        }
        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks if_not_owner;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;
            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 9;
            gzip_buffers 16 8k;
            # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU
            gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
            ignore_invalid_headers on;
            client_header_timeout 3m;
            client_body_timeout 3m;
            send_timeout 3m;
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 200M;
            client_body_buffer_size 128k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            include "/etc/nginx/vhosts/*";
        }
    vhost file:
        server {
            error_log /var/log/nginx/vhost-error_log warn;
            listen 194.145.208.19:80;
            server_name ipxnow.in www.ipxnow.in;
            access_log /usr/local/apache/domlogs/ipxnow.in-bytes_log bytes_log;
            access_log /usr/local/apache/domlogs/ipxnow.in combined;
            root /home/ipxnowin/public_html;
            location / {
                location ~ .*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ {
                    expires 7d;
                    try_files $uri @backend;
                }
                error_page 405 = @backend;
                add_header X-Cache "HIT from Backend";
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }
            location @backend {
                internal;
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }
            location ~ .*\.(php|jsp|cgi|pl|py)?$ {
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }
            location ~ /\.ht {
                deny all;
            }
        }
    and proxy.inc:
        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
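    nginx needs no plugin for per-IP rate limiting; the stock limit_req module covers it. A hedged sketch with starting values to tune (the zone goes in the http block above, the limit in each location that proxies to the backend):

        # http block: track clients by IP, allow a sustained 10 requests/second each
        limit_req_zone $binary_remote_addr zone=flood:16m rate=10r/s;

        # proxying locations: absorb short bursts, reject the rest
        limit_req zone=flood burst=20;

    Static file locations can be left unlimited or given a looser zone, so a flood of dynamic requests is shed before it ever reaches the Apache backend on port 8081.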

    Read the article

  • Websocket & HTTP proxy with server between two firewalls

    - by Dan
    I have a server ("A") running behind a firewall, which serves HTTP and websockets. I have no control over the firewall, but do have an external server ("B") to which the internal server can connect (note that the reverse connection from B to A is not possible due to the firewall). How can I set up some sort of proxy on B such that an Internet client ("C") can access the resources on A? I'd prefer something lightweight—even a Python program or an SSH tunnel (which I've tried without success)—rather than something more heavyweight but robust.

    Read the article
