Search Results

Search found 6317 results on 253 pages for 'mod proxy ajp'.

Page 61/253 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • Reliable access to Internet but not local network (not DNS or proxy issues)

    - by Ian Goldby
    I'm looking for help with a Vista Home Premium laptop that has trouble accessing any resource on our home network, but accesses the Internet just fine. The set-up is this: the Vista laptop and a MacBook Pro connect wirelessly to the router-modem, and a Synology DS212j NAS drive has a wired connection to the router-modem. Devices on the local network are always referred to by IP address, so this cannot be a DNS issue. The MacBook Pro connects reliably to the NAS via AFP (network shared folders), SMB (network shared folders) and HTTP.

    The Vista laptop connects to and browses sites on the Internet without any problems. It can log into the NAS via SMB and list the shared folders (so there is nothing wrong with the log-in credentials), but when it tries to open any of the folders, Explorer just hangs with the spinning cursor for several minutes and then says "\\192.168.1.64\shared\Photos is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. The specified network name is no longer available." It can ping the NAS successfully. If I try to open the NAS drive's web interface, the browser just hangs; this is the same with IE, Firefox and Chrome. (There is no proxy.) I can log into the NAS drive with FTP and navigate directories, but when I try to list the contents of a directory with more than a handful of entries, the FTP client hangs. I set up a website on the MacBook: the Vista laptop was able to load some of the pages, but loading any of the images was very hit and miss. Images embedded in HTML pages never worked no matter how many times I reloaded the page, but when I linked directly to an image it did load (though several attempts were sometimes needed).

    I tried all of this with the Windows Firewall turned off, and with AVG turned off. That made no difference. I'd really appreciate any suggestions anyone can make. The fact that the Vista laptop has trouble with HTTP and FTP as well as SMB connections suggests to me that this is a problem at the TCP level or below. But don't forget it accesses sites outside the LAN with no problems.
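    A symptom pattern like this (pings and small transfers succeed, anything larger hangs across SMB, HTTP and FTP alike) is often an MTU/fragmentation problem on the wireless link rather than anything protocol-specific, which would fit the "TCP level or below" suspicion. A hedged diagnostic sketch from the Vista command prompt, assuming the NAS is still at 192.168.1.64 and the adapter is named "Wireless Network Connection" (both assumptions):

        rem probe the largest payload that passes with "don't fragment" set (1472 = 1500 - 28 bytes of headers)
        ping -f -l 1472 192.168.1.64
        ping -f -l 1400 192.168.1.64

        rem if only the smaller size gets replies, try lowering the interface MTU
        netsh interface ipv4 set subinterface "Wireless Network Connection" mtu=1400 store=persistent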

    Read the article

  • How to setup Proxy Cache with Nginx and Passenger

    - by tiny
    I use Nginx and Passenger for my Rails application. I want to use the proxy cache to cache my pages. However, every request goes directly to my Rails application, and I don't know what is wrong with my configuration. Below is my configuration:

        user www-data;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-2.2.15;
            passenger_ruby /usr/bin/ruby1.8;
            passenger_max_pool_size 6;
            passenger_max_instances_per_app 1;
            passenger_pool_idle_time 0;
            rails_spawn_method conservative;

            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 512;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;
            gzip_http_version 1.0;
            gzip_vary on;
            gzip_comp_level 6;
            gzip_proxied any;
            gzip_types text/plain text/css text/javascript application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss;

            proxy_cache_path /var/www/cache/webapp levels=1:2 keys_zone=webapp:8m max_size=1000m inactive=600m;

            include vhosts/*.conf;
            include /opt/nginx/conf/sites-enabled/*;
            root /var/www;
        }

        server {
            listen 127.0.0.1:3008;
            server_name localhost;
            root /var/www/yoolk_web_app/public;   # <--- be sure to point to 'public'!
            passenger_enabled on;
            rails_env development;
            passenger_use_global_queue on;
        }

        server {
            listen 80;
            server_name webpage.dev;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            error_page 503 http://$host/maintenance.html;

            location ~* (css|js|png|jpe?g|gif|ico)$ {
                root /var/www/web_app/public;
                expires max;
            }

            location / {
                proxy_pass http://127.0.0.1:3008/;
                proxy_cache webapp;
                proxy_cache_valid 200 10m;
            }

            #More Location
        }
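    One thing worth checking before rearranging the blocks: by default nginx refuses to cache an upstream response that carries a Set-Cookie header or a restrictive Cache-Control/Expires header, and Rails responses usually carry both, so every request falls through to Passenger even though proxy_cache is configured. A hedged sketch of the proxied location, assuming it is acceptable to cache these pages despite the upstream headers:

        location / {
            proxy_pass http://127.0.0.1:3008/;
            proxy_cache webapp;
            proxy_cache_valid 200 10m;
            # assumption: these pages may safely be cached even though the app
            # sends Set-Cookie / Cache-Control; drop this line if that is not true
            proxy_ignore_headers Set-Cookie Cache-Control Expires;
        }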

    Read the article

  • How do i route TCP connections via TOR? [on hold]

    - by acidzombie24
    I was reading about torchat which is essentially an anonymous chat program. It sounded cool so i wanted to experiment with making my own. First i wrote a test to grab a webpage using Http. Sicne .NET doesnt support SOCKS4A/SOCKS5 i used privoxy and my app worked. Then i switch to a TCP echo test and privoxy doesnt support TCP so i searched and installed 6+ proxy apps (freecap, socat, freeproxy, delegate are the ones i can remember from the top of my head, i also played with putty bc i know it supports tunnels and SOCK5) but i couldnt successfully get any of them to work let alone get it running with my http test that privoxy easily and painlessly did. What may i use to get TCP connections going through TOR? I spent more then 2 hours without success. I don't know if i am looking for a relay, tunnel, forwarder, proxy or a proxychain which all came up in my search. I use the config below for .NET. I need TCP working but i am first testing with http since i know i had it working using privoxy. What apps and configs do i use to get TCP going through tor? <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.net> <defaultProxy enabled="true"> <proxy bypassonlocal="True" proxyaddress="http://127.0.0.1:8118"/> </defaultProxy> <settings> <httpWebRequest useUnsafeHeaderParsing="true"/> </settings> </system.net> </configuration> -edit- Thanks to Bernd i have a solution. Here is the code i ended up writing. It isn't amazing but its fair. static NetworkStream ConnectSocksProxy(string proxyDomain, short proxyPort, string host, short hostPort, TcpClient tc) { tc.Connect(proxyDomain, proxyPort); if (System.Text.RegularExpressions.Regex.IsMatch(host, @"[\:/\\]")) throw new Exception("Invalid Host name. Use FQDN such as www.google.com. Do not have http, a port or / in it"); NetworkStream ns = tc.GetStream(); var HostNameBuf = new ASCIIEncoding().GetBytes(host); var HostPortBuf = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(hostPort)); if (true) //5 { var bufout = new byte[128]; var buflen = 0; ns.Write(new byte[] { 5, 1, 0 }, 0, 3); buflen = ns.Read(bufout, 0, bufout.Length); if (buflen != 2 || bufout[0] != 5 || bufout[1] != 0) throw new Exception(); var buf = new byte[] { 5, 1, 0, 3, (byte)HostNameBuf.Length }; var mem = new MemoryStream(); mem.Write(buf, 0, buf.Length); mem.Write(HostNameBuf, 0, HostNameBuf.Length); mem.Write(new byte[] { HostPortBuf[0], HostPortBuf[1] }, 0, 2); var memarr = mem.ToArray(); ns.Write(memarr, 0, memarr.Length); buflen = ns.Read(bufout, 0, bufout.Length); if (bufout[0] != 5 || bufout[1] != 0) throw new Exception(); } else //4a { var bufout = new byte[128]; var buflen = 0; var mem = new MemoryStream(); mem.WriteByte(4); mem.WriteByte(1); mem.Write(HostPortBuf, 0, 2); mem.Write(BitConverter.GetBytes(IPAddress.HostToNetworkOrder(1)), 0, 4); mem.WriteByte(0); mem.Write(HostNameBuf, 0, HostNameBuf.Length); mem.WriteByte(0); var memarr = mem.ToArray(); ns.Write(memarr, 0, memarr.Length); buflen = ns.Read(bufout, 0, bufout.Length); if (buflen != 8 || bufout[0] != 0 || bufout[1] != 90) throw new Exception(); } return ns; } Usage using (TcpClient client = new TcpClient()) using (var ns = ConnectSocksProxy("127.0.0.1", 9050, "website.com", 80, client)) {...}
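    As a quick sanity check outside .NET that Tor's SOCKS port is reachable and that hostnames are resolved remotely (no DNS leak), a hedged one-liner, assuming Tor is listening on its default SOCKS port 9050:

        curl --socks5-hostname 127.0.0.1:9050 http://check.torproject.org/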

    Read the article

  • Nginx reverse proxy with separate aliases

    - by gabeDel
    Interesting question. I have this Python code:

        import sys, bottle, gevent
        from bottle import *
        from gevent import *
        from gevent.wsgi import WSGIServer

        @route("/")
        def index():
            yield "/"

        application = bottle.default_app()
        WSGIServer(('', port), application, spawn=None).serve_forever()

    It runs standalone with nginx in front of it as a reverse proxy. Each of these pieces of code runs separately, but I run multiple of them per domain per project (directory), and the code thinks for some reason that it is top level when it is not. So when you go to mydomain.com/something it works, but if you go to mydomain.com/something/ you get an error. I have tested and figured out that nginx is stripping the "something" from the request/query, so that when you go to mydomain.com/something/ the code thinks you are going to mydomain.com//. How do I get nginx to stop removing this information? Nginx site code:

        upstream mydomain {
            server 127.0.0.1:10100 max_fails=5 fail_timeout=10s;
        }
        upstream subdirectory {
            server 127.0.0.1:10199 max_fails=5 fail_timeout=10s;
        }
        server {
            listen 80;
            server_name mydomain.com;
            access_log /var/log/nginx/access.log;

            location /sub {
                proxy_pass http://subdirectory/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }

            location /subdir {
                proxy_pass http://subdirectory/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }
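    The stripping comes from the trailing slash on proxy_pass: when proxy_pass carries a URI part ("http://subdirectory/"), nginx replaces the matched location prefix with that URI, so /sub/foo reaches the backend as //foo. A hedged variant that forwards the original URI untouched (the backend then has to expect the /sub prefix itself):

        location /sub {
            # no URI part on proxy_pass, so the full original request URI (/sub/...) is passed upstream
            proxy_pass http://subdirectory;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }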

    Read the article

  • .net question - where are the DefaultCredentials stored/accessed from for a WinForms v3.5 app?

    - by Greg
    Hi, Where are the DefaultCredentials stored/accessed from for a WinForms v3.5 app? That is, if I am using the settings for defaultProxy for my WinForms v3.5 application, and set a proxy server address here, exactly where does/can the username/password come from? Or in other words, where does the framework source the "default credentials" for a WinForms application running on the client PC?

        <defaultProxy
            enabled="true|false"
            useDefaultCredentials="true|false">
            <bypasslist> … </bypasslist>
            <proxy> … </proxy>
            <module> … </module>
        </defaultProxy>

    Background - apparently ClickOnce can use this for a client side application, however I'm trying to work out where ClickOnce would get this defaultCredential from, for a user who is running the ClickOnce install for my WinForms application.
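    For what it's worth, useDefaultCredentials="true" maps to CredentialCache.DefaultCredentials, i.e. the Windows logon credentials of the account the process is running under; nothing is read out of the .config file itself. A hedged C# sketch of the programmatic equivalent (the explicit NetworkCredential values are purely illustrative):

        // what useDefaultCredentials="true" amounts to: use the current Windows account
        WebRequest.DefaultWebProxy.Credentials = CredentialCache.DefaultCredentials;

        // or supply explicit credentials instead (illustrative values):
        // WebRequest.DefaultWebProxy.Credentials = new NetworkCredential("user", "password", "DOMAIN");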

    Read the article

  • JDBC connections: How to specify the port for data-transfer?

    - by LeO
    I want to run my JDBC connection (either Oracle or MSSQL) through a proxy server. The reason for this is to have additional control over the traffic, especially while developing. I know I could specify the proxy, which runs on my machine, and its port in the connection string. But the specified connection settings are only taken as a kind of handshake to agree on which port the data is finally transferred, and that is definitely not a port which I have under proxy control. So, does anybody have an idea how to specify the port for the data transfer? I would prefer if this could be done in the connection string. The same issue applies to both Oracle and MSSQL. Thx LeO
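    One approach that sidesteps the port-negotiation problem entirely, offered here as a hedged sketch rather than a verified recipe: run the JVM through a SOCKS proxy you control, so every outgoing socket the driver opens (whatever port it negotiates) passes through the proxy. The standard JVM networking properties are:

        java -DsocksProxyHost=127.0.0.1 -DsocksProxyPort=1080 -jar myapp.jar

    Whether this catches all traffic depends on the driver using plain java.net sockets; the host, port and jar name above are placeholders.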

    Read the article

  • Post request with body_stream and parameters

    - by Damien MATHIEU
    Hello, I'm building some kind of proxy. When I call some URL in a Rack application, I forward that request to another URL. The request I forward is a POST with a file and some parameters. I want to add more parameters, but the file can be quite big, so I send it with Net::HTTP#body_stream instead of Net::HTTP#body. I get my request as a Rack::Request object and I create my Net::HTTP object with that:

        req = Net::HTTP::Post.new(request.path_info)
        req.body_stream = request.body
        req.content_type = request.content_type
        req.content_length = request.content_length

        http = Net::HTTP.new(@host, @port)
        res = http.request(req)

    I've tried several ways to add the proxy's parameters, but it seems nothing in Net::HTTP allows adding parameters to a body_stream request, only to a body one. Is there a simpler way to proxy a rack request like that? Or a clean way to add my parameters to my request?
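    Since the body is being streamed, set_form_data is out, but nothing stops the extra parameters from travelling in the query string instead of the body. A hedged sketch (the extra_params hash is hypothetical):

        extra_params = { "forwarded_by" => "my-proxy" }          # hypothetical parameters
        query = Rack::Utils.build_query(extra_params)
        req = Net::HTTP::Post.new("#{request.path_info}?#{query}")
        req.body_stream = request.body
        req.content_type = request.content_type
        req.content_length = request.content_length
        res = Net::HTTP.new(@host, @port).request(req)

    The upstream application has to be willing to read those parameters from the query string, of course.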

    Read the article

  • HTML form submits and the hostname changes to IP address

    - by Shamik
    I am facing a peculiar problem. My webapp is installed behind a proxy: the request gets submitted to the proxy, which forwards it to the original host running the WebSphere web application. The problem I am facing is that when I access the webapp, its URL looks like this: http://www.myproxy.com. Let's say I get a form on this URL; when I submit the form, it gets submitted to another URL - http://10.1.2.87. Since the URL is changing, the application server thinks it is a different session and throws the login page again. (The login page comes through a filter which checks whether the user is already authenticated in the session or not.) I do not have much knowledge of proxy settings .. where do you think the problem is?
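    If the proxy in front is Apache mod_proxy (an assumption, the question does not say), the usual cure is to preserve the original Host header on the way in and rewrite Location/redirect headers on the way back, so the backend's internal IP never reaches the browser. A hedged sketch, using the IP from the question:

        ProxyPreserveHost On
        ProxyPass        / http://10.1.2.87/
        ProxyPassReverse / http://10.1.2.87/

    It is also worth checking whether the form's action attribute is generated as an absolute URL by the application itself; a relative action avoids the problem regardless of the proxy.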

    Read the article

  • What is the proper way to handle a fully qualified domain in a GET request?

    - by Mark P Neyer
    I'm writing a proxy server. When I use curl to fetch a page, say http://www.foo.com/pants, curl makes the following request: GET /pants HTTP/1.1 When I have curl send that request through my local proxy, curl changes the GET request to: GET http://www.foo.com/pants HTTP/1.1 This change causes the foo.com server return a 404. Is foo.com broken? Or is the fully qualified domain name only meaningful to proxy servers? Should I always strip http://domain from the requests I send out? Thanks!
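    For context, the absolute form ("GET http://host/path") is exactly what clients are supposed to send to a proxy, and the HTTP spec says origin servers should accept it as well, but in practice many do not, so a proxy conventionally rewrites it to origin form before forwarding. Roughly (a sketch of the transformation, not a spec quote):

        GET http://www.foo.com/pants HTTP/1.1      <- received from the client

        GET /pants HTTP/1.1                        <- sent to www.foo.com
        Host: www.foo.com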

    Read the article

  • Why - Could not find worker with name 'jk-manager' in uri map post processing?

    - by Hardbone
    I am using apache2 + mod_jk (AJP protocol) + tomcat7, but I always get the errors below:

        [Sat Mar 30 17:30:54.691 2013] [25238:3074365824] [info] init_jk::mod_jk.c (3365): mod_jk/1.2.37 initialized
        [Sat Mar 30 17:30:54.691 2013] [25238:3074365824] [error] extension_fix::jk_uri_worker_map.c (564): Could not find worker with name 'jk-manager' in uri map post processing.
        [Sat Mar 30 17:30:54.691 2013] [25238:3074365824] [error] extension_fix::jk_uri_worker_map.c (564): Could not find worker with name 'jk-status' in uri map post processing.

    Any clue?
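    That particular message usually means the JkMount rules (often ones shipped in the distribution's default jk.conf) reference workers named jk-manager and jk-status that are never defined in workers.properties. A hedged sketch of the missing definitions, with the worker names taken from the error itself and the ajp13 worker purely illustrative:

        worker.list=jk-manager,jk-status,worker1
        worker.jk-manager.type=status
        worker.jk-status.type=status
        worker.jk-status.read_only=true
        # illustrative application worker
        worker.worker1.type=ajp13
        worker.worker1.host=127.0.0.1
        worker.worker1.port=8009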

    Read the article

  • Apache2 - mod_expire and mod_rewrite not working in httpd.conf - serving content from tomcat

    - by Ankit Agrawal
    Hi, I am using an apache2 server running on Debian which forwards all HTTP requests to Tomcat installed on the same machine. I have two files under my /etc/apache2/ folder, apache2.conf and httpd.conf. I modified the httpd.conf file to look like the following:

        # forward all http request on port 80 to tomcat
        ProxyPass / ajp://127.0.0.1:8009/
        ProxyPassReverse / ajp://127.0.0.1:8009/

        # gzip text content
        AddOutputFilterByType DEFLATE text/plain
        AddOutputFilterByType DEFLATE text/html
        AddOutputFilterByType DEFLATE text/xml
        AddOutputFilterByType DEFLATE text/css
        AddOutputFilterByType DEFLATE text/javascript
        AddOutputFilterByType DEFLATE application/xml
        AddOutputFilterByType DEFLATE application/xhtml+xml
        AddOutputFilterByType DEFLATE application/rss+xml
        AddOutputFilterByType DEFLATE application/javascript
        AddOutputFilterByType DEFLATE application/x-javascript
        DeflateCompressionLevel 9
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

        # Turn on Expires and mark all static content to expire in a week
        # unset last modified and ETag
        ExpiresActive On
        ExpiresDefault A0
        <FilesMatch "\.(jpg|jpeg|png|gif|js|css|ico)$">
            ExpiresDefault A604800
            Header unset Last-Modified
            Header unset ETag
            FileETag None
            Header append Cache-Control "max-age=604800, public"
        </FilesMatch>

        RewriteEngine On
        # rewrite all www.example.com/content/XXX-01.js and YYY-01.css files to XXX.js and YYY.css
        RewriteRule ^content/(js|css)/([a-z]+)-([0-9]+)\.(js|css)$ /content/$1/$2.$4
        # remove all query parameters from URL after we are done with it
        RewriteCond %{THE_REQUEST} ^GET\ /.*\;.*\ HTTP/
        RewriteCond %{QUERY_STRING} !^$
        RewriteRule .* http://example.com%{REQUEST_URI}? [R=301,L]
        # rewrite all www.example.com to example.com
        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]

    I want to achieve the following:
    1. forward all traffic to Tomcat
    2. gzip all the text content
    3. put a 1 week expiry header on all static files and unset the ETag/Last-Modified headers
    4. rewrite all js and css files to a certain format
    5. remove all the query parameters from the URL
    6. forward all www.example.com to example.com

    The problem is only 1 and 2 are working. I tried a lot of combinations but the expires and rewrite rules (3-6) do not work at all. I also tried moving these rules to apache2.conf and .htaccess files but that didn't work either. It does not give any error; these rules are simply ignored. The expires and rewrite modules are ENABLED. Please let me know what I should do to fix this:
    1. Do I need to add something else in the httpd.conf file (like Options +FollowSymLink) or something else?
    2. Do I need to add something in the apache2.conf file?
    3. Do I need to move these rules to a .htaccess file? If yes, what should I write in that file and where should I keep it - in the /etc/apache2/ folder or the /var/www/ folder?
    4. Any other info to make this work?
    Thanks, Ankit
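    One likely reason for 3-6 being ignored, offered as a hedged guess rather than a certainty: with ProxyPass / everything is handed to Tomcat, so <FilesMatch> (which applies to files Apache maps onto the local filesystem) never fires for the proxied responses, and rewrites targeting local /content paths get proxied away as well. A sketch using URL-based matching plus a proxy exclusion instead, paths taken from the question:

        # exclusions must appear before the general ProxyPass /
        ProxyPass /content !

        <LocationMatch "\.(jpg|jpeg|png|gif|js|css|ico)$">
            ExpiresDefault A604800
            Header unset Last-Modified
            Header unset ETag
            Header append Cache-Control "max-age=604800, public"
        </LocationMatch>

    The exclusion assumes the static files actually exist under Apache's DocumentRoot; if they only live inside the Tomcat webapp, <LocationMatch> alone is the part worth trying.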

    Read the article

  • Unable to install mod_wsgi on CentOS 5.5 VPS...

    - by jasonaburton
    I am trying to install mod_wsgi on my VPS, but it won't work. This is what I am doing:

        wget http://modwsgi.googlecode.com/files/mod_wsgi-2.5.tar.gz
        tar xzvf mod_wsgi-2.5.tar.gz
        cd mod_wsgi-2.5
        ./configure --with-python=/opt/python2.5/bin/python

    After I run the above command, I get this error:

        checking for apxs2... no
        checking for apxs... no
        checking Apache version... ./configure: line 1298: apxs: command not found
        ./configure: line 1298: apxs: command not found
        ./configure: line 1299: /: is a directory
        ./configure: line 1461: apxs: command not found
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: error: cannot find input file: Makefile.in

    Through some research I've discovered that I need to modify my command:

        ./configure --with-apxs=/usr/local/apache/bin/apxs \
                    --with-python=/usr/local/bin/python

    But /usr/local/apache/ doesn't exist, or so it tells me. If it doesn't exist, how do I create it with all the files needed, or if apache is located elsewhere on my VPS, where would it be? I'd also like to mention that I ran a command to install apache before this entire deal:

        yum install httpd

    so I assumed that was all I needed, but apparently not (I am very new at all this server administration stuff, so please be gentle). EDIT: This is the tutorial that I have been using to get this all set up: http://binarysushi.com/blog/2009/aug/19/CentOS-5-3-python-2-5-virtualevn-mod-wsgi-and-mod-rpaf/ I got stuck at the heading "Installing mod_wsgi". Thanks for any help!
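    apxs comes from Apache's development package, which a plain "yum install httpd" does not pull in; there is no need for anything under /usr/local/apache. On CentOS it normally ends up in /usr/sbin, so something along these lines should work (sketch):

        yum install httpd-devel
        which apxs        # typically /usr/sbin/apxs on CentOS
        ./configure --with-apxs=/usr/sbin/apxs --with-python=/opt/python2.5/bin/python
        make && make install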

    Read the article

  • Connection reset by peer: mod_fcgid: error reading data from FastCGI server Issues

    - by user145857
    Help is greatly needed for our server. We are experiencing random "Connection reset by peer: mod_fcgid: error reading data from FastCGI server" errors which cause a 500 internal server error. If the page is then reloaded, it loads normally as it should. We are running MPM Worker with mod_fcgid to handle PHP. We had the APC cache enabled but disabled it recently to see if it would fix the problem; the random mod_fcgid errors are still continuing, and no other opcode cache is active now. Our settings are below:

        <IfModule worker.c>
            MinSpareThreads      25
            MaxSpareThreads      150
            ThreadsPerChild      25
            ThreadLimit          100
            ServerLimit          700
            MaxClients           700
            MaxRequestsPerChild  0
        </IfModule>

        <IfModule mod_fcgid.c>
            FcgidMaxRequestLen 1073741824
            FcgidMaxRequestsPerProcess 2000
            FcgidMaxProcessesPerClass 100
            FcgidMinProcessesPerClass 0
            FcgidConnectTimeout 300
            FcgidIOTimeout 900
            FcgidFixPathinfo 1
            FcgidIdleTimeout 300
            FcgidIdleScanInterval 120
            FcgidBusyTimeout 300
            FcgidBusyScanInterval 120
            FcgidErrorScanInterval 12
            FcgidZombieScanInterval 12
            FcgidProcessLifeTime 3600
        </IfModule>

    The server is a 64 core, 2.1 GHz, 94 GB RAM machine, so it has some power. Some of the fcgid timeout settings are higher because we run large reports which take up to 15 minutes. Any help is greatly appreciated! Just to clarify, the random fcgid errors occur when a user clicks a page on our site and the 500 error page loads instantly. This is random and occurs on less than 1% of requests, but it is still an issue.
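    One classic cause of exactly this intermittent error, worth ruling out (a hedged guess, not a diagnosis): PHP's own PHP_FCGI_MAX_REQUESTS (default 500) being lower than FcgidMaxRequestsPerProcess (2000 here), so the php-cgi child exits on its own while mod_fcgid still tries to talk to it, producing a reset. A sketch of a wrapper script that keeps the two limits aligned, assuming php-cgi lives at /usr/bin/php-cgi:

        #!/bin/sh
        # keep PHP's request limit >= FcgidMaxRequestsPerProcess so children are not reaped mid-request
        PHP_FCGI_MAX_REQUESTS=2000
        export PHP_FCGI_MAX_REQUESTS
        exec /usr/bin/php-cgi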

    Read the article

  • Installing mod_pagespeed (Apache module) on CentOS

    - by Sid B
    I have a CentOS (5.7 Final) system on which I already have Apache (2.2.3) installed. I have installed mod_pagespeed by following the instructions on http://code.google.com/speed/page-speed/download.html and got the following while installing:

        # rpm -U mod-pagespeed-*.rpm
        warning: mod-pagespeed-beta_current_x86_64.rpm: Header V4 DSA signature: NOKEY, key ID 7fac5991
        [ OK ]
        atd: [ OK ]

    It does appear to be installed properly:

        # apachectl -t -D DUMP_MODULES
        Loaded Modules:
        ...
        pagespeed_module (shared)

    And I've made the following changes in /etc/httpd/conf.d/pagespeed.conf. Added:

        ModPagespeedEnableFilters collapse_whitespace,elide_attributes
        ModPagespeedEnableFilters combine_css,rewrite_css,move_css_to_head,inline_css
        ModPagespeedEnableFilters rewrite_javascript,inline_javascript
        ModPagespeedEnableFilters rewrite_images,insert_img_dimensions
        ModPagespeedEnableFilters extend_cache
        ModPagespeedEnableFilters remove_quotes,remove_comments
        ModPagespeedEnableFilters add_instrumentation

    Commented out the following lines in the mod_pagespeed_statistics block:

        <Location /mod_pagespeed_statistics>
            # Order allow,deny
            # You may insert other "Allow from" lines to add hosts you want to
            # allow to look at generated statistics. Another possibility is
            # to comment out the "Order" and "Allow" options from the config
            # file, to allow any client that can reach your server to examine
            # statistics. This might be appropriate in an experimental setup or
            # if the Apache server is protected by a reverse proxy that will
            # filter URLs in some fashion.
            # Allow from localhost
            # Allow from 127.0.0.1
            SetHandler mod_pagespeed_statistics
        </Location>

    As a separate note, I'm trying to run the prescribed system tests as specified on google's site, but it gives the following error. I'm averse to updating wget on my server, as I'm sure there's no need for it for the actual module to function correctly.

        ./system_test.sh www.domain.com
        You have the wrong version of wget. 1.12 is required.
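    Rather than fighting system_test.sh and its wget version check, a quick hedged way to confirm the module is actually rewriting responses is to look for the response header it adds (assuming a default configuration that still sends one):

        curl -s -D - -o /dev/null http://www.domain.com/ | grep -i x-mod-pagespeed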

    Read the article

  • hosting 2 webapps under 1 apache/tomcat

    - by mkoryak
    I am trying to host multiple webapps under Tomcat 6 behind Apache2 via mod_jk. I am at my wits' end with this. The problem I am facing is that both domains seem to point to a single Tomcat 'domain'. My server.xml looks like this:

        <Service name="Catalina">
            <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000"
                       URIEncoding="UTF-8" redirectPort="8443" />
            <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
            <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />
            <Engine name="Catalina" defaultHost="dogself.com">
                <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
                <Host name="dogself.com" appBase="webapps-dogself"
                      unpackWARs="true" autoDeploy="true"
                      xmlValidation="false" xmlNamespaceAware="false">
                </Host>
                <Host name="natashacarter.com" appBase="webapps-natashacarter.com"
                      unpackWARs="true" autoDeploy="true"
                      xmlValidation="false" xmlNamespaceAware="false">
                </Host>
            </Engine>
        </Service>

    My workers.properties looks like this:

        worker.list=dogself,natashacarter
        worker.dogself.port=8009
        worker.dogself.host=dogself.com
        worker.dogself.type=ajp13
        worker.natashacarter.port=8010
        worker.natashacarter.host=natashacarter.com
        worker.natashacarter.type=ajp13

    Finally, my apache vhosts look like this:

        <VirtualHost 69.164.218.75:80>
            ServerName dogself.com
            DocumentRoot /srv/www/dogself.com/public_html/
            ErrorLog /srv/www/dogself.com/logs/error.log
            CustomLog /srv/www/dogself.com/logs/access.log combined
            JkMount /* dogself
        </VirtualHost>

    and

        <VirtualHost 69.164.218.75:80>
            ServerName natashacarter.com
            DocumentRoot /srv/www/dogself.com/public_html/
            ErrorLog /srv/www/dogself.com/logs/error.log
            CustomLog /srv/www/dogself.com/logs/access.log combined
            JkMount /* natashacarter
        </VirtualHost>

    When I log into the manager webapp on both dogself.com and natashacarter.com, I can deploy to a context path on dogself, and that same context path will appear on natashacarter - so I know for a fact that this is the same Tomcat domain. Edit: just found this in my mod_jk log:

        [Sun Feb 20 21:15:43 2011] [28546:3075521168] [warn] map_uri_to_worker_ext::jk_uri_worker_map.c (962): Uri * is invalid. Uri must start with /
        [Sun Feb 20 21:16:44 2011] [28548:3075521168] [info] ajp_send_request::jk_ajp_common.c (1496): (dogself) all endpoints are disconnected, detected by connect check (1), cping (0), send (0)

    but not sure why dogself wouldn't respond. Please help a brother out.
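    Two details that may be worth checking, both offered as assumptions rather than certainties: the workers talk to AJP connectors on the same machine, so their host entries should probably be 127.0.0.1 rather than the public domain names, and the "Uri * is invalid" warning suggests a JkMount pattern of plain "*" somewhere (patterns must start with /, as in /*). A sketch of the workers change:

        worker.dogself.host=127.0.0.1
        worker.natashacarter.host=127.0.0.1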

    Read the article

  • ubuntu/apt-get update said "Failed to Fetch http:// .... 404 not found"

    - by lindenb
    Hi all, I'm trying to run apt-get update on ubuntu 9.10 I've configured my proxy server and I can access the internet without any problem: /etc/apt# wget "http://www.google.com" Resolving (...) Proxy request sent, awaiting response... 200 OK Length: 292 [text/html] Saving to: `index.html' 100%[=================================================================================================================================>] 292 --.-K/s in 0s 2010-04-02 17:20:33 (29.8 MB/s) - `index.html' saved [292/292] But when I tried to use apt-get I got the following message: Ign http://archive.ubuntu.com karmic Release.gpg Ign http://ubuntu.univ-nantes.fr karmic Release.gpg Ign http://ubuntu.univ-nantes.fr karmic/main Translation-en_US Ign http://ubuntu.univ-nantes.fr karmic/restricted Translation-en_US Ign http://archive.ubuntu.com karmic Release Ign http://ubuntu.univ-nantes.fr karmic/multiverse Translation-en_US Ign http://ubuntu.univ-nantes.fr karmic/universe Translation-en_US Ign http://ubuntu.univ-nantes.fr karmic-updates Release.gpg Ign http://archive.ubuntu.com karmic/main Sources Ign http://ubuntu.univ-nantes.fr karmic-updates/main Translation-en_US Ign http://ubuntu.univ-nantes.fr karmic-updates/restricted Translation-en_US Ign http://ubuntu.univ-nantes.fr karmic-updates/multiverse Translation-en_US Ign http://archive.ubuntu.com karmic/restricted Sources Ign http://ubuntu.univ-nantes.fr karmic-updates/universe Translation-en_US Ign http://ubuntu.univ-nantes.fr karmic-security Release.gpg Ign http://archive.ubuntu.com karmic/main Sources Ign http://ubuntu.univ-nantes.fr karmic-security/main Translation-en_US Ign http://ubuntu.univ-nantes.fr karmic-security/restricted Translation-en_US Ign http://ubuntu.univ-nantes.fr karmic-security/multiverse Translation-en_US Ign http://archive.ubuntu.com karmic/restricted Sources Ign http://ubuntu.univ-nantes.fr karmic-security/universe Translation-en_US Ign http://ubuntu.univ-nantes.fr karmic Release Err http://archive.ubuntu.com karmic/main Sources 404 Not Found Ign http://ubuntu.univ-nantes.fr karmic-updates Release Ign http://ubuntu.univ-nantes.fr karmic-security Release Err http://archive.ubuntu.com karmic/restricted Sources 404 Not Found Ign http://ubuntu.univ-nantes.fr karmic/main Packages Ign http://ubuntu.univ-nantes.fr karmic/restricted Packages Ign http://ubuntu.univ-nantes.fr karmic/multiverse Packages Ign http://ubuntu.univ-nantes.fr karmic/restricted Sources Ign http://ubuntu.univ-nantes.fr karmic/main Sources Ign http://ubuntu.univ-nantes.fr karmic/universe Sources Ign http://ubuntu.univ-nantes.fr karmic/universe Packages Ign http://ubuntu.univ-nantes.fr karmic-updates/main Packages Ign http://ubuntu.univ-nantes.fr karmic-updates/restricted Packages Ign http://ubuntu.univ-nantes.fr karmic-updates/multiverse Packages Ign http://ubuntu.univ-nantes.fr karmic-updates/restricted Sources Ign http://ubuntu.univ-nantes.fr karmic-updates/main Sources Ign http://ubuntu.univ-nantes.fr karmic-updates/universe Sources Ign http://ubuntu.univ-nantes.fr karmic-updates/universe Packages Ign http://ubuntu.univ-nantes.fr karmic-security/main Packages Ign http://ubuntu.univ-nantes.fr karmic-security/restricted Packages Ign http://ubuntu.univ-nantes.fr karmic-security/multiverse Packages Ign http://ubuntu.univ-nantes.fr karmic-security/restricted Sources Ign http://ubuntu.univ-nantes.fr karmic-security/main Sources Ign http://ubuntu.univ-nantes.fr karmic-security/universe Sources Ign http://ubuntu.univ-nantes.fr karmic-security/universe Packages Ign 
http://ubuntu.univ-nantes.fr karmic/main Packages Ign http://ubuntu.univ-nantes.fr karmic/restricted Packages Ign http://ubuntu.univ-nantes.fr karmic/multiverse Packages Ign http://ubuntu.univ-nantes.fr karmic/restricted Sources Ign http://ubuntu.univ-nantes.fr karmic/main Sources Ign http://ubuntu.univ-nantes.fr karmic/universe Sources Ign http://ubuntu.univ-nantes.fr karmic/universe Packages Ign http://ubuntu.univ-nantes.fr karmic-updates/main Packages Ign http://ubuntu.univ-nantes.fr karmic-updates/restricted Packages Ign http://ubuntu.univ-nantes.fr karmic-updates/multiverse Packages Ign http://ubuntu.univ-nantes.fr karmic-updates/restricted Sources Ign http://ubuntu.univ-nantes.fr karmic-updates/main Sources Ign http://ubuntu.univ-nantes.fr karmic-updates/universe Sources Ign http://ubuntu.univ-nantes.fr karmic-updates/universe Packages Ign http://ubuntu.univ-nantes.fr karmic-security/main Packages Ign http://ubuntu.univ-nantes.fr karmic-security/restricted Packages Ign http://ubuntu.univ-nantes.fr karmic-security/multiverse Packages Ign http://ubuntu.univ-nantes.fr karmic-security/restricted Sources Ign http://ubuntu.univ-nantes.fr karmic-security/main Sources Ign http://ubuntu.univ-nantes.fr karmic-security/universe Sources Ign http://ubuntu.univ-nantes.fr karmic-security/universe Packages Err http://ubuntu.univ-nantes.fr karmic/main Packages 404 Not Found Err http://ubuntu.univ-nantes.fr karmic/restricted Packages 404 Not Found Err http://ubuntu.univ-nantes.fr karmic/multiverse Packages 404 Not Found Err http://ubuntu.univ-nantes.fr karmic/restricted Sources 404 Not Found Err http://ubuntu.univ-nantes.fr karmic/main Sources 404 Not Found Err http://ubuntu.univ-nantes.fr karmic/universe Sources 404 Not Found Err http://ubuntu.univ-nantes.fr karmic/universe Packages 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-updates/main Packages 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-updates/restricted Packages 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-updates/multiverse Packages 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-updates/restricted Sources 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-updates/main Sources 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-updates/universe Sources 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-updates/universe Packages 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-security/main Packages 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-security/restricted Packages 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-security/multiverse Packages 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-security/restricted Sources 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-security/main Sources 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-security/universe Sources 404 Not Found Err http://ubuntu.univ-nantes.fr karmic-security/universe Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/karmic/main/source/Sources.gz 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/karmic/restricted/source/Sources.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/main/binary-i386/Packages.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/restricted/binary-i386/Packages.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/multiverse/binary-i386/Packages.gz 404 Not Found W: Failed to fetch 
http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/restricted/source/Sources.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/main/source/Sources.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/universe/source/Sources.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/universe/binary-i386/Packages.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/main/binary-i386/Packages.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/restricted/binary-i386/Packages.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/multiverse/binary-i386/Packages.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/restricted/source/Sources.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/main/source/Sources.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/universe/source/Sources.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/universe/binary-i386/Packages.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/main/binary-i386/Packages.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/restricted/binary-i386/Packages.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/multiverse/binary-i386/Packages.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/restricted/source/Sources.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/main/source/Sources.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/universe/source/Sources.gz 404 Not Found W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/universe/binary-i386/Packages.gz 404 Not Found apt.conf However I can 'see' those files with firefox. more /etc/apt/apt.conf Acquire::http::proxy "http://www.myproxyname.fr:3128"; I also tried with port '80', or with a blank /etc/apt/apt.conf source.list grep -v "#" /etc/apt/sources.list deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic main restricted multiverse deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic-updates main restricted multiverse deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic universe deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic-updates universe deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic-security main restricted multiverse deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic-security universe does anyone knows how to fix this ? Thanks Pierre
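    Since the browser can fetch these files but apt cannot, a useful first step is to request one of the exact failing URLs with the same proxy apt uses and look at the response headers, to see whether the 404 comes from the mirror or from the proxy (URL taken from the error output above, proxy from apt.conf):

        http_proxy=http://www.myproxyname.fr:3128 \
          wget -S http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/main/binary-i386/Packages.gz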

    Read the article

  • Accessing Squid Proxy over internet

    - by prateekdayal
    Hi, I recently finished installing Squid on a VPS I have in the US and it's working fine locally (I verified this by setting the http_proxy variable and using lynx). I want to access this proxy over the internet (as an anonymizer) so that I can see how some ads show up for US traffic on my website. I have set up authentication, so abuse is not a problem. However, I am not able to access the proxy over the internet. I have set the following rule in squid.conf:

        http_access allow all

    Is what I want not possible, or am I missing something? Port 3128 is open in the firewall, so that is not the issue, and Squid is listening on 0.0.0.0. Thanks, Prateek
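    A quick way to see what a remote client actually gets back (an authentication challenge, a connection refused, or a Squid access-denied page) is to hit the proxy from outside with curl, using the credentials configured in Squid (placeholders below):

        curl -v -x http://user:pass@your-vps-ip:3128 -I http://www.example.com/

    It is also worth remembering that Squid evaluates http_access rules top to bottom and stops at the first match, so an "allow all" placed after the default deny rules never takes effect.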

    Read the article

  • Safari can’t establish a secure connection to the server

    - by Haris
    I am using Mac OS X 10.5.8 behind a company firewall and have proxy settings and username / password through which I can connect to internet. The internet is working as I am posting this question through it, but if I try to open Facebook or Gmail the following message appears: Safari can’t open the page “https://www.google.com/accounts/ServiceLogin?[..]” because Safari can’t establish a secure connection to the server “www.google.com” What could be wrong?

    Read the article

  • Nginx: Can I cache a URL matching a pattern at a different URL?

    - by Josh French
    I have a site with some URLs that look like this: /prefix/ID, where /prefix is static and ID is unique. Using Nginx as a reverse proxy, I'd like to cache these pages at the /ID portion only, omitting the prefix. Can I configure Nginx so that a request for the original URL is cached at the shortened URL? I tried this (I'm omitting some irrelevant parts) but obviously it's not the correct solution:

        http {
            map $request_uri $page_id {
                default $request_uri;
                ~^/prefix/(?<id>.+)$ $id;
            }
            location / {
                proxy_cache_key $page_id;
            }
        }
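    The map-to-variable approach should work for the key itself; the part that is easy to miss is that proxy_cache_key only matters where a proxy_cache zone is active on the proxied location. A hedged sketch combining the two (the zone name "pages" and the upstream are made up here):

        http {
            proxy_cache_path /var/cache/nginx/pages keys_zone=pages:10m;
            map $request_uri $page_id {
                default $request_uri;
                ~^/prefix/(?<id>.+)$ $id;
            }
            server {
                location /prefix/ {
                    proxy_pass http://backend;     # hypothetical upstream
                    proxy_cache pages;
                    proxy_cache_key $page_id;
                }
            }
        }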

    Read the article

  • How do I get rid of HTTP_CACHE_CONTROL header in squid 3?

    - by Arsen Zahray
    I'm trying to configure an anonymous proxy using Squid. I've set "forwarded_for delete" and "via delete", but Squid 3 still adds another header to the web requests that go through it: HTTP_CACHE_CONTROL = max-age=259200. I've tried "cache_control delete" but that doesn't work. How do I get rid of Squid's Cache-Control header? I don't want it to interfere with the actual web requests that contain a Cache-Control header; I just want Squid not to attach its own.
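    In Squid 3.1 and later, request and reply header filtering are separate directives, so (a hedged suggestion, and assuming the header really is being added by Squid rather than arriving from the client) something like this in squid.conf should strip it from forwarded requests:

        request_header_access Cache-Control deny all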

    Read the article

  • How to configure nginx so it works with Express?

    - by Michal Stefanow
    I'm trying to configure nginx so it proxy_passes requests to my node apps. A question on StackOverflow got many upvotes: http://stackoverflow.com/questions/5009324/node-js-nginx-and-now and I'm using the config from there. (But since this question is about server configuration it is supposed to be on Server Fault.) Here is the nginx configuration:

        server {
            listen 80;
            listen [::]:80;
            root /var/www/services.stefanow.net/public_html;
            index index.html index.htm;
            server_name services.stefanow.net;
            location / {
                try_files $uri $uri/ =404;
            }
            location /test-express {
                proxy_pass http://127.0.0.1:3002;
            }
            location /test-http {
                proxy_pass http://127.0.0.1:3003;
            }
        }

    Using plain node:

        var http = require('http');
        http.createServer(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/plain'});
            res.end('Hello World\n');
        }).listen(3003, '127.0.0.1');
        console.log('Server running at http://127.0.0.1:3003/');

    It works! Check: http://services.stefanow.net/test-http

    Using express:

        var express = require('express');
        var app = express();
        // app.get('/', function(req, res) { res.redirect('/index.html'); });
        app.get('/index.html', function(req, res) {
            res.send("blah blah index.html");
        });
        app.listen(3002, "127.0.0.1");
        console.log('Server running at http://127.0.0.1:3002/');

    It doesn't work :( See: http://services.stefanow.net/test-express

    I know that something is going on: a) test-express is NOT running, b) test-express is running (and I can confirm it is running via the command line while ssh'd into the server):

        root@stefanow:~# service nginx restart
         * Restarting nginx nginx          [ OK ]
        root@stefanow:~# curl localhost:3002
        Moved Temporarily. Redirecting to /index.html
        root@stefanow:~# curl localhost:3002/index.html
        blah blah index.html

    I tried setting headers as described here: http://www.nginxtips.com/how-to-setup-nginx-as-proxy-for-nodejs/ (still doesn't work):

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

    I also tried replacing '127.0.0.1' with 'localhost' and vice versa. Please advise. I'm pretty sure I'm missing some obvious detail and I would like to learn more. Thank you.
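    The difference between the two endpoints is what URI reaches the app: with no URI part on proxy_pass, nginx forwards /test-express/... verbatim, and while the plain node server answers any path, the Express app only defines / and /index.html, so /test-express/index.html 404s. Two hedged fixes - either strip the prefix in nginx:

        # the trailing slashes make nginx replace /test-express/ with / before proxying
        location /test-express/ {
            proxy_pass http://127.0.0.1:3002/;
        }

    or mount the route under the prefix in the Express app:

        app.get('/test-express/index.html', function(req, res) {
            res.send("blah blah index.html");
        });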

    Read the article

  • Boost::Thread or fork() : Multithreaded HTTP Proxy

    - by osmano807
    I'm testing boost::thread on a system. It turns out I need fork()-like behaviour, because one thread modifies the other's variables, even class member variables. Should I do the project using fork(), or is there some alternative that still uses boost::thread? Basically I run this program on Linux and maybe FreeBSD. It is an HTTP proxy: accept() runs in the main thread, and a function that takes a class (which holds the socket file descriptor) runs in a secondary thread to service the request. Is there a better way to implement a proxy?
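    boost::thread can be made to behave "fork-like" with respect to data as long as each connection owns its own state object instead of the threads sharing members. A minimal hedged sketch, assuming a Connection struct that carries the accepted socket descriptor (names are illustrative):

        #include <boost/thread.hpp>
        #include <boost/shared_ptr.hpp>

        struct Connection {
            int fd;              // accepted socket
            // all per-request state lives here, never in globals or shared members
        };

        void serve(boost::shared_ptr<Connection> conn) {
            // handle the proxied request using only conn's own data
        }

        // accept loop (sketch):
        //   boost::shared_ptr<Connection> conn(new Connection());
        //   conn->fd = accept(listen_fd, NULL, NULL);
        //   boost::thread(serve, conn).detach();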

    Read the article

  • Using OpenVPN, yet netflix.com blocks access

    - by user837848
    I have set up an OpenVPN server on a VPS in the USA and configured it to route all clients traffic through it. Everything seems to work fine regarding the VPN connection in gerneral. All ip lookup sites show me the us server's ip address and even hulu.com works(it won't work if you are not in the usa). But for some reason netflix.com says "Sorry, Netflix is not available in your country yet.". So I thought that netflix probably uses some more sophisticated ways to determine your location beyond just your ip address. But I could not find a way to get it to work until I dropped the idea of using a VPN and instead connected to the server via a simple socks tunnel with ssh by running: ssh -D 9999 user@serverip All I had to do was changing the key network.proxy.socks_remote_dns in Firefox from false to true to prevent DNS leaks and setting up the socks proxy. Then I could finally watch netflix.com. As a result I concluded that there is nothing in the browser(or something like system timezone) that tells netflix the location, so it has to have something to do with the OpenVPN config. After that I used tcpdump to log all the traffic on the server's network interface venet0 (OpenVZ VPS), visited netflix.com on the client while first connected to the VPN and then connected via socks tunnel and afterwards compared both outputs. The only thing that caught my eye was that while using the socks tunnel the server mainly used ipv6 to connect to netflix whereas it only used ipv4 when the client was connected to the OpenVPN server. But I don't get how that could make such a difference. So what am I missing? Is there a way to configure OpenVPN to also use ipv6 to connect to a website although there is only an ipv4 connection between the VPS and the client? Here is the server.conf of the OpenVPN server (OpenVZ VPS) local serverip port 443 proto tcp dev tun ca ./easy-rsa2/keys/ca.crt cert ./easy-rsa2/keys/vps1.crt key ./easy-rsa2/keys/vps1.key # This file should be kept secret dh ./easy-rsa2/keys/dh1024.pem server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 8.8.8.8" push "dhcp-option DNS 8.8.4.4" client-to-client keepalive 10 120 tls-auth ta.key 0 # This file is secret cipher AES-256-CBC comp-lzo max-clients 4 user nobody group nogroup persist-key persist-tun status openvpn-status.log log-append openvpn.log verb 3 iptables forwarding iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j SNAT --to-source serverip (enabled ipv4 forwarding) I have tried everything always on a Win7 and a Debian client with only ipv4 connections and always made sure that they use the correct DNS server (tested with ipleak.net and tcpdump / wireshark). client.conf: client dev tun proto tcp remote serverip 443 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert client.crt key client.key ns-cert-type server tls-auth ta.key 1 cipher AES-256-CBC comb-lzo verb 3

    Read the article
