Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • HTML form submits and the hostname changes to an IP address

    - by Shamik
    I am facing a peculiar problem. My webapp is installed behind a proxy: the request gets submitted to the proxy, which forwards it to the host actually running the WebSphere web application. When I access the webapp, its URL looks like this: http://www.myproxy.com. Let's say I get a form on this URL; when I submit the form, it gets submitted to another URL, http://10.1.2.87. Since the URL changes, the application server thinks it is a different session and throws up the login page again. The login page comes through a filter which checks whether the user is already authenticated in the session. I do not have much knowledge of proxy settings. Where do you think the problem is?
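    A minimal sketch of one common fix, assuming the proxy is Apache httpd (the question doesn't say which proxy is in use): if the backend builds absolute URLs from the Host header, ProxyPreserveHost keeps the public hostname, and ProxyPassReverse rewrites redirect headers. The backend address is taken from the question; everything else is illustrative.

        ProxyPreserveHost On                     # backend sees www.myproxy.com, not its own IP
        ProxyPass        / http://10.1.2.87/
        ProxyPassReverse / http://10.1.2.87/     # rewrite Location: headers in redirects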

    Read the article

  • What is the proper way to handle a fully qualified domain in a GET request?

    - by Mark P Neyer
    I'm writing a proxy server. When I use curl to fetch a page, say http://www.foo.com/pants, curl makes the following request: GET /pants HTTP/1.1. When I have curl send that request through my local proxy, curl changes the GET request to: GET http://www.foo.com/pants HTTP/1.1. This change causes the foo.com server to return a 404. Is foo.com broken? Or is the fully qualified domain name only meaningful to proxy servers? Should I always strip http://domain from the requests I send out? Thanks!
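    For reference, RFC 2616 (section 5.1.2) reserves the absolute-URI form for requests sent to proxies, and says origin servers must accept it too, so foo.com is technically non-compliant; in practice, proxies conventionally rewrite to the path-only form before forwarding to the origin. A rough in-place rewrite of the request line, illustrative C only:

        #include <stdio.h>
        #include <string.h>

        /* Sketch: rewrite an absolute-form request line, which RFC 2616 5.1.2
         * reserves for requests sent to proxies, into the origin-form most
         * servers expect. "GET http://www.foo.com/pants HTTP/1.1" becomes
         * "GET /pants HTTP/1.1". Edits the string in place. */
        static void to_origin_form(char *line) {
            char *uri = strchr(line, ' ');
            if (!uri || strncmp(uri + 1, "http://", 7) != 0)
                return;                                   /* already origin-form */
            char *end  = strchr(uri + 1, ' ');            /* space before "HTTP/1.1" */
            char *path = strchr(uri + 8, '/');            /* first '/' past the host */
            if (!end)
                return;
            if (path && path < end) {
                memmove(uri + 1, path, strlen(path) + 1); /* keep "/pants HTTP/1.1"  */
            } else {
                memmove(uri + 2, end, strlen(end) + 1);   /* host only -> use "/"    */
                uri[1] = '/';
            }
        }

        int main(void) {
            char line[] = "GET http://www.foo.com/pants HTTP/1.1";
            to_origin_form(line);
            puts(line);   /* prints: GET /pants HTTP/1.1 */
            return 0;
        }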

    Read the article

  • Authenticating Apache HTTPd against multiple LDAP servers with expired accounts

    - by Brian Bassett
    We're using mod_authnz_ldap and mod_authn_alias in Apache 2.2.9 (as shipped in Debian 5.0, 2.2.9-10+lenny7) to authenticate against multiple Active Directory domains for hosting a Subversion repository. Our current configuration is:

        # Turn up logging
        LogLevel debug
        # Define authentication providers
        <AuthnProviderAlias ldap alpha>
            AuthLDAPBindDN "CN=Subversion,OU=Service Accounts,O=Alpha"
            AuthLDAPBindPassword [[REDACTED]]
            AuthLDAPURL ldap://dc01.alpha:3268/?sAMAccountName?sub?
        </AuthnProviderAlias>
        <AuthnProviderAlias ldap beta>
            AuthLDAPBindDN "CN=LDAPAuth,OU=Service Accounts,O=Beta"
            AuthLDAPBindPassword [[REDACTED]]
            AuthLDAPURL ldap://ldap.beta:3268/?sAMAccountName?sub?
        </AuthnProviderAlias>
        # Subversion Repository
        <Location /svn>
            DAV svn
            SVNPath /opt/svn/repo
            AuthName "Subversion"
            AuthType Basic
            AuthBasicProvider alpha beta
            AuthzLDAPAuthoritative off
            AuthzSVNAccessFile /opt/svn/authz
            require valid-user
        </Location>

    We're encountering issues with users that have accounts in both Alpha and Beta, especially when their accounts in Alpha are expired (but still present; company policy is that accounts live on for a minimum of 1 year). For example, when user x (who has an expired account in Alpha and a valid account in Beta) tries to authenticate, the Apache error log reports the following:

        [Tue May 11 13:42:07 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14817] auth_ldap authenticate: using URL ldap://dc01.alpha:3268/?sAMAccountName?sub?
        [Tue May 11 13:42:08 2010] [warn] [client 10.1.1.104] [14817] auth_ldap authenticate: user x authentication failed; URI /svn/ [ldap_simple_bind_s() to check user credentials failed][Invalid credentials]
        [Tue May 11 13:42:08 2010] [error] [client 10.1.1.104] user x: authentication failure for "/svn/": Password Mismatch
        [Tue May 11 13:42:08 2010] [debug] mod_deflate.c(615): [client 10.1.1.104] Zlib: Compressed 527 to 359 : URL /svn/

    Attempting to authenticate as a non-existent user (nobodycool) results in the correct behavior of querying both LDAP servers:

        [Tue May 11 13:42:40 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14815] auth_ldap authenticate: using URL ldap://dc01.alpha:3268/?sAMAccountName?sub?
        [Tue May 11 13:42:40 2010] [warn] [client 10.1.1.104] [14815] auth_ldap authenticate: user nobodycool authentication failed; URI /svn/ [User not found][No such object]
        [Tue May 11 13:42:40 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14815] auth_ldap authenticate: using URL ldap://ldap.beta:3268/?sAMAccountName?sub?
        [Tue May 11 13:42:44 2010] [warn] [client 10.1.1.104] [14815] auth_ldap authenticate: user nobodycool authentication failed; URI /svn/ [User not found][No such object]
        [Tue May 11 13:42:44 2010] [error] [client 10.1.1.104] user nobodycool not found: /svn/
        [Tue May 11 13:42:44 2010] [debug] mod_deflate.c(615): [client 10.1.1.104] Zlib: Compressed 527 to 359 : URL /svn/

    How do I configure Apache to correctly query Beta if it encounters an expired account in Alpha?
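    One hedged observation (not from the question): providers listed in AuthBasicProvider are tried in order, but the chain only falls through on "user not found". Alpha does find user x; it is the bind that fails, so the lookup stops there. If the intent is for Alpha to treat such accounts as not found, the fourth field of AuthLDAPURL accepts an extra LDAP filter. The sketch below uses Active Directory's standard bitwise matching rule to exclude disabled accounts; whether your expired accounts carry that userAccountControl bit is an assumption to verify against your domain:

        AuthLDAPURL "ldap://dc01.alpha:3268/?sAMAccountName?sub?(&(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))"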

    Read the article

  • Blade Enclosure, Multiple Blade Servers, What's the closest approximation to a DMZ?

    - by codeulike
    I appreciate that to get a proper DMZ, one should have physical separation between the DMZ servers and the LAN servers, with a firewall server in between. But in a network consisting of a single blade enclosure containing two or more blade servers, what's the closest approximation to a DMZ that could be designed? More details: virtual servers, mostly Windows, running in a VMware environment on the blade servers, and a physical firewall box between the blade enclosure and the internet.

    Read the article

  • Forward differing hostnames to different internal IPs through NAT router

    - by abrereton
    Hi, I have one public IP address, one router and multiple servers behind the router. I would like to forward differing domains (all using HTTP) through the router to different servers. For example:

        example1.com     => 192.168.0.110
        example2.com     => 192.168.0.120
        foo.example2.com => 192.168.0.130
        bar.example2.com => 192.168.0.140

    I understand that this could be accomplished using port forwarding, but I need all hosts running on port 80. I found some information about IP masquerading, but I found it difficult to understand, and I am not sure if it is what I am after. Another solution I have found is to direct all traffic to a reverse proxy server, which forwards the requests on to the appropriate server. What about iptables? I am using a Billion 7404 VNPX router. Is there a feature on this router that can accomplish this? Are these my only options? Have I missed something completely? Is one recommended over the others? I have searched around but I don't think I am hitting the correct keywords. Thanks in advance.
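    The reverse-proxy option is the usual answer here, since plain NAT operates below HTTP and never sees the Host header. A minimal sketch using nginx (the choice of nginx is an assumption; any name-based reverse proxy works), with the hosts from the question; the router then port-forwards 80 to the one box running the proxy:

        server { listen 80; server_name example1.com;     location / { proxy_pass http://192.168.0.110; proxy_set_header Host $host; } }
        server { listen 80; server_name example2.com;     location / { proxy_pass http://192.168.0.120; proxy_set_header Host $host; } }
        server { listen 80; server_name foo.example2.com; location / { proxy_pass http://192.168.0.130; proxy_set_header Host $host; } }
        server { listen 80; server_name bar.example2.com; location / { proxy_pass http://192.168.0.140; proxy_set_header Host $host; } }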

    Read the article

  • How to divert traffic based on hostname using HAProxy?

    - by Bosky
    I've had some initial success with HAProxy, setting up a bunch of app servers listening on various other ports. I now have another webserver listening on one port, and I'd like to know what changes to make to my config to route traffic by hostname as well. The following is the current setup, assuming: my Apache webserver is running at example.com:8001, and my bunch of app servers at 0.0.0.0:8081, 0.0.0.0:8082, 0.0.0.0:8083.

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            debug
            #quiet
            #user haproxy
            #group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen appservers 0.0.0.0:80
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
            server inst2 0.0.0.0:8082 cookie server02 check inter 2000 fall 3
            server inst3 0.0.0.0:8083 cookie server01 check inter 2000 fall 3
            server inst4 0.0.0.0:8084 cookie server02 check inter 2000 fall 3
            capture cookie vgnvisitor= len 32

    (Any other comments on the above setup are welcome.) Now I'd like to keep the same setup as above, but in addition, if the hostname is myspecialtopleveldomain.com, I'd like to route traffic to example.com:8001.
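    A hedged sketch of the usual approach, assuming a HAProxy version with frontend/backend sections (1.3+): split the listen block into a frontend with a Host-header ACL and two backends. The acl/use_backend syntax is real HAProxy; the section names are made up:

        frontend http-in
            bind 0.0.0.0:80
            acl is_special hdr(host) -i myspecialtopleveldomain.com
            use_backend apache if is_special
            default_backend appservers

        backend apache
            server web1 example.com:8001

        backend appservers
            balance roundrobin
            # the existing option/server/capture lines move here unchanged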

    Read the article

  • 3 Servers, is this a cluster?

    - by Andy Barlow
    Hello, at the moment I have one Ubuntu server, 9.10, running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and email. I also have 2 other servers of exactly the same hardware and spec as the first, which have rsync set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. I also tend to find that if people are downloading a large amount from the file server, no one can access their email - especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, in some sort of cluster with load balancing? I'm not really sure where to begin looking, or how to go about a setup where 3 servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

    Read the article

  • Threads, Sockets, and Designing Low-Latency, High Concurrency Servers

    - by lazyconfabulator
    I've been thinking a lot lately about low-latency, high-concurrency servers. Specifically, HTTP servers. HTTP servers (fast ones, anyway) can serve thousands of users simultaneously, with very little latency. So how do they do it? As near as I can tell, they all use events. Cherokee and Lighttpd use libevent. Nginx uses its own event library performing much the same function as libevent, that is, picking a platform-optimal strategy for polling events (like kqueue on *BSD, epoll on Linux, /dev/poll on Solaris, etc.). They all also seem to employ a multiprocess or multithread strategy once the connection is made - using worker threads to handle the more CPU-intensive tasks while another thread continues to listen and handle connections (via events). This is the extent of my understanding and my ability to grok the thousand-line sources of these applications. What I really want are finer details about how this all works. In examples of using events I've seen (and written), the events handle both input and output. To this end, do the workers employ some sort of input/output queue to the event-handling thread? Or are the worker threads handling their own input and output? I imagine a fixed number of worker threads are spawned, and connections are lined up and served on demand, but how does the event thread feed these connections to the workers? I've read about FIFO queues and circular buffers, but I've yet to see any implementations to work from. Are there any? Do any use compare-and-swap instructions to avoid locking, or is locking less detrimental to event polling than I think? Or have I misread the design entirely? Ultimately, I'd like to take enough away to improve some of my own event-driven network services. Bonus points to anyone providing solid implementation details (especially for stuff like low-latency queues) in C, as that's the language my network services are written in.
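    To make the hand-off concrete: the simplest form of the design described above is a mutex-protected FIFO of file descriptors between the event thread and the workers. The sketch below is illustrative C (pthreads), not taken from any of the servers named; lock-free CAS variants exist, but a critical section this short is rarely what limits an event-driven server.

        #include <pthread.h>

        #define QCAP 1024

        typedef struct {
            int fds[QCAP];
            int head, tail, count;
            pthread_mutex_t mu;
            pthread_cond_t nonempty;
        } fd_queue;

        void fdq_init(fd_queue *q) {
            q->head = q->tail = q->count = 0;
            pthread_mutex_init(&q->mu, NULL);
            pthread_cond_init(&q->nonempty, NULL);
        }

        /* Event thread: enqueue a ready connection. This sketch drops the fd
         * if the queue is full; a real server would apply backpressure. */
        void fdq_push(fd_queue *q, int fd) {
            pthread_mutex_lock(&q->mu);
            if (q->count < QCAP) {
                q->fds[q->tail] = fd;
                q->tail = (q->tail + 1) % QCAP;
                q->count++;
                pthread_cond_signal(&q->nonempty);
            }
            pthread_mutex_unlock(&q->mu);
        }

        /* Worker thread: block until a connection is available. */
        int fdq_pop(fd_queue *q) {
            pthread_mutex_lock(&q->mu);
            while (q->count == 0)
                pthread_cond_wait(&q->nonempty, &q->mu);
            int fd = q->fds[q->head];
            q->head = (q->head + 1) % QCAP;
            q->count--;
            pthread_mutex_unlock(&q->mu);
            return fd;
        }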

    Read the article

  • Help, my CentOS servers keep going down, "No route to host" after a random uptime [closed]

    - by user249071
    Hello, I have a couple of CentOS Linux servers with a very simple task: they run nginx + FastCGI for PHP, and some read-only NFS mounts between them. They take some RPC commands from a main server to start download processes with wget; nothing fancy. But their behavior is very unstable - they simply go down. We tried to monitor RAM, processor usage, even network connections, and they don't load up much: 250 network connections max, 15% processor usage, and memory doesn't even fill up (2.5GB of 8GB max). I have no idea why a Linux server would go down like that; they aren't even public servers - no domain names installed, no public serving of sites. The only thing I've discovered is that if I didn't restart the network service every couple of hours or so, the servers became very slow, starting apps very slowly, yet not reporting high resource usage. Maybe CentOS doesn't free timed-out connections, or something like that? It's based on Red Hat, right? I'm not a Linux expert, but I'm sure there are a few guys out there who can easily answer this, or at least have some leads on what I can do. I haven't installed snort or other tools to check whether we have some DoS attacks; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't. Thank you in advance
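    One hedged place to start, given that restarting networking temporarily helps: the kernel's connection-tracking or ARP tables filling up. The commands below are standard, but the exact /proc paths vary by kernel (older Red Hat-based kernels use ip_conntrack under /proc/sys/net/ipv4/netfilter/ instead of nf_conntrack):

        # tracked connections vs. the kernel's limit
        cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max
        # look for table-exhaustion complaints around the time of a hang
        dmesg | grep -i -e "table full" -e "neighbour table overflow"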

    Read the article

  • ubuntu/apt-get update said "Failed to Fetch http:// .... 404 not found"

    - by lindenb
    Hi all, I'm trying to run apt-get update on Ubuntu 9.10. I've configured my proxy server and I can access the internet without any problem:

        /etc/apt# wget "http://www.google.com"
        Resolving (...)
        Proxy request sent, awaiting response... 200 OK
        Length: 292 [text/html]
        Saving to: `index.html'
        100%[==========================>] 292 --.-K/s in 0s
        2010-04-02 17:20:33 (29.8 MB/s) - `index.html' saved [292/292]

    But when I tried to use apt-get I got the following message:

        Ign http://archive.ubuntu.com karmic Release.gpg
        Ign http://ubuntu.univ-nantes.fr karmic Release.gpg
        Ign http://ubuntu.univ-nantes.fr karmic/main Translation-en_US
        Ign http://ubuntu.univ-nantes.fr karmic/restricted Translation-en_US
        Ign http://archive.ubuntu.com karmic Release
        Ign http://ubuntu.univ-nantes.fr karmic/multiverse Translation-en_US
        Ign http://ubuntu.univ-nantes.fr karmic/universe Translation-en_US
        Ign http://ubuntu.univ-nantes.fr karmic-updates Release.gpg
        Ign http://archive.ubuntu.com karmic/main Sources
        Ign http://ubuntu.univ-nantes.fr karmic-updates/main Translation-en_US
        Ign http://ubuntu.univ-nantes.fr karmic-updates/restricted Translation-en_US
        Ign http://ubuntu.univ-nantes.fr karmic-updates/multiverse Translation-en_US
        Ign http://archive.ubuntu.com karmic/restricted Sources
        Ign http://ubuntu.univ-nantes.fr karmic-updates/universe Translation-en_US
        Ign http://ubuntu.univ-nantes.fr karmic-security Release.gpg
        Ign http://archive.ubuntu.com karmic/main Sources
        Ign http://ubuntu.univ-nantes.fr karmic-security/main Translation-en_US
        Ign http://ubuntu.univ-nantes.fr karmic-security/restricted Translation-en_US
        Ign http://ubuntu.univ-nantes.fr karmic-security/multiverse Translation-en_US
        Ign http://archive.ubuntu.com karmic/restricted Sources
        Ign http://ubuntu.univ-nantes.fr karmic-security/universe Translation-en_US
        Ign http://ubuntu.univ-nantes.fr karmic Release
        Err http://archive.ubuntu.com karmic/main Sources  404 Not Found
        Ign http://ubuntu.univ-nantes.fr karmic-updates Release
        Ign http://ubuntu.univ-nantes.fr karmic-security Release
        Err http://archive.ubuntu.com karmic/restricted Sources  404 Not Found
        Ign http://ubuntu.univ-nantes.fr karmic/main Packages
        Ign http://ubuntu.univ-nantes.fr karmic/restricted Packages
        Ign http://ubuntu.univ-nantes.fr karmic/multiverse Packages
        Ign http://ubuntu.univ-nantes.fr karmic/restricted Sources
        Ign http://ubuntu.univ-nantes.fr karmic/main Sources
        Ign http://ubuntu.univ-nantes.fr karmic/universe Sources
        Ign http://ubuntu.univ-nantes.fr karmic/universe Packages
        Ign http://ubuntu.univ-nantes.fr karmic-updates/main Packages
        Ign http://ubuntu.univ-nantes.fr karmic-updates/restricted Packages
        Ign http://ubuntu.univ-nantes.fr karmic-updates/multiverse Packages
        Ign http://ubuntu.univ-nantes.fr karmic-updates/restricted Sources
        Ign http://ubuntu.univ-nantes.fr karmic-updates/main Sources
        Ign http://ubuntu.univ-nantes.fr karmic-updates/universe Sources
        Ign http://ubuntu.univ-nantes.fr karmic-updates/universe Packages
        Ign http://ubuntu.univ-nantes.fr karmic-security/main Packages
        Ign http://ubuntu.univ-nantes.fr karmic-security/restricted Packages
        Ign http://ubuntu.univ-nantes.fr karmic-security/multiverse Packages
        Ign http://ubuntu.univ-nantes.fr karmic-security/restricted Sources
        Ign http://ubuntu.univ-nantes.fr karmic-security/main Sources
        Ign http://ubuntu.univ-nantes.fr karmic-security/universe Sources
        Ign http://ubuntu.univ-nantes.fr karmic-security/universe Packages
        Ign http://ubuntu.univ-nantes.fr karmic/main Packages
        Ign http://ubuntu.univ-nantes.fr karmic/restricted Packages
        Ign http://ubuntu.univ-nantes.fr karmic/multiverse Packages
        Ign http://ubuntu.univ-nantes.fr karmic/restricted Sources
        Ign http://ubuntu.univ-nantes.fr karmic/main Sources
        Ign http://ubuntu.univ-nantes.fr karmic/universe Sources
        Ign http://ubuntu.univ-nantes.fr karmic/universe Packages
        Ign http://ubuntu.univ-nantes.fr karmic-updates/main Packages
        Ign http://ubuntu.univ-nantes.fr karmic-updates/restricted Packages
        Ign http://ubuntu.univ-nantes.fr karmic-updates/multiverse Packages
        Ign http://ubuntu.univ-nantes.fr karmic-updates/restricted Sources
        Ign http://ubuntu.univ-nantes.fr karmic-updates/main Sources
        Ign http://ubuntu.univ-nantes.fr karmic-updates/universe Sources
        Ign http://ubuntu.univ-nantes.fr karmic-updates/universe Packages
        Ign http://ubuntu.univ-nantes.fr karmic-security/main Packages
        Ign http://ubuntu.univ-nantes.fr karmic-security/restricted Packages
        Ign http://ubuntu.univ-nantes.fr karmic-security/multiverse Packages
        Ign http://ubuntu.univ-nantes.fr karmic-security/restricted Sources
        Ign http://ubuntu.univ-nantes.fr karmic-security/main Sources
        Ign http://ubuntu.univ-nantes.fr karmic-security/universe Sources
        Ign http://ubuntu.univ-nantes.fr karmic-security/universe Packages
        Err http://ubuntu.univ-nantes.fr karmic/main Packages  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic/restricted Packages  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic/multiverse Packages  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic/restricted Sources  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic/main Sources  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic/universe Sources  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic/universe Packages  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-updates/main Packages  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-updates/restricted Packages  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-updates/multiverse Packages  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-updates/restricted Sources  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-updates/main Sources  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-updates/universe Sources  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-updates/universe Packages  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-security/main Packages  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-security/restricted Packages  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-security/multiverse Packages  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-security/restricted Sources  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-security/main Sources  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-security/universe Sources  404 Not Found
        Err http://ubuntu.univ-nantes.fr karmic-security/universe Packages  404 Not Found
        W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/karmic/main/source/Sources.gz  404 Not Found
        W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/karmic/restricted/source/Sources.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/main/binary-i386/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/restricted/binary-i386/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/multiverse/binary-i386/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/restricted/source/Sources.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/main/source/Sources.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/universe/source/Sources.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/universe/binary-i386/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/main/binary-i386/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/restricted/binary-i386/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/multiverse/binary-i386/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/restricted/source/Sources.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/main/source/Sources.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/universe/source/Sources.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-updates/universe/binary-i386/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/main/binary-i386/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/restricted/binary-i386/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/multiverse/binary-i386/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/restricted/source/Sources.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/main/source/Sources.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/universe/source/Sources.gz  404 Not Found
        W: Failed to fetch http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic-security/universe/binary-i386/Packages.gz  404 Not Found

    However, I can 'see' those files with Firefox. apt.conf:

        more /etc/apt/apt.conf
        Acquire::http::proxy "http://www.myproxyname.fr:3128";

    I also tried with port '80', or with a blank /etc/apt/apt.conf. sources.list:

        grep -v "#" /etc/apt/sources.list
        deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic main restricted multiverse
        deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic-updates main restricted multiverse
        deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic universe
        deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic-updates universe
        deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic-security main restricted multiverse
        deb http://ubuntu.univ-nantes.fr/ubuntu/ karmic-security universe

    Does anyone know how to fix this? Thanks, Pierre
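    A hedged first step (not from the question): fetch one of the failing index URLs through the same proxy apt is using, which separates a broken mirror path from a proxy problem. This is standard wget usage; the URL is one of the failing ones above:

        http_proxy=http://www.myproxyname.fr:3128 wget -S \
          http://ubuntu.univ-nantes.fr/ubuntu/dists/karmic/main/binary-i386/Packages.gz

    If that 404s the same way, the sources.list entries point at paths the mirror no longer carries; if it succeeds, the problem is in how apt talks to the proxy.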

    Read the article

  • Accessing Squid Proxy over internet

    - by prateekdayal
    Hi, I recently finished installing Squid on a VPS I have in the US and it's working fine locally (I verified by setting the http_proxy variable and using lynx). I want to access this proxy over the internet (as an anonymizer) so that I can see how some ads show up for US traffic on my website. I have set up authentication, so abuse is not a problem. However, I am not able to access the proxy over the internet. I have set the following rule in squid.conf: http_access allow all. Is it not possible to do what I want, or am I missing something? Port 3128 is open in the firewall, so that is not an issue. Squid is listening on 0.0.0.0. Thanks Prateek
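    For what it's worth, a minimal hedged squid.conf fragment for an authenticated remote-access setup (directive names are real Squid; the helper path varies by distro and is an assumption). One common gotcha: Squid uses the first http_access line that matches, so an earlier distro-default "http_access deny all" will shadow a later allow:

        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
        acl authed proxy_auth REQUIRED
        http_access allow authed
        http_access deny all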

    Read the article

  • Safari can’t establish a secure connection to the server

    - by Haris
    I am using Mac OS X 10.5.8 behind a company firewall, with proxy settings and a username/password through which I can connect to the internet. The internet is working, as I am posting this question through it, but if I try to open Facebook or Gmail the following message appears: Safari can’t open the page “https://www.google.com/accounts/ServiceLogin?[..]” because Safari can’t establish a secure connection to the server “www.google.com”. What could be wrong?

    Read the article

  • Nginx: Can I cache a URL matching a pattern at a different URL?

    - by Josh French
    I have a site with some URLs that look like this: /prefix/ID, where /prefix is static and ID is unique. Using Nginx as a reverse proxy, I'd like to cache these pages by the /ID portion only, omitting the prefix. Can I configure Nginx so that a request for the original URL is cached at the shortened URL? I tried this (I'm omitting some irrelevant parts) but obviously it's not the correct solution:

        http {
            map $request_uri $page_id {
                default $request_uri;
                ~^/prefix/(?<id>.+)$ $id;
            }
            location / {
                proxy_cache_key $page_id;
            }
        }

    Read the article

  • Difference ProxyPass and RewriteRule

    - by Wesho
    I just came across a case where ProxyPass (ProxyPassMatch to be exact) is being used in an Apache configuration file. This mod_proxy rule is being used to proxy from a whole cluster to one specific server, when a certain file is requested which only resides on that server. Now I'm a bit confused since I can't grasp why something like this cannot be achieved using a RewriteRule. So in essence I want to ask: What is the difference between ProxyPassMatch and a RewriteRule in this case?
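    For the comparison itself: mod_rewrite can do the same proxying via the [P] flag, which hands the rewritten URL to mod_proxy internally, so the practical differences are mostly configuration style (per-directory availability, regex flexibility) and overhead rather than capability. The directives below are real Apache; the path and backend host are made up for illustration:

        # mod_proxy form
        ProxyPassMatch ^/reports/(.*\.pdf)$ http://reports.internal/files/$1

        # mod_rewrite form; [P] delegates the request to mod_proxy
        RewriteEngine On
        RewriteRule ^/reports/(.*\.pdf)$ http://reports.internal/files/$1 [P]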

    Read the article

  • How do I get rid of HTTP_CACHE_CONTROL header in squid 3?

    - by Arsen Zahray
    I'm trying to configure an anonymous proxy using Squid. I've set forwarded_for delete, but Squid 3 still adds another header to the web requests that go through it: HTTP_CACHE_CONTROL = max-age=259200. I've tried cache_control delete, but that doesn't work. How do I get rid of Squid's Cache-Control header? I don't want it to interfere with requests that already contain a Cache-Control header; I just don't want Squid to attach its own.
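    A hedged guess at the knob (the Squid 3 directive below is real; whether it removes the header in your build is worth testing, since header mangling requires Squid to be compiled with --enable-http-violations):

        # strip Cache-Control from requests Squid forwards upstream
        request_header_access Cache-Control deny all

    In Squid 2 the equivalent was the single header_access directive.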

    Read the article

  • AJP proxy that maps internal servlet name to a different external name

    - by sakra
    Using apache2 I want to set up an AJP proxy for a Tomcat server that maps an internal servlet URL to a completely different URL externally. Currently I am using the following configurations. Apache2 configuration:

        <IfModule mod_proxy.c>
            ProxyPreserveHost on
            ProxyPass /external_name ajp://192.168.1.30:8009/servlet_name
            ProxyPassReverse /external_name ajp://192.168.1.30:8009/servlet_name
        </IfModule>

    Note that external_name and servlet_name are different. Tomcat 6 configuration:

        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

    This however does not work. Apache seems to forward HTTP requests to Tomcat, but the URLs and redirects returned by Tomcat still use the original servlet_name, and Apache does not map them to external_name. Is this possible at all with AJP? If not, can it be done using a plain HTTP proxy instead?
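    One hedged observation: ProxyPassReverse only rewrites response headers (Location and friends), never links inside HTML bodies, for HTTP and AJP alike. If the stray servlet_name URLs are in page content, mod_proxy_html is the usual add-on; the module and directives are real, but treat the exact mapping as illustrative:

        ProxyHTMLEnable On
        ProxyHTMLURLMap /servlet_name /external_name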

    Read the article

  • How to configure nginx so it works with Express?

    - by Michal Stefanow
    I'm trying to configure nginx so it proxy_passes requests to my node apps. A question on StackOverflow got many upvotes: http://stackoverflow.com/questions/5009324/node-js-nginx-and-now and I'm using the config from there (but since this question is about server configuration, it belongs on Server Fault). Here is the nginx configuration:

        server {
            listen 80;
            listen [::]:80;
            root /var/www/services.stefanow.net/public_html;
            index index.html index.htm;
            server_name services.stefanow.net;
            location / {
                try_files $uri $uri/ =404;
            }
            location /test-express {
                proxy_pass http://127.0.0.1:3002;
            }
            location /test-http {
                proxy_pass http://127.0.0.1:3003;
            }
        }

    Using plain node:

        var http = require('http');
        http.createServer(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/plain'});
            res.end('Hello World\n');
        }).listen(3003, '127.0.0.1');
        console.log('Server running at http://127.0.0.1:3003/');

    It works! Check: http://services.stefanow.net/test-http

    Using express:

        var express = require('express');
        var app = express();
        // app.get('/', function(req, res) { res.redirect('/index.html'); });
        app.get('/index.html', function(req, res) {
            res.send("blah blah index.html");
        });
        app.listen(3002, "127.0.0.1");
        console.log('Server running at http://127.0.0.1:3002/');

    It doesn't work :( See: http://services.stefanow.net/test-express

    I know that something is going on: a) test-express is NOT running, or b) test-express IS running (and I can confirm it is running via the command line while SSHed on the server):

        root@stefanow:~# service nginx restart
         * Restarting nginx nginx          [ OK ]
        root@stefanow:~# curl localhost:3002
        Moved Temporarily. Redirecting to /index.html
        root@stefanow:~# curl localhost:3002/index.html
        blah blah index.html

    I tried setting headers as described here: http://www.nginxtips.com/how-to-setup-nginx-as-proxy-for-nodejs/ (still doesn't work):

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

    I also tried replacing '127.0.0.1' with 'localhost' and vice versa. Please advise. I'm pretty sure I'm missing some obvious detail and I would like to learn more. Thank you.
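    A hedged reading of the symptom: nginx forwards the path as-is, so Express receives GET /test-express (or /test-express/index.html), and the app only defines a route for /index.html - hence the failure through the proxy while curl localhost:3002 works. With proxy_pass, giving both the location and the target a trailing slash makes nginx strip the prefix before proxying:

        location /test-express/ {
            # /test-express/index.html arrives upstream as /index.html
            proxy_pass http://127.0.0.1:3002/;
        }

    (Note that app-issued redirects such as "Redirecting to /index.html" would still escape the /test-express/ prefix; that is a separate issue.)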

    Read the article

  • Boost::Thread or fork() : Multithreaded HTTP Proxy

    - by osmano807
    I'm testing boost::thread on a system. It turns out that I need fork()-like behavior, because one thread modifies the other's variables, even member variables of a class. Should I do the project using fork(), or is there some alternative still using boost::thread? Basically I run this program on Linux and maybe FreeBSD. It is an HTTP proxy: accept() runs in the main thread, and a function that takes a class (which holds the socket file descriptor) runs in a secondary thread to perform the service. Is there a better way to implement a proxy?

    Read the article

  • Using OpenVPN, yet netflix.com blocks access

    - by user837848
    I have set up an OpenVPN server on a VPS in the USA and configured it to route all client traffic through it. Everything seems to work fine regarding the VPN connection in general. All IP lookup sites show me the US server's IP address, and even hulu.com works (it won't work if you are not in the USA). But for some reason netflix.com says "Sorry, Netflix is not available in your country yet.". So I thought that Netflix probably uses some more sophisticated ways to determine your location beyond just your IP address. But I could not find a way to get it to work until I dropped the idea of using a VPN and instead connected to the server via a simple SOCKS tunnel with ssh by running:

        ssh -D 9999 user@serverip

    All I had to do was change the key network.proxy.socks_remote_dns in Firefox from false to true to prevent DNS leaks, and set up the SOCKS proxy. Then I could finally watch netflix.com. As a result I concluded that there is nothing in the browser (or something like the system timezone) that tells Netflix the location, so it has to have something to do with the OpenVPN config. After that I used tcpdump to log all the traffic on the server's network interface venet0 (OpenVZ VPS), visited netflix.com on the client while first connected to the VPN and then connected via the SOCKS tunnel, and afterwards compared both outputs. The only thing that caught my eye was that while using the SOCKS tunnel the server mainly used IPv6 to connect to Netflix, whereas it only used IPv4 when the client was connected to the OpenVPN server. But I don't get how that could make such a difference. So what am I missing? Is there a way to configure OpenVPN to also use IPv6 to connect to a website although there is only an IPv4 connection between the VPS and the client? Here is the server.conf of the OpenVPN server (OpenVZ VPS):

        local serverip
        port 443
        proto tcp
        dev tun
        ca ./easy-rsa2/keys/ca.crt
        cert ./easy-rsa2/keys/vps1.crt
        key ./easy-rsa2/keys/vps1.key  # This file should be kept secret
        dh ./easy-rsa2/keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        client-to-client
        keepalive 10 120
        tls-auth ta.key 0  # This file is secret
        cipher AES-256-CBC
        comp-lzo
        max-clients 4
        user nobody
        group nogroup
        persist-key
        persist-tun
        status openvpn-status.log
        log-append openvpn.log
        verb 3

    iptables forwarding (with IPv4 forwarding enabled):

        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j SNAT --to-source serverip

    I have tried everything, always on a Win7 and a Debian client with only IPv4 connections, and always made sure that they use the correct DNS server (tested with ipleak.net and tcpdump / wireshark). client.conf:

        client
        dev tun
        proto tcp
        remote serverip 443
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        tls-auth ta.key 1
        cipher AES-256-CBC
        comp-lzo
        verb 3

    Read the article

  • Make my IP address appear to be from another country

    - by Brian
    How do I make it appear that my IP address is coming from one country while I'm located in another? I live in Germany and some websites (like Hulu or YouTube) don't work because my IP isn't in the US. How do I get around this? Do I have to use a proxy or something?

    Moderator note: Super User does not endorse nor defend any activity which may be used to circumvent local/state/national laws.

    Read the article

  • Forefront TMG 2010: Can you monitor realtime TCP connections and bandwidth on a per-user basis?

    - by user65235
    I'm just starting a trial of Forefront TMG to use as a proxy server. I know I can get a real-time activity monitor and filter on a per-user basis, but I would like a real-time activity monitor of all users that I can then sort by bandwidth consumed (enabling me to see who the bandwidth hogs are). Does anyone know if this is possible in Forefront TMG, or if a third-party product is required? Thanks. JR

    Read the article

  • Windows HTTP tunnel through 2 Linux hosts?

    - by Darkmage
    The localhost only has a connection to host1; host1 has connections to host2 and localhost. How can I set this up to use host2 as a proxy for web traffic from localhost? I have seen similar topics but can't get it to work. How do I set it up on the XP client?
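    A sketch of one standard way to chain the hops (OpenSSH syntax; host names match the question, the port numbers are arbitrary). On Windows XP the same two steps can be built with PuTTY's tunnel settings instead of the ssh command:

        # hop 1: forward a local port through host1 to host2's sshd
        ssh -L 2222:host2:22 user@host1
        # hop 2 (second window): open a SOCKS proxy whose exit point is host2
        # (log in with your host2 credentials)
        ssh -p 2222 -D 9999 user@localhost
        # then point the browser at SOCKS proxy localhost:9999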

    Read the article

  • Allow run-time configuration of web service url using ATL soap and sproxy-generated proxy class

    - by Odrade
    I have a Visual C++ application that communicates with an ASP.NET web service via ATL Soap. The client application uses an sproxy-generated proxy class for the communication. Looking at the generated proxy class, I noticed that the url for the web service is hard-coded in numerous places. It would be preferable for the url to be configurable at run-time (e.g. stored in a config file). Could anyone recommend a method for doing this? It doesn't look like the class generated by sproxy is amenable to hand-editing.

    Read the article
