Search Results

Search found 8288 results on 332 pages for 'proxy models'.

  • OS X Apache giving 503 error for anything in /api directory

    - by WilliamMayor
    I have a locally hosted website that uses Smarty templates, and I'm trying to get started on building an API for the site. I've used virtualhost.sh to create a local virtual host for this and other sites. I've discovered that if I put a directory called api at the root of any of these virtual hosts, I get a 503 error when I try to access anything inside it. I am using mod_rewrite, but so far only to append a .php extension when needed. Here are the error logs for a request:

        [Thu Feb 09 13:42:37 2012] [error] proxy: HTTP: disabled connection for (localhost)
        [Thu Feb 09 13:49:06 2012] [error] (61)Connection refused: proxy: HTTP: attempt to connect to [fe80::1]:8080 (localhost) failed
        [Thu Feb 09 13:49:06 2012] [error] ap_proxy_connect_backend disabling worker for (localhost)

    The middle line gave me a clue to look in my hosts file, because why would a request go to [fe80::1]:8080? I commented out that line and tried again; this time the error was in connecting to the standard 127.0.0.1 localhost. I have concluded that perhaps there is some config file somewhere picking up the underlying request to localhost/api and pointing it somewhere other than my virtual host. At this point my ability to fix the problem fails me. Can anyone help?
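
    The log shows mod_proxy in play even though this virtual host defines no proxy rules, so some other loaded config fragment is probably proxying /api to port 8080. A diagnostic sketch, assuming a stock OS X Apache layout under /etc/apache2:

        # search every loaded Apache config for proxy rules mentioning /api or port 8080
        grep -rniE "proxypass|rewriterule.*\[p\]|8080" /etc/apache2/

        # dump the virtual host layout Apache actually resolved
        apachectl -S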

  • Single application through OpenVPN tunnel (Debian Lenny)

    - by mikael
    I'm using Debian Lenny and I want to tunnel rtorrent, and only rtorrent, through an OpenVPN tunnel. I have a tunnel running; the config file looks like this:

        client
        dev tun
        proto udp
        remote openvpn.xxx.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca /etc/openvpn/xxx/keys/ca.crt
        cert /etc/openvpn/xxx/keys/client.crt
        key /etc/openvpn/xxx/keys/client.key
        tls-auth /etc/openvpn/xxx/keys/tls.key 1
        ns-cert-type server
        comp-lzo
        verb 3
        auth-user-pass
        script-security 3
        reneg-sec 0

    My idea is that I could run a sockd proxy internally that redirects traffic to the OpenVPN tunnel. I could then use the *nix "proxifier" application tsocks to let rtorrent connect through that proxy (as rtorrent doesn't support proxies). I'm having trouble configuring sockd, as my IP inside the VPN changes every time I connect. This is a config file someone said would help: http://ircpimps.org/sockd.conf. As my IP changes on each connect, I don't know what to put in that config file. I have no control over the host-side config file. Any help wanted; any other method is very welcome.
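
    Two client-side fragments that may help, both sketches: Dante's sockd can bind its external side to an interface *name* rather than a literal IP, which sidesteps the changing-address problem (tun0 and the port below are assumptions), and tsocks only needs to know where sockd listens:

        # /etc/danted.conf fragment: bind the external side to the tun device by name,
        # so the VPN address changing on reconnect no longer matters
        internal: 127.0.0.1 port = 1080
        external: tun0

        # /etc/tsocks.conf fragment: LAN goes direct, everything else via the local sockd
        local = 192.168.0.0/255.255.255.0
        server = 127.0.0.1
        server_port = 1080
        server_type = 5

    rtorrent would then be started as: tsocks rtorrent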

  • Forward Apache to Django dev server

    - by Alex Jillard
    I'm trying to get Apache to forward all requests on port 80 to 127.0.0.1:8000, which is where the Django dev server runs. I think I have it forwarding properly, but there must be an issue with 127.0.0.1:8000 not being run by Apache? I'm running the Django dev server in an Ubuntu VMware instance, and I'd like other people in the office to see the apps in development without having to promote anything to our actual dev/staging servers. Right now the virtual machine picks up an IP for itself, and when I point a browser at that URL with the default Apache config, I get the default Apache page. I've since changed the httpd.conf file to the following to try to get it to forward the requests to the Django dev server:

        ServerName localhost
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        <VirtualHost *>
            ServerName localhost
            ServerAdmin [email protected]
            ProxyRequests off
            ProxyPass * http://127.0.0.1:8000
        </VirtualHost>

    All I get are 404s with this, and in error.log I get the following (192.168.1.101 is the IP of my computer, 192.168.1.142 is the IP of the virtual machine):

        [Mon Mar 08 08:42:30 2010] [error] [client 192.168.1.101] File does not exist: /htdocs
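
    One likely culprit: ProxyPass takes a URL path prefix, not a wildcard, so ProxyPass * never matches and requests fall through to the (empty) document root, which matches the "File does not exist: /htdocs" error. A hedged correction of the virtual host, keeping everything else as posted:

        <VirtualHost *:80>
            ServerName localhost
            ProxyRequests Off
            ProxyPreserveHost On
            # map the whole site onto the dev server; trailing slashes matter here
            ProxyPass / http://127.0.0.1:8000/
            ProxyPassReverse / http://127.0.0.1:8000/
        </VirtualHost>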

  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea whether anything like the following scenario is possible.

    Nginx handles a request and routes it to some kind of authentication application where cookies and/or other kinds of security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401.

    Nginx then takes the request and routes it through an authorization application which determines, based upon identity, the HTTP verb (PUT, DELETE, GET, etc.), and the URL in question, whether the actor/agent/user has permission to perform the intended action. Perhaps the authorization application modifies the request somewhat by appending another header, for example. Failing authorization returns 403. (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.)

    Finally, Nginx routes the request into the actual application code, where the request is inspected and the requested operations are executed according to the URL in question, and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request.

    Ideally, Nginx could do this natively or with a plugin. Any ideas? The alternative I've considered is having Nginx hand off the initial request to the authentication application and then have this application proxy the request back through to Nginx (whether on the same box or another box). I know there are a number of application frameworks (Django, RoR, etc.) that can do a lot of this stuff "in process", but I was trying to make things a little more generic and self-contained, where different applications could "hook" the HTTP pipeline of Nginx and then participate in, short-circuit, and even modify the request accordingly. If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?
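
    Newer nginx builds ship ngx_http_auth_request_module, which covers the authentication leg of this: nginx issues an internal subrequest and only proxies onward if it returns 2xx. A minimal sketch; the upstream names and the /_auth path are invented for illustration:

        upstream auth_backend { server 127.0.0.1:9000; }
        upstream app_backend  { server 127.0.0.1:9001; }

        server {
            listen 80;

            location / {
                # subrequest to the auth service; its 401/403 are returned as-is
                auth_request /_auth;
                # copy a header set by the auth service onto the upstream request
                auth_request_set $auth_user $upstream_http_x_auth_user;
                proxy_set_header X-Auth-User $auth_user;
                proxy_pass http://app_backend;
            }

            location = /_auth {
                internal;
                proxy_pass http://auth_backend;
                # the auth subrequest should not receive the request body
                proxy_pass_request_body off;
                proxy_set_header Content-Length "";
            }
        }

    Chaining several request-modifying hops, as described above, still needs the proxy-back-through approach or a scriptable layer such as OpenResty, since auth_request covers a single yes/no subrequest.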

  • another "SSH connect to host github.com port 22: Bad file number"

    - by Mariusz
    Hello. I have a problem with my first-time SSH connection. Yes, I've already been through your guides and tried your "Dealing with firewalls and proxies" article, and the problem still occurs. I am using Win7 32-bit, Windows Firewall is disabled, I don't have any third-party firewalls, ESET NOD32 Antivirus is not blocking any ports, and I am not using any proxy (not even a local one). Here are the logs.

    An ordinary SSH connection attempt:

        C:\Users\Mariusz>ssh -vvv [email protected]
        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to github.com [207.97.227.239] port 22.
        debug1: connect to address 207.97.227.239 port 22: Not owner
        ssh: connect to host github.com port 22: Bad file number

    An ncat connection attempt:

        C:\Users\Mariusz>ncat github.com 22
        Strange connect error from 207.97.227.239 (10013): No error

    (10013 = WSAEACCES.) I think the method called "smart-http-support" won't be usable for me because I haven't created the repo yet. I have just run git init locally and got as far as git push, which returns the same:

        ssh: connect to host github.com port 22: Bad file number
        fatal: The remote end hung up unexpectedly

    As for the corkscrew method (the first article from your guide): while PuTTYing (with pageant in the background), after entering my login an error appears in a message box: "Disconnected: No supported authentication methods available", and the terminal prints: "Server refused our key". The key I generated correctly, using ssh-keygen. I haven't tried the method of editing ~/.ssh/config yet, because I figured that since I haven't pushed anything to my remote repo, I won't be able to clone anything. The ssh-forwarding method is not for me, since it "requires access to an external ssh server" and I don't have one at this time. What else could I do? Thanks in advance for any help. Mariusz.
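
    WSAEACCES is an access-denied at the socket level, which usually means something on the path is blocking port 22 outright. For that case GitHub also documents an SSH endpoint on port 443 via the ssh.github.com host; a sketch for ~/.ssh/config:

        Host github.com
            Hostname ssh.github.com
            Port 443
            User git

    Afterwards, ssh -T [email protected] should confirm authentication without touching port 22 at all.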

  • How to remap IPs visible from local machine to IPs visible from a machine I have SSH access to?

    - by gooli
    I'm so far out of my depth I don't even know what to google for. There's a server I can connect to via SSH. Via that server I can access other servers on its subnet, also via SSH. What I want to do is access the machines that server has access to directly. Say the server's IP is 192.168.7.7 and it is the only one in the 192.168.x.x range I have access to. I'd like to configure things in such a way that when I access, say, 192.168.7.100 on my machine, the connection goes through an SSH tunnel I open to 192.168.7.7 and out to 192.168.7.100. I would like this to work for any port if at all possible. I know I can set up an HTTP proxy and even a SOCKS proxy, but I'm wondering whether there is a way to actually remap some of the IPs my machine sees to IPs only visible from the remote machine. What would this configuration be called? Is this NAT, VPN, IP2IP or something else? How can I set this up on a Windows client box that connects via SSH to a Linux box? Sounds to me like I need to set up some kind of filtering on the network driver or possibly a virtual NIC, but I'm not sure where to go next.
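
    For a single host and port, plain SSH local forwarding already does this; for transparently routing a whole subnet over the SSH session, sshuttle (a "poor man's VPN" that only needs SSH access to the gateway) is the usual tool. Two hedged sketches:

        # forward local port 2200 to 192.168.7.100:22 through the gateway
        ssh -L 2200:192.168.7.100:22 user@192.168.7.7

        # route the whole 192.168.7.0/24 subnet over the SSH session
        sshuttle -r user@192.168.7.7 192.168.7.0/24

    sshuttle won't run on a Windows client, but it shows the shape of the answer; on Windows the nearest equivalents are a real VPN or per-port PuTTY tunnels.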

  • Setting up Apache with multiple virtual host when using Plone 4.1

    - by Shaun Owens
    I have a Plone server running on CentOS with multiple instances of Plone (4.0 and 4.1) and multiple sites. I am new to Linux and am having problems getting Apache to work with multiple virtual hosts. The first host listed works just fine, but the second host does not. I get the following error message when I start httpd:

        Starting httpd: [Mon Nov 07 14:38:31 2011] [warn] VirtualHost ordevel3.ucdavis.edu:80 overlaps with VirtualHost ordevel4.ucdavis.edu:80, the first has precedence, perhaps you need a NameVirtualHost directive.

    What am I missing to get the virtual hosts to work correctly? Below is my syntax in httpd.conf:

        <VirtualHost ordevel3.abc.edu:80>
            ServerAlias ordevel3.abc.edu
            ServerAdmin [email protected]
            ServerSignature On
            <IfModule mod_rewrite.c>
                RewriteEngine On
                # serving icons from apache 2 server
                RewriteRule ^/icons/ - [L]
                RewriteRule ^/(.*) \
                    http://localhost:8080/VirtualHostBase/http/%{SERVER_NAME}:80/itsdevel3/VirtualHostRoot/$1 [L,P]
            </IfModule>
            <IfModule mod_proxy.c>
                ProxyVia On
                # prevent the webserver from being used as a proxy
                <LocationMatch "^[^/]">
                    Deny from all
                </LocationMatch>
            </IfModule>
        </VirtualHost>

        <VirtualHost ordevel4.abc.edu:80>
            ServerAlias ordevel4.abc.edu
            ServerAdmin [email protected]
            ServerSignature On
            <IfModule mod_rewrite.c>
                RewriteEngine On
                # serving icons from apache 2 server
                RewriteRule ^/icons/ - [L]
                RewriteRule ^/(.*) \
                    http://localhost:8180/VirtualHostBase/http/%{SERVER_NAME}:80/ITS/VirtualHostRoot/$1 [L,P]
            </IfModule>
            <IfModule mod_proxy.c>
                ProxyVia On
                # prevent the webserver from being used as a proxy
                <LocationMatch "^[^/]">
                    Deny from all
                </LocationMatch>
            </IfModule>
        </VirtualHost>
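
    The warning points at the usual fix: name-based virtual hosting on Apache before 2.4 needs a NameVirtualHost directive, with the VirtualHost openings using the same wildcard and an explicit ServerName each. A sketch of the change, keeping the rewrite/proxy stanzas as posted:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName ordevel3.abc.edu
            # ... rewrite and proxy stanzas as above ...
        </VirtualHost>

        <VirtualHost *:80>
            ServerName ordevel4.abc.edu
            # ... rewrite and proxy stanzas as above ...
        </VirtualHost>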

  • performance wise htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance.

        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        ServerSignature Off
        FileETag None
        Header unset ETag
        Options -MultiViews
        #Options All -Indexes

        # Force the latest IE version or ChromeFrame
        <IfModule mod_setenvif.c>
            <IfModule mod_headers.c>
                BrowserMatch MSIE ie
                Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
            </IfModule>
        </IfModule>

        # Proxy X-UA Setup
        <IfModule mod_headers.c>
            Header append Vary User-Agent
        </IfModule>

        # Rewrites
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        # Redirect to non-WWW
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^domain.com
        RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

        # Redirect index to root
        RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

        # Caching
        ExpiresActive On
        ExpiresDefault A0
        Header set Cache-Control "public"

        # 1 Year Long Cache
        <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
            ExpiresDefault A31622400
        </FilesMatch>

        # Proxy Caching
        <FilesMatch "\.(css|js|png)$">
            ExpiresDefault A31622400
            Header set Cache-Control "private"
        </FilesMatch>

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Proper SVG serving
        AddType image/svg+xml svg svgz
        AddEncoding gzip svgz

        # GZip Compression
        <IfModule mod_deflate.c>
            <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
                SetOutputFilter DEFLATE
            </FilesMatch>
        </IfModule>

        # Error page
        ErrorDocument 404 /404.html

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|ini|log|psd)$">
            Order Allow,Deny
            Deny from all
        </FilesMatch>

  • Using Confluence with virtual hosts and mod_proxy

    - by Marcus
    Hi all, on a test server I have installed the latest version of Confluence and configured Apache with AJP in front of it. But I have a problem: when I log in to Confluence, I get the following error message:

        Not Found
        The requested URL / / homepage.action was not found on this server.

    The problem seems to be known; I found the following link: http://confluence.atlassian.com/display/DOC/Using+Apache+with+virtual+hosts+and+mod_proxy. But unfortunately the forwards described there have not helped; I still get the error messages. Does anyone have any idea how I could solve the problem? This is the Apache configuration I have set up:

        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
        LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so

        <IfModule proxy_http_module>
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            <Location />
                Order allow,deny
                Allow from all
            </Location>
            ProxyPass / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </IfModule>
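
    A common cause of that malformed redirect is Tomcat building self-referential URLs without knowing it sits behind a proxy. The Atlassian page addresses this by declaring the public name and port on Tomcat's HTTP connector; a sketch of the relevant server.xml attributes (the hostname is a placeholder):

        <!-- conf/server.xml: tell Tomcat what the outside world sees -->
        <Connector port="8080" protocol="HTTP/1.1"
                   proxyName="wiki.example.com" proxyPort="80" scheme="http" />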

  • Managing multiple Apache proxies simultaneously (mod_proxy_balancer)

    - by Hank
    The frontend of my web application is currently formed by two Apache reverse proxies, using mod_proxy_balancer to distribute traffic over a number of backend application servers. Both frontend reverse proxies, running on separate hosts, are accessible from the internet. DNS round robin distributes traffic over both. In the future, the number of reverse proxies is likely to grow, since the web application is very bandwidth-heavy.

    My question is: how do I keep the state of both reverse balancers/proxies in sync? For example, for maintenance purposes, I might want to reduce the load on one of the backend appservers. Currently I can do that by accessing the balancer-manager web form on each proxy and changing the distribution rules. But I have to do that on each proxy manually and make sure I enter the same stuff. Is it possible to "link" multiple instances of mod_proxy_balancer? Or is there a tool out there that connects to a number of instances and updates all of them with the same information?

    Update: The tool should retrieve the runtime status and make runtime changes, just like the existing balancer-manager, only for a number of proxies, not just one. Modification of configuration files is not what I'm interested in (as there are plenty of tools for that).
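
    Nothing links balancer-manager instances natively, but since the manager is just an HTML form over HTTP, one common workaround is a small script that replays the same change request against every proxy. A rough sketch; the host list is a placeholder, and the form parameters would be captured once from a browser's network tab, since they vary by Apache version:

        #!/bin/sh
        # replay a captured balancer-manager change against every frontend proxy
        PROXIES="proxy1.example.com proxy2.example.com"
        PARAMS="$1"   # e.g. the query string captured from a manual change

        for host in $PROXIES; do
            curl -s "http://$host/balancer-manager?$PARAMS" > /dev/null
        done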

  • Conditionally permitting HTTP-only requests to Tomcat?

    - by Mike
    I have 2 versions of a system:

    1. A Tomcat webserver.
    2. An nginx reverse proxy sitting in front of a Tomcat webserver.

    In version 2, nginx only ever talks to Tomcat over HTTP. A user can configure the system so that only HTTPS requests are allowed. If the user does this in version 1, the XML configuration files for Tomcat take care of it. In version 2, nginx takes care of it. The problem is this: I cannot force a user to update their Tomcat XML config files when they upgrade from version 1 to version 2 (it will be recommended that they do so), because the upgrade is done as part of a larger process. This means that if they upgrade and don't update the Tomcat config, an HTTPS request will arrive at nginx, which will proxy it over HTTP to Tomcat, which will reject the request because it is not HTTPS. So I can't force an update to the Tomcat XML, and I have to use HTTP between nginx and Tomcat. Any ideas? Is there some way I can affect how Tomcat reads its config in version 2 so that it ignores the HTTPS-only section?
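
    One pattern that keeps HTTP between nginx and Tomcat while still satisfying an HTTPS-only policy: have nginx assert the original scheme, and let Tomcat treat such proxied requests as secure. A sketch, assuming a Tomcat recent enough to ship RemoteIpValve (6.0.24 and later):

        # nginx side: pass the scheme the client actually used
        proxy_set_header X-Forwarded-Proto $scheme;

        <!-- Tomcat server.xml: treat requests carrying the header from the proxy as secure -->
        <Valve className="org.apache.catalina.valves.RemoteIpValve"
               internalProxies="127\.0\.0\.1"
               protocolHeader="x-forwarded-proto" />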

  • nginx: SSI working on Apache backend, but not on gunicorn backend

    - by j0nes
    I have nginx in front of an Apache server and a gunicorn server for different parts of my website. I am using the SSI module in nginx to display a snippet on every page; each page pulls it in with an SSI include directive. For static pages served by nginx everything works fine, and the same goes for the Apache-generated pages: the SSI include is evaluated and the snippet is filled in. However, for requests to my gunicorn backend running a Python app in Django, the SSI include does not get evaluated. Here is the relevant part of the nginx config:

        location /cgi-bin/script.pl {
            ssi on;
            proxy_pass http://default_backend/cgi-bin/script.pl;
            include sites-available/aspects/proxy-default.conf;
        }
        location /directory/ {
            ssi on;
            limit_req zone=directory nodelay burst=3;
            proxy_pass http://django_backend/directory/;
            include sites-available/aspects/proxy-default.conf;
        }

    Backends:

        upstream django_backend {
            server dynamic.mydomain.com:8000 max_fails=5 fail_timeout=10s;
        }
        upstream default_backend {
            server dynamic.mydomain.com:80;
            server dynamic2.mydomain.com:80;
        }

    proxy_default.conf:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    What is the cause of this behaviour? How can I get SSI includes working for my pages generated on gunicorn? How can I debug this further?
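
    Two things commonly stop SSI from firing on one backend only: the backend compresses its responses (nginx's SSI filter won't look inside gzip), or it sends a Content-Type that ssi_types doesn't cover (text/html by default). A hedged adjustment for the Django location:

        location /directory/ {
            ssi on;
            # ask the backend for uncompressed output so the SSI filter can parse it
            proxy_set_header Accept-Encoding "";
            # uncomment and widen if Django emits a content type other than text/html
            # ssi_types text/html;
            proxy_pass http://django_backend/directory/;
            include sites-available/aspects/proxy-default.conf;
        }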

  • iptables port forward + nginx redirect problem

    - by easthero
    Here is my network:

        browser -> proxy (iptables port forward) -> nginx server

    Proxy: 192.168.10.204, forwarding 192.168.10.204:22080 to 192.168.10.10:80.
    Nginx server: 192.168.10.10, nginx version 0.7.65, Debian testing.

    In the nginx settings I set:

        server_name _;
        server_name_in_redirect off;

    because my server has no domain yet. Accessing 192.168.10.10/index.html or 192.168.10.10/foobar works fine, and accessing 192.168.10.204:22080/index.html works too. But on accessing 192.168.10.204:22080/foobar, nginx 301-redirects to http://192.168.10.204/foobar, losing the port. How do I fix this? Thanks.

        telnet 192.168.10.204 22080
        Trying 192.168.10.204...
        Connected to 192.168.10.204.
        Escape character is '^]'.
        GET /index.html HTTP/1.1
        Host: 192.168.10.10

        HTTP/1.1 200 OK
        Server: nginx/0.7.65
        Date: Fri, 28 May 2010 10:07:29 GMT
        Content-Type: text/html
        Content-Length: 12
        Last-Modified: Fri, 28 May 2010 07:25:12 GMT
        Connection: keep-alive
        Accept-Ranges: bytes

        hello world

        telnet 192.168.10.204 22080
        Trying 192.168.10.204...
        Connected to 192.168.10.204.
        Escape character is '^]'.
        GET /test2 HTTP/1.1
        Host: 192.168.10.10

        HTTP/1.1 301 Moved Permanently
        Server: nginx/0.7.65
        Date: Fri, 28 May 2010 10:04:20 GMT
        Content-Type: text/html
        Content-Length: 185
        Location: http://192.168.10.10/test2/
        Connection: keep-alive

        <html>
        <head><title>301 Moved Permanently</title></head>
        <body bgcolor="white">
        <center><h1>301 Moved Permanently</h1></center>
        <hr><center>nginx/0.7.65</center>
        </body>
        </html>
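
    The 301 is nginx adding the trailing slash for a directory, and it builds the Location from the request's Host plus its own listening port, so the external port 22080 never makes it in. One robust fix is to make the ports line up end to end, so whatever port nginx knows about is also the externally visible one. A sketch, assuming you can add a listen directive and repoint the DNAT:

        # nginx: also accept traffic on the externally visible port
        server {
            listen 80;
            listen 22080;
            server_name _;
            ...
        }

        # proxy box: forward to nginx's matching port instead of 80
        iptables -t nat -A PREROUTING -p tcp --dport 22080 \
            -j DNAT --to-destination 192.168.10.10:22080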

  • Connecting jconsole using SOCKS to Amazon EC2

    - by freshfunk
    I'm trying to use jconsole to view stats on an EC2 instance through a SOCKS proxy created by SSH. I've tried the various scripts mentioned in the links below, but to no avail:

        http://simplygenius.com/2010/08/jconsole-via-socks-ssh-tunnel.html
        http://gabrielcain.com/blog/2010/11/02/using-ssh-proxying-to-connect-jconsole-to-remote-cassandra-instances/

    I'm running:

        ssh -f -ND 8123 myuser@mymachine

    and have verified that at least Firefox goes through it as a proxy. I then run:

        jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=8123 \
            service:jmx:rmi:///jndi/rmi://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com:8080/jmxrmi

    Running netstat -n on my EC2 instance, I see a connection created by my machine. However, the connection eventually disappears and I get 'channel 2: open failed: connect failed: Operation timed out' from my ssh tunnel. I've opened the JMX port through the security group and checked the port on the EC2 instance to make sure it's open (by telnet-ing to it). I'm not sure where to look next. Are there some properties in sshd_config or ssh_config I need to enable for tunneling? Or anything in Mac OS X? I feel like a serious noob, but sys administration is really not my strong point. I've spent several hours and can't get this to work.
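
    The classic trap here is that JMX uses two connections: the registry port you specify, plus a second, randomly chosen RMI port that the tunnel never sees, which matches the "open failed" on a later channel. Pinning both to the same port on the server side usually fixes the timeout. A sketch of the JVM flags (the fixed rmi.port flag needs a reasonably recent JDK):

        java \
          -Dcom.sun.management.jmxremote \
          -Dcom.sun.management.jmxremote.port=8080 \
          -Dcom.sun.management.jmxremote.rmi.port=8080 \
          -Dcom.sun.management.jmxremote.authenticate=false \
          -Dcom.sun.management.jmxremote.ssl=false \
          -jar myapp.jar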

  • mod_rpaf with apache error_log

    - by Camden S.
    I'm using mod-rpaf with Apache 2.4 and it's working properly (showing the real client IPs) in my Apache access_log, but not in my error_log. My error_log just shows the client IP address of the proxy server (my load balancer in this case). Here's an example of what I see in my error_log, where 123.123.123.123 is the IP of my load balancer/proxy:

        == /usr/local/apache2/logs/error_log <==
        [Tue Jun 05 20:24:31.027525 2012] [access_compat:error] [pid 9145:tid 140485731845888] [client 123.123.123.123:20396] AH01797: client denied by server configuration: /wwwroot/private/secret.pdf

    The exact same request produces the following in my access_log, where 456.456.456.456 is a real client IP (not the IP of the load balancer):

        456.456.456.456 - - [05/Jun/2012:20:24:31 +0000] "GET /wwwroot/private/secret.pdf HTTP/1.1" 403 228 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:12.0) Gecko/20100101 Firefox/12.0"

    Here's my httpd.conf entry:

        # RPAF
        LoadModule rpaf_module modules/mod_rpaf-2.0.so
        RPAFenable On
        RPAFproxy_ips 127.0.0.1 123.123.123.123
        RPAFsethostname On
        RPAFheader X-Forwarded-For

    What do I need to do to get the real IP addresses showing in my Apache error_log?
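
    On Apache 2.4 the bundled mod_remoteip is the usual replacement for mod_rpaf; it rewrites the connection's client IP at a lower level, which the error log then picks up as well. A hedged sketch of the equivalent configuration:

        # Apache 2.4: built-in substitute for mod_rpaf
        LoadModule remoteip_module modules/mod_remoteip.so
        RemoteIPHeader X-Forwarded-For
        # trust only the load balancer (and localhost) to set that header
        RemoteIPInternalProxy 127.0.0.1 123.123.123.123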

  • picking a Linux-compatible motherboard

    - by Chris
    Last time I bought a new computer (I build them myself) I got a motherboard that had really poor Linux support for a long time. Specifically the audio: I had to wait months before the kernel supported the onboard audio chipset. That is exactly the situation I'm trying to avoid this time around.

    I have some specific questions about "server motherboards", actually. I looked at a few models of server motherboards by Intel, and some random models on Newegg. I wasn't able to see much of a difference from regular desktop motherboards, other than that most had two sockets and support for much more RAM. These boards seem more popular with Linux users. Why? AMD and Intel both have server CPUs as well. Same question: what's the difference?

    To make this question more concrete, I was looking at this motherboard. The main questions about it that I can't answer are: Can I get a motherboard without onboard RAID and audio? I wanted to get a hardware RAID controller and a PCI audio card. I thought a server motherboard would be cheaper and not have these "extras", since who wants an audio card on a server? And where can I find out about Linux support for the components on this board: "Intel ICH10R", "Realtek ALC889", "Marvell 88E8056"?

    I'm buying this computer to work as a Linux desktop for a lot of compiling, coding and audio/video work, but I don't want to rule out the possibility of installing Windows and playing some games at one point (even if the last game I got has been sitting in its box unopened for almost a year). Is it a good idea to buy a "server motherboard" and play games on it, or are desktop boards better value for this? The ultimate solution for me would be a motherboard that had GPL drivers for onboard LAN, a single CPU socket, lots of PCI Express and PCI, USB 3.0, and no fancy hard disk controllers, since I'll be getting a separate one.
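
    On the Linux-support question: a quick check on any running Linux box is whether the current kernel already ships a driver claiming those parts. Assuming the usual driver mapping (snd-hda-intel for the ALC889 audio codec, sky2 for the Marvell NIC, ahci for ICH10R SATA in AHCI mode - worth double-checking against current kernel docs), a sketch:

        # print the short description of each candidate driver, if present
        modinfo -d snd_hda_intel sky2 ahci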

  • Bypassing SQUID on freebsd with PF

    - by epema
    I have PF + squid31 on FreeBSD 9.0, and I want some hosts (aka the goodguys) to bypass the proxy, so that torrents are not logged. Also, I am not sure about "transparent": it means that I don't have to configure proxy settings on the client side, right? I have tried adding a redirect exception:

        no rdr on $int_if inet proto {tcp,udp} from 192.168.1.233/32 to any

    However, no luck. :( Here is a quick look at my conf files.

    /usr/local/etc/squid/squid.conf:

        http_port 192.168.1.1:8080 transparent

    /etc/rc.conf:

        gateway_enable="YES"
        pf_enable="YES"
        pf_rules="/usr/local/etc/pf.conf"
        pflog_enable="YES"
        squid_enable="YES"

    I have squid31 installed from ports with SQUID_PF ("Enable transparent proxying with PF") on.

    /usr/local/etc/pf.conf:

        int_if="re0"
        ext_if="bge0"
        localnet="{ 192.168.1.0/24 }"
        table <goodguys> const { "192.168.1.219", "192.168.1.233" }

        set block-policy drop
        set skip on lo0

        scrub in all fragment reassemble
        scrub out all random-id max-mss 1440

        block in on $ext_if
        pass out on $ext_if keep state

        block in on $int_if
        pass in on $int_if inet proto tcp from $int_if:network to $int_if port 8080 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 21 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 22 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 53 keep state
        pass in on $int_if inet proto tcp from $int_if:network to any port { smtp, pop3 } keep state
        pass in on $int_if inet proto icmp from $int_if:network to $int_if keep state
        pass out on $int_if keep state

    What lines should I add to the conf files? I am assuming that the problem is in the firewall (PF).
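
    As posted, pf.conf contains no rdr rule at all, so nothing is being redirected into squid in the first place; a no rdr exception only makes sense alongside a matching rdr, which it must precede. A hedged sketch of the translation block, reusing the <goodguys> table already defined (translation rules go before the filter rules in pre-4.7-style pf such as FreeBSD 9's):

        # goodguys go straight out; everyone else's HTTP is redirected into squid
        no rdr on $int_if proto tcp from <goodguys> to any port 80
        rdr on $int_if proto tcp from $localnet to any port 80 -> 192.168.1.1 port 8080

    And yes: with a "transparent" http_port, redirected clients need no browser proxy settings.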

  • Vyatta internet connection + hosted site on same IP

    - by boburob
    Having a small issue setting up a Vyatta. The company internet connection and two different websites are all on the same IP.

    Server 1 has websites hosted on ports 1000 and 3000, and also has a proxy server installed to provide an internet connection to the domain. Server 2 has a website hosted on ports 80 and 443.

    The Vyatta is correctly NATing the appropriate traffic to each server and allowing the proxy to get internet traffic. However, I have a problem getting to the websites hosted on these two servers from inside the domain. I believe the problem is that the HTTP request is sent to the public IP, e.g. 12.34.56.78. The request reaches the website, and the server attempts to send the response straight back to the requesting IP; but that is inside the Vyatta, so the reply bypasses it and goes nowhere. I thought the solution would be something like this:

        rule 50 {
            destination {
                address 12.34.56.78
                port 1000
            }
            inbound-interface eth1
            inside-address {
                address 10.19.2.3
            }
            protocol tcp
            type destination
        }

    But this doesn't seem to do it!

    Update: I changed the rules to the following:

        rule 50 {
            destination {
                address 12.34.56.78
                port 443
            }
            outbound-interface eth1
            protocol tcp
            source {
                address 10.19.2.3
            }
            type masquerade
        }
        rule 51 {
            destination {
                address 12.34.56.78
                port 443
            }
            inbound-interface eth1
            inside-address {
                address 10.19.2.2
            }
            protocol tcp
            type destination
        }

    I am now seeing traffic going between the two with Wireshark, but the website still fails to load.
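
    For reference, hairpin NAT usually takes two rules per service: the destination rule already used for outside clients, plus a source/masquerade rule matching LAN clients whose traffic is headed for the *inside* server address, so the server's replies return via the Vyatta. A rough sketch only - the LAN subnet 10.19.2.0/24 and the interface are assumptions to adapt:

        rule 60 {
            type masquerade
            protocol tcp
            outbound-interface eth1
            # match LAN clients reaching the inside address of the web server
            source { address 10.19.2.0/24 }
            destination { address 10.19.2.2 port 443 }
        }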

  • Blocking HTTPS and P2P Traffic

    - by Genboy
    I have a Debian server running at the gateway level on a LAN. It runs squid for creating block lists of websites - e.g. blocking social networking on the LAN - and also uses iptables. I am able to do a lot of things with squid and iptables, but a few things seem difficult to achieve.

    1) If I block facebook through its http URL, people can still access https://www.facebook.com, because squid doesn't handle https traffic by default. However, if the users set the gateway IP address as the proxy in their web browser, then https is also blocked. So I can do one thing: using iptables, drop all outgoing 443 traffic, so that people are forced to set the proxy in their browser in order to browse any HTTPS site. However, is there a better solution for this? (See the rule sketch after this list.)

    2) As the number of blocked URLs grows in squid, I am planning to integrate squidGuard. However, the good squidGuard lists are not free for commercial use. Does anyone know of a good squidGuard list that is free?

    3) Blocking Yahoo Messenger, Google Talk, etc. These instant messenger applications work on so many ports that you need to drop lots of outgoing ports in iptables. And new ports keep getting added, so you have to keep adding them. Even if your list of ports is current, people can still use the web version of Google Talk, etc.

    4) Blocking P2P. I haven't been able to figure out how to do this yet.
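
    For point 1, the usual shape of the gateway rule is to drop forwarded HTTPS while leaving squid's own outbound traffic alone, which is what forces browsers onto the configured proxy. A sketch:

        # LAN clients route through the FORWARD chain; drop their direct HTTPS
        iptables -A FORWARD -p tcp --dport 443 -j DROP
        # squid's outgoing CONNECT traffic originates on the gateway itself
        # (OUTPUT chain), so it is unaffected by the rule above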

  • Use of backreferences in fail2ban filters possible?

    - by Izzy
    From time to time, I see collections of suspect "File not found" errors in my Apache logs, basically following the pattern:

        File does not exist: /var/www/file, referer: http://my.server.com/file

    In human terms: the file was not found, though it referenced itself as the referer. That's hardly possible legitimately, so it's a clear hacking attempt (and the REQUEST_URIs often enough suggest the same). In my eyes, a clear case for fail2ban - if I could get backreferences to work here:

        failregex = ^%(_apache_error_client)s File does not exist: /var/www(.+), referer: http://.+\1$

    (Just in case: the above examples assume the DIRECTORY_ROOT of the webserver is /var/www.) I googled for hours and searched the fail2ban wiki up and down, but nowhere could I find a statement concerning backreferences in its filters. Are they not supported, or did I do it the wrong way? Any hints how to make it work (apart from "dirty hacks" like first sending the request to another fake URL using mod_rewrite and then catching on that - if anyone is interested, I can elaborate on that approach in an answer - or doing something similar using mod_security)?

    As an entire log line was requested:

        [Fri Nov 08 14:57:28 2013] [error] [client 50.67.234.213] File does not exist: /var/www/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;, referer: http://www.myserver.com/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;

    (Sorry, logs were just switched, so this long candidate was the only one left currently; minor adjustments were made for privacy reasons.)
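
    fail2ban's failregex patterns are Python regular expressions, which generally do support backreferences, so the quickest way to settle it is to run the filter against the live log with the bundled fail2ban-regex tool. A sketch (the filter filename is hypothetical):

        # a matched-lines count > 0 confirms the backreference fires
        fail2ban-regex /var/log/apache2/error.log /etc/fail2ban/filter.d/apache-selfreferer.conf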

  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache, and I would like that, when I modify my html/css/js files and then update my cache manifest, users accessing my web app get an updated version of those files. If I change an HTML file, it works perfectly, but when I change CSS and JS files, the old version is still being used. I've been checking everything and I think it's related to my nginx configuration. I have a cache.conf file that contains the following:

        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }
        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
        }

    And in default.conf I have my locations. I would like to have this caching working on all locations except one; how could I configure this? I've tried the following and it isn't working:

        location /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "private";
            expires off;
        }

    Thanks.
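
    In nginx, regex locations (like the \.(css|js|htc)$ block) win over ordinary prefix locations, which is why the /dir1/dir2/ block never sees those requests. Marking the prefix with ^~ stops the regex search for matching URIs. A hedged sketch:

        # ^~ : when this prefix matches, nginx skips all regex locations,
        # so the long-cache css/js block no longer applies under /dir1/dir2/
        location ^~ /dir1/dir2/ {
            root /var/www/dir1;
            add_header Cache-Control "private, no-cache, must-revalidate";
            expires off;
        }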

  • Node js server not responding outside localhost centos

    - by David Martinez
    I'm running a basic Express server on CentOS, but for some reason it is not responding outside of localhost. I have tried everything I have found on Google, but nothing works so far. This is my Express server:

        app.listen(3000, "0.0.0.0");

    If I do curl http://localhost:3000/ on the server, it works fine. If I curl the IP of the server, it doesn't work. I already changed my iptables:

        num  target  prot opt source     destination
        1    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:80
        2    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:80
        3    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:3000

    There is currently an Apache server running on port 80 with no problems. I also tried setting up a VirtualHost in Apache, but it didn't work either:

        <VirtualHost *:80>
            ServerName SubDOmain.MyDomain.com
            ProxyRequests off
            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>
            ProxyPass / http://localhost:3000/
            ProxyPassReverse / http://localhost:3000/
            ProxyPreserveHost on
        </VirtualHost>

    There is another virtual host working fine that redirects to another DocumentRoot. I'm running Node as root for testing purposes, but the node application's owner is another user. All folders have 705 and files 664.

    Edit: I stopped Apache and ran my node app on port 80, and it worked fine; I could access the node app from my IP and domain.
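
    Since the app answers on port 80 but not 3000, the usual suspect is rule ordering: CentOS's default INPUT chain ends with a REJECT-all, and an ACCEPT appended after it is never reached. Inserting rather than appending, then checking positions, is a quick test (a sketch):

        # show rules with positions; look for a REJECT above the dpt:3000 ACCEPT
        iptables -L INPUT -n --line-numbers

        # insert the ACCEPT at the top of the chain instead of appending it
        iptables -I INPUT 1 -p tcp --dport 3000 -j ACCEPT
        service iptables save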

  • issues with Nginx + Passenger Production setup - Loading time/request time delay

    - by Dani Cela
    I'm having a bit of an issue relating to request time. I have nginx as a proxy server for a Ruby on Rails app running Passenger. I also have a PostgreSQL database server running on its own VM, separate from my nginx/application server. My issue is that when I try to access my products page, which does a lot of database queries, the request takes maybe 3-4 seconds. The second I flood the web server with requests, I choke out the web server, and requests take almost 20-30 seconds to process. The Rails server and database server do not crash, and the usage is not that high. Each server has more than enough memory; even CPU usage on the Rails server isn't more than 85% - that's high, but it's not maxed out. Is my problem related to my nginx proxy server? I don't really know how to fully explain this, so if you have a question please ask it and I can clarify what I mean.

    EDIT: to see exactly what I mean relating to the database query, see http://207.245.4.215/products
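
    When one slow endpoint stalls everything under load, the first thing to rule out is Passenger's process pool being exhausted: every worker stuck in the 3-4 second query while new requests queue. A diagnostic sketch (the pool size value is illustrative):

        # show pool size, active processes, and the request queue length
        sudo passenger-status

        # if the queue is consistently full, a larger pool can help (nginx config)
        passenger_max_pool_size 12;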

  • apache address-based access control

    - by stijn
    I have an Apache instance serving different locations, e.g.:

        https://host.com/jira
        https://host.com/svn
        https://host.com/websvn
        https://host.com/phpmyadmin

    Each of these has access control rules based on IP address/hostname. Some of them use the same configuration though, so I have to repeat the same rules each time:

        Order Deny,Allow
        Deny from All
        Allow from 10.35 myhome.com mycollegueshome.com

    Is there a way to make these reusable, so that I don't have to change each instance every time something changes? I.e., can I write this once, then use it for a couple of locations? Using SetEnvIf maybe? It would be nice if I could do something like this pseudo-config:

        <myaccessrule>
            Order Deny,Allow
            Deny from All
            Allow from 10.35 myhome.com mycollegueshome.com
        </myaccessrule>

        <Proxy /jira*>
            AccessRule = myaccessrule
        </Proxy>
        <Location /svn>
            AccessRule = myaccessrule
        </Location>
        <Directory /websvn>
            AccessRule = myaccessrule
        </Directory>
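
    This is close to what mod_macro provides (a third-party module for Apache 2.2, bundled from 2.4.5 on). A sketch translating the pseudo-config:

        <Macro AccessRule>
            Order Deny,Allow
            Deny from All
            Allow from 10.35 myhome.com mycollegueshome.com
        </Macro>

        <Location /svn>
            Use AccessRule
        </Location>
        <Directory /websvn>
            Use AccessRule
        </Directory>

    Alternatively, the three lines can live in their own file and be pulled in with an Include directive wherever they are needed.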
