Search Results

Search found 13716 results on 549 pages for 'proxy classes'.

  • Squid 2.7.6 not honoring ACL rules

    - by peppery
    Hello there. I have a /24 block of IP addresses assigned to a single Ubuntu server on which I have been attempting to install Squid. All of the IP addresses are set up correctly (as aliases of eth0) in /etc/networking and work as they should; using cURL I can specify an interface and the request goes out on the matching address. I would like Squid to take the IP address an incoming request arrived on and proxy the request out on that same IP (e.g. incoming on 123.123.123.1:3128 goes out on 123.123.123.1, .2 goes out on .2, etc.), so I set up these ACL rules in /etc/squid.conf:

        acl ip1 myip x.x.x.1
        tcp_outgoing_address x.x.x.1 ip1
        acl ip2 myip x.x.x.2
        tcp_outgoing_address x.x.x.2 ip2
        acl ip3 myip x.x.x.3
        tcp_outgoing_address x.x.x.3 ip3

    and so on, as this seems to be the only way (from my research) to do what I want. However, after much frustration, Squid seems to be ignoring these rules and sending requests out on the default interface. Does anybody have any suggestions? Thanks.
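
    A quick way to compare the two paths (a sketch: the alias, proxy port, and IP-echo URL below are placeholders, not from the original post):

        # What the box sends directly from a given alias:
        curl --interface x.x.x.2 http://ifconfig.me
        # What Squid sends when the request arrives on that same alias:
        curl -x http://x.x.x.2:3128 http://ifconfig.me

    If the two results differ, the corresponding tcp_outgoing_address rule is not being matched.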

  • How can I access blocked sites while in China?

    - by Samuurai
    I have some colleagues who need to go to China for work; however, while they are there they cannot access a lot of sites. One of these is Gmail (Google Apps), which they need for work. We have a UK-based Ubuntu server that I have root access to. What can I do for them? I thought about a Squid proxy, but that might rely on their hotel having port 8080 open, so it is not ideal. Are there any workarounds or other solutions?
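
    One common shape for this kind of workaround is an SSH dynamic SOCKS tunnel to the UK server; the sketch below is an assumption on my part (hostname, user, and the choice of port 443 are placeholders), not something from the original question:

        # With sshd also listening on a port that is rarely filtered (e.g. 443):
        ssh -D 1080 -p 443 someuser@uk-server.example.com
        # Then point the browser's SOCKS proxy at localhost:1080.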

  • Setting up VPS hosting

    - by RobinFTW
    I'm trying to set up my own VPS hosting. It won't be a paid service, just an experiment for me and some nerdy friends. What I'd like to be able to do is this: run multiple virtual servers on one external IP. These servers can run anything from Minecraft servers to simple HTTP servers, and they will also need to be accessible through SSH. What I don't get is how I can address these servers using domain names. I've done some research and found out that I could use vhosts with Apache; however, this only applies to HTTP servers. It was also suggested that I use a reverse proxy (Squid), but this again only applies to HTTP requests. I could just use different ports for different servers, but that's not ideal and not what I want. Can someone suggest a setup? Maybe some tutorials or anything.

  • What alternative is there to Nginx that supports HTTP keep-alive to backends?

    - by felace
    Hi. I recently asked a question about how to keep a backend connection persistent using Nginx, but found out it isn't possible anyway: it is an HTTP/1.0 proxy without support for keep-alive requests yet. (As a result, backend connections are created and destroyed on every request.) It all works fine right now (since the connection between the client and Nginx is kept alive and the result is effectively the same), but I don't want to establish a new connection every single time a new request is received, even if it's on a Unix domain socket. So, what software (preferably open source and not too tedious to configure) would you recommend for keeping such backend connections alive?

  • Finding a way to access Gmail at the company

    - by stckvrflw1
    Hello all. I access the network through my company's DNS servers. The company blocks lots of IPs in categories like entertainment, sports, music, message boards, etc. General e-mail is also one of those categories, so I can't reach gmail.com. Proxy sites are also blocked inside the company, and the ones I have found (after much effort) do not accept cookies. I am also unable to reach Gmail through iGoogle; that is blocked too. How can I access Gmail? Thanks.

  • SSH tunnel doesn't work

    - by s1ck
    I am trying to use my server as a "proxy" with SSH. However, setting up tunnelling with ssh -D localhost:8000 user@myserver does not work. I tested this on various machines with ssh and PuTTY: it connects just fine, but when I set my browser settings accordingly, I just get the error "Connection has been reset". I tried monitoring the traffic with Wireshark, but I didn't see any tunnel traffic at all. I explicitly set AllowTcpForwarding to "yes", but I still can't use the tunnel. When running ssh in verbose mode, I don't get any errors, only:

        debug1: Connection to port 8000 forwarding to socks port 0 requested.
        debug1: channel 3: new [dynamic-tcpip]
        debug1: channel 3: free: dynamic-tcpip, nchannels 4

    What am I doing wrong?
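
    A quick way to test the SOCKS tunnel itself, independent of any browser settings (a sketch; the target URL is just an example):

        curl --socks5-hostname localhost:8000 http://example.com/

    If this works but the browser does not, the problem is in the browser configuration; if it fails as well, the tunnel itself is at fault.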

  • Iptables rule to redirect all traffic requesting a specific domain

    - by user548971
    I'm on a simple Linux proxy. I'd like to add iptables rules to drop all requests for a specific domain. I figured I would run a dig command to get the IP addresses for the domain and then add an iptables rule for each one. It seems, however, that it doesn't work to put more than one IP address in a single rule. So it seems I need to add IP ranges like this:

        iptables -I FORWARD -p tcp -m iprange --dst-range 66.220.144.0-66.220.159.255 --dport 443 -j DROP

    That seems to work. However, it has proven pretty problematic to parse the output of dig and correctly create the appropriate iptables rules. Is there a better way? Thanks! EV
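
    For reference, a minimal sketch of the dig-parsing approach described above (example.com is a placeholder; it resolves the domain once and adds one DROP rule per returned address):

        # Keep only IPv4 addresses, since dig +short may also print CNAME targets.
        for ip in $(dig +short A example.com | grep -E '^[0-9.]+$'); do
            iptables -I FORWARD -p tcp -d "$ip" --dport 443 -j DROP
        done

    The obvious caveat is that the rules only cover whatever addresses the domain resolves to at that moment.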

  • How do I check if a process (Firefox) has quit?

    - by Al_Jehle
    I am using Ubuntu 12.04 64-bit with all updates installed. I made a simple shell script that starts a SOCKS5 tunnel and launches Firefox (with the matching network proxy settings) to use the tunnel. How do I recognize when Firefox has ended (when I close it) so that I can close the tunnel? Also, it would be awesome if I could run this in the background, but that is not necessary.

        #!/bin/sh
        ssh -fCN -D 10000 server.com
        firefox    # to launch Firefox on Ubuntu
        # ? code to determine when Firefox has quit
        # ? code that kills the tunnel
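
    Not part of the original post, but a hedged sketch of how the missing pieces could fit together (it assumes no other Firefox instance is already running, so the firefox command blocks until the browser is closed, and the pkill pattern must match the exact ssh command line used):

        #!/bin/sh
        ssh -fCN -D 10000 server.com
        # Run Firefox in the foreground: the script waits here until the browser exits.
        firefox
        # Firefox has quit, so tear the tunnel down.
        pkill -f 'ssh -fCN -D 10000 server.com'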

  • Tunnel out to internet

    - by case1352
    I'm on a network with no internet access, but I have SSH access to a server that sits on both my internal network and the internet. I would like certain programs to be able to access the internet, like Windows Update, my antivirus software, etc. If I installed a proxy server on that server I could use the internet from my PC, but I don't want to do that. Is there a way I can configure a web browser, and perhaps PuTTY, to let me "tunnel out" through the server to the internet?
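
    Since PuTTY is mentioned, the usual shape of this is a dynamic SOCKS forward through the SSH server; below is a sketch using PuTTY's command-line companion plink (hostname and user are placeholders):

        plink -N -D 1080 user@internal-server.example.com

    The browser is then pointed at the SOCKS proxy on localhost:1080; the same forward can be configured in the PuTTY GUI under Connection / SSH / Tunnels with a "Dynamic" destination.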

  • Use multiple WSGI mount points in Apache with an Nginx reverse proxy

    - by Thomas
    I am trying to set up multiple virtual hosts on the same server with Nginx and Apache and have run into a curious configuration issue. I have Nginx configured with a generic upstream to Apache:

        upstream backend {
            server 1.1.1.1:8080;
        }

    I'm trying to set up multiple subdomains in Nginx that hit different mount points in Apache. Each would act like the following examples:

        server {
            listen 80;
            server_name foo.yoursite.com;
            location / {
                proxy_pass http://backend/bar/;
                include /etc/nginx/proxy.conf;
            }
            ...
        }

        server {
            listen 80;
            server_name delta.yoursite.com;
            location / {
                proxy_pass http://backend/gamma/;
                include /etc/nginx/proxy.conf;
            }
            ...
        }

    These mount points point at Django projects; however, each of the URL entries comes back prepended with the Apache mount point path. So, if I call the Django URL entry for foo.yoursite.com/wiki/biz/, Django appears to return foo.yoursite.com/bar/wiki/biz/. Similarly, if I call the URL entry for delta.yoursite.com/wiki/biz/, I get delta.yoursite.com/gamma/wiki/biz/. Is there any way to get rid of the prefix being returned on the URL entries by Django and Apache?

  • Problem using structured data with a sproxy-generated C++ proxy class

    - by Odrade
    I am attempting to communicate structured data types between a Visual C++ client application and an ASP.NET web service. I am having issues whenever any parameter or return type is not a basic type (e.g. string, int, float, etc.). To illustrate the issue, I created the following ASP.NET web service:

        namespace TestWebService
        {
            [WebService(Namespace = "http://localhost/TestWebService")]
            [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
            [ToolboxItem(false)]
            public class Service1 : System.Web.Services.WebService
            {
                [WebMethod]
                public TestData StructuredOutput()
                {
                    TestData td = new TestData();
                    td.data = 1729;
                    return td;
                }
            }

            public class TestData
            {
                public int data;
            }
        }

    To consume the service, I created a dirt-simple Visual C++ client in VS2005. I added a web reference to the project, which caused sproxy to generate a proxy class for me. With the generated header properly included, I attempted to invoke the service like this:

        int _tmain(int argc, _TCHAR* argv[])
        {
            CoInitialize(NULL);
            Service1::CService1 ws;
            Service1::TestData td;
            HRESULT hr = ws.StructuredOutput(&td);  // data is returned as expected
            CoUninitialize();
            return 0;
        }  // crashes here with access violation

    The call to StructuredOutput returns the data as expected, but an access violation occurs on destruction of the CService1 object. The access violation is occurring here (from atlsoap.h):

        void UninitializeSOAP()
        {
            if (m_spReader.p != NULL)
            {
                m_spReader->putContentHandler(NULL);  // access violation
                m_spReader.Release();
            }
        }

    I see the same behavior when using a TestData object as an input parameter, or when using any other structured data type as input or output. When I use basic types for the web service's input/output, I do not experience these errors. Any ideas about why this might be happening? Is sproxy screwing something up, or am I? NOTE: I'm aware of gSOAP and the wsdl2h tool, but those aren't freely available for commercial use (and nobody here is going to buy a license). I am open to alternatives for generating the C++ proxy, as long as they are free for commercial use.

  • WCF: collection proxy type on client

    - by Unholy
    I have the following type in WSDL (it is generated by a third-party tool):

        <xsd:complexType name="IntArray">
          <xsd:sequence>
            <xsd:element maxOccurs="unbounded" minOccurs="0" name="Elements" type="xsd:int" />
          </xsd:sequence>
        </xsd:complexType>

    Sometimes Visual Studio generates:

        public class IntArray : System.Collections.Generic.List<int> {}

    And sometimes it doesn't generate any proxy type for this WSDL at all and just uses int[]. The collection type in the web service configuration is System.Array. What could be the reason for such unpredictable behavior? Edited: I found a way to reproduce this behavior. For example, we have two types:

        <xsd:complexType name="IntArray">
          <xsd:sequence>
            <xsd:element maxOccurs="unbounded" minOccurs="0" name="Elements" type="xsd:int" />
          </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name="StringArray">
          <xsd:sequence>
            <xsd:element maxOccurs="unbounded" minOccurs="0" name="Elements" type="xsd:string" />
          </xsd:sequence>
        </xsd:complexType>

    VS generates:

        public class IntArray : System.Collections.Generic.List<int> {}
        public class StringArray : System.Collections.Generic.List<string> {}

    Now I change the StringArray type:

        <xsd:complexType name="StringArray">
          <xsd:sequence>
            <xsd:element maxOccurs="unbounded" minOccurs="0" name="Elements" type="xsd:string" />
            <xsd:any minOccurs="0" maxOccurs="unbounded" namespace="##any" processContents="lax" />
          </xsd:sequence>
          <xsd:anyAttribute namespace="##any" processContents="lax"/>
        </xsd:complexType>

    Now VS generates a proxy type for StringArray only, but not for IntArray.

  • How to proxy calls to the instance of an object

    - by mr.b
    Edit: Changed the question title from "Does C# allow method overloading, PHP style (__call)?" - I figured out it doesn't have much to do with the actual question. Also edited the question text. What I want to accomplish is to proxy calls to the methods of an object instance, so that I can log calls to any of its methods. Right now, I have code similar to this:

        class ProxyClass
        {
            static logger;
            public AnotherClass inner { get; private set; }

            public ProxyClass()
            {
                inner = new AnotherClass();
            }
        }

        class AnotherClass
        {
            public void A() {}
            public void B() {}
            public void C() {}
            // ...
        }

        // meanwhile, in happyCodeLandia...
        ProxyClass pc = new ProxyClass();
        pc.inner.A(); // need to write log message like "method A called"
        pc.inner.B(); // need to write log message like "method B called"
        // ...

    So, how can I proxy calls to an object instance in an extensible way? Method overloading would be the most obvious solution (if it were supported the way PHP's __call is). By extensible I mean that I don't have to modify ProxyClass whenever AnotherClass changes. In my case, AnotherClass can have any number of methods, so it wouldn't be appropriate to overload or wrap all of its methods just to add logging. I am aware that this might not be the best approach for this kind of problem, so if anyone has an idea of what approach to use, shoot. Thanks!

  • Threading in client-server socket program - proxy server

    - by crazyTechie
    I am trying to write a program that acts as a proxy server. The proxy server basically listens on a given port (7575) and sends the request on to the server. As of now, I have not implemented caching of the response. The code looks like:

        ServerSocket socket = new ServerSocket(7575);
        Socket clientSocket = socket.accept();
        clientRequestHandler(clientSocket);

    I changed the above code as below, calling the same clientRequestHandler method from inside another method:

        Socket clientSocket = socket.accept();
        Thread serverThread = new Thread(new ConnectionHandler(clientSocket));
        serverThread.start();

        class ConnectionHandler implements Runnable {
            Socket clientSocket = null;

            ConnectionHandler(Socket client) {
                this.clientSocket = client;
            }

            @Override
            public void run() {
                try {
                    PrxyServer.clientRequestHandler(clientSocket);
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        }

    Using this code, I am able to open a web page like Google. However, if I open another web page, even after I have completely received the first response, I get a "connection reset by peer" exception. How can I handle this issue? Can I use threading to handle different requests, and can someone give a reference to example code that implements this kind of threading? Thanks.

  • Maintain order of messages via proxies to app servers

    - by David Turner
    Hi, I am receiving messages from a third party via an HTTP POST, and it is important that the order in which the messages hit our infrastructure is maintained through the load balancers and proxies until they reach our application servers. Quick diagram (proxies are in place due to security requirements):

        [ACE load balancer] - [2 proxies] - [Application Servers]

    or maybe:

        [ACE load balancer] - [2 proxies] - [ACE load balancer] - [Application Servers]

    My idea was to set up the load balancers in active-passive mode, to force all messages to use one proxy, and then have both proxies hit a second load balancer, also configured active-passive, to hit one application server. Whilst the above is not ideal, it does give me resilience, and once a message is in my app servers I enter a stateless world and load balance across both nodes of my cluster. However, I am concerned that even a single proxy could send messages out of order: if two messages are received very close together, message 2 might get processed faster than message 1. Is this possible? Likely? Is there a simple open-source proxy (mod_proxy?) that can be easily configured to just pass messages through and that guarantees to send them on in the order they are received? If so, which one, and links to how I should configure it would be great. In fact, any links to articles about avoiding "out of order" messages using hardware would be gratefully received. Thanks. PS: for those who are interested, the app is a Java Spring Integration application currently running on an application server.

  • Mixing RewriteRule and ProxyPass in Apache

    - by Taylor L
    I was working on debugging an issue today related to mixing mod_proxy and mod_rewrite together, and I ended up having to use balancer://mycluster in the RewriteRule in order to stop receiving a 404 error from Apache. I have two questions: 1) Is there any other way to get the rewritten URL to go through the balancer without adding balancer://mycluster to the RewriteRule? 2) Is there a way to define all the parameters I defined in ProxyPass (stickysession=JSESSIONID|jsessionid scolonpathdelim=On lbmethod=bytraffic nofailover=Off) in either the <Proxy> block or the RewriteRule? I'm concerned that requests matching the new RewriteRule won't load-balance in the same fashion as those that go through ProxyPass (like /app1/something.do). Below are the relevant sections of the httpd.conf. I am using Apache 2.2.

        <Proxy balancer://mycluster>
            Order deny,allow
            Allow from all
            BalancerMember ajp://my.domain.com:8009 route=node1
            BalancerMember ajp://my.domain.com:8009 route=node2
        </Proxy>

        ProxyPass /app1 balancer://mycluster/app1 stickysession=JSESSIONID|jsessionid scolonpathdelim=On lbmethod=bytraffic nofailover=Off
        ProxyPassReverse /app1 ajp://my.domain.com:8009/app1
        ...
        RewriteRule ^/static/cms/image/(.*)\.(.*) balancer://mycluster/app1/$1.$2 [P,L]

  • Problem uploading an app to Google App Engine

    - by Oberon
    I'm having problems uploading an app to Google App Engine from my workplace. I believe the problem is related to the proxy, because I do not see the same problem when following the same procedure from home (I do not specify HTTP_PROXY at home). These are the commands I run:

        HTTP_PROXY=http://proxy.<thehostname>.com:8080
        HTTP_PROXY=https://proxy.<thehostname>.com:8080
        appcfg.py --insecure update myappfolder

    When running the commands I get prompted for email and password, as expected, but after that it immediately exits with this error message:

        Error 302: --- begin server output ---
        <HTML>
        <HEAD>
        <TITLE>Moved Temporarily</TITLE>
        </HEAD>
        <BODY BGCOLOR="#FFFFFF" TEXT="#000000">
        <H1>Moved Temporarily</H1>
        The document has moved
        <A HREF="https://www.google.com/accounts/ClientLogin">here</A>.
        </BODY>
        </HTML>
        --- end server output ---

    Note: I added the --insecure option because otherwise it gave a warning about a missing ssl module. Any idea how to solve or work around this problem?
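
    Not from the original post, but for reference: the App Engine SDK tools generally honor both HTTP_PROXY and HTTPS_PROXY, and the variables have to be exported (or set on the same command line as appcfg.py) to be visible to it. A sketch of how that is commonly done, with a placeholder hostname:

        export HTTP_PROXY=http://proxy.example.com:8080
        export HTTPS_PROXY=http://proxy.example.com:8080
        appcfg.py --insecure update myappfolder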

  • Java abstract visitor - guaranteed to succeed? If so, why?

    - by disown
    I was dealing with Hibernate, trying to figure out the run-time class behind proxied instances by using the visitor pattern. I then came up with an AbstractVisitable approach, but I wonder if it will always produce correct results. Consider the following code:

        interface Visitable {
            public void accept(Visitor v);
        }

        interface Visitor {
            public void visit(Visitable visitorHost);
        }

        abstract class AbstractVisitable implements Visitable {
            @Override
            public void accept(Visitor v) {
                v.visit(this);
            }
        }

        class ConcreteVisitable extends AbstractVisitable {
            public static void main(String[] args) {
                final Visitable visitable = new ConcreteVisitable();
                final Visitable proxyVisitable = (Visitable) Proxy.newProxyInstance(
                        Thread.currentThread().getContextClassLoader(),
                        new Class<?>[] { Visitable.class },
                        new InvocationHandler() {
                            @Override
                            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                                return method.invoke(visitable, args);
                            }
                        });
                proxyVisitable.accept(new Visitor() {
                    @Override
                    public void visit(Visitable visitorHost) {
                        System.out.println(visitorHost.getClass());
                    }
                });
            }
        }

    This makes a ConcreteVisitable which inherits the accept method from AbstractVisitable. In C++, I would consider this risky, since this in AbstractVisitable could be referring to AbstractVisitable::this and not ConcreteVisitable::this. I was worried that the code would, under certain circumstances, print class AbstractVisitable. Yet the code above outputs class ConcreteVisitable, even though I hid the real type behind a dynamic proxy (the most difficult case I could come up with). Is the abstract visitor approach above guaranteed to work, or are there pitfalls with this approach? What guarantees does Java give with respect to the this pointer?

  • RewriteRule and Proxy

    - by Felipe Alvarez
    Two servers: example.net and example.com. On http://example.net, my httpd.conf contains:

        # example.net
        RewriteEngine On
        RewriteRule ^/felipetest2 http://example.com/webpage [P]

    I am getting a 302 Moved which points to http://example.net/webpage, but it should be http://example.com/webpage. What's going on? I have control over both the .net and .com servers in these examples. I know I can do the same with ProxyPass and ProxyPassReverse, but I'm trying to get my head around this one. Edit: the main question is: how do I show a maintenance page without changing the URL in the browser, on the same domain or across different domains?
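
    A quick way to see exactly what the client is being told (a sketch, assuming curl is available on a machine that can reach example.net):

        curl -sI http://example.net/felipetest2

    The Location header of the response shows whether the 302 is generated by example.net itself or passed straight through from example.com, which helps narrow down where the rewrite goes wrong.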

  • Squid configuration for proxy server

    - by Ian Rob
    I have a server with 10 IPs that I want to give some friends access to via authentication, but I'm stuck on Squid's config file. Let's say I have these IPs available on my server:

        212.77.23.10
        212.77.1.10
        68.44.82.112

    And I want to allocate each one of them to a different user, like so:

        212.77.23.10 goes to user manilodisan using password 123456
        212.77.1.10 goes to user manilodisan1 using password 123456
        68.44.82.112 goes to user manilodisan2 using password 123456

    I managed to add the passwords and authentication works OK, but how do I restrict one user to one of the available IPs? I have a basic setup from different bits I found over the internet, but nothing seems to work. Here's my squid.conf (all comments are removed to make it lighter):

        acl ip1 myip 212.77.23.10
        acl ip2 myip 212.77.1.10
        tcp_outgoing_address 212.77.23.10 ip1
        tcp_outgoing_address 212.77.1.10 ip2
        http_port 8888
        visible_hostname weezie
        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid-passwd
        acl ncsa_users proxy_auth REQUIRED
        http_access allow ncsa_users
        acl all src 0.0.0.0/0.0.0.0
        acl manager proto cache_object
        acl localhost src 127.0.0.1/255.255.255.255
        acl to_localhost dst 127.0.0.0/8
        acl SSL_ports port 443          # https
        acl SSL_ports port 563          # snews
        acl SSL_ports port 873          # rsync
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl Safe_ports port 631         # cups
        acl Safe_ports port 873         # rsync
        acl Safe_ports port 901         # SWAT
        acl purge method PURGE
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access allow purge localhost
        http_access deny purge
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access deny all
        icp_access allow all
        hierarchy_stoplist cgi-bin ?
        access_log /var/log/squid/access.log squid
        acl QUERY urlpath_regex cgi-bin \?
        cache deny QUERY
        refresh_pattern ^ftp:    1440 20% 10080
        refresh_pattern ^gopher: 1440  0%  1440
        refresh_pattern .           0 20%  4320
        acl apache rep_header Server ^Apache
        broken_vary_encoding allow apache
        extension_methods REPORT MERGE MKACTIVITY CHECKOUT
        hosts_file /etc/hosts
        forwarded_for off
        coredump_dir /var/spool/squid

  • IMAP proxy as a POP3 hub?

    - by mailman stan
    Simple scenario, complicated technology: one family receiving mail from five email addresses via POP3 into one Outlook inbox on a single PC. Now we'd like to be able to replicate that single inbox across multiple devices (e.g. desktop PC, laptop, netbook, smartphone). If we continue using POP3 as the mail transfer protocol, messages will be downloaded to one device and will not be visible to the others; replies will likewise be isolated on the sending machine.

    If we switch to IMAP, I understand that we can have multiple devices maintaining a shared view of an inbox hosted at the server end, but what about multiple accounts? I tried changing the account configuration in Outlook to fetch from the mail providers' IMAP service instead of POP3, which does give a shared view across multiple devices, but it also causes Outlook to create a separate inbox and PST for each account. This is awkward because it means there are five separate folders that need to be checked, and Outlook tools like search filters and rules don't seem to work across accounts.

    To get what I want (five accounts delivered into one shared mailbox) it seems that I would need some sort of intervening server that collects mail (using POP3) from all our accounts into a single inbox while preserving the original destination addresses, and then serves it up to all our devices using IMAP. Is this workable? Is it a good approach? Is there an easier way?

  • Nginx as a proxy to Jetty

    - by user36812
    Pardon me, this is my first attempt at Nginx-Jetty instead of Apache-JK-Tomcat. I deployed the myapp.war file to $JETTY_HOME/webapps/, and the app is accessible at the URL http://myIP:8080/myapp. I did a default installation of Nginx, and the default Nginx page is accessible at myIP. Then I modified the default domain under /etc/nginx/sites-enabled to the following:

        server {
            listen 80;
            server_name mydomain.com;
            access_log /var/log/nginx/localhost.access.log;

            location / {
                #root /var/www/nginx-default;
                #index index.html index.htm;
                proxy_pass http://127.0.0.1:8080/myapp/;
            }

            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /var/www/nginx-default;
            }
        }

    Now I get the index page of myapp (running in Jetty) when I hit myIP, which is good, but all the links are malformed. For example, the link to the CSS is mydomain.com/myapp/css/style.css, while it should have been mydomain.com/css/style.css. It seems to be mapping mydomain.com to 127.0.0.1:8080 instead of 127.0.0.1:8080/myapp/. Any idea what I am missing? Do I need to change anything on the Jetty side too?

  • DNAT to 127.0.0.1 with iptables / Destination access control for transparent SOCKS proxy

    - by cdauth
    I have a server running on my local network that acts as a router for the computers in my network. I now want outgoing TCP requests to certain IP addresses to be tunnelled through an SSH connection, without giving the people on my network the possibility of using that SSH tunnel to connect to arbitrary hosts. The approach I had in mind was to have an instance of redsocks listening on localhost and to redirect all outgoing requests to the IP addresses I care about to that redsocks instance. I added the following iptables rule:

        iptables -t nat -A PREROUTING -p tcp -d 1.2.3.4 -j DNAT --to-destination 127.0.0.1:12345

    Apparently, the Linux kernel considers packets coming from a non-127.0.0.0/8 address to a 127.0.0.0/8 address as "Martian packets" and drops them. What worked, though, was to have redsocks listen on eth0 instead of lo and then have iptables DNAT the packets to the eth0 address instead (or use a REDIRECT rule). The problem with this is that then every computer on my network can use the redsocks instance to connect to any host on the internet, but I want to limit its usage to a certain set of IP addresses only. Is there any way to make iptables DNAT packets to 127.0.0.1? Otherwise, does anyone have an idea how I could achieve my goal without opening up the tunnel to everyone? Update: I have also tried to change the source of the packets, without any success:

        iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.0/24 -d 1.2.3.4 -j SNAT --to-source 127.0.0.1
        iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.0/24 -d 127.0.0.1 -j SNAT --to-source 127.0.0.1
