Search Results

Search found 6882 results on 276 pages for 'ftp proxy'.

Page 87/276

  • vsftpd hangs at "150 Here comes the directory listing."

    - by Rikr
    In a vsftpd server environment with various directories shared from NFS mountpoints, I can log in without a problem, but when I send the first "ls", vsftpd gives me the directory listing:

        lftp [email protected]:~> ls
        -rw-rw-rw- 1 1160 1016 392 Jun 06 09:28 test.gif

    but it never returns me to the lftp prompt. In the server log, the last message is "150 Here comes the directory listing." Why does this happen?
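
    A hang after the 150 reply, even when the listing text appears, usually points at the separate data connection not completing or closing cleanly; a firewall or NAT between client and server is the usual suspect. A hedged sketch of the common mitigation in /etc/vsftpd.conf, with an arbitrary example port range that must also be opened in the firewall:

        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100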

    Read the article

  • What firewall ports do I need to open when using FTPS?

    - by anoopm
    I need to access an FTPS server (vsftpd) on a vendor's site. The vendor has a firewall in front of the FTPS server, and I have a firewall in front of my FTPS client. I understand that ports 990 and 991, and maybe 989, need to be opened for control traffic. Looking at it from the vendor's firewall perspective, should these ports be open for both inbound and outbound traffic? What about ports for the data channel? Do I have to open all ports above 1000, and should I do that for both inbound and outbound traffic? TIA for your help.
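
    For reference, a hedged sketch of what the server-side rules often end up looking like with implicit FTPS, assuming Linux/iptables and a passive port range pinned in the server's configuration (the range below is an example, not a standard):

        # control channel (implicit FTPS)
        iptables -A INPUT -p tcp --dport 990 -j ACCEPT
        # data channel: whatever passive range the server is configured to use
        iptables -A INPUT -p tcp --dport 40000:40100 -j ACCEPT

    Because FTPS encrypts the control channel, firewall FTP helpers cannot inspect it to open data ports dynamically, which is why a fixed passive range is the usual approach.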

    Read the article

  • Easily manage vsftpd virtual users?

    - by Phil
    I have a vsftpd server configured with many virtual users: logins are stored in a Berkeley DB file, and one configuration file exists for each user to define his permissions (read-only or read-write, home directory, etc.). To do that, I use the user_config_dir parameter (set in vsftpd.conf). I am wondering if it is possible to manage these virtual users from a simple GUI (such as a web interface). I have found some tools, but they are limited to generic vsftpd configuration, not virtual user management. Otherwise, PAM-MySQL seems to be a good way to manage users efficiently, but only usernames/passwords and logs can be stored in the database, not permissions. Finally, I've found this thread, but the solution is a bit awkward... Is there any way to easily manage vsftpd virtual users?
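
    For context, a minimal sketch of the per-user layout being described, with hypothetical names:

        # vsftpd.conf
        user_config_dir=/etc/vsftpd/user_conf

        # /etc/vsftpd/user_conf/alice  (one file per virtual user)
        local_root=/srv/ftp/alice
        write_enable=YES

    Any management GUI essentially just needs to edit the Berkeley DB file plus these per-user files, which is why a small custom web script is often what people end up with.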

    Read the article

  • Connecting SVN from Remote Server

    - by Ashish
    I have hosted my repository at Assembla and it works fine. Now I want to write a script that can automate the build process:

        1. Take the code from the Assembla repository.
        2. Make a dump and copy it onto my web server.

    What I have researched on the net suggests using commands like:

        svn co svn+ssh://[email protected]/home/svn/test

    I believe I need to open a shell on my server and type these commands, but shell access has been disabled by my server admin. I tried to run the same from PHP using exec; the admin has disabled that too. (I am on shared hosting and want to do an automated deployment using these simple steps; I don't want to bring my local system into this process.) Now I am not sure that, even if I get shell access to my server, commands like svn will work there, as I don't have SVN installed on my server (it's installed on Assembla). Kindly let me know if any more explanation is required, or if I am going down the wrong track. I am a newbie, so please be descriptive in answering :) Thanks in advance, Ace
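
    A hedged sketch of what such a script could look like once both shell access and an svn client exist on the web server (URL and paths hypothetical; svn export avoids leaving .svn metadata in the web root):

        #!/bin/sh
        # pull a clean copy of the code straight from the repository
        svn export --force https://subversion.assembla.com/svn/yourspace/trunk /var/www/html

    Note that the svn client has to be installed on the machine running the command; the server hosting the repository (Assembla) is a separate concern.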

    Read the article

  • vsftpd chroot_local_user does nothing

    - by Reinderien
    Hello all. I'm setting up a vsftpd server on: Linux 2.6.32-26-server #48-Ubuntu SMP Wed Nov 24 10:28:32 UTC 2010 x86_64 GNU/Linux. When I set chroot_local_user=YES, there is no effect (I can still see / when I log in). There is nothing in syslog or /var/log/vsftpd.log to indicate what's wrong. I know that I'm editing the right conf file and that other settings do come into effect when I restart the daemon, because these work:

        ssl_enable=YES
        force_local_data_ssl=YES
        force_local_logins_ssl=YES

    Any idea what's wrong? Thanks.
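
    A hedged sketch of the block that usually has to be present for the jail to apply (a common gotcha is that chroot_local_user affects local logins only, not anonymous or guest-mapped virtual users):

        local_enable=YES
        chroot_local_user=YES
        # optionally exempt specific accounts from the jail:
        chroot_list_enable=YES
        chroot_list_file=/etc/vsftpd.chroot_list

    These directive names are from vsftpd's standard configuration; whether they explain this particular case is an assumption.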

    Read the article

  • IIS7 FTP7.5 and FXP

    - by cralexns
    I have FTP 7.5 on my Windows 2008 server and would like to use it for FXP; however, regardless of the other server in question, I can never complete a transfer. It always fails with this message:

        [L] 501-Server cannot accept argument.
        [L] Win32 error: The parameter is incorrect.
        [L] Error details: Client IP on the control channel didn't match the client IP on the data channel.

    I've tested normal use and it works fine. Is there any setting I can change to allow FXP?
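
    That error text matches FTP 7.5's data channel security check, which by design rejects data connections arriving from a different IP than the control connection, and that is exactly what FXP does. A hedged sketch of relaxing it, assuming the dataChannelSecurity attributes documented for FTP 7.5 (verify before using; this also weakens protection against FTP bounce attacks):

        appcmd set config -section:system.applicationHost/sites ^
            "-siteDefaults.ftpServer.security.dataChannelSecurity.matchClientAddressForPasv:false" ^
            "-siteDefaults.ftpServer.security.dataChannelSecurity.matchClientAddressForPort:false" ^
            /commit:apphost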

    Read the article

  • tailwatchd - chkservd on host.domain.com status: hang

    - by Zim3r
    The chkservd sub-process with pid 17420 was running for 602 seconds. The sub-process was terminated as it exceeded the time between checks of 300 seconds. Please check /var/log/chkservd.log and /usr/local/cpanel/logs/tailwatchd_log to discover the problem.

    I was notified of this error by email on the destination server while transferring a server. What does it mean? This also happened:

        ftpd failed @ Wed Aug 8 11:26:38 2012. A restart was attempted automagically.
        Service Check Method: [socket connect]
        Reason: Timeout while trying to get data from service: Died at /usr/local/cpanel/Cpanel/TailWatch/ChkServd.pm line 607.
        Number of Restart Attempts: 1
        Startup Log:
        Starting pure-config.pl: Running: /usr/sbin/pure-ftpd -O clf:/var/log/xferlog --daemonize -A -c50 -B -C8 -D -fftp -H -I15 -lextauth:/var/run/ftpd.sock -L10000:8 -m4 -s -U133:022 -u100 -Oxferlog:/usr/local/apache/domlogs/ftpxferlog -k99 -Z -Y1 -JHIGH:MEDIUM:+TLSv1:!SSLv2:+SSLv3 [ OK ]
        Starting pure-authd:
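
    The notification itself just means the cPanel service monitor (chkservd, run under tailwatchd) exceeded its check interval and was killed; the entry above shows it also restarted pure-ftpd once on its own. A hedged sketch of the usual first checks (the restart script path is cPanel's standard layout, but verify it exists on this system):

        tail -n 50 /var/log/chkservd.log
        tail -n 50 /usr/local/cpanel/logs/tailwatchd_log
        /scripts/restartsrv_ftpd    # cPanel's restart wrapper for the FTP service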

    Read the article

  • Is there a bug with Apache 2.2 and content filters (and maybe mod_proxy)?

    - by asciiphil
    I'm running Apache 2.2.15-29 on RHEL 6 (actually Scientific Linux 6.4) and I'm trying to set up a reverse proxy with content rewriting, so all of the links on the proxied web pages are rewritten to reference the proxy host. I'm running into a problem with some of the content rewriting and I'd like to know if this is a bug or if I'm doing something wrong (and how to do it right, if applicable). I'm proxying a subdirectory on an internal host (internal.example.com/foo) onto the root of an external host (external.example.com). I need to rewrite HTML, CSS, and Javascript content to fix all of the URLs. I'm also hosting some content locally on the external host, which I don't think is a problem but I'm mentioning it here for completeness. My httpd.conf looks roughly like this:

        <VirtualHost *:80>
            ServerName external.example.com
            ServerAlias example.com

            # Serve all local content directly, reverse-proxy all unknown URIs.
            RewriteEngine On
            RewriteRule ^(/(index.html?)?)?$ http://internal.example.com/foo/ [P]
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -f [OR]
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d
            RewriteRule ^.*$ - [L]
            RewriteRule ^/~ - [L]
            RewriteRule ^(.*)$ http://internal.example.com$1 [P]

            # Standard header rewriting.
            ProxyPassReverse / http://internal.example.com/foo/
            ProxyPassReverseCookieDomain internal.example.com external.example.com
            ProxyPassReverseCookiePath /foo/ /

            # Strip any Accept-Encoding: headers from the client so we can
            # process the pages as plain text.
            RequestHeader unset Accept-Encoding

            # Use mod_proxy_html to fix URLs in text/html content.
            ProxyHTMLEnable On
            ProxyHTMLURLMap http://internal.example.com/foo/ /
            ProxyHTMLURLMap http://internal.example.com/foo /
            ProxyHTMLURLMap /foo/ /

            ## Use mod_substitute to fix URLs in CSS and Javascript
            #<Location />
            #    AddOutputFilterByType SUBSTITUTE text/css
            #    AddOutputFilterByType SUBSTITUTE text/javascript
            #    Substitute "s|http://internal.example.com/foo/|/|nq"
            #</Location>

            # Use mod_ext_filter to fix URLs in CSS and Javascript
            ExtFilterDefine fixurlcss mode=output intype=text/css cmd="/bin/sed -rf /etc/httpd/fixurls"
            ExtFilterDefine fixurljs mode=output intype=text/javascript cmd="/bin/sed -rf /etc/httpd/fixurls"
            <Location />
                SetOutputFilter fixurlcss;fixurljs
            </Location>
        </VirtualHost>

    The text/html rewriting works just fine. When I use either mod_substitute or mod_ext_filter, the external server sends the pages as Transfer-Encoding: chunked, sends all of the data, and then closes the connection without sending the final, zero-length chunk. Some HTTP clients are unhappy with this. (Chrome won't process any content sent in this way, for example, so the pages don't get CSS applied to them.) Here's a sample wget session:

        $ wget -O /dev/null -S http://external.example.com/include/jquery.js
        --2013-11-01 11:36:36--  http://external.example.com/include/jquery.js
        Resolving external.example.com (external.example.com)... 192.168.0.1
        Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 200 OK
          Date: Fri, 01 Nov 2013 15:36:36 GMT
          Server: Apache
          Last-Modified: Tue, 29 Oct 2013 13:09:10 GMT
          ETag: "1d60026-187b8-4e9e0ec273e35"
          Accept-Ranges: bytes
          Vary: Accept-Encoding
          X-UA-Compatible: IE=edge,chrome=1
          Content-Type: text/javascript;charset=utf-8
          Connection: close
          Transfer-Encoding: chunked
        Length: unspecified [text/javascript]
        Saving to: `/dev/null'

            [ <=> ] 100,280     --.-K/s   in 0.005s

        2013-11-01 11:36:37 (19.8 MB/s) - Read error at byte 100280 (Success). Retrying.

        --2013-11-01 11:36:38--  (try: 2)  http://external.example.com/include/jquery.js
        Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 416 Requested Range Not Satisfiable
          Date: Fri, 01 Nov 2013 15:36:38 GMT
          Server: Apache
          Vary: Accept-Encoding
          Content-Type: text/html;charset=utf-8
          Content-Length: 260
          Connection: close

        The file is already fully retrieved; nothing to do.

    Am I doing something wrong? Am I hitting some sort of Apache bug? What do I need to do to get it working? (Note that I'd prefer solutions that work within RHEL-6-packaged RPMs; upgrading to Apache 2.4 would be a last resort, as we have a lot of infrastructure built around 2.2 on this system at the moment.)
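
    The /etc/httpd/fixurls script referenced above is not shown; a hypothetical version consistent with the ProxyHTMLURLMap rules would be:

        s|http://internal\.example\.com/foo/|/|g
        s|/foo/|/|g

    As a hedged observation, the symptom described (all data sent, but no terminating zero-length chunk) points at the output filter chain not signaling end-of-stream, which would explain why both mod_substitute and mod_ext_filter show it while the mod_proxy_html path does not.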

    Read the article

  • Migrating to a new server

    - by KThompson
    I was asked by a client to move a website to a new server. Normally I would make a local copy by downloading it and then uploading it to the new server, after making any necessary configuration changes of course. This particular client has about 1 million images ranging up to 1 MB each, and I still haven't found a solution for how to move these properly. I have tried zipping them on the server first, but the file gets too big and the process seems to just stop. Anyway, the client told me he can set up a "portal" so I don't have to download and re-upload the files. I have never heard of this, nor can I find out what it is. Do any of you know something that fits this description, or a solution to my image problem? Thanks in advance
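
    Whatever the "portal" turns out to be, the standard way to avoid the download/re-upload round trip is a direct server-to-server copy. A hedged sketch, assuming SSH access to at least one of the hosts (paths and hostname hypothetical):

        rsync -az --partial /var/www/oldsite/images/ user@newserver:/var/www/newsite/images/

    rsync survives interruptions and can simply be re-run to pick up where it left off, which matters with a million files.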

    Read the article

  • A free WebDAV client for Windows?

    - by Brian L.
    I'd like to copy files from a network drive to a SharePoint site (perhaps as a mapped drive). What's a good client for doing so? Windows (XP) Explorer is obviously bad; I'm trying RichCopy at the moment. Any opinions on CoreFTP? Are there any recommended open-source WebDAV clients?

    Read the article

  • SFTP (or similar) server automated setup for group spaces

    - by spikeheap
    I need to build a dedicated machine which will be used to allow our clients to upload and download files in a secure manner. Each client has multiple users, and I would rather not hand out generic per-client accounts shared by multiple people. Each client should have access to their own files only, and no others. There is no use case (yet) for multiple clients interacting with a single file or space. Is there an existing solution for automating the creation and maintenance of these accounts, preferably with a view to LDAP integration? Currently it looks like if we want to use SFTP with chrooted spaces, they will need to be set up manually (or the automation hand-rolled). If a solution exists for a different (but still secure) transfer method, such as FTPS, I'm all ears.
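
    For the chrooted-SFTP route, a hedged sketch of the sshd_config block that removes most of the per-user hand-editing (group name and paths hypothetical; OpenSSH requires the chroot directory and its parents to be root-owned):

        Subsystem sftp internal-sftp
        Match Group clients
            ChrootDirectory /srv/sftp/%u
            ForceCommand internal-sftp
            AllowTcpForwarding no

    With this in place, provisioning a client user reduces to creating the account in the right group and making its directory, which is straightforward to script or drive from LDAP.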

    Read the article

  • SFTP sending files between laptops on Ubuntu

    - by twigg
    I want to transfer files between two Ubuntu systems using SFTP. I have it set up: I can connect to the other laptop, ping it, and see its file list using sftp> dir. I can see the files on the other system. But when I call get filename.deb it comes up with:

        Fetching /home/user/filename.deb to filename.deb
          0%     0   0.0KB/s   --:-- ETA

    and then drops back to the sftp prompt without transferring anything. Have I missed something?
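
    A hedged first step is to rerun the transfer with verbose output so the failure point becomes visible (hostname hypothetical):

        sftp -v user@laptop2
        # or bypass the interactive client entirely:
        scp -v user@laptop2:/home/user/filename.deb .

    If scp shows the same stall, the problem is below the file-transfer layer; MTU mismatches on wireless links are a classic cause of connections that handshake fine but stall on bulk data.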

    Read the article

  • RDP through TCP Proxy

    - by johng100
    Hi, first time on Stack Overflow and I'm hoping someone can help me. I'm looking at a proof of concept to pass RDP traffic through a TCP proxy/tunnel which will pass through firewalls using HTTPS. The problem has to do with deploying images to machines, so it can't be assumed that the .NET framework will be present, which is why C++ is being used at the deployment end of a connection. The basic system I have at present is a program which listens for client connections on a port, then passes any data to a WCF service which stores it as a byte array. A deployment machine (using gSOAP and C++) polls the WCF service for messages and, if it finds them, passes the data on to the target server process via sockets. I know this sounds horrible, but it works for simple test client and server programs passing data back and forth via this WCF/C++/C# proxy layer. But I have to support traffic from RDP, VNC and possibly others, so I need a transparent proxy, and I am wondering whether the above approach is worth pursuing. I've read up on SSH tunneling and that seems a possibility. My basic question is: is it possible to tunnel RDP traffic over HTTPS using custom code? Thanks, John
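
    RDP is an ordinary TCP stream, so any byte-faithful tunnel will carry it; a polling, store-and-forward relay like the one described is likely to break interactive protocols on latency alone. For comparison, a minimal sketch of the SSH-tunnel approach mentioned (hostnames hypothetical):

        # forward local port 13389 to the target's RDP port via an SSH gateway
        ssh -L 13389:target-server:3389 user@gateway
        # then point the RDP client at localhost:13389

    Note that Microsoft's own answer to this exact problem is RD Gateway (Terminal Services Gateway), which wraps RDP in HTTPS.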

    Read the article

  • CGLIB proxy error after spring bean definition loading into XmlWebApplicationContext at runtime

    - by VasylV
    I load additional singleton bean definitions at runtime from an external jar file into the existing XmlWebApplicationContext of my application:

        BeanFactory beanFactory = xmlWebApplicationContext.getBeanFactory();
        DefaultListableBeanFactory defaultFactory = (DefaultListableBeanFactory) beanFactory;
        final URL url = new URL("external.jar");
        final URL[] urls = {url};
        ClassLoader loader = new URLClassLoader(urls, this.getClass().getClassLoader());
        defaultFactory.setBeanClassLoader(loader);
        final ClassPathBeanDefinitionScanner scanner = new ClassPathBeanDefinitionScanner(defaultFactory);
        final DefaultResourceLoader resourceLoader = new DefaultResourceLoader();
        resourceLoader.setClassLoader(loader);
        scanner.setResourceLoader(resourceLoader);
        scanner.scan("com.*");
        Object bean = xmlWebApplicationContext.getBean("externalBean");

    After all of the above, xmlWebApplicationContext contains all the external bean definitions. But when I try to get a bean from the context, an exception is thrown:

        Couldn't generate CGLIB proxy for class ...

    In debug mode I saw that, during bean initialization, a proxy is first generated by org.springframework.aop.aspectj.autoproxy.AspectJAwareAdvisorAutoProxyCreator, and then an attempt to generate a proxy with org.springframework.aop.framework.autoproxy.BeanNameAutoProxyCreator fails with the mentioned exception.

    Read the article

  • Using proxy models

    - by smallB
    I've created a proxy model by subclassing QAbstractProxyModel and connected it as a model to my view. I also set up the source model for this proxy model. Unfortunately something is wrong, because nothing is displayed in my listView (it works perfectly when I supply my model directly to the view, but when I supply this proxy model it just doesn't work). Here are some snippets from my code:

        #ifndef FILES_PROXY_MODEL_H
        #define FILES_PROXY_MODEL_H

        #include <QAbstractProxyModel>
        #include "File_List_Model.h"

        class File_Proxy_Model: public QAbstractProxyModel
        {
        public:
            explicit File_Proxy_Model(File_List_Model* source_model)
            {
                setSourceModel(source_model);
            }

            virtual QModelIndex mapFromSource(const QModelIndex& sourceIndex) const
            {
                return index(sourceIndex.row(), sourceIndex.column());
            }

            virtual QModelIndex mapToSource(const QModelIndex& proxyIndex) const
            {
                return index(proxyIndex.row(), proxyIndex.column());
            }

            virtual int columnCount(const QModelIndex& parent = QModelIndex()) const
            {
                return sourceModel()->columnCount();
            }

            virtual int rowCount(const QModelIndex& parent = QModelIndex()) const
            {
                return sourceModel()->rowCount();
            }

            virtual QModelIndex index(int row, int column, const QModelIndex& parent = QModelIndex()) const
            {
                return createIndex(row, column);
            }

            virtual QModelIndex parent(const QModelIndex& index) const
            {
                return QModelIndex();
            }
        };

        #endif // FILES_PROXY_MODEL_H

        // and this is a dialog class:
        Line_Counter::Line_Counter(QWidget* parent) :
            QDialog(parent),
            model_(new File_List_Model(this)),
            proxy_model_(new File_Proxy_Model(model_)),
            sel_model_(new QItemSelectionModel(proxy_model_, this))
        {
            setupUi(this);
            setup_mvc_();
        }

        void Line_Counter::setup_mvc_()
        {
            listView->setModel(proxy_model_);
            listView->setSelectionModel(sel_model_);
        }
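
    As a point of comparison, a hedged sketch of the usual shortcut: when no structural change is needed, a ready-made proxy class avoids implementing the index bookkeeping by hand (QSortFilterProxyModel here; QIdentityProxyModel is the even thinner option in Qt 4.8+):

        // in the dialog's setup, instead of the hand-written proxy:
        QSortFilterProxyModel* proxy = new QSortFilterProxyModel(this);
        proxy->setSourceModel(model_);
        listView->setModel(proxy);

    Whether this fits depends on what the custom proxy was ultimately meant to do.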

    Read the article

  • Couldn't upload files to Sharepoint site while passing through Squid Proxy

    - by Ecio
    Hi all, we have this issue: one of our employees is collaborating with a supplier, and he needs to upload documents to a SharePoint site hosted on the supplier's main site. In our environment we use a Squid proxy to let people navigate the net (we have NTLM authentication, and users transparently authenticate while using IE and FF). It seems that this specific SharePoint site uses Integrated Windows Authentication only, and according to some research on the net this can have trouble with proxies. More specifically, we have tried two Squid versions:

        - with Squid 3.0 we are unable to log in to the site (the browser loads an empty page)
        - with Squid 2.7 (which supports "connection pinning") we are able to log in to the site and move around the different sections, BUT when we try to upload a file bigger than a couple of kilobytes (e.g. 10 KB) the browser loads an error page (I think it's a 401 Unauthorized, but I must verify it)

    We've tried changing a couple of Squid options (in 2.7); what we get is that when you try to upload the file you get an authentication box (just like the initial login), and it refuses to go on even if you enter the same credentials. What's really strange is that when you try to upload a small file (e.g. a 1 KB text or binary file) the upload succeeds. I initially thought that maybe there was something misconfigured on their SharePoint site, but I've tried this site as well: www.xsolive.com (a SharePoint 2007 demo site), and I've experienced the same problem. Has any of you experienced such behaviour? Thanks! Of course we've suggested that the supplier also activate Basic+SSL, and we're waiting for their reply.

    Read the article

  • Flash plugin locks up Firefox, Chrome and Safari behind a corporate proxy, IE6 works fine

    - by Shevek
    At work I am forced by corporate policy to use IE6. Obviously this is not so good, so I use FF for most of my browsing. However, there is a problem once I have installed the Flash plug-in: FF locks up when trying to load Flash media. Looking at the status bar at the time of the lock-up, it appears this happens when the browser tries to get cross-domain data. The Flash ActiveX plug-in in IE does not suffer from this issue. I have tried it in a brand-new FF profile with Flash as the only plug-in, and it locks up. We have two different proxy servers and both exhibit the same problem. I have also tried Chrome and Safari, and both lock up with the plug-in installed. So, has anyone else had this problem and solved it? Or is there any way to disable cross-domain data access in the Flash plug-in? Or is there any way to disable the "This site needs an additional plug-in" ribbon which appears when the plug-in is not installed? Many thanks!

    Read the article

  • Nginx proxy domain to another domain with no change URL

    - by Evgeniy
    My question is in the subject. I have one domain; this is its nginx config:

        server {
            listen 80;
            server_name connect3.domain.ru www.connect3.domain.ru;
            access_log /var/log/nginx/connect3.domain.ru.access.log;
            error_log /var/log/nginx/connect3.domain.ru.error.log;
            root /home/httpd/vhosts/html;
            index index.html index.htm index.php;

            location ~* \.(avi|bin|bmp|css|dmg|doc|docx|dpkg|exe|flv|gif|htm|html|ico|ics|img|jpeg|jpg|js|m2a|m2v|mov|mp3|mp4|mpeg|mpg|msi|pdf|pkg|png|pps|ppt|pptx|ps|rar|rss|rtf|swf|tif|tiff|txt|wmv|xhtml|xls|xml|zip)$ {
                root /home/httpd/vhosts/html;
                access_log off;
                expires 1d;
            }

            location ~ /\.(git|ht|svn) {
                deny all;
            }

            location / {
                #rewrite ^ http://connect2.domain.ru/;
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_hide_header "Cache-Control";
                add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
                proxy_hide_header "Pragma";
                add_header Pragma "no-cache";
                expires -1;
                add_header Last-Modified $sent_http_Expires;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    I need to proxy the connect3.domain.ru host to connect2.domain.ru with no URL change in the browser's address bar. My commented-out rewrite line reaches the other host, but a rewrite changes the URL, so I cannot stay on the same one. I know this question is easy, but please help. Thank you.
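
    A hedged sketch of the usual approach: replace the rewrite with a proxy_pass to the other host, so the browser URL never changes (assuming connect2.domain.ru is resolvable from this server and expects its own Host header):

        location / {
            proxy_pass http://connect2.domain.ru;
            proxy_set_header Host connect2.domain.ru;
        }

    A rewrite to an absolute URL is always sent to the client as an HTTP redirect, which is exactly what changes the address bar; proxying fetches the content server-side instead.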

    Read the article

  • Dynamic fowarding with SOCKS5 proxy [on hold]

    - by bh3244
    I'm building my own SOCKS5 client and HTTP library and am having trouble figuring out how things work with dynamic port forwarding. So far I can connect successfully with my SOCKS5 client, but from there on I am stuck. I am using the ssh -D command. Given my local machine "home" and my server "server", if I want to use "server" as the proxy for all connections, I understand I would type ssh -D "localport" "serverhostname" on my local machine "home". This command, as I understand it, has ssh accept connections using the SOCKS5 protocol. So now if I want to connect to google.com (74.125.224.72:80) and issue a GET for the front page, I assume I would send the SOCKS5 client request, the server would respond with 0x00 ("succeeded"), and from then on I am connected; I would send the HTTP GET request and the server would respond with the data. Now if I want to navigate to a different website, must I issue another SOCKS5 connection request for that site's IP/hostname? I'm confused whether that is how it is done, or whether there is a program listening on the local port of "server" handling outgoing and incoming data. To reiterate: do SOCKS5 proxies work by sending repeated SOCKS5 connection requests for different addresses, or is there just one connection to a local port on "server", with another program on "server" handling the outgoing connection to the internet and using that local port to send and receive data to/from "home"?
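
    For what it's worth, it is the first model: each destination gets its own TCP connection to the proxy, each with its own handshake. A minimal sketch of one SOCKS5 CONNECT exchange (Python, no error handling; proxy port and target hostname are examples):

        import socket

        # connect to the local end of "ssh -D 1080 server"
        s = socket.create_connection(("127.0.0.1", 1080))
        # greeting: version 5, one auth method offered, 0x00 = no authentication
        s.sendall(b"\x05\x01\x00")
        assert s.recv(2) == b"\x05\x00"
        # CONNECT request, addressing the target by domain name (ATYP 0x03)
        host = b"example.com"
        s.sendall(b"\x05\x01\x00\x03" + bytes([len(host)]) + host + (80).to_bytes(2, "big"))
        reply = s.recv(10)          # reply[1] == 0 means "succeeded"
        # from here the socket is a plain pipe to example.com:80
        s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(s.recv(4096))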

    Read the article

  • Load balancing a Windows File Share using HA-Proxy

    - by NathanE
    After pulling my hair out over DFS, I just had this weird and potentially dangerous idea come into my head whereby, just possibly, I might be able to use HAProxy to load balance a file share between servers. I've done some remedial packet traces, and it does appear that TCP port 445 is the only thing involved in using Windows file sharing. I've always thought for many years that UDP 139, 135 etc. were also involved, at least in establishing the connection, but apparently not! So I set up a basic test:

        listen SMBTest *:445
            mode tcp
            server Smb1 172.16.61.201:445
            server Smb2 172.16.61.202:445

    And you'll never guess what... it works??? (!) Now obviously there is the whole concern about synchronisation between the file servers (of course). That could easily be taken care of with a little bit of robocopy script. And considering I only need an HA read-only file share, there wouldn't be any issues with regard to file locking etc. Can anyone tell me if what I'm playing with here is fire? I really didn't think it would work at all, and now I'm a little shocked. What would be the downsides? Could this be relied upon for a production environment?
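
    For the synchronisation side, a hedged sketch of the robocopy job described (share names hypothetical; /MIR deletes files on the target that vanish from the source, so point it the right way):

        robocopy \\Smb1\share \\Smb2\share /MIR /R:1 /W:1 /LOG:C:\smbsync.log

    The bigger caveat with the HAProxy idea is that SMB sessions are stateful and authenticated, so a failover mid-session will not be transparent to clients; for a read-only share that may be acceptable.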

    Read the article

  • Proxy settings in Java mail API

    - by coder
    I've written a piece of Java code where user1 sends email to user2. I'm behind a proxy, and hence I'm getting a javax.mail.MessagingException. How do I solve this problem? Here is the code:

        import java.util.Properties;

        import javax.mail.Message;
        import javax.mail.MessagingException;
        import javax.mail.PasswordAuthentication;
        import javax.mail.Session;
        import javax.mail.Transport;
        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

        public class Mail {
            public static void main(String[] args) {
                final String username = "[email protected]";
                final String password = "abc";

                Properties props = new Properties();
                props = System.getProperties();
                props.put("mail.smtp.auth", "true");
                props.put("mail.smtp.starttls.enable", "true");
                props.put("mail.smtp.host", "smtp.gmail.com");
                props.put("mail.smtp.port", "587");

                Session session = Session.getInstance(props, new javax.mail.Authenticator() {
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication(username, password);
                    }
                });

                try {
                    Message message = new MimeMessage(session);
                    message.setFrom(new InternetAddress("[email protected]"));
                    message.setRecipients(Message.RecipientType.TO, InternetAddress.parse("[email protected]"));
                    message.setSubject("Testing Subject");
                    message.setText("Dear Mail Crawler," + "\n\n No spam to my email, please!");
                    Transport.send(message);
                    System.out.println("Done");
                } catch (MessagingException e) {
                    throw new RuntimeException(e);
                }
            }
        }
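
    A hedged note: plain SMTP cannot traverse an HTTP-only proxy by itself. Recent JavaMail releases can be pointed at a proxy via session properties (SOCKS support since roughly JavaMail 1.5, HTTP CONNECT proxies since 1.6; verify against the version in use, and the host/port below are hypothetical):

        // add alongside the other properties
        props.put("mail.smtp.socks.host", "proxy.example.com");
        props.put("mail.smtp.socks.port", "1080");

    If only an HTTP proxy is available and the JavaMail version predates 1.6, the usual workaround is an external tunnel rather than application code.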

    Read the article

  • Setup a Reverse Proxy with Nginx and Apache on EC2

    - by heavymark
    Good day, I am currently using the free Amazon EC2 micro instance to learn Linux and server setup. I wish to set up Nginx as a reverse web proxy, and I found a great article on MediaTemple about how to do it: http://wiki.mediatemple.net/w/Using_Nginx_as_a_Reverse_Web_Proxy The directions work for almost any server except EC2. One difference between EC2 and MediaTemple is how IPs work: EC2 instances do not know their own elastic IP. So when following the wiki directions, in the virtual hosts I put *:80 instead of myip:80. When using just Apache this works perfectly. In the Apache virtual hosts I put "127.0.0.1:80", and in Nginx I put *:80. Apache restarts, but Nginx reports an error saying it cannot bind because the address is already in use. If I could put an actual IP in the Nginx config it would work, but since EC2 requires me to use the asterisk, it ends up conflicting with the Apache virtual hosts entry. Does anyone know a simple way around this (other than not using EC2)? ;-) Thank you! Cheers, Christopher
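
    A hedged sketch of the standard arrangement, which sidesteps binding to the elastic IP entirely: two processes cannot share port 80 on the same address, so Apache moves to a different port and Nginx proxies to it (8080 is an arbitrary choice):

        # Apache (httpd.conf)
        Listen 127.0.0.1:8080

        # Nginx server block
        server {
            listen 80;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }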

    Read the article

  • iptables rules for DNS/Transparent proxy with ip exceptions

    - by SlimSCSI
    I am running a router (a Netgear WNDR3700, if that matters) with dd-wrt. For content filtering I am using OpenDNS. To make sure a user could not bypass OpenDNS by putting in their own name servers, I have a rule to catch all DNS traffic:

        iptables -t nat -A PREROUTING -i br0 -p all --dport 53 -j DNAT --to $LAN_IP

    I did have one computer on the network that I wanted to allow past the OpenDNS filters. On that machine I manually set the name servers, and created another rule to let it pass:

        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -j ACCEPT

    This worked well. Today, I installed a transparent proxy (Squid) on the router and added these rules:

        iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp --dport 80 -j ACCEPT
        iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 80 -j DNAT --to $PROXY_IP:$PROXY_PORT
        iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
        iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT

    This also works; however, the 192.168.1.2 address does not get routed through Squid. How can I have 192.168.1.2 (and maybe others in the future) bypass the port 53 rules, but not the port 80 rules?
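
    The per-host ACCEPT rule is inserted at the top of the nat PREROUTING chain and matches all protocols and ports, so it exempts that machine from the port-80 DNAT as well. A hedged sketch of the usual fix: scope the exemption to DNS only, so the web rules still apply:

        iptables -t nat -D PREROUTING -i br0 -s 192.168.1.2 -j ACCEPT
        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -p udp --dport 53 -j ACCEPT
        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -p tcp --dport 53 -j ACCEPT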

    Read the article

  • Can't reach server without proxy (website down from my home)

    - by user2128576
    I have a website hosted on Hostinger; however, I am experiencing problems with my WordPress site, which is really annoying. If I understand the situation right, the server is blocking or denying access to my own website. When I visit the site with Google Chrome, it returns: "Oops! Google Chrome could not find". The same thing happens in Firefox: Firefox can't find the server. But when I check whether my site is online and working through http://www.downforeveryoneorjustme.com/ it says that the site is up and working. Another thing: when I access the website through a proxy, in both Chrome and Firefox, it works. Why is this? I also installed the plugin Better WP Security 5 days ago. Could the plugin have caused it? I don't remember setting any IPs to be blocked. Also, this happens at random times; sometimes I can access it, sometimes it fails to reach the server. I am currently developing the site live. Was I blocked by the server for frequently refreshing the page? (duh, I'm a developer and I need to refresh to see changes) Or is this a problem with my ISP's DNS server? How can I resolve this, and what are the possible fixes? Thanks in advance! -Jomar
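
    A hedged way to separate the two suspects (a server-side IP block vs. ISP DNS), with the hostname and server IP hypothetical:

        dig yoursite.com +short            # what your ISP's resolver returns
        dig @8.8.8.8 yoursite.com +short   # what a public resolver returns
        curl -I --resolve yoursite.com:80:SERVER_IP http://yoursite.com/

    If the two dig answers differ, it's DNS; if DNS agrees but the direct curl times out from home while the proxy works, the server is blocking the home IP, and a lockout from a security plugin such as Better WP Security (which does keep a lockout list and can ban IPs for repeated requests) fits the random on/off pattern.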

    Read the article
