Search Results

Search found 5212 results on 209 pages for 'forward'.

  • Issue with SSL using HAProxy and Nginx

    - by Ben Chiappetta
    I'm building a highly available site using multiple HAProxy load balancers, Nginx web servers, and MySQL servers. The site needs to be able to survive a load balancer or web server node going offline without any interruption of service to visitors. Currently, I have two boxes running HAProxy sharing a virtual IP using keepalived, which forward to two web servers running Nginx, which then tie into two MySQL boxes using MySQL replication and sharing a virtual IP using heartbeat. Everything is working correctly except for SSL traffic over HAProxy. I'm running version 1.5-dev12 with OpenSSL support compiled in. When I try to navigate to the virtual IP for HAProxy over HTTPS, I get the message: "The plain HTTP request was sent to HTTPS port."

    Here's my haproxy.cfg so far, which was mainly assembled from other posts:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            # log 127.0.0.1 local0
            user haproxy
            group haproxy
            daemon
            maxconn 20000

        defaults
            log global
            option dontlognull
            balance leastconn
            clitimeout 60000
            srvtimeout 60000
            contimeout 5000
            retries 3
            option redispatch

        listen front
            bind :80
            bind :443 ssl crt /etc/pki/tls/certs/cert.pem
            mode http
            option http-server-close
            option forwardfor
            reqadd X-Forwarded-Proto:\ https if { is_ssl }
            reqadd X-Proto:\ SSL if { is_ssl }
            server web01 192.168.25.34 check inter 1s
            server web02 192.168.25.32 check inter 1s
            stats enable
            stats uri /stats
            stats realm HAProxy\ Statistics
            stats auth admin:*********

    Any idea why SSL traffic isn't being passed correctly? Also, any other changes you would recommend? I still need to configure logging, so don't worry about that section. Thanks in advance for your help.
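
    A hedged guess at the cause: when a server line gives no port, HAProxy reuses the port the client connected on, so decrypted HTTPS traffic gets forwarded as plain HTTP to port 443 on the Nginx backends, which is exactly the error Nginx reports. A minimal sketch of the fix, assuming the Nginx vhosts serve plain HTTP on port 80:

        listen front
            bind :80
            bind :443 ssl crt /etc/pki/tls/certs/cert.pem
            mode http
            option forwardfor
            reqadd X-Forwarded-Proto:\ https if { is_ssl }
            # send decrypted traffic to the backends' HTTP port explicitly
            server web01 192.168.25.34:80 check inter 1s
            server web02 192.168.25.32:80 check inter 1s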

  • Remote management interface for managing ip6tables (or an alternative firewall)

    - by Matthew Iselin
    I'm working with IPv6 and have run into an issue configuring ip6tables on our main router in order to control what can come into the network. A default DROP rule in the FORWARD chain has worked well (obviously leaving ESTABLISHED,RELATED as ACCEPT) to keep internal clients' open ports from being accessed. However, running an ip6tables command for every little change is unwieldy.

    Whilst we are able to continue creating rules manually, I'm wondering if there's some sort of management interface we could use to create the rules quickly and easily. We're looking to save time working on our firewall as well as to provide a simple method for modifying rules for those who will eventually replace us. I know webmin (heavily locked down on our network, naturally) has support for modifying iptables rules, but seemingly no support for ip6tables. Something similar would be fantastic.

    Alternatively, suggestions for a firewall solution apart from iptables/ip6tables which can be managed remotely wouldn't be out of order. A web interface for management is certainly preferable, even if it is just a wrapper with shiny buttons over the raw config files.
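
    For reference, a minimal sketch of the baseline policy described above (the address and port in the last rule are hypothetical examples):

        # default-deny forwarding; allow return traffic for established flows
        ip6tables -P FORWARD DROP
        ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # the kind of one-off exception that becomes unwieldy by hand:
        ip6tables -A FORWARD -d 2001:db8::10 -p tcp --dport 80 -j ACCEPT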

  • Gwibber doesn't launch post-upgrade

    - by Arcath
    I updated my PC from Ubuntu 9.10 to 10.04, and one of the things I was looking forward to was the "Me menu" and Gwibber. For some reason Gwibber doesn't launch at all. When I try to launch it from a terminal I get:

        [21:02:20][arcath@Highgate ~]$ gwibber
        ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowState' as enum when in fact it is of type 'GFlags'
        ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowActions' as enum when in fact it is of type 'GFlags'
        ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowMoveResizeMask' as enum when in fact it is of type 'GFlags'
        Traceback (most recent call last):
          File "/usr/bin/gwibber", line 50, in <module>
            from gwibber import client
          File "/usr/lib/python2.6/dist-packages/gwibber/client.py", line 3, in <module>
            import gtk, gobject, gwui, util, resources, actions, json, gconf
          File "/usr/lib/python2.6/dist-packages/gwibber/gwui.py", line 2, in <module>
            import os, json, urlparse, resources, util
          File "/usr/lib/python2.6/dist-packages/gwibber/util.py", line 2, in <module>
            from microblog.util.couch import RecordMonitor
          File "/usr/lib/python2.6/dist-packages/gwibber/microblog/util/couch.py", line 10, in <module>
            OAUTH_DATA = desktopcouch.local_files.get_oauth_tokens()
          File "/usr/lib/python2.6/dist-packages/desktopcouch/local_files.py", line 323, in get_oauth_tokens
            oauth_token_secrets = cf.items_in_section("oauth_token_secrets")[0]
          File "/usr/lib/python2.6/dist-packages/desktopcouch/local_files.py", line 189, in items_in_section
            raise ValueError("Section %r not present." % (section_name,))
        ValueError: Section 'oauth_token_secrets' not present.

    I can't work out what's wrong with it. Can anyone point me in the right direction?
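
    The traceback points at desktopcouch's OAuth configuration rather than Gwibber itself. A commonly suggested workaround (an assumption, not a confirmed fix: the paths below are the usual desktopcouch locations) is to move the desktopcouch state aside so it regenerates its tokens on the next start:

        # back up desktopcouch state, then let it recreate itself on next launch
        pkill -f desktopcouch-service || true
        mv ~/.config/desktopcouch ~/.config/desktopcouch.bak
        mv ~/.local/share/desktop-couch ~/.local/share/desktop-couch.bak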

  • Trying to set up catch-all mail forwarding on my Rackspace cloud server

    - by bob_cobb
    I'm running Ubuntu 12.04 (Precise Pangolin) and am trying to configure my server to catch all mail sent to it and forward it to my Gmail address. I've been trying lots of examples online, like editing my main.cf file, which looks like this:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = destiny
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = destiny, localhost.localdomain, localhost
        relayhost = smtp.sendgrid.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 51200000
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all

    In my /etc/postfix/virtual I have:

        @mydomain.com [email protected]
        @myotherdomain.com [email protected]

    This isn't working when I email [email protected] or [email protected]. So I got the recommendation to add the following to /etc/aliases:

        postmaster: root
        root: [email protected]

    I restarted Postfix and tried emailing [email protected] and [email protected], but it still won't send. Does anyone have any idea what I'm doing wrong here? I'd appreciate any help.
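
    A hedged guess at the missing piece: the main.cf above never tells Postfix to use the virtual map, and the domains aren't listed anywhere Postfix accepts mail for. A minimal sketch (domain names assumed from the post):

        # /etc/postfix/main.cf -- additions
        virtual_alias_domains = mydomain.com myotherdomain.com
        virtual_alias_maps = hash:/etc/postfix/virtual

        # then rebuild the lookup table and reload:
        #   postmap /etc/postfix/virtual
        #   postfix reload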

  • CNAME redirect to Wordpress blog not working (.htaccess problem)

    - by Vincent Chan
    I have two domains hosted on two different servers: domain1.com and domain2.com. I would like to forward "blog.domain1.com" to "blog2.domain2.com", which is a WordPress blog, using a CNAME redirect.

    Before I installed WordPress, blog.domain1.com (=redirect=) blog2.domain2.com/index.htm was working fine: the browser would keep the URL (http://blog.domain1.com) even though index.htm is on the domain2.com server. However, after I installed WordPress, the browser changes the URL to http://blog2.domain2.com. This is my current setup:

    On domain1.com DNS:

        blog.domain1.com CNAME redirect to domain2.com

    On domain2.com, .htaccess:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^blog\.domain1\.com
        RewriteRule ^(.*)$ http://blog2.domain2.com/$1 [R=301,L]

    On blog2.domain2.com, .htaccess:

        DirectoryIndex index.php
        <IfModule mod_rewrite.c>
        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /blog2/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /blog2/index.php [L]
        </IfModule>

    blog2.domain2.com is installed under domain2.com/blog2/. All I want is to keep the URL (blog.domain1.com) unchanged for the whole WordPress redirect. Thanks a lot.
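
    Two things work against keeping the URL here: [R=301] is an explicit external redirect, so the address bar will always change, and WordPress additionally canonicalizes requests to its configured site URL. One hedged alternative, assuming mod_proxy/mod_proxy_http are available on domain2.com's Apache, is an internal proxy instead of a redirect:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^blog\.domain1\.com [NC]
        # [P] proxies internally instead of redirecting, so the URL is preserved
        RewriteRule ^(.*)$ http://blog2.domain2.com/$1 [P,L]

    WordPress would still need its home/site URL set to http://blog.domain1.com (or a plugin that tolerates multiple hostnames); otherwise its own canonical redirect kicks in again.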

  • How to find the real IP to which IPVS is routing a virtual IP

    - by Wayne Conrad
    I'm trying to find a problem server hiding behind a virtual IP (using LVS/IPVS). I've got a test program that sends requests to the virtual IP until it gets the bad response, but how can I tell to which real IP a request to the virtual IP got routed? On the box doing the virtual IP magic, here's the virtual IP configuration (for the service I care about):

        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port      Forward Weight ActiveConn InActConn
        ...
        TCP  10.1.0.254:5025 nq
          -> 10.1.0.5:5025           Route   1      0          1
          -> 10.1.0.6:5025           Route   1      0          5
          -> 10.1.0.7:5025           Route   1      0          2
          -> 10.1.0.9:5025           Local   1      0          3
          -> 10.1.0.11:5025          Route   1      0          3
        ...

    My client program is sending TCP requests to 10.1.0.254:5025, usually getting a good response but sometimes a bad response. With this few servers, I could send my request to each server in turn until I discover the culprit, but I wonder if that technique will scale as we add servers. What means exist for me to find out where requests got routed?

    Kernel: Linux 2.6.32. OS: Debian testing (whatever that's called these days). ipvsadm is version 1.25, compiled with IPVS v1.2.1.
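
    One approach worth trying: IPVS keeps a per-connection table, so immediately after the bad response you can look up which real server your client's source port was mapped to. A sketch:

        # numeric connection table: client addr:port -> virtual IP -> real server
        ipvsadm -L -c -n | grep 10.1.0.254:5025
        # correlate the client source port (have the test program print it)
        # with the destination column to identify the real server that answered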

  • Bypass proxy authentication [closed]

    - by Diego Queiroz
    My scenario: my network has a proxy that requires interactive authentication. When I access any URL, a username and password are requested to enable navigation. I do have a valid username/password (this means I have permission to access external content). I do not have access to the proxy server (any change to the proxy server is not an option).

    What I need: I need to bypass the interactive authentication process and make it an automated authentication process.

    What I do NOT need/want: I do not need/want to hack the network. I do not need/want to access unauthorized content. In other words, I just need to find a way to "save" my password on the computer (security is not a problem) to allow applications that do not support this kind of interactive authentication to access the internet (like non-browser software that also uses the HTTP port).

    My guess: my guess is to develop a new proxy server that will run on the local machine (e.g. a proxy for the network proxy). This proxy server will access my network proxy, authenticate, and forward the content. Of course this is a last resort; I'd prefer not to have to develop a proxy server. Does someone know another solution? (any operating system)
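
    If the proxy accepts Basic authentication, many non-browser programs already honor credentials embedded in the standard proxy environment variables, which may be enough on its own (proxy host and port below are hypothetical):

        # many CLI tools (curl, wget, apt, pip, ...) respect these variables
        export http_proxy="http://username:password@proxy.example.com:8080"
        export https_proxy="$http_proxy"

    For NTLM-only proxies, a small local proxy such as Cntlm does exactly what the "my guess" paragraph describes: it listens on localhost, authenticates upstream with your saved credentials, and forwards traffic, so no custom development is needed.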

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:

    - Allows my users to share large files with the general public, securely, to one or more people (notification via email, optionally with a token that gives them x period of time to download)
    - Allows anyone in the general public to share files with my users, perhaps by invitation
    - Is user-friendly enough that my users can use it without having to bug me as the admin
    - Can be installed on our own server (we don't want shared data sitting on anyone else's server)
    - Is a web-based solution; using some kind of secure comms channel would be good too, e.g. SSH
    - Handles files that could be over 1 GB

    I found the question below; WebDAV does not sound user-friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system

    I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen

    EDIT: Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field I thought that my expertise in this area may have been of use to him. I don't see how him writing services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.

  • How to configure a shortcut for an SSH connection through an SSH tunnel

    - by Simone Carletti
    My company's production servers (FOO, BAR, ...) are located behind two gateway servers (A, B). In order to connect to server FOO, I have to open an SSH connection with server A or B with my username JOHNDOE; then from A (or B) I can access any production server by opening an SSH connection with a standard username (let's call it WEBBY). So, each time I have to do something like:

        ssh johndoe@a
        ...
        ssh webby@foo
        ...
        # now I can work on the server

    As you can imagine, this is a hassle when I need to use scp or if I need to quickly open multiple connections. I have configured an SSH key, and I'm also using .ssh/config for some shortcuts. I was wondering if I can create some kind of SSH configuration in order to type ssh foo and let SSH open/forward all the connections for me. Is it possible?

    Edit: womble's answer is exactly what I was looking for, but it seems right now I can't use netcat because it's not installed on the gateway server.

        weppos:~ weppos$ ssh foo -vv
        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /Users/xyz/.ssh/config
        debug1: Applying options for foo
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Executing proxy command: exec ssh a nc -w 3 foo 22
        debug1: permanently_drop_suid: 501
        debug1: identity file /Users/xyz/.ssh/identity type -1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_rsa type 1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_dsa type 2
        bash: nc: command not found
        ssh_exchange_identification: Connection closed by remote host
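
    If the local OpenSSH client is 5.4 or newer, the netcat dependency disappears entirely: ssh -W asks the gateway's own sshd to open the forwarding channel, so nothing extra has to be installed on A. A sketch for ~/.ssh/config (hostnames assumed):

        Host a
            HostName a.example.com
            User johndoe

        Host foo
            User webby
            # -W replaces "nc %h %p"; no netcat needed on the gateway
            ProxyCommand ssh -W %h:%p a

    With this in place, both "ssh foo" and "scp somefile foo:" tunnel through A transparently.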

  • SSH X11 forwarding does not work. Why?

    - by Ole Tange
    This is a debugging question. When you ask for clarification, please make sure it is not already covered below.

    I have 4 machines: Z, A, N, and M. To get to A you have to log into Z first. To get to M you have to log into N first. The following works:

        ssh -X Z xclock
        ssh -X Z ssh -X Z xclock
        ssh -X Z ssh -X A xclock
        ssh -X N xclock
        ssh -X N ssh -X N xclock

    But this does not:

        ssh -X N ssh -X M xclock
        Error: Can't open display:

    The $DISPLAY is clearly not set when logging in to M. The question is why?

    Z and A share the same NFS home directory. N and M share the same NFS home directory. N's sshd runs on a non-standard port.

        $ grep X11 <(ssh Z cat /etc/ssh/ssh_config)
        ForwardX11 yes
        #   ForwardX11Trusted yes
        $ grep X11 <(ssh N cat /etc/ssh/ssh_config)
        ForwardX11 yes
        #   ForwardX11Trusted yes

    N:/etc/ssh/ssh_config == Z:/etc/ssh/ssh_config and M:/etc/ssh/ssh_config == A:/etc/ssh/ssh_config. /etc/ssh/sshd_config is the same for all 4 machines (apart from Port and login permissions for certain groups). If I forward M's ssh port to my local machine it still does not work:

        terminal1$ ssh -L 8888:M:22 N
        terminal2$ ssh -X -p 8888 localhost xclock
        Error: Can't open display:

    A:.Xauthority contains A, but M:.Xauthority does not contain M. xauth is installed in /usr/bin/xauth on both A and M. xauth is being run when logging in to A but not when logging in to M. ssh -vvv does not complain about X11 or xauth when logging in to A and M. Both say:

        debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
        debug1: Requesting X11 forwarding with authentication spoofing.
        debug2: channel 0: request x11-req confirm 0
        debug2: client_session2_setup: id 0
        debug2: channel 0: request pty-req confirm 1
        debug1: Sending environment.

    I have a feeling the problem may be related to M missing from M:.Xauthority (caused by xauth not being run), or that $DISPLAY is somehow being disabled by a login script, but I cannot figure out what is wrong.
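
    Since $DISPLAY is set by the server side, one way to see exactly why sshd on M skips the X11 setup is to run a one-off debug instance and watch it process the x11-req (port 2222 is an arbitrary choice):

        # on M, as root: a foreground sshd with debug output
        /usr/sbin/sshd -d -d -p 2222
        # then, from N:
        ssh -X -p 2222 M xclock
        # the debug output shows whether the x11-req is refused and why
        # (e.g. X11Forwarding off in the effective config, or xauth failing)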

  • Can I use IIS to do Active Directory single-sign-on for another website?

    - by brofield
    I'm trying to add Active Directory single-sign-on support to an existing SOAP server. The server can be configured to accept a trusted reverse proxy and use the X-Remote-User HTTP header for the authenticated user. I want to configure IIS to be the trusted proxy for this service, so that it handles all of the Active Directory authentication for the SOAP server. Basically IIS would have to accept HTTP connections on port X and URL Y, do all the authentication, and then proxy the connection to a different server (most likely the same X and Y).

    Unfortunately, I have no knowledge of IIS or AD (so I am trying my best to learn enough to build this solution), so please be gentle. I would assume that this is not an uncommon scenario, so is there some easy way to do this? Is this sort of functionality built into IIS, or do I need to build some sort of IIS proxy program myself? Is there a better option for getting the authentication done and the X-Remote-User HTTP header set than requiring IIS?

    Update: For example, what I am trying to create is:

        [CLIENT]          [IIS]        [AD]    [SOAP-SERVER]
        1. |---------------->|
        2. |<--------------->|<---------->|
        3.                   |-------------------------->|
        4.                   |<--------------------------|
        5. |<----------------|

        1. POST to http://example.com/foo/bar.cgi
        2. Client is not authenticated, so do authentication
        3. Once validated, send request to server (X-Remote-User: {userid})
        4. Process request, send response
        5. Forward response to client

    I need to know how to configure IIS to do the automatic authentication of the user using AD, and then to proxy the request to the actual server, sending the userid in the X-Remote-User HTTP header.
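
    For what it's worth, this is roughly what IIS's Application Request Routing (ARR) plus the URL Rewrite module are for: enable Windows Authentication on the site, then rewrite to the backend while copying the authenticated user into a header. A rough sketch, with the rule name and backend URL as assumptions (HTTP_X_REMOTE_USER normally has to be added to the allowed server variables at the server level first):

        <!-- web.config fragment; Windows Authentication enabled on the site -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="ProxyToSoap">
                <match url="(.*)" />
                <serverVariables>
                  <!-- LOGON_USER holds the AD-authenticated identity -->
                  <set name="HTTP_X_REMOTE_USER" value="{LOGON_USER}" />
                </serverVariables>
                <action type="Rewrite" url="http://soap-backend:8080/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>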

  • OpenWRT + OpenVPN client forwarding from LAN to VPN not working

    - by Dariusz Górecki
    I've got an OpenWrt router with Backfire 10.03.1-rc3 (arch: brcm, 2.6 kernel). I've set up an OpenVPN client connecting my router to the workplace LAN, and it works nicely: I can connect from the router to (several) networks at work. My OpenVPN client UCI config looks like:

        config 'openvpn' 'stream_client'
            option 'nobind' '1'
            option 'float' '1'
            option 'client' '1'
            option 'reneg_sec' '0'
            option 'management' '127.0.0.1 31194'
            option 'explicit_exit_notify' '1'
            option 'verb' '3'
            option 'persist_tun' '1'
            option 'persist_key' '1'
            list 'remote' 'remote.address.cutted'
            option 'ca' '/lib/uci/upload/cbid.openvpn.stream_client.ca'
            option 'key' '/lib/uci/upload/cbid.openvpn.stream_client.key'
            option 'cert' '/lib/uci/upload/cbid.openvpn.stream_client.cert'
            option 'enable' '1'
            option 'dev' 'tun1'

    I've set the STREAM_VPN zone to allow in/out traffic, and I've added rules for zone-to-zone lan<-vpn and vpn<-lan:

        config 'zone'
            option 'name' 'stream_vpn'
            option 'network' 'stream_vpn'
            option 'input' 'ACCEPT'
            option 'output' 'ACCEPT'
            option 'forward' 'REJECT'

        config 'forwarding'
            option 'src' 'lan'
            option 'dest' 'stream_vpn'

        config 'forwarding'
            option 'src' 'stream_vpn'
            option 'dest' 'lan'

    And the interface config:

        config 'interface' 'stream_vpn'
            option 'proto' 'none'
            option 'ifname' 'tun1'
            option 'defaultroute' '0'
            option 'peerdns' '0'

    Now, from my router everything works nicely. The problem is that I cannot connect from a computer inside the LAN to hosts in the networks provided by the VPN connection. What have I missed, or what am I doing wrong? And how can I force a specified DNS to be used when connected to the VPN? (I know the server should use the PUSH DNS option, but it PUSHes only routes.)
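
    One likely explanation, offered as a guess: the workplace networks have no route back to your home LAN's subnet, so replies to forwarded packets die on the return path. Masquerading the LAN onto the tunnel sidesteps that, because everything then appears to come from the router's own VPN address:

        config 'zone'
            option 'name' 'stream_vpn'
            option 'network' 'stream_vpn'
            option 'input' 'ACCEPT'
            option 'output' 'ACCEPT'
            option 'forward' 'REJECT'
            # NAT lan-originated traffic onto tun1 so replies find their way back
            option 'masq' '1'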

  • Domain Controller DNS Best Practice/Practical Considerations for Domain Controllers in Child Domains

    - by joeqwerty
    I'm setting up several child domains in an existing Active Directory forest, and I'm looking for some conventional wisdom/best practice guidance for configuring both the DNS client settings on the child domain controllers and the DNS zone replication scope.

    Assuming a single domain controller in each domain, and assuming that each DC is also the DNS server for its domain (for simplicity's sake): should the child domain controller point to itself for DNS only, or should it point to some combination (primary vs. secondary) of itself and the DNS server in the parent or root domain? If a parent > child > grandchild domain hierarchy exists (with a contiguous DNS namespace), how should DNS be configured on the grandchild DC?

    Regarding the DNS zone replication scope: if each domain's DNS zone is stored on all DNS servers in that domain, then I'm assuming a DNS delegation from the parent to the child needs to exist, and that a forwarder from the child to the parent needs to exist. With a parent > child > grandchild domain hierarchy, does each child forward to its direct parent for the direct parent's zone, or to the root zone? Does the delegation occur at the direct parent zone or from the root zone?

    If all DNS zones are stored on all DNS servers in the forest, does that make the above questions about replication scope moot? Does the replication scope have some bearing on the DNS client settings on each DC?

  • Filtering/directing URLs coming onto a network

    - by Jon
    Hi all, I am not sure if this is possible or not, but what I would like to do is as follows:

    I have one IP address (dynamic, using zoneedit.com to keep it up to date). I have one webserver running my main site, which is an Ubuntu machine running Apache. I also have a Windows 2008 server running another site. Just to confuse things, I also run part of my Apache site on the Windows server, currently using ProxyPassReverse to get the information from it. So it looks something like this:

    IP 1.2.3.4 maps to mydomain.com as well as myotherdomain.com. All requests that come in on port 80 are forwarded to the Apache box, and I use VirtualHost settings to proxy the Windows sites where needed. So:

    - mydomain.com is an Apache site
    - mydomain.com/mywindowssection is the Apache server using ProxyPassReverse to get part of the site from the Windows server
    - myotherdomain.com uses Apache and ProxyPassReverse to get the whole site

    What I would like to be able to do is forward all HTTP requests that come into my network to one machine that figures out who should be serving that content. So: mydomain.com would go to the Apache machine, and myotherdomain.com would go to the Windows machine.

    I am just in the process of setting up an Astaro gateway (never done this before, so it is taking a while to configure) as my firewall, DNS, DHCP, etc.; I don't know if it can handle this. I have the capacity to run a VM on the network if a separate box would be needed for this process. Thanks for any and all feedback. Jon
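
    Since everything already funnels through the Apache box, name-based virtual hosts can do the sorting without an extra appliance: Apache picks the vhost by Host header and either serves locally or proxies wholesale to the Windows server. A sketch with an assumed internal IP for the Windows box:

        # forward everything for myotherdomain.com to the Windows server
        <VirtualHost *:80>
            ServerName myotherdomain.com
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.20/
            ProxyPassReverse / http://192.168.1.20/
        </VirtualHost>

        # mydomain.com stays on the Apache box itself
        <VirtualHost *:80>
            ServerName mydomain.com
            DocumentRoot /var/www/mydomain
        </VirtualHost>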

  • Scroll wheel causes browsers + Windows Explorer to go back

    - by KaptajnKold
    I use Windows 7 in a VirtualBox VM on a Mac. Lately, when I'm using any browser (IE9, Chrome, Firefox, or even Windows Explorer), use of the scroll wheel followed by any movement of the mouse cursor causes the browser to go back. Very annoying.

    This happens when I use the scroll wheel on a USB-connected mouse (brand/model unknown, since I don't have it in front of me as I write this) or when I use two-finger scrolling on the trackpad when no mouse is connected. When I connect from my VM to a remote Windows box (Windows Server 2008), I experience the same problem. I have tried rebooting the VM to no avail.

    I am not sure when the problem started exactly. It may or may not have been after I connected the USB mouse for the first time, but unplugging it and then rebooting didn't help. I have tried to google for a solution, but all I've found are people who accidentally pressed the shift key while scrolling, which causes the browser to go backward or forward in the browser history. That, however, is not the problem I'm having. To be clear, in my case the browser only goes back when I move the mouse after I've used the scroll wheel. I'm at my wits' end :(

  • CentOS centralised logging, syslogd, rsyslog, syslog-ng, logstash sender?

    - by benbradley
    I'm trying to figure out the best way to set up a central place to store and interrogate server logs: syslog, Apache, MySQL, etc. I've found a few different options, but I'm not sure what would be best. I'm looking for something that is easy to install and keep updated on many virtual machines. I can add it to a VM template going forward, but I'd also like it to be easy to install, to keep the VM complexity down. The options I've found so far are:

    - syslogd
    - syslog-ng
    - rsyslog
    - syslogd/syslog-ng/rsyslog forwarding to logstash/ElasticSearch
    - a logstash agent in each log "client" sending to Redis/logstash/ElasticSearch

    And all sorts of permutations of the above. What's the most resilient and lightest from the log "client" perspective? I'd like to avoid the situation where log "clients" hang because they are unable to send their logs to the logging server. Also, I would still like to keep local logging and the rotation/retention provided by logrotate in place. Any ideas/suggestions or reasons for or against any of the above? Or suggestions of a different structure entirely? Cheers, B
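
    On the "clients must not hang" point: rsyslog can forward over TCP through a disk-assisted queue, so if the central server is down, messages buffer locally while local files keep being written. A sketch of the client-side config (legacy rsyslog syntax; hostname assumed):

        # /etc/rsyslog.d/forward.conf
        # buffered TCP forwarding; local logging rules stay as they are
        $WorkDirectory /var/spool/rsyslog
        # in-memory queue that spills to disk when the server is unreachable
        $ActionQueueType LinkedList
        $ActionQueueFileName fwdq
        $ActionQueueMaxDiskSpace 1g
        $ActionQueueSaveOnShutdown on
        # retry forever instead of discarding messages
        $ActionResumeRetryCount -1
        # @@ means TCP; a single @ would be UDP
        *.* @@loghost.example.com:514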

  • iptables-restore: line 1 failed

    - by Doug
    Hello, I am new to servers, and I was following this guide and it failed on the first command instructed. Could anyone give me a hand? http://wiki.debian.org/iptables

        ~ZORO~:/etc# iptables-restore < /etc/iptables.test.rules
        iptables-restore: line 1 failed

    Edit: iptables.test.rules

        ~ZORO~:/etc# cat /etc/iptables.test.rules
        *filter

        # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT

        # Accepts all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allows all outbound traffic
        # You could modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allows SSH connections for script kiddies
        # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
        -A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT

        # Now you should read up on iptables rules and consider whether ssh access
        # for everyone is really desired. Most likely you will only allow access from certain IPs.

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # log iptables denied calls (access via 'dmesg' command)
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy:
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT
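
    Since line 1 is just *filter, the usual suspects are invisible: stray characters from copy-pasting (smart quotes, non-breaking spaces, Windows line endings). A hedged way to check what iptables-restore is really seeing:

        # show invisible characters; every line should end in a plain '$'
        cat -A /etc/iptables.test.rules | head -5
        # '^M$' at line ends means Windows line endings; strip them with:
        #   sed -i 's/\r$//' /etc/iptables.test.rules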

  • Connection refused after installing vsftpd on Ubuntu 8.04 with fail2ban

    - by Patrick
    I have been using an Ubuntu 8.04 server with fail2ban for a while now (12+ months) and using FTP over SSH without any problems. I have a new user that needs to put files onto the server from an IP modem. I have installed vsftpd (sudo apt-get install vsftpd) and everything installed correctly. I have created an FTP user on the server following this guide.

    Whenever I try to connect to the server with my FTP program (FileZilla) I get an immediate response of: Connection attempt failed with "ECONNREFUSED - Connection refused by server".

    I have looked into fail2ban and cannot find any problems. The iptables setup is:

        Chain INPUT (policy ACCEPT)
        target        prot opt source    destination
        fail2ban-ssh  tcp  --  anywhere  anywhere     multiport dports ssh

        Chain FORWARD (policy ACCEPT)
        target        prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target        prot opt source    destination

        Chain fail2ban-ssh (1 references)
        target        prot opt source    destination
        RETURN        all  --  anywhere  anywhere

    VSFTPD config file (commented lines removed):

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=[username]
        secure_chroot_dir=/var/run/vsftpd
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key

    Any ideas on what is preventing access to the server?
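
    ECONNREFUSED means nothing is accepting on port 21 at the address FileZilla is using, which is a different failure mode from a firewall drop (that usually shows up as a timeout). A quick way to narrow it down (the LAN address below is hypothetical):

        # is vsftpd actually running and bound to port 21?
        sudo netstat -tlnp | grep :21
        # test from the server itself, then from another machine:
        telnet localhost 21
        telnet 192.168.0.10 21   # hypothetical server address
        # if local works but remote doesn't, look at the network path
        # (NAT/port forwarding, intermediate firewall) rather than vsftpd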

  • Multiple Servers + One MailServer

    - by theomega
    Hi, I've got several Linux servers (running Debian) where different services run: database servers, webservers, application servers, tools, and so on. All servers are connected to the same internal network. There is also one special server which is the mail server: all mail accounts are stored on this server, and it is also the outbound mail server for all the other servers.

    I want all mail for all servers to be saved on the mail server. For example, if a cron job fails on one of the web servers, the mail should not be delivered to the local user but instead to the mail server, so I get a centralized place for mail storage.

    How do you set up this scenario? My current setup is: Postfix as the MTA on the mail server and ssmtp on all the other servers. ssmtp is configured to send the mails to the mail server. The mail server is configured to allow the whole internal network to relay mail through it. Is this the right way to go?

    I also thought about setting up an MTA (Postfix) on every server and configuring it somehow to forward the mails. What would be the advantage of this solution?
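
    The Postfix variant of this is the documented "null client" setup: a full MTA on each box that queues locally (so mail survives the mail server being briefly down, which ssmtp cannot do) but delivers nothing locally and relays everything. A sketch for each non-mail server (the relay hostname is assumed):

        # /etc/postfix/main.cf on every non-mail server (null client)
        # accept local mail for nothing, listen only on loopback,
        # and disable local delivery entirely
        relayhost = [mail.internal.example]
        mydestination =
        inet_interfaces = loopback-only
        local_transport = error:local delivery is disabled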

  • Firebox 1250e Core Failing?

    - by Noah
    We have two Firebox 1250e Core firewall boxes in our production environment, serving in active and passive modes. A few months back, the active box was flashing a warning light, so our consultant removed it and plugged it into a test network. Everything appeared to be working fine, so he reloaded it into the production environment, and we didn't see any other issues.

    Fast forward to last week: our network was constantly dropping connections over RDC, timing out, and performing as if there was a traffic issue. I turned off the production box and everything began to work fine immediately. At this point, though, I'm not sure how to proceed. Should the box be completely replaced? Is there any recommended testing we could do to determine if there is a failure of some type with this device? Should we try upgrading the software on it?

    I know the environment isn't the issue, since the passive box (which is now the active one) is working fine. We'd like to have two in production, though, for failover purposes. I am not a network admin, but am hoping someone here might be able to provide some guidance.

  • How to remove Character Style in Word 2003

    - by joe
    I am copying text into a Word document that has its styles protected. Some of the text has a character style applied to it that I would like to remove. I can change the style and it looks okay visually, but when you click inside the text itself you can see in the style menu that the character style is still applied. I try to change the style using the style drop-down, but the character style won't go away even if I change it to the paragraph style.

    Does anyone know what I am talking about and/or have any techniques for removing character styles? I am looking forward to your responses. Let me know if I need to clarify. Thanks!

    UPDATE: My issue is that the paragraph style is changing correctly, but the character style remains applied even though the text is displayed as if the style is not applied. See http://office.microsoft.com/en-us/word/HA011876141033.aspx if you are not sure what I mean by character styles. There are four types of styles: paragraph, character, list, and table (list and table styles are new as of Word 2002). However, the majority of styles you'll use are paragraph styles.

  • Permission problem with Git (over SSH) on FreeBSD

    - by vpetersson
    We're having a permission problem with Git on FreeBSD. The setup is fairly straightforward. We have a few different repos on the same server. For simplicity, let's say they reside in /git/repo1 and /git/repo2. Each repo is owned by the user 'git' and a self-titled group (e.g. repo1). The repo is configured with g+rwX access. Every user who commits to the repository is also a member of the group for the repo (e.g. repo1). The Git repositories all have 'sharedRepository = group' set.

    So far so good: all users can check out the code from the repositories, and the first user can commit without any problem. However, when the next user tries to commit to a repository, he receives a permission error. We've been banging our heads against this issue for some time now, and the only way we've managed to resolve it is by running the following script between commits (which is obviously very inconvenient):

        find /git/repo1 -type d -exec chmod g+s {} \;
        chmod -R g+rwX /git/repo1
        chown -R git:repo1 /git/repo1/
        cd /git/repo1
        git gc

    Anyone got a clue to where the problem lies?
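
    A hedged guess: the fact that the setgid bit keeps needing to be re-applied suggests new directories are being created without group inheritance, and a restrictive umask (022 instead of 002) on any one committer can still produce group-unwritable files that core.sharedRepository doesn't get a chance to fix. A one-time setup worth trying:

        # once per repo: let git enforce group perms, make all dirs setgid
        cd /git/repo1
        git config core.sharedRepository group
        find . -type d -exec chmod g+rwxs {} +
        chgrp -R repo1 .
        # and ensure every committing user's login shell sets: umask 002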

  • Can't connect to vsftpd from outside the network

    - by rick
    I know this has been asked many times before, but nothing seems to resolve my issue. I have vsftpd running on Ubuntu 10.04. I can connect with ftp localhost on the machine. I can connect from another machine on my network. I just cannot connect from outside. The machine is behind an AirPort Extreme managed by AirPort Utility on a Mac. Port 21 is open as per nmap:

        macmini:~$ nmap localhost

        Starting Nmap 5.21 ( http://nmap.org ) at 2011-04-10 23:49 EDT
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.00045s latency).
        Hostname localhost resolves to 2 IPs. Only scanned 127.0.0.1
        rDNS record for 127.0.0.1: localhost.localdomain
        Not shown: 997 closed ports
        PORT    STATE SERVICE
        21/tcp  open  ftp
        22/tcp  open  ssh
        631/tcp open  ipp

    netstat says 21 is listening:

        macmini:~$ netstat -lep --tcp | grep ftp
        (Not all processes could be identified, non-owned process info
         will not be shown, you would have to be root to see it all.)
        tcp   0   0   *:ftp   *:*   LISTEN

    iptables:

        macmini:~$ sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target prot opt source destination

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    When I try to connect from my external IP (or a dyndns name which resolves there) it times out ("control connection timed out"). As I know very little about networking, I feel like something may jump out as clearly wrong?
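
    A timeout from outside while LAN connections work points at the AirPort rather than vsftpd: port 21 (and, for passive mode, a data-port range) has to be forwarded to the Ubuntu box in AirPort Utility. Pinning vsftpd's passive range makes that forwarding possible; a sketch, with example values and the external address as a placeholder:

        # /etc/vsftpd.conf -- passive mode behind NAT (port range is an example)
        pasv_enable=YES
        pasv_min_port=50000
        pasv_max_port=50100
        # your external/dyndns IP goes here (placeholder shown):
        pasv_address=203.0.113.10

    Then forward TCP 21 and 50000-50100 to this machine on the AirPort Extreme.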

  • Any non-custom way to manage iptables with fail2ban and libvirt+kvm?

    - by Peter Hansen
    I have an Ubuntu 9.04 server running libvirt/KVM and fail2ban (for SSH attacks). Both libvirt and fail2ban integrate with iptables in different ways. Libvirt uses (I think) some XML config and during startup (?) configures forwarding to the VM subnet. Fail2ban installs a custom chain (probably at init) and periodically modifies it to ban/unban probable attackers.

    I also need to install my own rules to forward various ports to servers running in VMs and on other machines, and to set up rudimentary security (e.g. drop all INPUT traffic except the few ports I want open), and of course I'd like the ability to add/remove rules safely without restarting.

    It seems to me iptables is a powerful tool that's sorely lacking some sort of standardized way of juggling all this stuff. Every project, and every sysadmin, seems to do it differently! (And I think there's lots of "cargo cult" admin going on here, with people cloning crude approaches like "use iptables-save like so".) Short of figuring out the gory details of exactly how both of these (and potentially other) tools manipulate the netfilter tables, and developing my own scripts or just manually executing iptables commands, is there any way to safely work with iptables while not breaking the functionality of these other tools? Any nascent standards or projects defined to bring sanity to this area? Even a helpful web page I missed that might cover at least these two packages together?
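
    One pattern that coexists with both tools, offered as a sketch rather than a standard: never rewrite INPUT/FORWARD wholesale (that's where fail2ban and libvirt insert their jumps); keep your own rules in a dedicated chain you can flush and rebuild at any time:

        #!/bin/sh
        # rebuild only our own chain; fail2ban/libvirt chains stay untouched
        iptables -N my-input 2>/dev/null || iptables -F my-input
        iptables -A my-input -i lo -j ACCEPT
        iptables -A my-input -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A my-input -p tcp --dport 22 -j ACCEPT
        iptables -A my-input -p tcp --dport 80 -j ACCEPT
        iptables -A my-input -j DROP
        # hook the chain into INPUT exactly once
        iptables -L INPUT -n | grep -q my-input || iptables -A INPUT -j my-input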

  • How many reverse proxies (nginx, haproxy) is too many?

    - by Alysum
    I'm setting up an HA (high availability) cluster using nginx, HAProxy, and Apache. I've been reading great things about nginx and HAProxy. People tend to choose one or the other, but I like both. HAProxy is more flexible for load balancing than nginx's simple round robin (even with the upstream-fair patch), but I'd like to keep nginx for redirecting non-HTTPS to HTTPS, among other things, right at the point of entry to the cluster. On the other hand, nginx is a lot faster for serving static content and would reduce the load on the powerful Apache, which loves to eat a lot of RAM!

    Here is my planned setup:

    - Load balancer: nginx listens on ports 80/443 and proxies to HAProxy on 8080 on the same server, which load balances between the multiple nodes.
    - Nodes: nginx on the node listens for requests coming from HAProxy on 8080. If the content is static, serve it; but if it's a backend script (in my case PHP), proxy to Apache on the same node server listening on a different port number.

    Technically this setup works, but my concern is whether having the requests go through several proxies is going to slow down requests. Most of the requests will be PHP requests, as the backends are services (which means going from nginx to HAProxy to nginx to Apache). Thoughts? Cheers
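
    For the entry tier, the two nginx jobs described (HTTPS redirect plus handoff to HAProxy) fit in a few lines; a sketch with assumed certificate paths and ports:

        # entry-point nginx: force HTTPS, then hand off to haproxy on 8080
        server {
            listen 80;
            return 301 https://$host$request_uri;
        }
        server {
            listen 443 ssl;
            ssl_certificate     /etc/nginx/ssl/cluster.pem;
            ssl_certificate_key /etc/nginx/ssl/cluster.key;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
            }
        }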
