Search Results

Search found 9845 results on 394 pages for 'ntp servers'.


  • RDP exits immediately after connecting to Windows Server 2008 R2

    - by carpat
    Background: I recently got a Windows cloud VPS server. I don't have much experience with server admin (I'm a programmer), and what little I do have is with Linux servers. Ever since getting the server I've been having issues with RDP. I can connect about two or three times, after which point I can't connect until one of the tech guys "fixes" it (see below). When I do connect, I can stay connected for hours with no problem.

    When the problem starts, the first time I try to log in the Remote Desktop window pops up, starts connecting, and then exits with "Your Remote Desktop session has ended". After that, for about 10-20 minutes, any further attempt times out with "Remote Desktop can't connect to the computer for one of these reasons: 1) Remote access on the server is not enabled 2) The remote computer is turned off 3) The remote computer is not available on the network", then it goes back to connecting once and immediately disconnecting.

    All of the updates are installed. The firewall has been correctly configured to let RDP traffic through. The remote setting is "Allow connections from computers running any version of Remote Desktop". I tried creating a second user, and when I can't connect, I can't connect as that user either. I've tried both soft and hard reboots, neither of which helps. I've tried connecting from two different computers (both running Windows 7) on two different networks (work and home), and the behavior is the same. Everything else on the server continues to run fine (IIS-served HTTP pages, Tomcat-served Java pages, svn, ping).

    The "fix" that the tech guys supply is simply logging into the console on their end, after which point I can connect two or three times again. The event viewer on the server has "authentication failure" (or something similar) events generated when I attempt to log in and can't. I can't get to the actual event at the moment as I'm currently in the can't-connect stage and waiting for the techs to log in, but when I searched for the event earlier this morning I couldn't find anything useful. Can anyone help?

    Read the article

  • I can't connect to mysql on a remote server

    - by eisaacson
    I'm trying to connect from an Ubuntu server to a RHEL6 server using mysql. I've tried telneting into the server as well as trying to connect with mysql. I've tried commenting out the bind-address but didn't have any success with that either. I don't get an error code or anything with telnet. It just fails after a minute or so. With mysql, I get this error code ERROR 2003 (HY000): Can't connect to MySQL server on 'SERVER_IP' (111). "SERVER_IP" is of course a placeholder where actual error gives that actual IP. I've included my my.cnf as well as well as my iptables from the destination server. On Destination Server... my.cnf: [mysqld] bind-address=0.0.0.0 tmp_table_size=512M max_heap_table_size=512M sort_buffer_size=32M read_buffer_size=128K read_rnd_buffer_size=256K table_cache=2048 key_buffer_size=512M thread_cache_size=50 query_cache_type=1 query_cache_size=256M query_cache_limit=24M #query_alloc_block_size=128 #query_cache_min_res_unit=128 innodb_log_buffer_size=16M innodb_flush_log_at_trx_commit=2 innodb_file_per_table innodb_log_files_in_group=2 innodb_buffer_pool_size=32G innodb_log_file_size=512M innodb_additional_mem_pool_size=20M join_buffer_size=128K max_allowed_packet=100M max_connections=256 wait_timeout=28800 interactive_timeout=3600 # modify isolation method for faster inserting. # Do not uncomment the line below unless you understand what this does. # transaction-isolation = READ-COMMITTED # do not reverse lookup clients skip-name-resolve #long_query_time=6 #log_slow_queries=/var/log/mysqld-slow.log #log_queries_not_using_indexes=On #log_slow_admin_statements=On datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 #Added by Magento ECG long_query_time=1 slow_query_log [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid iptables: :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 225 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp -i eth1 --dport 11211 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT sudo netstat -ntpl Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:2123 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:1581 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN - tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN - tcp 0 0 :::11211 :::* LISTEN - tcp 0 0 :::22 :::* LISTEN - tcp 0 0 :::225 :::* LISTEN -
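
    A quick checklist for this symptom, assuming shell access on both machines (SERVER_IP is a placeholder, as in the question):

        # On the RHEL6 server: confirm mysqld really listens on 0.0.0.0:3306 (the netstat above says it does)
        sudo netstat -ntpl | grep 3306
        # Confirm the connecting account is allowed in from the Ubuntu host's address
        mysql -u root -p -e "SELECT user, host FROM mysql.user;"
        # From the Ubuntu client: test raw TCP reachability, bypassing the MySQL client entirely
        nc -zv SERVER_IP 3306

    ERROR 2003 means the TCP connection itself is failing rather than a privilege problem, so if nc also hangs or is refused, look at whatever sits between the two hosts (hosting-provider firewalls, security groups); the destination iptables shown above does accept port 3306.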

    Read the article

  • SMBfs mounting OK, listing OK, Read KO, smbclient OK

    - by Kwaio
    I've tried to make the title as meaningful as I could, but it still looks ugly.

    The premises: We are using RHEL3-U8 as the OS on most servers here (don't ask me why or suggest an upgrade, it's not on today's schedule), which means the kernel is 2.4.21. I have no access to the remote server, but I know it is a NetApp NAS rack. $> smbclient --version Version 3.0.9-1.3E.9 Here is the /etc/fstab line: //NASHOSTNAME/share /mnt/mydir smbfs ro,uid=123,gid=123,workgroup=XXXX,credentials=/somefile 0 0 And the corresponding mount output line: //NASHOSTNAME/share on /mnt/mydir type smbfs (0)

    The symptoms: I can list the share without problems, even cd in there. The issue appears if I try to read any file: $> cat /mnt/mydir/fileX.txt cat: /mnt/mydir/fileX.txt: Input/output error In the system logs (/var/log/kernel for example) the following errors appear: Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2 Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2 Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5 Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2 Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2 Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5 Jul 30 15:40:02 hostname kernel: smb_readpage_sync: fileX.txt open failed, error=-5 The ERRHRD code 0x001F error is "General hardware failure", although it seems samba sometimes uses it for a different purpose; see http://www.ubiqx.org/cifs/SMB.html [Strange behaviour alert] Additional information: there is another SMB mountpoint on the system, pointing to a (Linux) host running samba, and that one works.

    What I have tried: I have tried adding debug=4 to the mount options and remounting the share, and the logs still look the same. I have tried to mount the share with smbclient and I am able to fetch files with the get command. Both targets are in the same subnet, so a network problem should be out; even though the LAN goes through a VPN with optimizers, the MTU has already been decreased to 1450. I can also mount the share through NFS, but then the files are all root.root 700 and I need to read them with another user...

    Read the article

  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2 based web servers, so my experience is many hours, but is limited to CentOS; specifically Amazon's distribution. I'm installing Apache using yum, so therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache. TEST CASE Launch a EC2 Instance, e.g. Amazon Linux AMI 2013.03.1 SSH to the Server Run the commands: $ sudo yum install httpd $ sudo apachectl start $ sudo vi /etc/httpd/conf/httpd.conf $ sudo apachectl restart $ sudo vi /var/www/html/.htaccess In httpd.conf I changed the following, in the DOCROOT section / scope: AllowOverride All In .htaccess, added: (EDIT, I added RewriteEngine On later) RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] Permissions on .htaccess are correct, AFAI can tell: $ ls -al /var/www/html/.htaccess -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess Other info: $ httpd -v Server version: Apache/2.2.24 (Unix) Server built: May 20 2013 21:12:45 $ httpd -M Loaded Modules: core_module (static) ... rewrite_module (shared) ... version_module (shared) Syntax OK EXPECTED BEHAVIOR $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 ACTUAL BEHAVIOR $ curl -I domain.com HTTP/1.1 200 OK Date: Wed, 19 Jun 2013 12:34:10 GMT Server: Apache/2.2.24 (Amazon) Connection: close Content-Type: text/html; charset=UTF-8 TROUBLESHOOTING STEPS In .htaccess, added: BLAH BLAH BLAH ERROR RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an Error log entry: [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration Since I have root access on the server, I then tried moving my rewrite rule directly to the httpd.conf file. THIS WORKED. This tells us several important things are working. $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 HOWEVER, it is bothering me that it didn't work in the .htaccess file. And I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.
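
    One thing worth double-checking, flagged as an assumption since the full httpd.conf isn't shown: the stock Amazon/CentOS httpd.conf contains several <Directory> blocks, and AllowOverride All only helps if it was changed in the block that actually covers /var/www/html rather than in <Directory />. A quick sketch:

        # Show the Directory block governing the docroot and its AllowOverride setting
        grep -n -A 10 '<Directory "/var/www/html">' /etc/httpd/conf/httpd.conf
        # Re-test after any change
        sudo apachectl configtest && sudo apachectl restart
        curl -I http://domain.com/    # expect the 301 once the .htaccess is honoured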

    Read the article

  • Routing with VPN and asymmetric communication

    - by Louis
    I'm stumbling on a problem that requires your advice. Keywords : networking, route, openVPN Problem : I have a local network with several physical servers and VMs. These machines have ip's in the range 10.10.x.x. I can access these machines from the Internet with the help of openVPN. These machines can : access each other within the local 10.10.x.x subnet access the Internet via the VPN can themselves be accessed (via SSH) from the Internet via the VPN. There is one machine however that behaves strangely and I don't know why. I can SSH into this machine from anywhere via SSH and I can also PING it from anywhere (including the Internet). However from this machine (i.e. when logged into it) I cannot access the Internet or ping machines outside the local network. In other words it will not go beyond the VPN. My question is why? Here are some technical details: The machine's Network Config (running Debian 6.0.3): allow-hotplug eth0 iface eth0 inet static address 10.10.10.200 netmask 255.255.0.0 network 10.10.10.0 broadcast 10.10.10.255 gateway 10.10.10.200 The machine's Routing : Destination Gateway Genmask Flags MSS Window irtt Iface 127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo 10.10.0.0 10.10.10.250 255.255.0.0 UG 0 0 0 eth0 10.10.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 10.10.10.250 0.0.0.0 UG 0 0 0 eth0 0.0.0.0 10.10.10.200 0.0.0.0 UG 0 0 0 eth0 The VPN's Network Config (running Debian 6.0.3): # This is the local network interface auto eth1 allow-hotplug eth1 iface eth1 inet static address 10.10.10.250 netmask 255.255.0.0 broadcast 10.10.10.255 gateway 10.10.10.250 The VPN's routing table Destination Gateway Genmask Flags MSS Window irtt Iface 10.10.0.0 0.0.0.0 255.255.255.0 U 0 0 0 tun0 private 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.10.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 0.0.0.0 10.10.10.250 0.0.0.0 UG 0 0 0 eth1 0.0.0.0 private 0.0.0.0 UG 0 0 0 eth0 net.ipv4.ip_forward = 1 on both machines. there are no iptables set anywhere. Thanks in advance for any feedback.
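
    One detail that stands out, offered as a guess rather than a confirmed diagnosis: the problem machine's default gateway is set to its own address (10.10.10.200) instead of the VPN box on 10.10.10.250, so Internet-bound packets may never reach the machine that can actually forward them. A sketch of the change:

        # Point the default route at the VPN gateway instead of at the machine itself
        sudo ip route del default via 10.10.10.200
        sudo ip route replace default via 10.10.10.250 dev eth0
        ping -c 3 8.8.8.8    # quick test that traffic now leaves via the VPN box
        # To persist it, set "gateway 10.10.10.250" in /etc/network/interfaces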

    Read the article

  • Load balancers, multiple data centers and url based routing

    - by kunkunur
    There is one data center - dc1. There is a business need to setup another data center - dc2 in another geography and there might be more in the future say dc3. Within the data center dc1: There are two web servers say WS1 and WS2. These two webservers do not share anything currently. There isnt any necessity foreseen to have more webservers within each dc. dc1 also has a local load balancer which has been setup with session stickiness. So if a user say u1 lands on dc1 and if the load balancer decides to route his first request to WS1 then from there on all u1's requests will get routed to WS1. Local load balancer and webservers are invisible to the user. Local load balancer listens to the traffic on a virtual ip which is assigned to the virtual cluster of webservers ws1 and ws2. Virtual ip is the ip to which the host name is resolved to in the DNS. There are no client specific subdomains as of now instead there is a client specific url(context). ex: www.example.com/client1 and www.example.com/client2. Given above when dc2 is onboarded I want to route the traffic between dc1 and dc2 based on the client. The options that I have found so far are. Have client specific subdomains e.g. client1.example.com and client2.example.com and assign each of them with the virtual ip of the data center to which I want to route them. or Assign www.example.com and www1.example.com to first dc i.e. dc1 and assign www2.example.com to dc2. All requests will first get routed to dc1 where WS1 and WS2 will redirect the user to www1.example.com or www2.example.com based on whether the url ends with /client1 or /client2. I need help in the following If I setup a global load balancer between dc1 and dc2 do I have any alternative solutions. That is, can a global load balancer route the traffic based on the url ? Are there drawbacks to subdomain based solutions compared to www1 solution? With www1 solution I am worried that it creates a dependency on dc1 atleast for the first request and the user will see that he is getting redirected to a different url.
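
    On the first question: yes, a layer-7 balancer can route on the request path. A minimal sketch of the idea as an nginx vhost (hostnames are placeholders, and it assumes a single entry point that can reach both data centers):

        upstream dc1 { server ws-vip.dc1.internal; }   # virtual IP of dc1's local balancer
        upstream dc2 { server ws-vip.dc2.internal; }   # virtual IP of dc2's local balancer
        server {
            listen 80;
            server_name www.example.com;
            location /client1/ { proxy_pass http://dc1; }
            location /client2/ { proxy_pass http://dc2; }
        }

    The caveat is that any URL-based scheme still needs such an entry point somewhere, which reintroduces a dependency much like the www1 approach; purely DNS-based global load balancing never sees the path, which is the main argument for client-specific subdomains.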

    Read the article

  • Have a server, need to figure out a method of backup

    - by PolishHurricane
    My company has an older Dell 2650 server running ArchLinux x64: http://www.dell.com/downloads/global/products/pedge/en/2650_specs.pdf (2 x 2.4GHz Intel Xeon with around 3287 MB of RAM according to "free -m"). We use it to host our internal company site and to post some information from our orders to it, and we'd like the ability to keep it up as much as possible.

    What we require:
    - It needs to always be functional from 8am to 4pm for our data entry person to use it and others to do other things required on it.
    - If it goes down, we need a quick way to get the machine running again.
    - If it goes down, we would like to have the data backed up.

    Some of the major problems include:
    - The server's old and it may have memory issues.
    - We don't know when one of the hard drives could fail.
    - Our power goes out here once in a while. We have a battery backup, but that's pretty much it and it's not for long term.

    If the server does go down, we have another system in place to store order information that comes in while it's down and repost it when it's back, but we need it up during the day. So we're wondering, what should we get for options? These are the things we thought of, sort of:
    - Set up RAID 1, but that would involve wiping everything, right? If we do that, how would we transfer the data over without messing up the server?
    - We could buy an extra server or two off eBay for $100, the same model; is that practical, or should we get something else?
    - Should we buy a PC or another better server and host off that, because it is if anything easier to exchange parts?
    - Should we keep extra parts handy in case it implodes?
    - Should we buy/use backup software? We hear Drobos are cool, but suck. Perhaps there is a software solution to this problem that backs up to another machine or gets us up and running again quickly.

    Also, if we are to purchase hardware, what is decent? Does anybody know of one for ArchLinux/Linux? We both know a ton about computers but we're kind of unsure what step to take with this, especially with this type of server. Thanks
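
    On the software side, a minimal sketch of an off-box nightly copy, assuming a second Linux machine reachable over SSH as "backuphost" (the paths, and the presence of MySQL, are assumptions):

        # /etc/cron.d/nightly-backup -- push a database dump, the web root and /etc off the old Dell
        0 1 * * * root mysqldump --all-databases | gzip > /var/backups/all-dbs.sql.gz
        0 2 * * * root rsync -aAX --delete /var/www /etc /var/backups backup@backuphost:/backups/dell2650/

    Converting the existing disks to a RAID 1 mirror without wiping is possible (build a degraded mdadm mirror on a new disk, copy, then add the old disk), but on hardware this old a tested restore onto a cheap spare box is usually the better insurance.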

    Read the article

  • IIS6 Log time recording problems

    - by Hafthor
    On three separate occasions on two separate servers at nearly the same times, 6.9 hours seemingly went by without any data being written to the IIS logs, but, on closer inspection, it appears that it was all recorded all at once. Here's the facts as I know them: Windows Server 2003 R2 w/ IIS6 Logging using GMT, server local time GMT-7. Application was still operating and I have SQL data to prove that Time gaps appear in log file, not across two # headers appear at gap Load balancer pings every 30 seconds No caching Here's info on a particular case: an entry appears for 2009-09-21 18:09:27 then #headers the next entry is for 2009-09-22 01:21:54, and so are the next 1600 entries in this log file and 370 in the next log file. about half of the ~2000 entries on 2009-09-22 01:21:54 are load balancer pings (est. at 2/min for 6.9hrs = 828 pings) then entries are recorded as normal. I believe that these events may coincide with me deploying an ASP.NET application update into those machines. Here's some relevant content from the logs in question: ex090921.log line 3684 2009-09-21 17:54:40 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:55:11 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:55:42 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:56:13 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:56:45 GET /ping.aspx - 80 404 0 0 3733 122 0 #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2009-09-21 18:04:37 #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken 2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 3078 2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 109 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 3828 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0 ... continues until line 5449 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 <eof> ex090922.log #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2009-09-22 00:00:16 #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 ... continues until line 367 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 2009-09-22 01:04:30 GET /ping.aspx - 80 200 0 0 277 122 0 ... back to normal behavior Note the seemingly correct date/time written to the #header of the new log file. Also note that /ping.aspx returned 404 then switched to 200 just as the problem started. I rename the "I'm alive page" so the load balancer stops sending requests to the server while I'm working on it. What you see here is me renaming it back so the load balancer will use the server. So, this problem definitely coincides with me re-enabling the server. Any ideas?

    Read the article

  • Which hardware to VM ratio for Build-Server virtualization?

    - by Martin
    Let's start with saying that I'm a total noob wrt. to server virtualization. That is, I use VMs often during development, but they're simple desktop machine things for me. Now to my problem: We have two (physical) build servers, one master, one slave running Jenkins to do daily tasks and build (Visual C++ Builds) our release packages for our software. As such these machines are critical to our company, because we do lot's releases and without a controlled environment to create them, we can't ship fixes. (And currently there's no proper backup of these machines in place, because they do not hold any data as such - it just would be a major pain to setup them again should they go bust. (But setting up backup that I'd know would work in case of HW failure would even be more pain, so we have skipped that until now.)) Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts. Each Build-Server (master or slave) is a fully configured (installs, licenses, shares in case of the master, ...) Windows Server box. I would now ideally like to just convert the (two) existing physical nodes to VM images and run them. Later add more VM slave instances as clones of the existing ones. And here begin my questions: Should I go for one VM per one hardware-box or should I go for something where a single hardware runs multiple VMs? That would mean a single point of failure hardware wise and doesn't seem like a good idea ... or?? Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores + disk) will be fully utilized, so going with more than one build-node per hardware doesn't seem to make much sense?? Wrt. to hardware options, does it make any difference which VM software we use (VMWare, MS, Virtualbox, ... ?) (We're using Windows exclusively for our builds.) Regarding budget: We have a normal small company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$ it's going to cost. If it's free - the better. I strongly prefer solutions where there's no multi-k$ maintenance costs per year.

    Read the article

  • Turn off / disable the performance cache

    - by jessie
    OK I run a streaming website and my CMS is giving me an error when uploading videos "Failed To Find Flength File" ok so I did some research. The answer I got from the coder was below. I did do all that, but the only thing I could not do is turn off what he refers to as performance cache, talked about in the last sentence... I am on a Cent OS Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file do not actually hit the disk until the upload is complete, making it worthless for reporting on progress during the upload. This may be fixed using a .htaccess file assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file. 1. A mod_security module for Apache. To fix it just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts resides, or in some directory above it (like your server's DOCUMENT_ROOT, i.e. the top-level of your webspace). htaccess files must be uploaded as ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 or (RW-R--R--). # Turn off mod_security filtering. SecFilterEngine Off # The below probably isn't needed, # but better safe than sorry. SecFilterScanPOST Off If the above method does not work, try putting the following lines into the file SetEnvIfNoCase Content-Type \ "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads" mod_gzip_on No 2. "Performance Cache" enabled on OS X SERVER. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching." Apparently if ANY of your hosted sites are using performance caching, then by default, all sites (domains) will attempt to. The fix then is to disable the performance cache on all hosted sites.

    Read the article

  • Reach self hosted server from LAN

    - by Freefri
    I have a self-hosted server with Apache2 serving the domain example.com, plus some virtual hosts: www.example.com, cloud.example.com, etc. This server is in my LAN, and when I try to access my server from within the LAN through www.example.com I get my router's configuration page. From outside the LAN, www.example.com and cloud.example.com work properly. From inside the LAN, 192.168.1.33 (the server's internal IP) shows the default webpage (www.example.com), but I cannot get cloud.example.com. I also have a LAN name server on 192.168.1.33 with bind9, and I set up my gateway 192.168.1.1 with my LAN NS as primary NS. I solved this problem by creating a new DNS zone in the NS. These are my config files:

    ;ZONE-1
    $ORIGIN .
    $TTL 86400      ; 1 day
    home.lan. IN SOA server.home.lan. hostmaster.home.lan. (
        2008080901  ; serial
        8H          ; refresh
        4H          ; retry
        4W          ; expire
        1D )        ; minimum
    home.lan. IN NS server.home.lan.
    $ORIGIN home.lan.
    ; Set the address for localhost.home.lan
    localhost IN A 127.0.0.1
    router    IN A 192.168.1.1
    server    IN A 192.168.1.33
    mypc      IN A 192.168.1.132

    ;ZONE-2
    $ORIGIN .
    $TTL 86400      ; 1 day
    example.com. IN SOA www.example.com hostmaster.home.lan. (
        2008080902  ; serial
        8H          ; refresh
        4H          ; retry
        4W          ; expire
        1D )        ; minimum
    example.com. IN NS 192.168.1.33
    $ORIGIN example.com.
    localhost IN A 127.0.0.1
    www       IN A 192.168.1.33
    cloud     IN A 192.168.1.33

    My DNS and my names are working properly now. My questions are: What do you think about this solution? Can I change the A records to CNAMEs pointing to server.home.lan (that's the server's name in the LAN)? How can I set a default IP for all my whatever.example.com?
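
    For the last question, a wildcard record in the example.com zone gives every otherwise-undefined name the same address, and the per-host records can indeed be CNAMEs to the server's LAN name. A sketch (the zone-file path is whatever your bind9 config already points at):

        # In zone 2, replace the existing www/cloud A records with CNAMEs and add a wildcard:
        #   www    IN CNAME server.home.lan.
        #   cloud  IN CNAME server.home.lan.
        #   *      IN A     192.168.1.33
        sudo rndc reload    # or: sudo /etc/init.d/bind9 reload

    The split-horizon zone is needed in the first place because the router doesn't hairpin (loopback) NAT; the wildcard just saves editing the zone for every new subdomain.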

    Read the article

  • Weird nfs performance: 1 thread better than 8, 8 better than 2!

    - by Joe
    I'm trying to determine the cause of poor nfs performance between two Xen Virtual Machines (client & server) running on the same host. Specifically, the speed at which I can sequentially read a 1GB file on the client is much lower than what would be expected based on the measured network connection speed between the two VMs and the measured speed of reading the file directly on the server. The VMs are running Ubuntu 9.04 and the server is using the nfs-kernel-server package. According to various NFS tuning resources, changing the number of nfsd threads (in my case kernel threads) can affect performance. Usually this advice is framed in terms of increasing the number from the default of 8 on heavily-used servers. What I find in my current configuration: RPCNFSDCOUNT=8: (default): 13.5-30 seconds to cat a 1GB file on the client so 35-80MB/sec RPCNFSDCOUNT=16: 18s to cat the file 60MB/s RPCNFSDCOUNT=1: 8-9 seconds to cat the file (!!?!) 125MB/s RPCNFSDCOUNT=2: 87s to cat the file 12MB/s I should mention that the file I'm exporting is on a RevoDrive SSD mounted on the server using Xen's PCI-passthrough; on the server I can cat the file in under seconds ( 250MB/s). I am dropping caches on the client before each test. I don't really want to leave the server configured with just one thread as I'm guessing that won't work so well when there are multiple clients, but I might be misunderstanding how that works. I have repeated the tests a few times (changing the server config in between) and the results are fairly consistent. So my question is: why is the best performance with 1 thread? A few other things I have tried changing, to little or no effect: increasing the values of /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh to 512K, 1M from the default 192K,256K increasing the value of /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max to 1M from the default of 128K mounting with client options rsize=32768, wsize=32768 From the output of sar -d I understand that the actual read sizes going to the underlying device are rather small (<100 bytes) but this doesn't cause a problem when reading the file locally on the client. The RevoDrive actually exposes two "SATA" devices /dev/sda and /dev/sdb, then dmraid picks up a fakeRAID-0 striped across them which I have mounted to /mnt/ssd and then bind-mounted to /export/ssd. I've done local tests on my file using both locations and see the good performance mentioned above. If answers/comments ask for more details I will add them.
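
    A couple of measurements that help keep these runs comparable and show what the nfsd thread pool is actually doing (nothing here is Xen-specific; the mount path and file name are placeholders):

        # On the client: drop the page cache so each run measures NFS, not RAM
        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
        time cat /mnt/nfs/bigfile > /dev/null
        # On the server: thread-pool statistics, including how often all threads were busy
        grep th /proc/net/rpc/nfsd
        # On the client: per-operation counts and retransmissions
        nfsstat -c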

    Read the article

  • What router hardware or software should be used when multiple public IPs are routed into the same LAN?

    - by lcbrevard
    I am looking for recommendations to replace a set of consumer grade (Linksys, Netgear, Belkin) routers with something that can handle more traffic while routing more than one static public IP into the same LAN address space. We have a block of static public IPs, 5 usable, with Comcast Business. Currently four of them are in use for: General office access Web server Mail and DNS servers Download and backup web server for separate business All systems (a mixture of physical and virtual) are in the same LAN address space (10.x.y.0/24) to enable easy access between them inside the office. There are 30 or more systems in use depending on which virtual machines are currently active. We have a mixture of Windows, Linux, FreeBSD, and Solaris. Currently a separate consumer grade router is used for each of the four static addresses, with its WAN address set to the specific static address and a different gateway address for each: uses 10.x.y.1 - various ports are forwarded to various LAN IPs on systems with gateway 10.x.y.1 uses 10.x.y.254 - port 80 is forwarded to a server with gateway 10.x.y.254 uses 10.x.y.253 - ports for mail and dns are forwarded to a server with gateway 10.x.y.253 uses 10.x.y.252 - ports as needed are forwarded to server with gateway 10.x.y.252 Only router 1. is allowed to serve DHCP and address reservation based on the MAC is used for most of the internal "server" IP addresses so they are at fixed values. [Some are set static due to limitations in the address reservation capabilities of router 1.] And, yes, this really does work! But... I am looking for: better DHCP with more capable address reservation higher capacity so I don't have to periodically power cycle the routers One obvious improvement would be to have a real DHCP server and not use a consumer grade router for that purpose. I am torn between buying a "professional" router such as Cisco or Juniper or Sonic Wall verus learning to configure some spare hardware to perform this function. The price goes up extremely rapidly with capabilities for commercial routers! Worse, some routers require licensing based on the number of clients - a disaster in our environment with so many virtual machines. Sorry for such a long posting but I am getting tired of having to power cycle routers and deal with shifting IP addresses afterwards!
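
    If building it yourself is an option, the whole job (DHCP with reservations plus per-public-IP forwarding) fits on one Linux box or a pfSense/OpenWrt device; the forwarding part is plain 1:1 DNAT. A rough sketch with placeholder addresses (PUB1-PUB3 standing in for three of the five statics, eth0 facing Comcast, eth1 facing the 10.x.y.0/24 LAN):

        # Each public IP also needs to exist as a secondary address on eth0
        iptables -t nat -A PREROUTING  -i eth0 -d PUB2 -j DNAT --to-destination 10.x.y.80   # web server
        iptables -t nat -A PREROUTING  -i eth0 -d PUB3 -j DNAT --to-destination 10.x.y.25   # mail/DNS
        iptables -t nat -A POSTROUTING -o eth0 -s 10.x.y.0/24 -j SNAT --to-source PUB1      # outbound

    dnsmasq or ISC dhcpd then handles the address reservations far more flexibly than the consumer routers do, and there is no per-client licensing to worry about.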

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    - by user69508
    (Please forgive if this is posted in an incorrect forum. We didn’t know exactly where to post it.) We have an ASP.NET Web API single page application - a browser-based app running in IIS to serve up HTML5/CSS3/JavaScript, which talks to the ASP.NET Web API endpoint only to access a database and transfer JSON data. Everything is working great in our development environment - that is, we have one Visual Studio solution with an ASP.NET Web API project and two class library projects for data access. While development and testing on development boxes, using IIS Express to a localhost:port to run the site and access the Web API, everything is fine. Now we need to move it to a production environment (and we’re having problems - or just not understanding what needs to be done). The production environment is all internal (nothing will be exposed on the public Internet). There are two domains. One domain, the corporate domain, is where all users login normally. The other domain, the process domain, contains the SQL Server instance that our app and Web API will need to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having access into the process domain directly. So, I guess what they want is: corp domain (end users) <– firewall (open port 80) <– DMZ (web server running IIS) <– firewall (open port 80 or 1433????) <– process domain (IIS for Web API and SQL Server) We’re developers and don’t really understand all the networking aspects, so we’re wondering how to deploy our browser/Web API application in this scenario. Do we need to break up our application so that all the client code (HTML5/CSS3/JavaScript/images/etc.) is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Or, does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to "http://server/appname/api/getitmes"? In the second firewall between the DMZ and the process domain, would you have to open port 1433 or just port 80 since the Web API is a HTTP endpoint? Or, is there some better way of deployment (i.e., how ASP.NET Web API single page applications written all in HTML5 and JavaScript supposed to be deployed to production environments?)? I’m sure there are other questions, but we’ll start with these. Thanks!!! (Note: the servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.)

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hits-tracking portion completed, and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with a gunicorn for the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache AB against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things -- No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis. Non-constant memory usage -- Presumably due to Redis' memory managing, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck. System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second. If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250. My questions -- a. Does it look like I'm maxing out this server yet? Is 1,200/s static files nginx performance comparable to what others have experienced? b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much. c. Are there any linux-level settings that could be limiting my incoming connections? d. What could cause my performance to degrade to 250r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil. Thanks in advance, all :)
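
    On question (c), a few kernel and process limits that commonly cap connection rate well before nginx does (the values are illustrative starting points, not tuned recommendations):

        # Larger accept queues for bursts of new connections
        sysctl -w net.core.somaxconn=4096
        sysctl -w net.ipv4.tcp_max_syn_backlog=8192
        # More ephemeral ports so long benchmarks don't exhaust the local port range
        sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        # Raise the file-descriptor ceiling for the nginx and gunicorn workers
        ulimit -n 65535

    The degradation on longer runs is also consistent with Redis pausing for persistence or growing memory, so watching redis-cli info (and the benchmark client's own port and file-descriptor limits) during a long run is worth doing before blaming nginx.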

    Read the article

  • OpenVPN Chaining

    - by noderunner
    I'm trying to set up an OpenVPN "chain", similar to what is described here. I have two separate networks, A and B. Each network has an OpenVPN server using a standard "road warrior" or "client/server" approach. A client can connect to either one for access to the hosts/services on that respective network. But server A and B are also connected to each other. The servers on each network have a "site-to-site" connection between the two. What I'm trying to accomplish, is the ability to connect to network A as a client, and then make connections with hosts on network B. I'm using tun/routing for all of the VPN connections. The "chain" looks something like this: [Client] --- [Server A] --- [Server A] --- [Server B] --- [Server B] --- [Host B] (tun0) (tun0) (tun1) (tun0) (eth0) (eth0) The whole idea is that server A should route traffic destined to network B through the "site-to-site" VPN set up on tun1 when a client from tun0 tries to connect. I did this simply by setting up two connection profiles on server A. One profile is a standard server config running on tun0, defining a virtual client network, IP address pool, pushing routes, etc. The other is a client connection to Server B running on tun1. With ip_forwarding enabled, I then simply added a "push route" to the clients advertising a route to network B. On server A, this seems to work when I look at tcpdump output. If I connect as a client, and then ping a host on network B, I can see the traffic getting passed from tun0 to tun1 on Server A: tcpdump -nSi tun1 icmp The weird thing is that I don't see Server B receiving that traffic through the tunnel. It's as if Server A is sending it through the site-to-site connection like it should, but server B is completely ignoring it. When I look for the traffic on Server B, it simply isn't there. A ping from Server A -- Host B works fine. But a ping from a client connected to Server A to host B does not. I'm wondering if Server B is ignoring the traffic because the source IP does not match the client IP pool that it hands out to clients? Does anyone know if I need to do something on Server B in order for it to see the traffic? This is a complicated problem to explain, so thanks if you stuck with me this far.
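
    Two common ways to make Server B accept that traffic, assuming Server A's road-warrior client pool is 10.8.0.0/24 (adjust to whatever tun0 actually hands out): hide the clients behind A's site-to-site address, or teach B a return route for the client subnet.

        # Option 1: NAT on Server A, so B only ever sees A's own tunnel IP as the source
        iptables -t nat -A POSTROUTING -o tun1 -s 10.8.0.0/24 -j MASQUERADE
        # Option 2 (cleaner): on Server B's site-to-site server config add
        #   route 10.8.0.0 255.255.255.0
        # and, if B uses client-config-dir, an "iroute 10.8.0.0 255.255.255.0" in A's ccd entry

    Without one of these, OpenVPN on B typically drops packets arriving from A whose source address it doesn't associate with that client (logged as "MULTI: bad source address from client"), which matches the "traffic leaves A but never shows up on B" symptom.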

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client who I've built a website for which essentially allows users to view their web cameras remotely. The current flow of data is as follows: User opens page to view web camera image. Javascript script polls url on server ( appended with unique timestamp ) every 1000ms Ftp connection is enabled for the cameras ftp user. Web camera opens ftp connection to server. Web camera begins taking photos. Web camera sends photo to ftp server. On image url request: Server reads latest image on hard drive uploaded via ftp for camera. Server deleted any older images from the server. This is working okay at the moment for a small amount of users/cameras ( about 10 users and around the same amount of cameras), but we're starting to worrying about the scalability of this approach. My original plan was instead of having the files read from the server, the web server would open up an ftp connection to the web server and read the latest images directly from there meaning we should have been able to scale horizontally fairly easily. But ftp connection establishment times were too slow ( mainly due to the fact that PHP out of the ox is unable to persist ftp connections ) and so we abandoned this approach and went straight for reading from the hard drive. The firmware provider for the cameras state they're able to build a http client which instead of using ftp to upload the image could post the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack. Web camera issues post requests of latest image to Nginx/PHP and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis which should be extremely quick as the images will always be stored in memory. The data flow would then become: User opens page to view web camera image. Javascript script polls url on server ( appended with unique timestamp ) every 1000ms Camera is sent an http request to start posting images to a provided url Web camera begins taking photos. Web camera sends post requests to server as fast as it can On image url request: Server reads latest image from redis Server tells redis to delete later image My questions are: Are there any greater overheads of transferring images via HTTP instead of FTP? Is there a simple way to calculate how many potential cameras we could have streaming at once? Is there any way to prevent potentially DOS'ing our own servers due to web camera requests? Is Redis a good solution to this problem? Should I abandon PHP/Ngix combination and go for something else? Is this proposed solution actually any good? Will adding HTTPs to the mix cause posting the image to become too slow? Thanks in advance Alan
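
    Redis is a reasonable fit for a latest-frame cache, and question 2 can be roughed out from the shell before any PHP is written (the 50 KB frame size below is an assumption):

        # Push a fake 50 KB frame the way a camera would, then measure raw capacity
        head -c 50000 /dev/urandom > /tmp/frame.jpg
        redis-cli -x SET camera:123:latest < /tmp/frame.jpg
        redis-cli EXPIRE camera:123:latest 10                # stale frames age out on their own
        redis-benchmark -d 50000 -t set,get -n 100000 -q     # ops/sec at that payload size

    Dividing the SET figure by the per-camera upload rate gives a ceiling on cameras per Redis instance; in practice nginx/PHP-FPM and the network will likely saturate first, so benchmark the full POST path the same way.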

    Read the article

  • Is there a way to replicate a very large file shares in real-time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and is on a samba file share. Here's how the folder structure looks: \server\Version.1 \server\Version.2 \server\Version.3 ... \server\Version.24 The contents of each new folder compared to the last one usually don't change very much, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out. It's actually been used for years and is so incredibly simple, anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files) and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant! Now to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real-time as possible -- think hot spare, disaster recovery, etc. My first thought was rsync. Total failure. Rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough meta-data to do this intelligently. The best thing that I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression" -- after reading the background information on how this works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh! One approach I'm looking at is this; say it's 5AM and the cron job finishes:
    1. A new Version.5 folder arrives on the local server.
    2. SSH to the remote server and copy Version.4 to Version.5.
    3. Run rsync on the local server, pushing changes to the remote server.
    4. Rsync finally knows to do a differential copy between Version.4 and Version.5.
    Is there a smarter way to replicate Samba shares as close to real-time as possible? Anything out there that does "Remote Differential Compression" on Linux?
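
    Two pieces that get close to this on Linux (paths are placeholders): build each hourly version with hard links so unchanged files cost nothing, and let rsync carry those links across the WAN so only real differences are shipped.

        # Hourly job: Version.5 is hard-linked against Version.4, only changed files are copied
        rsync -a --delete --link-dest=/data/Version.4 /source/ /data/Version.5/
        # WAN replication: -H preserves the hard links, so the remote side receives only the deltas
        rsync -aH --delete /data/ replica:/data/

    For closer-to-real-time replication, lsyncd (inotify plus rsync) or csync2 are the usual Linux answers; there is no direct equivalent of Remote Differential Compression, but with hard-linked versions there is far less to transfer in the first place.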

    Read the article

  • Nginx Load Balancer 403 error

    - by user64473
    I am trying to install nginx as a load balancer with apache backends, so that when I point my sites to the nginx server it serves up the content from the apache backend. I have the apache configuration set up correctly on both (i.e when I go to the site on the apache servers it works great) but when I use the nginx load balancer as the site I get 403 error. I have no idea why as it isn't even accessing any files on the server, thusly there aren't any files to be forbidden access to. My virtual host is enabled and looks like this: upstream webs { server 10.0.0.30 weight=1; server 10.0.0.31 weight=1; } server { listen 80; server_name www.example.com example.com; access_log /var/log/nginx/access.log; location / { proxy_pass http://webs; include /etc/nginx/proxy.conf; } } and my nginx.conf looks like this: user www-data; worker_processes 4; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; # multi_accept on; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffers 32 4k; } Can any geniuses out there tell me what I am doing wrong?
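
    nginx records the reason for every 403 it generates itself, which is the quickest way to tell whether the request even reaches the proxy_pass. A short sketch (paths assume the Debian/Ubuntu layout used above):

        sudo nginx -t                                        # config valid and actually loaded?
        grep -rn "listen\|server_name" /etc/nginx/sites-enabled/
        sudo tail -f /var/log/nginx/error.log &              # shows the stated cause of the 403
        curl -v -H "Host: www.example.com" http://127.0.0.1/

    A common culprit with this layout is another server block (a leftover default site, or something pulled in from conf.d or the included proxy.conf) answering the request instead of this vhost, so confirming which block matched is worth doing before touching the backends.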

    Read the article

  • NGINX SSL Certificate Not Working

    - by LeSamAdmin
    I've been working on SSL stuff and getting nowhere from like 4 tutorials... I've bought an SSL for pingrglobe.com, and now trying to apply it to my servers. Here's my nginx code: http { server { listen 80; server_name pingrglobe.com; rewrite ^(.*) http://www.pingrglobe.com$1 permanent; } server { listen 443; ssl on; ssl_certificate /etc/nginx/ssl/pingrglobe.crt; ssl_certificate_key /etc/nginx/ssl/pingrglobe.key; #enables SSLv3/TLSv1, but not SSLv2 which is weak and should no longer be used. ssl_protocols SSLv3 TLSv1; #Disables all weak ciphers ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM; server_name www.pingrglobe.com; root /var/www/pingrglobe.com; index index.html index.php; location / { try_files $uri $uri/ @extensionless-php; add_header Access-Control-Allow-Origin *; } rewrite ^/blog/blogpost/(.+)$ /blog/blogpost?post=$1 last; rewrite ^/viewticket/(.+)/(.*)$ /viewticket?tid=$1&$2 last; rewrite ^/vemail/(.+)$ /vemail?eid=$1 last; rewrite ^/serversettings/(.+)$ /serversettings?srvid=$1 last; rewrite ^/notification/(.+)$ /notification?id=$1 last; rewrite ^/viewreport/(.+)$ /viewreport?srvid=$1 last; rewrite ^/removeserver/(.+)$ /removeserver?srvid=$1 last; rewrite ^/staffviewticket/(.+)/(.*)$ /staffviewticket?tid=$1&$2 last; rewrite ^/activate/(.*)/(.*)/(.*)$ /activate?user=$1&code=$2&email=$3 last; rewrite ^/activate2/(.*)/(.*)/(.*)$ /activate2?user=$1&code=$2&email=$3 last; rewrite ^/passwordtoken/(.+)/(.*)/(.*)$ /passwordtoken?user=$1&token=$2&email=$3 last; location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location @extensionless-php { rewrite ^(.*)$ $1.php last; } location ~ /\. { deny all; } } } SSL doesn't work as you see here: https://www.pingrglobe.com
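
    Two quick checks usually narrow this down: whether the file shown above nests its own http{} block inside the one nginx.conf already opens (nginx refuses to start in that case), and whether anything is actually answering TLS on 443 with the new certificate.

        sudo nginx -t                                  # an included vhost wrapped in its own "http {}" fails right here
        sudo netstat -ntlp | grep ':443'
        # Ask the server for its certificate directly, bypassing the browser
        openssl s_client -connect www.pingrglobe.com:443 -servername www.pingrglobe.com < /dev/null

    If s_client shows a certificate for pingrglobe.com but browsers still complain, check that the .crt file is the full chain (certificate plus intermediates concatenated), which is the other frequent cause of a freshly bought certificate "not working".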

    Read the article

  • Why isn't passwordless ssh working?

    - by Nelson
    I have two Ubuntu Server machines sitting at home. One is 192.168.1.15 (we'll call this 15), and the other is 192.168.1.25 (we'll call this 25). For some reason, when I want to setup passwordless login from 15 to 25, it works like a champ. When I repeat the steps on 25, so that 25 can login without a password on 15, no dice. I have checked both sshd_config files. Both have: RSAAuthentication yes PubkeyAuthentication yes I have checked permissions on both servers: drwx------ 2 bion2 bion2 4096 Dec 4 12:51 .ssh -rw------- 1 bion2 bion2 398 Dec 4 13:10 authorized_keys On 25. drwx------ 2 shimdidly shimdidly 4096 Dec 4 19:15 .ssh -rw------- 1 shimdidly shimdidly 1018 Dec 4 18:54 authorized_keys On 15. I just don't understand when things would work one way and not the other. I know it's probably something obvious just staring me in the face, but for the life of me, I can't figure out what is going on. Here's what ssh -v says when I try to ssh from 25 to 15: ssh -v -p 51337 192.168.1.15 OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to 192.168.1.15 [192.168.1.15] port 51337. debug1: Connection established. debug1: identity file /home/shimdidly/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: identity file /home/shimdidly/.ssh/id_rsa-cert type -1 debug1: identity file /home/shimdidly/.ssh/id_dsa type 2 debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024 debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024 debug1: identity file /home/shimdidly/.ssh/id_dsa-cert type -1 debug1: identity file /home/shimdidly/.ssh/id_ecdsa type -1 debug1: identity file /home/shimdidly/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA 54:5c:60:80:74:ab:ab:31:36:a1:d3:9b:db:31:2a:ee debug1: Host '[192.168.1.15]:51337' is known and matches the ECDSA host key. debug1: Found key in /home/shimdidly/.ssh/known_hosts:2 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/shimdidly/.ssh/id_rsa debug1: Authentications that can continue: publickey,password debug1: Offering DSA public key: /home/shimdidly/.ssh/id_dsa debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/shimdidly/.ssh/id_ecdsa debug1: Next authentication method: password

    Read the article

  • Asterisk server firewall script allows 2-way audio from incoming calls, but not on outgoing?

    - by cappie
    I'm running an Asterisk PBX on a virtual machine directly connected to the Internet and I really want to prevent script kiddies, l33t h4x0rz and actual hackers access to my server. The basic way I protect my calling-bill now is by using 32 character passwords, but I would much rather have a way to protect The firewall script I'm currently using is stated below, however, without the established connection firewall rule (mentioned rule #1), I cannot receive incoming audio from the target during outgoing calls: #!/bin/bash # first, clean up! iptables -F iptables -X iptables -t nat -F iptables -t nat -X iptables -t mangle -F iptables -t mangle -X iptables -P INPUT ACCEPT iptables -P FORWARD DROP # we're not a router iptables -P OUTPUT ACCEPT # don't allow invalid connections iptables -A INPUT -m state --state INVALID -j DROP # always allow connections that are already set up (MENTIONED RULE #1) iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT # always accept ICMP iptables -A INPUT -p icmp -j ACCEPT # always accept traffic on these ports #iptables -A INPUT -p tcp --dport 80 -j ACCEPT iptables -A INPUT -p tcp --dport 22 -j ACCEPT # always allow DNS traffic iptables -A INPUT -p udp --sport 53 -j ACCEPT iptables -A OUTPUT -p udp --dport 53 -j ACCEPT # allow return traffic to the PBX iptables -A INPUT -p udp -m udp --dport 50000:65536 -j ACCEPT iptables -A INPUT -p udp -m udp --dport 10000:20000 -j ACCEPT iptables -A INPUT -p udp --destination-port 5060:5061 -j ACCEPT iptables -A INPUT -p tcp --destination-port 5060:5061 -j ACCEPT iptables -A INPUT -m multiport -p udp --dports 10000:20000 iptables -A INPUT -m multiport -p tcp --dports 10000:20000 # IP addresses of the office iptables -A INPUT -s 95.XXX.XXX.XXX/32 -j ACCEPT # accept everything from the trunk IP's iptables -A INPUT -s 195.XXX.XXX.XXX/32 -j ACCEPT iptables -A INPUT -s 195.XXX.XXX.XXX/32 -j ACCEPT # accept everything on localhost iptables -A INPUT -i lo -j ACCEPT # accept all outgoing traffic iptables -A OUTPUT -j ACCEPT # DROP everything else #iptables -A INPUT -j DROP I would like to know what firewall rule I'm missing for this all to work.. There is so little documentation on which ports (incoming and outgoing) asterisk actually needs.. (return ports included). Are there any firewall/iptables specialists here that see major problems with this firewall script? It's so frustrating not being able to find a simple firewall solution that enabled me to have a PBX running somewhere on the Internet which is firewalled in such a way that it can ONLY allows connections from and to the office, the DNS servers and the trunk(s) (and only support SSH (port 22) and ICMP traffic for the outside world). Hopefully, using this question, we can solve this problem once and for all.

    Read the article

  • Need a script/batch/program that runs a command that won't be killed when the parent is killed

    - by billc.cn
    The scenario I use Zabbix to monitor my servers and recently I wanted to add some more metrics for the Windows ones. For security reasons, I used Zabbix's User Parameter feature, but it limits the execution of external commands to about 3 seconds. After that, the command is forcibly killed. I want to run some long run commands, so I used the trick from Zabbix's forum: run the command in the background, write the results to a file and use Zabbix to collect them. This is rather easy under *nix thanks to the "&" operator, but there is no such support in Windows' shell. To make things worse, when Zabbix kills forcibly kill the cmd.exe it used to evaluate the commands, all child processes die including the unfinished background tasks. Thus I need something that can sever all the ties with its children so they won't be affected in the cascading kill. What I've tried start and start /B - They do nothing as the child always die with the parent WScript.Shell.Run as in invis.vbs from StackOverflow - Sometimes work. If the wscript process is forcibly killed as opposed to quitting on its own, the children will die as well. hstart - similar results to invis.vbs At command - This requires you to set an absolution time for the task to run as opposed to an offset, so the code would be quite messy due to the limited shell scripting capability of Windows. (Edit) PsExec.exe from the SysInternals suite - It uses a service to launch the command, so it is not affected by the kill; however, it prints some banner and log info to StdErr and there's no switch to disable this. When I use 2>NUL to redirect them, Zabbix reports an error. After trying the above in different combinations, I noticed if I call hstart from invis.vbs, the command started by the former will be left alone as a parent-less process when invis.vbs is killed. However, since I need to redirect the output, the command I want to run is always in the form of cmd.exe /c ""command" "args"" >log. The vbs also removes all the quotes, so I have to encode the command with self-defined escape sequences. The end result involves about five levels of escaping/quoting, which is almost impossible to maintain. Anyone know any better solutions? Some requirements Any bat/vbs/js/Win32 binary is acceptable Better not require multiple levels of escaping No .Net (including PowerShell) because it is not installed

    Read the article

  • How to make Nginx fire 504 immediately is server is not available?

    - by Georgiy Ivankin
    I have Nginx set up as a load balancer with cookie-based stickiness. The logic is: If the cookie is NOT there, use round-robbing to choose a server from cluster. If the cookie is there, go to the server that is associated with the cookie value. Server is then responsible for setting the cookie. What I want to add is this: If the cookie is there, but server is down, fallback to round-robbing step to choose next available server. So actually I have load balancing and want to add failover support on top of it. I have managed to do that with the help of error_page directive, but it doesn't work as I expected it to. The problem: 504 (and the fallback associated with it) fires only after 30s timeout even if the server is not physically available. So what I want Nginx to do is fire a 504 (or any other error, doesn't matter) immediately (I suppose this means: when TCP connection fails). This is the behavior we can see in browsers: if we go directly to server when it is down, browser immediately tells us that it can't connect. Moreover, Nginx seems to be doing this for 502 error: if I intentionally misconfigure my servers, Nginx fires 502 immediately. Configuration (stripped down to basics): http { upstream my_cluster { server 192.168.73.210:1337; server 192.168.73.210:1338; } map $cookie_myCookie $http_sticky_backend { default 0; value1 192.168.73.210:1337; value2 192.168.73.210:1338; } server { listen 8080; location @fallback { proxy_pass http://my_cluster; } location / { error_page 504 = @fallback; # Create a map of choices # see https://gist.github.com/jrom/1760790 set $test HTTP; if ($http_sticky_backend) { set $test "${test}-STICKY"; } if ($test = HTTP-STICKY) { proxy_pass http://$http_sticky_backend$uri?$args; break; } if ($test = HTTP) { proxy_pass http://my_cluster; break; } return 500 "Misconfiguration"; } } } Disclaimer: I am pretty far from systems administration of any kind, so there may be some basics that I miss here. EDIT: I'm interested in solution with standard free version of Nginx, not Nginx Plus. Thanks.

    Read the article

  • How can I minimize the amount my router slows down my Internet connection speed?

    - by Lord Torgamus
    Background I'm working with what I assume is a pretty common Internet setup: a cable modem, a wireless router and a few Internet-connected devices. Lately, I've started being more demanding on my Internet connection, and noticed that using my router slows down my download speeds considerably. I just kind of dealt with it until Zune Marketplace on the Xbox 360 told me that a movie was going to take well over ten hours to download, and I just didn't want to wait that long. Good little scientist that I am, I tried to reduce the problem down to one variable. The test As a control, I turned off all the devices in the house that use wireless Internet, and unplugged all the wired devices except for the Xbox. I also power-cycled both the modem and the router. I then tried to download the movie again, and was told that it would still take over ten hours. Next, I unplugged the router, and connected the Xbox directly to the modem. The movie downloaded in just over one hour. As far as I can tell, this means that my ISP, other cable users near me, the remote servers, anything wireless-related and my machines' disk speeds can't be at fault. A similar experiment that replaced the Xbox with a wired laptop produced similar results. To me, this says "the router is responsible for things taking around ten times longer to download." My question I'd still prefer to use the router for a few reasons: it's a pain to connect and disconnect everything every time there's a big file to download direct connection to the modem isn't good for security only one machine can be connected directly to the modem at a time What can I do to have fast connection speeds while still using the router? I don't mind turning other machines off, as long as I don't have to mess with power and ethernet cables. EDIT : After asking this followup question and then this one, I installed dd-wrt on my router, and I seem to be getting higher and more consistent speeds. Perhaps more importantly, my memory use is fairly constant. I know this isn't an answer — which is why I'm not posting it as an answer — but it is how I resolved the situation, and hopefully it'll be helpful for someone.

    Read the article
