Search Results

Search found 9715 results on 389 pages for 'servers'.

Page 14/389 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • How do you set up redundant servers?

    - by user59240
    To the sysadmins out there: I'm trying to get an idea of how you go about maintaining redundant servers for small projects. The modest setup I have in mind is two servers, and three essential services come to mind: HTTP, mail and DNS. How do you automate this duplication? Is rsync the tool of choice (again, for small projects)? In addition to common tools for these tasks, references to books and articles would be greatly appreciated. The more hands-on the approach, the better. Thanks!
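
    As a rough illustration of the rsync approach (the paths and the standby host name below are hypothetical, not taken from the question), a cron-driven one-way mirror from the primary might look like this:

        # sketch only: push web content and mail configuration to the standby
        rsync -az --delete /var/www/ standby.example.com:/var/www/
        rsync -az /etc/postfix/ standby.example.com:/etc/postfix/

    For DNS and mail, redundancy usually comes from the protocols themselves rather than from file sync: a secondary NS record served from the standby, and a lower-priority MX record pointing at it.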

    Read the article

  • Multiple Servers with identical services

    - by Jerry Bailey
    I have a dozen servers in different locations, all running the same web service application but each going against its own SQL Server database. I am writing a desktop application that consumes the web services. I want to present the user with a drop-down of all servers on the network that are running the web service application. Do I have to add a ServiceReference for each of the servers running the web service app, and thereby have as many proxies as there are servers? Or can I define a single instance of the service and dynamically build a list of endpoints to select from a drop-down?
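
    One common pattern - sketched here with hypothetical endpoint names, addresses and contract names - is to keep a single service reference and declare one client endpoint per server in the configuration file, then pick an endpoint by name at runtime:

        <system.serviceModel>
          <client>
            <!-- one entry per server, all sharing the same contract and binding -->
            <endpoint name="SiteA" address="http://servera/AppService.svc"
                      binding="basicHttpBinding" contract="AppServices.IAppService" />
            <endpoint name="SiteB" address="http://serverb/AppService.svc"
                      binding="basicHttpBinding" contract="AppServices.IAppService" />
          </client>
        </system.serviceModel>

    The generated proxy class has constructor overloads that take an endpoint configuration name (or an explicit address), so the drop-down only needs to list the names and a single proxy type serves every server.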

    Read the article

  • InfiniBand network between 3 servers

    - by grumpf
    Let's say I have 3 different servers, each with an InfiniBand card, and each card has 2 ports (I don't know the exact model yet). Is it possible to create 3 different networks and allow the 3 servers to communicate with each other without any problems (and without any single point of failure)? I guess I just have to set up /etc/hosts correctly. I really don't know much about InfiniBand, so please help me :) Thanks in advance. EDIT: The point is to NOT use a switch!
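
    A switchless setup along these lines is usually treated as three independent point-to-point links, one per port pair - the addressing below is purely hypothetical:

        # server A: ib0 cabled to B, ib1 cabled to C (IPoIB, one /30 per link)
        ib0  192.168.11.1/30    # B uses 192.168.11.2 on its end
        ib1  192.168.12.1/30    # C uses 192.168.12.2 on its end
        # /etc/hosts on A
        192.168.11.2  serverb-ib
        192.168.12.2  serverc-ib

    One caveat with back-to-back InfiniBand: every link still needs a subnet manager, so something like opensm has to run on one end of each cable.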

    Read the article

  • Linux servers seeing bad download performance behind Sonicwall firewall

    - by Joshua Penix
    I'm working with a pair of co-located CentOS Linux servers sitting behind a SonicWall PRO 2040 Enhanced firewall running in transparent bridge mode. These servers have a strange problem downloading files more than a few megabytes in size. For example, if I try to wget or FTP a copy of the Linux kernel from kernel.org, the first ~1-2 MB download at 600+ KB/s, and then throughput drops off a cliff to 1 KB/s. I've reviewed all the firewall configuration settings for anything suspicious, but found nothing. More interestingly, I performed the same download with a Windows server sitting behind the same firewall, and it sailed right through at 600+ KB/s the whole way. Has anyone seen this? Where should I start looking to troubleshoot this problem?
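
    One hedged place to start (an isolation test, not a confirmed fix): a fast start followed by a collapse to a trickle is often a TCP window-scaling or offload interaction, so repeating the download with window scaling temporarily disabled on the Linux side shows whether the firewall is mishandling scaled windows:

        # test only - revert with net.ipv4.tcp_window_scaling=1 afterwards
        sysctl -w net.ipv4.tcp_window_scaling=0
        wget -O /dev/null <the same kernel.org URL that stalls>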

    Read the article

  • Amazon ELB and use of address / server names across multiple servers

    - by Stpn
    I am setting up Nginx servers behind an ELB, with api.app.com pointing at the ELB. I wonder which addresses I should use for remote connections, Nginx settings, etc.
    1) For example, in Nginx, should I do:
        server {
            listen 80;
            # What is the right line here:
            # server_name <WWW.NAME.COM> OR <ec2-.....compute-1.amazonaws.com> OR <MLB-....amazonaws.com>?;
            passenger_enabled on;
            .....
        }
    2) The servers behind the ELB connect to a remote Postgres database. In the Postgres settings, should I open access to the ELB address (MLB-...amazonaws.com) or to the individual EC2 IPs?
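
    For what it's worth, a hedged sketch of the usual arrangement (the private subnet and database names below are made up):

        server {
            listen 80;
            server_name api.app.com;   # the ELB forwards the client's Host header, so use the public name
            passenger_enabled on;
        }

        # pg_hba.conf - the ELB never proxies Postgres traffic; connections arrive from
        # each EC2 instance's own private address, so open those rather than the ELB name
        host  appdb  appuser  10.0.1.0/24  md5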

    Read the article

  • Monitoring several remote servers over different VPNs

    - by Ciaran
    I'm a developer with about 20 different clients running our server application. I access each of the clients' servers remotely through VPN to provide support, updates, etc. Is there any tool available that I can set up locally that will connect through each of the VPNs automatically and allow me to monitor them? The idea sounds very far-fetched to me, as the VPN software varies a good bit, but maybe someone has had to do something similar before? It's been a few years since I last used Nagios, but I think it'd be quite cool to have that set up pointing at each of the remote servers through VPN somehow.
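
    If the Nagios route appeals, the monitoring box only needs an address that is reachable across each client's tunnel; a minimal object sketch (host names and addresses are hypothetical) would be:

        define host {
            use        generic-host
            host_name  client01-app
            address    10.8.1.5          ; private address reachable over that client's VPN
        }
        define service {
            use                  generic-service
            host_name            client01-app
            service_description  Application port
            check_command        check_tcp!8080
        }

    The harder part is keeping twenty tunnels up from one machine; that usually means scripting each vendor's VPN client, or replacing them with site-to-site links where the clients allow it.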

    Read the article

  • Linux servers vs Windows IIS - does using "free" solutions make sense?

    - by Rob
    I wonder what the sense is of using "free" open source solutions for serious website applications. I've crawled through and read many server performance tests, and there is one conclusion: IIS seems to be the best choice for high-load applications - I mean the most cost-effective. This especially applies compared with Nginx Plus and LiteSpeed, where the paid subscriptions (for e.g. the load balancer and extra support) in fact cost a lot. So I'm asking: where is it "free" or "cheap" in this case? Even assuming a somewhat higher cost for dedicated servers with Windows, it still seems like Windows comes out cheaper. In its basic setup, Windows 2012 with IIS offers much more than a standard LAMP or other Nginx configuration. Am I missing something? I only mean the general case, for someone who has not already started their app. I know very well that the cheapest solution is the one you are already skilled in. Has anyone already done a real cost calculation like this for example scenarios?

    Read the article

  • Multiple servers using same nameservers

    - by Robsimm
    I have two servers with the following OSes and control panels: Windows Server 2003 running Plesk 9, and CentOS 5.3 running WHM/cPanel. The Windows Server 2003 machine is hosting the two nameservers: ns1.domain.com and ns2.domain.com. I have BIND running on the CentOS 5.3 server, but I wish for my customers to use the same nameservers, ns1.domain.com and ns2.domain.com (as served by the Windows Server 2003 machine). My first question is: is this possible? And if so, how would I go about configuring both servers to enable such a configuration? Thanks very much.
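
    One way this is commonly handled (a sketch, assuming both boxes can do zone transfers; the addresses are hypothetical): keep the CentOS/cPanel server as the master for its customers' zones and have the machine answering as ns1/ns2 slave them, so the public nameserver names never change:

        // on the server behind ns1/ns2.domain.com, repeated per customer zone
        zone "customerdomain.com" {
            type slave;
            masters { 203.0.113.20; };     // the CentOS/cPanel box acting as hidden master
            file "slaves/customerdomain.com.db";
        };

    If the Windows/Plesk side uses Microsoft DNS rather than BIND, the equivalent is a secondary zone pointing at the same master, and the BIND side needs allow-transfer (and ideally also-notify) for the ns1/ns2 addresses.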

    Read the article

  • 27 days after domain transfer, name servers not propagated

    - by Thom Seddon
    We recently bought the domain embarrassingnightclubphotos.com. Seven days after accepting the transfer, the domain finally moved to our registrar and we immediately changed the name servers from ns*.netregistry.net to amy.ns.cloudflare.com and cody.ns.cloudflare.com. Twenty days after changing the name servers, the majority of tests show that both the old and the new nameservers are still being reported: http://intodns.com/embarrassingnightclubphotos.com http://www.whatsmydns.net/#NS/embarrassingnightclubphotos.com We are now ready to launch the new site, but this issue is plaguing us, as a high proportion of the traffic is still receiving the old nameservers and so hitting the old server. You can tell whether you have hit the old or the new server, as the old server has the value "A" for the meta tag "Location" and the new server has "U". (The old server just has an iframe, too!) I have never had this problem before - who is causing this, and how should we go about reaching a resolution? Thanks
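
    To narrow down who is still handing out the old nameservers, it helps to compare the delegation at the registry with what each set of servers answers (ns1.netregistry.net below is a guess at one of the old hosts):

        dig NS embarrassingnightclubphotos.com @a.gtld-servers.net +norecurse   # what the .com registry delegates
        dig NS embarrassingnightclubphotos.com @amy.ns.cloudflare.com +short    # what the new servers claim
        dig NS embarrassingnightclubphotos.com @ns1.netregistry.net +short      # what the old servers still claim

    If the registry already lists only the Cloudflare servers, the usual culprit is the old zone still answering authoritatively with its own NS records (often with a long TTL) for resolvers that cached the old delegation.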

    Read the article

  • MySQL Servers for Attendance System

    - by foo
    I'm building an attendance system. There are about 20 locations where people will check in and check out using a Mifare 1K card, and it will use MySQL as the database. The system will display something like "#ID IN: 8:00AM" the first time a user checks in and "#ID OUT: 4:00PM" when the user checks out. For this to work, all the databases need to be synchronized with each other at all times. For example, if user A checks in at location #1 but, by the time he wants to go home, the server at location #1 has gone down, he needs to go to location #2 or the nearest working server to check out. The server at location #2 should display "#ID OUT: 4:00PM" and not "#ID IN: 4:00PM", since he has already checked in. So, what should I use to make this idea work? My main concern is the network (another department manages it), which is very unpredictable - it just loves to go down whenever it feels like it.
    Update: I didn't realize my question was unclear until you pointed it out, sorry about that. My real question is: how can I configure MySQL so that the 20 servers stay synchronized with each other? MySQL Cluster? (I tried reading about it, but I'm not sure it's the right thing to do.) My current setup (first phase):
    - A local database on each server (OS: Slackware)
    - A main server that keeps track of which staff member is at which server
    - A web-based front end for users to see their history (which connects to the relevant server based on their records)
    Main pro: no worries about network problems, since each location uses a local database.
    Main cons: a user can only check in and out at the same server; the databases/servers are not connected to each other; and I have to add a user to every server where they want to check in. That means if a user wants to go to location B, he must first check out at location A and then check in at location B - the server at B doesn't know that he already checked in at A.
    By the way, I've already pointed NTP at a central local server. About the network: let's just say I don't have the authority to make changes that would improve it. Outages don't affect all 20 servers at once - usually just a few of them, several times a week. If there is anything else you would like me to answer, please just ask.
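
    To give a flavour of the standard building block (all names below are made up): ordinary asynchronous MySQL replication lets every location hold a copy of a central database, which covers the "check in at A, check out at B" case as long as writes go to one master:

        # my.cnf on the central master
        [mysqld]
        server-id = 1
        log_bin   = mysql-bin

        # my.cnf on each location's server (a unique server-id per site)
        [mysqld]
        server-id = 12
        read_only = 1

        -- then, on each location's server, point it at the master:
        CHANGE MASTER TO MASTER_HOST='hq.example.com', MASTER_USER='repl',
            MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
        START SLAVE;

    True multi-master across 20 unreliable links is a different problem (this is the territory MySQL Cluster aims at); many attendance setups instead keep writes local and reconcile them into a central reporting database asynchronously.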

    Read the article

  • ARR servers in the Load Balancing pool automatically go from unavailable to available

    - by Chris
    I have 3 IIS web servers in an ARR web farm. When we do rolling releases, we take one server offline as a backup server and move it into an "unavailable" state. I have noticed that with ARR, servers will not stay in this state - they come back online automatically hours or days later. Does anyone know how to remedy this? It is a real problem, because the server that is down is typically not running the correct version of our code. I need to keep a server unavailable until I tell it otherwise.

    Read the article

  • Allow different headers on different servers using WFF

    - by Brian
    We've got multiple web servers configured in a cluster using Microsoft's Web Farm Framework. One of the things I like to do to help debugging is to create a header in IIS that identifies the server that handled the request. Unfortunately when I try to do this, WFF sets the headers to the same value on all the servers. Is there a way around this? I tried looking into using skipDirectives, but I can't find any documentation on it (other than a little bit showing how to use it to skip directories and bindings). If there is documentation on this, please link to it! I would like to be able to read up more on it in case I need to do other things as well.

    Read the article

  • Laptop is Switching DNS Servers

    - by Steffan Harris
    OK, some time ago I changed my IP address to a static one because I was bored and wanted to learn more about static IPs. I am running Windows XP. My laptop works fine on the network where I set up the static IP address, but when I go to another network, the incorrect DNS servers are used. When I select the option to obtain a DNS server automatically, the internet connection works, but only for a short time; after that, the DNS servers reset to the ones I entered manually on the previous network. I set this up by going to Network Connections, right-clicking Local Area Connection, going to Properties, selecting TCP/IP and clicking the Properties button. At that point I am given the option to obtain an IP address automatically or to enter one manually. My question is: how do I stop the DNS servers from resetting to the previous ones?
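
    On XP, switching the adapter back to DHCP for both the address and the DNS servers can also be done from a command prompt, which makes it easy to verify that nothing keeps re-applying the static entries (the adapter name "Local Area Connection" is assumed):

        netsh interface ip set address "Local Area Connection" dhcp
        netsh interface ip set dns "Local Area Connection" dhcp
        ipconfig /flushdns
        ipconfig /all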

    Read the article

  • WordPress Installation on Two Servers - Load Balancing

    - by rihatum
    Hi all, I have to install WordPress (one blog, one domain, e.g. mycompany.com/blog) on two servers that sit behind a load balancer and share one database hosted on a third server. We are planning it this way because of high traffic. I have done standalone WordPress installations on a single server, on Windows 2003 and 2008 with IIS 6, 7, etc., and I am now researching how I would implement this. What would be the steps to achieve it? While searching I saw some posts suggesting that the wp-content/uploads directory should be synced between the servers at regular intervals - is that right? Your help is much appreciated. Thanks for reading.
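
    The uploads-sync suggestion usually comes down to a small scheduled job; a hedged sketch for a Linux pair (the paths and peer name are made up - on Windows/IIS the equivalent would be robocopy on a schedule or DFS replication):

        # crontab on web1: push new uploads to web2 every five minutes
        */5 * * * * rsync -az /var/www/blog/wp-content/uploads/ web2:/var/www/blog/wp-content/uploads/

    A shared location for uploads (NFS, a SAN share, or offloading media to object storage) avoids the race where an image is requested from the node that has not received it yet.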

    Read the article

  • scp to remote servers stalls, unable to isolate cause

    - by Rolf
    When I copy a large file (100+ MB) to a remote server using scp, it slows down from 2.7 MB/s to 100 KB/s and below, and then stalls. The trouble is that I can't seem to isolate the cause. I've tried 2 different remote servers, 2 local machines (1 OS X, 1 Windows/Cygwin), 2 different networks/ISPs and 2 different scp clients. All combinations show the problem except copying between the two remote servers (with scp). Using Wireshark I could not detect any traffic volume that would congest the network (although there were about 7 packets/sec of NBNS requests from the OS X machine). What in the world could be going on? Given the combinations I've tried, there doesn't seem to be any overlap in the things that could be causing the trouble.
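
    A couple of isolation tests that remove one variable at a time (file and host names are placeholders): capping the rate shows whether the stall is rate-related, and piping zeros over ssh takes both filesystems out of the picture:

        scp -v -l 800 bigfile user@remote:/tmp/                                # cap at ~100 KB/s; does it still stall?
        dd if=/dev/zero bs=1M count=200 | ssh user@remote 'cat > /dev/null'    # tests only the network/ssh path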

    Read the article

  • Getting two Exchange servers to communicate with each other

    - by Data-Base
    We have Exchange Server 2007 using our domain, ddd.com. We created an isolated network with a firewall/gateway and installed a DC and Exchange Server 2010 there using a demo/test domain (ddd.loc). We opened all the needed ports on the firewall (10.10.2.88) to the Exchange Server 2010. On our main domain controller (10.10.2.3) we defined the domain ddd.loc with IP 10.10.2.88 (the firewall), and we also defined MX records pointing to the same IP (10.10.2.88). We did this so that when we send email from my address, [email protected], it goes to the Exchange Server 2010. All the ping tests between the servers are OK, but we are not able to send or receive emails. Between these Exchange servers we cannot send any email from the 2010 side to any address at all (the emails stay pending), and on Exchange 2007 we are getting error #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##
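
    Assuming the goal is for the 2007 organisation to hand everything for ddd.loc to the gateway at 10.10.2.88, the usual ingredients are a send connector scoped to that address space, plus making sure ddd.loc is not listed as an accepted domain on the 2007 side (a sketch from the Exchange Management Shell; the connector and server names are made up):

        Get-AcceptedDomain      # ddd.loc must NOT appear here in the 2007 organisation
        New-SendConnector -Name "To ddd.loc" -AddressSpaces "ddd.loc" `
            -DNSRoutingEnabled $false -SmartHosts 10.10.2.88 -SourceTransportServers EX2007

    The 5.1.1 RecipNotFound response suggests the 2007 side currently treats the target domain as one of its own and fails the lookup locally instead of relaying.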

    Read the article

  • Time difference between servers after disaster recovery

    - by Sandokan
    We are running an old training system based on Windows Server 2003 and XP clients. The solution is rather simple, with four servers, two of them being DCs. Everything is preconfigured, and that goes for the backup scheme as well. The backup software is Symantec Backup Exec 2010. The scheme is a standard grandfather-father-son routine, with full backups running once a week on Sundays; the other six days run differential backups. Now let's say, in a worst-case scenario, a server crashes on a Saturday and we have to restore it from backup. The last backup will then be six days old, and the server will thus come online with a six-day-old configuration. Will this pose a problem for the other servers, or will the recovered server "get in line" eventually?

    Read the article

  • How to Zone Forward to a List of Alternative Name Servers in pfSense 2.0.1

    - by Bob B.
    I'm not sure whether dnsmasq is involved in this process on pfSense or not. Before pfSense, we'd do this in BIND thus:
        zone "firstpartner.com" {
            type forward;
            forwarders { 1.2.3.4; 5.6.7.8; w.x.y.z; };
        };
    I'm intentionally over-explaining this in the interest of specificity: we currently use dnsmasq to handle local queries for our primarydomain.com. Anything that doesn't match a host override entry in pfSense gets passed off to our external name servers, as defined elsewhere in pfSense. There are certain other zones that are not publicly accessible - let's call them firstpartner.com and secondpartner.com - each of which has various subdomains that their own name servers handle. I need a way to define a list of name server IPs for each such domain zone (see the BIND example above). Thanks in advance for any help you can provide.
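
    On the dnsmasq side of pfSense, the equivalent of that forward zone is a per-domain upstream entry - in pfSense 2.0.1 this is the DNS Forwarder's "Domain Overrides" page (one entry per upstream IP), which ends up as lines like these in the generated dnsmasq configuration (IPs copied from the BIND example above):

        server=/firstpartner.com/1.2.3.4
        server=/firstpartner.com/5.6.7.8
        server=/secondpartner.com/w.x.y.z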

    Read the article

  • Different servers for incoming mail

    - by André
    Hi everybody, I'm not sure if what I want is possible, so I'd appreciate any pointers. I have full control over the infrastructure (DNS and servers). Currently I receive mail for domain.tld; the MX record for domain.tld is gw.domain.tld. gw then does some spam and virus checking and forwards the mail to the internal Exchange server. gw is a Proxmox Mail Gateway box (free license). Now, what I want is to distribute mail for different recipients to other mail servers. Basically, I only want [email protected] and [email protected] to go to Exchange as before, while all other recipients go to a different, Linux-based mail server. Any idea how I could achieve this?
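
    Since Proxmox Mail Gateway is built on Postfix, the underlying mechanism for this kind of split delivery is a per-recipient transport map, roughly as sketched below (user1/user2 and the internal host names are stand-ins; on PMG such changes normally have to go through its configuration templates rather than straight into main.cf):

        # /etc/postfix/transport
        [email protected]     smtp:[exchange.internal.example]
        [email protected]     smtp:[exchange.internal.example]
        domain.tld           smtp:[linuxmail.internal.example]

        # then: postmap /etc/postfix/transport
        # and in main.cf: transport_maps = hash:/etc/postfix/transport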

    Read the article

  • Multiple SMTP servers in Thunderbird 3

    - by ldigas
    The situation: two mail accounts, each with its own POP3 and SMTP server, accessed normally - except when using the Vodafone mobile network (you know, those USB or PCMCIA cards...), in which case mail has to be sent through Vodafone's SMTP server. I configured both accounts in Thunderbird with their default servers, and then, under multiple identities, added another identity called Name of User (mobile) for each account. And it works. Except I don't like the fact that when I send mail using the mobile SMTP server, it goes out under Name of User (mobile) - the "mobile" part being the problem. I could of course delete that part, but then when sending mail I'd have no way of differentiating between the two; they'd both look like Name of User email@address_of_user.com. So, what would be the easiest way to solve this? It is not a major problem, but it is annoying.

    Read the article

  • Windows 2008 R2 IIS 6 SMTP Virtual Servers - Limited to 4

    - by webnoob
    In line with this post: http://www.hugheserblog.com/2012/05/22/error-creating-iis-smtp-virtual-servers/ I am running into the same issue: when we try to add more than 4 IIS SMTP virtual servers, we get an error within IIS, "The system cannot find the path specified." That post is almost two years old and my server is up to date with Windows updates, so I assumed it would be fixed already. Does anyone know if I need to do something special (i.e. contact Microsoft) to get a fix for this? The information in the post suggests it should have been included in an update.

    Read the article

  • Auto-scaling EC2 Servers and Updating Code

    - by jstats
    We've come to the point where we need to set up autoscaling for our web servers, and I'm unsure how to go about scaling servers and updating the existing code without building a new AMI and changing the autoscaling configuration to use it each time. I've read a bit about people bundling the new code, uploading it to S3 and having new servers grab the bundle on boot, but that doesn't seem all that pleasant either. Currently the web app's files live in a git repo; when we update the code, we push it to GitHub, SSH into the web server and run a hook to pull down the latest code. So I was thinking another option could be to just run that hook as an hourly or daily cron task. Unfortunately that doesn't cover every type of update (for example, new blog post images and such, which aren't included in the git repo), but it's something. Could anyone provide some advice on what a common solution is, or on why my proposed solution is a bad idea? Thanks all.
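
    One hedged sketch of the "pull on boot" idea, as a user-data script attached to the launch configuration (the repo path, bucket name and web server are made up, and it assumes git and an S3 client are already on the AMI):

        #!/bin/bash
        # runs when an autoscaled instance boots
        cd /var/www/app && git pull origin master
        # assets that never went into git (e.g. blog images)
        s3cmd sync s3://my-app-assets/uploads/ /var/www/app/uploads/
        service nginx reload    # or reload whatever serves the app

    The general pattern is to keep a generic AMI and let boot-time provisioning (user data, cloud-init or a configuration-management tool) fetch the current release, so the AMI only has to change when the base system does.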

    Read the article

  • PostgreSQL 9.0 HA load balancing between servers

    - by Vijay Ramachandran
    Hey folks, I'm bashing my head trying to configure load balancing between two database servers, and I have no clue what mechanism I could use to implement it. I already tried to set up Heartbeat clustering, but it requires a virtual IP, and I can't create a virtual IP or assign my own IP address in Amazon EC2. Is there a way to configure PostgreSQL database servers behind something similar to Amazon's load balancing? If so, please suggest a solution. Thanks in advance.
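
    Without a virtual IP, the usual EC2 workarounds are a TCP proxy in front of the databases or middleware such as pgpool-II (which can also do statement-level balancing). A minimal HAProxy sketch, with made-up private addresses, for spreading read connections across two servers:

        listen postgres
            bind *:5432
            mode tcp
            balance roundrobin
            server pg1 10.0.1.10:5432 check
            server pg2 10.0.1.11:5432 check

    The caveat on PostgreSQL 9.0 is that only reads can be balanced (against a hot-standby replica); writes still have to land on the single master.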

    Read the article

  • Transporting servers - need special rack/case

    - by Nso
    I am responsible for our company's server infrastructure at trade shows. We have two annual shows, one in Las Vegas and one in Amsterdam, so obviously our servers do quite a bit of travelling. Quite often they come home with pieces falling off, and insurance claims and rebuilding take ages and cost a lot of money. Until now I have been using a wooden rack box with steel-reinforced sides and corners, but I am looking for something tougher. Does anyone have experience with shipping servers all around the world without them dying all the time?

    Read the article
