Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • Setting up multiple servers for one domain

    - by Joseph Torraca
    So I am starting up a new website and I was wondering how to set up 5 servers to host the site. I have already purchased 5 Apple Xserves; one will be used as a test server and the other 4 will be for the live site. I have read some websites, and they all reference using one server, installing load-balancing software on it, and having that server do the load balancing. I have also read that you could use a rack-mounted hardware load balancer and plug the servers into that; the load balancer would then distribute the load. So I have a few questions about each: 1) How do you set up the software version, with one "master" directing traffic and the other servers as "slaves"? 2) Which of the two options above is more reliable, and better suited for a startup that doesn't have many users per month yet (hopefully)? 3) Is there a theoretical maximum number of servers that can sit behind a software load balancer? Obviously this will change from software to software, but in terms of the balancing server being able to handle it? 4) In your own opinion, what are you using for your sites? Have you had any problems setting up that system, or operating it once it's running? Is there anything you would stay away from if you had to start over? 5) I also purchased an Apple RAID system; if you are familiar with it, is there any way to connect it to multiple Xserves so they all serve the same data? I'm a little confused on this, so thanks for all your help and for being patient with me. Note: take it easy on me; I am learning this as I go along, so I may have used terms incorrectly or explained things that don't really make sense. Sorry. P.S. If you need the specs on the servers to determine which system makes the most sense, I can post them.
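
    To illustrate the "software" option from question 1, here is a toy sketch of a round-robin TCP load balancer: one front machine accepts connections and pipes each one to the next backend in the list. The addresses are hypothetical placeholders; in production, purpose-built software such as HAProxy or nginx plays this role.

        import itertools
        import socket
        import threading

        BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80),
                    ("10.0.0.13", 80), ("10.0.0.14", 80)]
        backend_cycle = itertools.cycle(BACKENDS)

        def pipe(src, dst):
            # Copy bytes one way until either side closes.
            try:
                while (data := src.recv(4096)):
                    dst.sendall(data)
            except OSError:
                pass
            finally:
                src.close()
                dst.close()

        def handle(client):
            # Pick the next backend and shuttle traffic in both directions.
            upstream = socket.create_connection(next(backend_cycle))
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", 8080))
        listener.listen(128)
        while True:
            conn, _ = listener.accept()
            handle(conn)

    The obvious caveat, and the reason hardware balancers exist: this front machine is itself a single point of failure and a bandwidth bottleneck.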


  • Splitting Servers into Two Groups

    - by Matt Hanson
    At our organization, we're looking at implementing an informal internal policy for server maintenance. The idea is to complete maintenance on our entire server pool every two months, doing half of the servers each month. What I'm trying to figure out is some way to split the servers into the two groups. Our naming convention leaves much to be desired (but is getting better), so splitting by name or number doesn't really work. I can easily take a list of all the servers and split it in two, but with new servers being added constantly and old ones retired, that list would be a headache to maintain. I'd like to look at any given server and know whether it should have its maintenance done this month or next. For example, it would be nice to look at the serial number: if it started with an even number, it gets maintenance done on even months, and vice versa. That example won't work, though, as a little over half of the servers are virtual. Any ideas?
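
    A minimal sketch of one answer: hash something every server already has (its hostname), and use the low bit of the digest as the group. Any given server always lands in the same group, new servers need no list maintenance, and it works equally for physical and virtual machines.

        import hashlib
        from datetime import date

        def maintenance_group(server_name):
            # Stable 0/1 group derived from the name; never changes for a
            # given name, and splits a large pool roughly in half.
            return hashlib.sha256(server_name.lower().encode()).digest()[0] % 2

        def due_this_month(server_name, today=None):
            today = today or date.today()
            return maintenance_group(server_name) == today.month % 2

        print(due_this_month("web-prod-07"))   # True one month, False the next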


  • Slow transfer speed between two servers

    - by Linux Guy
    I have two servers, each with a 10 Gbps network card. The bandwidth between the two servers is 10 Gbps; the outbound internet bandwidth is 500 Mbps. Both servers use public IP addresses, both transfer and accept connections on the nginx port, and server B is used for streaming media, much like YouTube streams videos. I checked the transfer speed from server A to server B using the iperf utility:

        # iperf -c 0.0.0.1 -p 8777
        ------------------------------------------------------------
        Client connecting to 0.0.0.1, TCP port 8777
        TCP window size: 85.3 KByte (default)
        ------------------------------------------------------------
        [  3] local 0.0.0.0 port 38895 connected with 0.0.0.1 port 8777
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.8 sec   528 KBytes   399 Kbits/sec

    My current connections on server B:

        # netstat -an | grep ":8777" | awk '/tcp/ {print $6}' | sort -nr | uniq -c
           2072 TIME_WAIT
             28 SYN_RECV
              1 LISTEN
            189 LAST_ACK
            139 FIN_WAIT2
            373 FIN_WAIT1
           3381 ESTABLISHED
             34 CLOSING

    Server A network card information:

        Settings for eth0:
            Supported ports: [ TP ]
            Supported link modes: 100baseT/Full 1000baseT/Full 10000baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: Yes
            Advertised link modes: 10000baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: Yes
            Speed: 10000Mb/s
            Duplex: Full
            Port: Twisted Pair
            PHYAD: 0
            Transceiver: external
            Auto-negotiation: on
            MDI-X: Unknown
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000007 (7)
                                   drv probe link
            Link detected: yes

    Server B network card information:

        Settings for eth2:
            Supported ports: [ FIBRE ]
            Supported link modes: 10000baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: No
            Advertised link modes: 10000baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: No
            Speed: 10000Mb/s
            Duplex: Full
            Port: Direct Attach Copper
            PHYAD: 0
            Transceiver: external
            Auto-negotiation: off
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000007 (7)
                                   drv probe link
            Link detected: yes

    The problem: as you can see from iperf, the transfer speed from server A to server B is slow. When I restart the network service, the connection is fine; after about 2 minutes it gets slow again. How can I troubleshoot the slow-speed issue and fix it on server B? Note: if there are any other commands I should run on the servers for more information that might help resolve the problem, let me know in the comments.
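
    When iperf numbers look implausible, it can help to cross-check with a second tool. A minimal sketch of a single-stream TCP throughput test in Python, assuming port 8777 is reachable as in the question; run the server half on B, then the client half on A:

        import socket
        import sys
        import time

        PORT, CHUNK, TOTAL = 8777, 64 * 1024, 256 * 1024 * 1024  # 256 MiB test

        def server():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(("0.0.0.0", PORT))
            s.listen(1)
            conn, addr = s.accept()
            received = 0
            while (data := conn.recv(CHUNK)):
                received += len(data)
            print("received %.1f MB from %s" % (received / 1e6, addr[0]))

        def client(host):
            c = socket.create_connection((host, PORT))
            start, sent, buf = time.time(), 0, b"\0" * CHUNK
            while sent < TOTAL:
                c.sendall(buf)
                sent += len(buf)
            c.close()
            print("%.2f Gbit/s" % (sent * 8 / (time.time() - start) / 1e9))

        if __name__ == "__main__":
            # usage: python tput.py server   |   python tput.py client <host>
            client(sys.argv[2]) if sys.argv[1] == "client" else server()

    If this degrades after two minutes exactly like iperf does, the problem is below the application layer (NIC offload settings, rx/tx errors, or connection-table pressure from all those TIME_WAIT sockets) rather than in nginx.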


  • Load-balance UDP traffic with session affinity and a way to take servers in & out of rotation

    - by William
    What is the best way to load-balance UDP traffic among a whole bunch of servers while keeping session affinity based on the user's IP? I also need to be able to take servers in and out of rotation for new clients, so that when a client joins for the first time it gets put on a server from a list of available servers, while clients already connected stay connected to their specific server. I have written the software to maintain the list, but I can't seem to find anything that performs this functionality. For context, this is to facilitate game tournaments for Minecraft: Pocket Edition, which uses UDP; I cannot change the protocol. And because tournaments open and close, I need to be able to place players on their proper servers. Performance is also a priority; I have a program that does this, but it is very bloated and slow. Thanks for any help! William
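
    A minimal sketch of the idea, with hypothetical addresses: a UDP relay that pins each new client to whichever backend is currently open for new players, and keeps the pin for the life of the session. Taking a server out of rotation for new clients is then just editing the open list; existing pins are untouched.

        import socket
        import threading

        LISTEN = ("0.0.0.0", 19132)                # Minecraft PE default port
        OPEN_BACKENDS = [("10.0.0.21", 19132),     # backends accepting new
                         ("10.0.0.22", 19132)]     # clients; edit to rotate
        sessions = {}                              # client addr -> upstream socket
        counter = 0

        front = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        front.bind(LISTEN)

        def reply_loop(client, upstream):
            # Forward backend replies back to the pinned client.
            while True:
                data = upstream.recv(65535)
                front.sendto(data, client)

        while True:
            data, client = front.recvfrom(65535)
            if client not in sessions:
                backend = OPEN_BACKENDS[counter % len(OPEN_BACKENDS)]
                counter += 1
                up = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                up.connect(backend)                # pin all traffic here
                sessions[client] = up
                threading.Thread(target=reply_loop, args=(client, up),
                                 daemon=True).start()
            sessions[client].send(data)

    Off-the-shelf options that do roughly this with IP-based persistence include LVS/ipvsadm and nginx's UDP stream module, though neither knows about "open for new clients" out of the box.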


  • Active Directory servers synchronization

    - by Mit Naik
    I have 3 AD servers running Windows Server 2008 R2 in 3 different places: the main server is at a datacenter, and the other 2 are in our local offices at 2 different locations. I want to synchronize all 3 servers, with the datacenter server as the central server and the other 2 syncing with it. Also, we want changes made on any one AD server to propagate automatically to all the servers; for example, if I change a user's password on our local server, it should be updated on our main AD server and the other branch server too. Please provide the steps or a tutorial for doing this. One more question: I have already created the main datacenter AD as domain.local and the other domains as xyz.local and abc.local. How can I replicate the additional AD domains with the main datacenter DC? Also, do we require a VPN connection, or is there another way to replicate the servers without one?
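
    Once sites and site links are configured (AD replication is multi-master by default, so password changes already propagate), a quick scripted health check is possible with Microsoft's repadmin tool, which ships with the AD DS tools on a DC. A hedged sketch:

        import subprocess

        # Summarize replication status across all DCs and flag failures.
        summary = subprocess.run(["repadmin", "/replsummary"],
                                 capture_output=True, text=True)
        print(summary.stdout)
        if "fail" in summary.stdout.lower():
            print("WARNING: at least one replication partner reports failures")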


  • How to maintain one file across many production servers (Windows and Linux)

    - by Brien
    My organization wants to centrally manage an Oracle tnsnames file for all of its production servers. When that file changes, they want to be able to push the changes out to all servers that use it with minimal effort. Approaches that have been considered: a centralized file server (drawback: if the file server or the network connection to it goes down, the servers have no access to the critical file); a Subversion client on each server (drawback: using a source control tool in production adds complexity); an individual copy of the file on each server (drawback: changing the file contents means making changes on many different servers). Update: Can I use DFS to do this?
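
    A minimal sketch of the third approach with the "many changes" drawback automated away: keep the master copy in one place, and push it over ssh only when its checksum changes. The file names, server list, and destination path are hypothetical.

        import hashlib
        import pathlib
        import subprocess

        MASTER = pathlib.Path("tnsnames.ora")
        STATE = pathlib.Path(".last_push.sha256")
        SERVERS = ["db01", "db02", "db03"]          # hypothetical hostnames
        DEST = "/opt/oracle/network/admin/"         # hypothetical path

        digest = hashlib.sha256(MASTER.read_bytes()).hexdigest()
        if STATE.exists() and STATE.read_text() == digest:
            raise SystemExit("tnsnames.ora unchanged; nothing to push")

        for host in SERVERS:
            # Assumes ssh key auth; Windows targets would need a share or WinRM.
            subprocess.run(["scp", str(MASTER), "%s:%s" % (host, DEST)],
                           check=True)
            print("pushed to", host)
        STATE.write_text(digest)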


  • Can't get basic web servers working on EC2 RedHat

    - by Yarin
    I'm trying to get some basic Python web servers (Flask, Tornado) running on EC2. On the Amazon-flavored Linux AMI (Amazon Linux AMI 2013.03.1) they work with no problem, but the same web servers installed on the RedHat quick-launch AMI (Red Hat Enterprise Linux 6.4) don't work at all; all I get is connection-failure errors when I try to browse to them. Both servers share the same security group, with the relevant ports (5000, 5010) open, so I'm trying to understand why the RedHat instance is not working.
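
    Two usual suspects when the security group is already open: RHEL 6.4 ships with iptables rules that block inbound ports by default (unlike the Amazon Linux AMI), and dev servers often bind only to 127.0.0.1. A minimal Flask check for the second one; binding to 0.0.0.0 is what makes the port reachable from outside:

        from flask import Flask

        app = Flask(__name__)

        @app.route("/")
        def ping():
            return "ok"

        if __name__ == "__main__":
            # The default bind address is 127.0.0.1, which refuses
            # connections from anywhere but the instance itself.
            app.run(host="0.0.0.0", port=5000)

    If that is already in place, compare the output of "iptables -L -n" on the two instances.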


  • How to have your DNS servers forward queries for internet names

    - by Xavier Hutchinson
    I have 2 domain controllers / DNS servers on Windows 2012; their IPs are 10.0.1.10 and 10.0.1.11. Another server acts as the DHCP server for clients and sets their primary and secondary DNS to the IP addresses of those domain controllers / DNS servers. However, clients cannot resolve internet domain names, presumably because those zones are not hosted on the DNS servers. So my question is: what do I have to do in my setup to resolve external domains? Thank you! Xavier.
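
    The usual fix is to configure forwarders (or enable root hints) on both DNS servers, under the server properties in DNS Manager. To confirm which half is broken first, here is a quick check using the dnspython package (pip install dnspython) that queries each DC directly for an external name:

        import dns.resolver

        for server in ("10.0.1.10", "10.0.1.11"):
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            r.lifetime = 3
            try:
                answer = r.resolve("www.google.com", "A")
                print(server, "->", answer[0])
            except Exception as exc:
                print(server, "cannot resolve external names:", exc)

    If both time out, add forwarders (your ISP's resolvers or a public one such as 8.8.8.8); if both answer, the problem is on the client side instead.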


  • Hot swapping for Linux web/database servers

    - by Art
    Is there a way to set up the following under Linux? There are two web servers, main and backup, and two database servers (Postgres), main and backup. The web servers are in sync with each other, i.e. configuration, content, and applications are the same. The backup database is continuously synced with the main database. If either of the main servers goes down, it is replaced with the backup one on the fly, and when the main database server comes back up, all the data from the backup server is uploaded to it. Essentially, I need the hot swapping to work automatically with no or minimal user intervention, if possible. The recovery procedure is preferably automatic but can include some manual steps.
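
    The established pieces here are heartbeat/keepalived for a floating IP and Postgres replication for the database. For a feel of the detection half, here is a minimal sketch that watches the main server and triggers a promote script on repeated failure; the URL and script path are hypothetical, and keepalived does this properly in production:

        import subprocess
        import time
        import urllib.request

        CHECK_URL = "http://10.0.0.10/healthz"      # hypothetical health endpoint
        FAILS_BEFORE_SWAP = 3

        failures = 0
        while True:
            try:
                urllib.request.urlopen(CHECK_URL, timeout=5)
                failures = 0                         # healthy again
            except OSError:
                failures += 1
                if failures >= FAILS_BEFORE_SWAP:
                    # Hypothetical script: moves the floating IP and
                    # promotes the backup database.
                    subprocess.run(["/usr/local/bin/promote-backup.sh"],
                                   check=True)
                    break
            time.sleep(10)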


  • Time Drift on VM servers, need a reliable solution

    - by zeroasterisk
    We have some Windows Server 2008 VMware instances on multiple (hosted) physical servers, and an application which requires the time to be synced across the server instances. VMware famously has problems with this, and we have never really gotten it working any better; we have set up the servers to poll for an NTP update every minute, which mitigates the problem in a fairly crude way. Except that every once in a while, the update fails (because there's already too much drift), and then Windows never does an NTP update afterwards, which eventually lets the servers drift far enough apart that our application breaks, and we notice. We are thinking about changing hosts to Xen servers on approximately the same setup, and I anticipate similar problems. Can anyone tell me: 1) whether Xen has the same guest time-drift issues VMware does; 2) what the best Windows Server settings are for syncing with an external NTP server, and how frequently you recommend syncing (assuming every minute); 3) whether you recommend running our own NTP server, even if it has to be on a virtual instance (assuming not); 4) whether there is any way to tell Windows to sync with the NTP server no matter what the time difference is; 5) any other suggestions for keeping Windows server time in sync. I have become familiar with [ http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1318 ] and it has helped, but it has not been totally effective (see above). Thanks much!
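
    On question 4: Windows refuses to step large offsets unless the W32Time phase-correction limits allow it. A hedged sketch of the standard knobs (run elevated; 0xFFFFFFFF means "accept any offset"), followed by a service restart and resync:

        import subprocess

        KEY = r"HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config"

        # Allow time corrections of any size in either direction.
        for value in ("MaxPosPhaseCorrection", "MaxNegPhaseCorrection"):
            subprocess.run(["reg", "add", KEY, "/v", value,
                            "/t", "REG_DWORD", "/d", "4294967295", "/f"],
                           check=True)

        # Pick up the new limits and resync against the configured peers.
        subprocess.run(["net", "stop", "w32time"], check=True)
        subprocess.run(["net", "start", "w32time"], check=True)
        subprocess.run(["w32tm", "/resync"], check=True)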


  • Best way to replicate servers

    - by Matthew
    I currently have two servers, both with Linux software RAID1 configurations. They use heartbeat and DRBD to create a shared DRBD device that hosts an exported NFS directory. The servers run Ubuntu Server with an LXDE GUI and some IP camera software. These servers are going to be placed on fishing vessels to act as redundant storage for IP cameras. My boss wants me to figure out the most efficient way to create these servers; we might be looking at pushing out several systems a week. Each configuration will be almost identical besides IP addressing. What would be the best method to automate the configuration process? We are trying to cut down on the labor costs of setting these up. Imaging and preseeding are both on my mind right now.
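
    Preseeding pairs well with a small script that stamps out the one thing that differs per vessel. A minimal sketch, with hypothetical addresses, that renders a classic Debian/Ubuntu /etc/network/interfaces file for each vessel from a template:

        import textwrap
        from string import Template

        TEMPLATE = Template(textwrap.dedent("""\
            auto eth0
            iface eth0 inet static
                address $address
                netmask 255.255.255.0
                gateway $gateway
            """))

        vessels = [
            {"name": "vessel-01", "address": "192.168.1.10",
             "gateway": "192.168.1.1"},
            {"name": "vessel-02", "address": "192.168.2.10",
             "gateway": "192.168.2.1"},
        ]

        for v in vessels:
            # One rendered config per vessel, ready to copy onto the image.
            with open("interfaces.%s" % v["name"], "w") as f:
                f.write(TEMPLATE.substitute(v))
            print("wrote config for", v["name"])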


  • Why don't DNS root servers answer?

    - by JustTrying
    If I try to query a root server with dig, I never receive an answer. For example, the output of dig @b.root-servers.net www.ubuntu.com is:

        ; <<>> DiG 9.8.1-P1 <<>> @b.root-servers.net www.ubuntu.com
        ; (1 server found)
        ;; global options: +cmd
        ;; connection timed out; no servers could be reached

    But if I query other servers (my ISP's, or 8.8.8.8), they answer correctly. Why?
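
    Worth knowing what a working root query even looks like: root servers are authoritative only, so they never answer www.ubuntu.com directly; they return a referral toward the .com servers (dig +trace shows the whole chain). A timeout, as here, usually means outbound UDP port 53 is blocked or intercepted. A sketch using the dnspython package (pip install dnspython) that sends a proper non-recursive query and prints the referral; the address is b.root-servers.net's current IPv4:

        import dns.flags
        import dns.message
        import dns.query

        query = dns.message.make_query("www.ubuntu.com.", "A")
        query.flags &= ~dns.flags.RD          # roots ignore "recursion desired"
        response = dns.query.udp(query, "199.9.14.201", timeout=5)
        for rrset in response.authority:      # expect the com. NS referral
            print(rrset)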


  • How a DNS server resolves when web servers are geographically distributed

    - by Supratik
    A domain abc.com has two web servers located in two different places, one in India and another in Malaysia. If requests are handled by the server closest to where the request originates, how does DNS resolution work for such geographically distributed servers when my client system is configured to use a local DNS server in India or a DNS server in Malaysia? Warm regards, Supratik
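
    The short answer: the authoritative DNS server for abc.com looks at the source IP of the resolver asking (your ISP's DNS server in India or Malaysia), maps it to a region, and hands back the nearer web server's address. A toy illustration of that decision with hypothetical prefixes; real GeoDNS (BIND views, PowerDNS geoip, Route 53) uses full geolocation databases:

        import ipaddress

        RESOLVER_REGIONS = {
            ipaddress.ip_network("198.51.100.0/24"): "in",  # hypothetical Indian ISP
            ipaddress.ip_network("203.0.113.0/24"): "my",   # hypothetical Malaysian ISP
        }
        ANSWERS = {"in": "192.0.2.10", "my": "192.0.2.20"}  # the two web servers

        def resolve(resolver_ip):
            addr = ipaddress.ip_address(resolver_ip)
            for net, region in RESOLVER_REGIONS.items():
                if addr in net:
                    return ANSWERS[region]
            return ANSWERS["in"]            # arbitrary fallback

        print(resolve("198.51.100.7"))      # -> the Indian server's address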


  • Putting servers inside a refrigerator? [closed]

    - by Muhammad Jamal Shaikh
    This may be a silly question, but I decided to go for it. I shall be buying 3 servers in the next few weeks to set up a small web farm at my home. I am told by different people who work in server rooms that I should keep my servers in an air-conditioned room, which is really expensive, because the temperature here in South Asia ranges from 10 to 50 degrees C. Here comes the funny part: I have an extra fridge in my home, so why shouldn't I put the servers inside that fridge? Benefits: I don't have to buy the air conditioner; I don't have to buy a rack mount for the servers; the electricity consumed by the fridge is much, much less than an AC. Give me your suggestions!



  • Mesh-networked servers via VPN

    - by microspino
    I have a design idea and I would like some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them, and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do server maintenance. They also told me that all of them have little servers and maintenance guys available in their offices. Although everything seems suitable for a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others in a sort of mesh network, without a single point of failure, through VPNs. If one of the servers went down, the customers could still connect to their databases from one of the other mesh-networked servers instead of from the local one that is down. During normal operations, all the servers sync the DB with the others through the VPNs. I can accept a half-day window of unsynced data; in other words, since I don't need real-time synchronization, the servers don't always have to be in sync. I can migrate my data over to non-SQL technologies like CouchDB or Redis or whatever you suggest. As you can see, I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy, and management to my customers' offices as much as I can. Is that a crazy idea? Do you know if something similar exists? Which technology would you suggest?
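
    CouchDB is a natural fit here because multi-master replication is built in and tolerates long offline periods. A minimal sketch against its real /_replicate endpoint: each office posts one continuous replication per peer, which yields a full mesh with no central point. Hosts, database name, and (absent) credentials are hypothetical; for replications that survive a server restart, the persistent _replicator database is the better home.

        import itertools
        import json
        import urllib.request

        OFFICES = ["http://office-a:5984", "http://office-b:5984",
                   "http://office-c:5984"]
        DB = "realestate"

        for src, dst in itertools.permutations(OFFICES, 2):
            body = json.dumps({"source": "%s/%s" % (src, DB),
                               "target": "%s/%s" % (dst, DB),
                               "continuous": True}).encode()
            req = urllib.request.Request("%s/_replicate" % src, data=body,
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)
            print("replicating", src, "->", dst)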


  • Connecting two servers together - How to?

    - by Chris
    Is it possible to connect two servers running, for example, Windows Server 2003/2008 together so that they are seen on the network as one server combining the HDDs from each? Example: \\Server1 has 1 x 1 TB HDD and \\Server2 has 1 x 1 TB HDD. I would like users of the network to be able to store their documents on both servers, for load balancing; so basically a RAID between the two servers? Any help would be appreciated.


  • How do you back up 40+ CentOS 5.5 servers?

    - by John Little
    We are embarrassed to ask this question; apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs, etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL, etc. Ideally we don't want to have to shut them down to back up (but we could). We assume that standard UNIX commands like tar, cpio, rsync, and scp are of no use, as they only copy files, not partitions, attributes, groups, etc., i.e. they do not produce a result which can simply be re-imaged onto a new HD to bring the server back from the dead. We have a large SAN, a spare Windows box, and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses (the company has negative amounts of money). We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). Googling returns some applications to do this: Clonezilla looks difficult to install and invasive; Mondo only seems to support backup if you are local to the machine; Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning, and configuring a set of backup software? Any ideas? This must be a pretty standard problem for which googling doesn't give an obvious answer.
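
    For what it's worth, the assumption about file-level tools is only half true: rsync can carry permissions, owners, xattrs/ACLs, and hard links, and for mostly-empty servers "reinstall a minimal CentOS, then restore the rsync copy" is a workable recovery path that needs no downtime and can be driven remotely. A minimal sketch, run from the backup host over ssh, with hypothetical hostnames (the -A/-X flags assume rsync 3.x on both ends; CentOS 5's stock rsync is older):

        import subprocess

        SERVERS = ["web01", "web02", "db01"]           # hypothetical hostnames
        EXCLUDES = ["/proc", "/sys", "/dev", "/tmp"]   # pseudo/volatile filesystems

        for host in SERVERS:
            cmd = ["rsync", "-aAXH", "--numeric-ids", "--delete"]
            cmd += ["--exclude=%s" % path for path in EXCLUDES]
            cmd += ["root@%s:/" % host, "/backup/%s/" % host]
            subprocess.run(cmd, check=True)
            print("backed up", host)

    True block-level images of a live system are harder; that is where tools like Clonezilla (offline) or LVM snapshots plus dd come in.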


  • Configure Nagios to alert only when no MX servers are available

    - by Aseques
    In my company there are two redundant MX servers. I would like to tell Nagios to wake me in the night ONLY if both servers are down; the default behavior is to alert whenever one of the MX servers is down. I would like to set a time period, e.g. 23:00 to 06:00, when Nagios only alerts me by SMS if both servers are down. I am using nagios3, but I couldn't find something like this in the docs. Solution: I used this check_command in a service called MXservice: check_command check_service_cluster!"MXservice"!2!1!$SERVICESTATEID:mx1:SMTP$,$SERVICESTATEID:mx2:SMTP$ Thanks for all your help.


  • Availability of big files on multiple servers

    - by Imises
    I have to handle many (1,000 - 30,000) big files, ranging from 200 MB up to 2 GB. The demand for these files is variable (0 - 300 downloads per file). This is why a single file must be saved on 2 or more servers. My servers are placed in different datacenters (in France), with different-size HDDs (750 GB to 4 TB). Currently I share the files using PHP and ncftpget/ncftpput, but it's very slow. I need a solution for balancing these files across 7+ servers.
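
    A minimal sketch of a placement rule that stays stable as servers come and go: hash each filename onto a consistent-hash ring and store the file on the first two distinct servers clockwise of its hash. Adding or removing a server then only moves the files adjacent to it on the ring; the server names are hypothetical.

        import bisect
        import hashlib

        SERVERS = ["dc1-a", "dc1-b", "dc2-a", "dc2-b",
                   "dc3-a", "dc3-b", "dc3-c"]

        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        # 64 virtual nodes per server smooth out the distribution; giving
        # servers with bigger disks more virtual nodes weights them higher.
        ring = sorted((_hash("%s#%d" % (s, i)), s)
                      for s in SERVERS for i in range(64))

        def replicas(filename, count=2):
            # Walk clockwise from the file's position until `count`
            # distinct servers are collected.
            idx = bisect.bisect(ring, (_hash(filename), ""))
            chosen = []
            while len(chosen) < count:
                server = ring[idx % len(ring)][1]
                if server not in chosen:
                    chosen.append(server)
                idx += 1
            return chosen

        print(replicas("house-tour-042.mp4"))   # e.g. ['dc2-b', 'dc1-a']

    Off-the-shelf alternatives in this space include MogileFS and GlusterFS.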


  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second web server because I worry about hardware failure of my old server. Now that I have that second server, I wish to do a little more than just have one server on standby, replicating all day; as long as it's there, I might as well get some advantage out of it! I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5), and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load. The setup I am testing now involves heartbeat for IP failover and two identical servers. Of the two servers, only one has a public IP address; if one server crashes, the other takes over the public IP. On an incoming request, nginx forwards the request to php-fpm on either server A or server B (50/50 if both servers are alive). Once the request has been sent to php-fpm, both servers look at localhost for the MySQL server; I use master-master MySQL replication for this. The file system is synced with lsyncd. This works pretty well, but I read that it's discouraged by the (MySQL) community. Another option I could think of is to use one server as a MySQL master and one server as a web/PHP server. The servers would still sync their file systems and still run the same duplicate software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive, I could simply prefer nginx to listen on IP A and MySQL on IP B. If one server is down, the other could take over its tasks simply by IP switching. But I'm completely new at this, so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this, please let me know! PS: virtualisation, hosting in different locations, and active/passive setups are not the solutions I'm looking for. I find virtual servers either too slow or too expensive. I already have a passive failover in another location, but in case of a crash the site was still unreachable for too long due to DNS caching.


  • Mirrored servers in data centers nationwide -- how? [closed]

    - by Sysadmin Evstar
    Possible duplicate: Mirrored servers in data centers nationwide -- how? I flunked my IT interview by getting this question wrong. I thought that in the various metropolitan areas, an "http://google.com" request goes to the ISP's DNS server, which somehow returns an IP address for one of several geographically nearby HTTP servers, and that something then rolls over internally to the next available local Google server. But I could not explain where the table of available local Google servers is actually cached, or the details of the IP-address rollover, or how they could manually take some server out of the rotation from anywhere. So, what should I be reading now so I can ace this question next time? Also, what daemons run on these machines 24/7 to keep all those mirrored database disks synchronized?


  • Increase Performance and Agility with Oracle’s New Data Center Fabric Solutions

    - by Cinzia Mascanzoni
    Join this webcast on Tuesday, December 11, 2012, at 10 a.m. PT / 1 p.m. ET to hear from S.K. Vinod, Senior Director of Product Management, Oracle Virtual Networking products. He'll show you how the fast, simple, and agile architecture of Oracle Fabric Interconnect provides dynamic network and storage connectivity to thousands of servers, and how to use Oracle Software Defined Network (SDN) to connect any resource on the data center fabric quickly, without incurring downtime or requiring network reconfiguration. With Oracle Virtual Networking products, you can: streamline your data center connectivity; reduce complexity by 70%; cut infrastructure expenses by up to 50%; increase application performance up to 30x; provision new services and reconfigure resources in minutes; and simplify deployments with wire-once infrastructure. During the webcast, you'll also have the opportunity to chat directly with Oracle experts. Visit OPN's Server & Storage Systems Knowledge Zones anytime to learn about partner engagement, training, resources, and replays of other webcasts to jump-start your business. You can also email us your questions. Unable to attend live? Register anyway; we'll send you the on-demand link to the webcast!


  • How can I make deploying my application to servers feasible?

    - by aklin81
    I am a Java web application developer, and I have an idea for a web application project that I am working on. I personally believe the app has the potential to become a popular website. Currently I am working on it as a developer with two others on the project. Development costs have been almost nil up to now, since we are doing in-house development with open source technologies, but costs will appear once we go live and have to host our application on servers. Right now I see this as the major expense. Are there any ways we can smartly deal with this hurdle? We want to minimize costs as much as possible, or even better, reduce them to nothing, perhaps through a partnership agreement with the hosting provider. Your opinions are highly solicited! Please enlighten us with your experiences and knowledge. Thanks so much for your time!

