Search Results

Search found 9816 results on 393 pages for 'blade servers'.

Page 108/393 | < Previous Page | 104 105 106 107 108 109 110 111 112 113 114 115  | Next Page >

  • Rsync to take the newest file. And a cron job?

    - by user1704877
    I have a log file on two different servers. The servers are behind a load balancer, so half the traffic goes to one server and half to the other. I need to take the newest log file from one machine and transfer it to the other, so that whenever the log file changes on one server it gets updated on the other. I think I need to use rsync. Do I also need to put it in a cron job?
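
    A sketch of what I have in mind, assuming the file lives at /var/log/app.log on both machines and the peer is reachable as server2 (both names are made up):

        # pull the peer's copy, but only overwrite if the peer's file is newer (-u / --update)
        rsync -au server2:/var/log/app.log /var/log/app.log

        # crontab entry to repeat the pull every 5 minutes
        */5 * * * * rsync -au server2:/var/log/app.log /var/log/app.log

    Run on both servers, each pulling from the other, the newer copy should win on every pass.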

    Read the article

  • TFS 2012 or TFS Azure (Preview)

    - by Fore
    We want to migrate our current TFS 2010 solution, hosted today on one of our own servers, to TFS 2012 hosted somewhere else. We don't want to manage the servers any more and are therefore looking at alternatives. TFS Preview / Azure is one alternative, hosted in the cloud, but I'm not happy with forcing users to use Live ID, and we don't have an AD. My second thought was to create an Azure virtual machine and install and host TFS 2012 there. Are there any downsides to this? Compared to the price of buying a VPS this is cheap, and Azure feels reliable. Do you have any other ideas?

    Read the article

  • Postfix cleanup daemon access control

    - by Flimzy
    Is there any way to control which hosts are permitted to connect to the cleanup daemon over TCP? Our 'master.cf' contains: 2526 inet n - - - 0 cleanup This is necessary because we have a cluster of SMTP servers running custom code, and they can all inject mail to the centralized postfix server via the cleanup daemon. However, we want to allow only our authorized servers to connect to the cleanup daemon. The current configuration allows any host to connect to port 2526. Clearly we can use iptables to restrict access, but is there a way to do this within postfix itself?
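
    For the iptables route we already know about, a minimal sketch, assuming the authorized SMTP cluster sits on 192.0.2.0/24 (placeholder range):

        # allow only the authorized cluster to reach the cleanup listener, drop everyone else
        iptables -A INPUT -p tcp --dport 2526 -s 192.0.2.0/24 -j ACCEPT
        iptables -A INPUT -p tcp --dport 2526 -j DROP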

    Read the article

  • All email directed to 3rd party vendor except for one specific domain. How?

    - by jherlitz
    So we set up a site-to-site VPN tunnel with another company. We then proceeded to set up a DNS zone on each other's DNS servers and entered each other's mail server name and IP, MX record and WWW record. This allowed us to send email to each other's mail servers through the site-to-site VPN. Recently the other company started using MX Logic to scan all outbound and incoming mail, so all their outbound email is now directed to MX Logic. However, we still want email between us to travel across the site-to-site VPN tunnel. How can we arrange for mail to just that one domain to bypass MX Logic? Stumped on both ends and looking for help.
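
    If the relaying MTA on their end were Postfix, I imagine the exception would be a transport map pinning our domain to the VPN path while everything else keeps its default route (all names here are hypothetical):

        # /etc/postfix/main.cf
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport -- force one domain over the tunnel
        partner.example    smtp:[mail.partner.example]

        # rebuild the lookup table after editing
        postmap /etc/postfix/transport

    Presumably their actual relay (Exchange or otherwise) has an equivalent per-domain route/send-connector concept.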

    Read the article

  • Nginx load distribution and multi-domain SSL

    - by Steve Clark
    I'm researching the best methods for two new parts of our infrastructure, hopefully finding a single solution for both. 1) We're currently running a single application server, and we're going to add a second and load balance between the two. 2) We handle a few thousand domains across the application server(s), and we're looking to support SSL. The best method I've come across so far is using nginx for its load distribution, to serve the requests to the application servers, and for its SSL support. If a request uses SSL, nginx accepts the request, terminates SSL and pipes it to Apache (the app servers). That's all good, but I'm yet to figure out how we can let nginx handle multiple domains over SSL. We're potentially looking at UCC SSL certs, so we can support 150 domains on a single certificate, with each cert on a single IP. I'm new to all of this (my experience is with physical load balancers and single domains over SSL), so any advice would be very much appreciated.
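
    Roughly what I have on paper so far; host names, addresses and cert paths are all placeholders:

        # nginx: terminate SSL for all domains on the UCC cert, balance to the app servers
        upstream app_servers {
            server 10.0.0.11;    # apache app server 1
            server 10.0.0.12;    # apache app server 2
        }

        server {
            listen 443 ssl;
            server_name example.com www.example.com;   # up to 150 names covered by one UCC cert
            ssl_certificate     /etc/nginx/ssl/ucc-bundle.crt;
            ssl_certificate_key /etc/nginx/ssl/ucc.key;

            location / {
                proxy_pass http://app_servers;         # SSL ends here; plain HTTP to apache
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto https;
            }
        }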

    Read the article

  • Binding MySQL to the public or the private LAN IP address - which one is faster?

    - by Lamin Barrow
    So we have 2 servers, both running at the same web host. We have bound MySQL to listen on the public IP address of the database server, and the web server connects to it via that public IP. Both servers are also on the same private network. Currently, the DB connect call from our PHP script takes about 3 ms to reach the MySQL database server. My question is: would MySQL traffic from the web server be faster if we bound it to the private LAN address on the database server instead of the public IP? Or is it the same regardless, making no difference?
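
    The change I'm weighing, for reference, assuming the DB server's private address is 10.0.0.5 (made up):

        # /etc/my.cnf on the database server
        [mysqld]
        bind-address = 10.0.0.5    # listen on the private LAN instead of the public IP

    The PHP script would then connect to 10.0.0.5 instead of the public address.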

    Read the article

  • Who deleted my files?

    - by akalter
    I have some Linux servers. Two of them run MySQL, and we have a daily backup on both machines, but the scripts are different. I have looked at both scripts. One of them contains the "delete older files" logic; on the other machine old backups are also disappearing, but not via the script. I am trying to discover what is deleting my files, because I want to use the same script on both machines: the script with the deletion step also copies the files to the other server, and I want that to happen on both servers. Does anyone have an idea what deleted my older backups? Thank you!
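
    For context, the kind of "delete older files" logic I mean is the usual find one-liner (path and retention below are made up):

        # remove dump files older than 7 days
        find /var/backups/mysql -name '*.sql.gz' -mtime +7 -delete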

    Read the article

  • Moving a file using PuTTY

    - by Paul Trotter
    I am a newbie struggling to move a file on a Linux VPS using PuTTY. I can log in with a user in PuTTY, and at this point I can navigate to see the file I wish to move (~/servers/apache-solr-3.6.2/example/webapps/solr.war). By using cd .. a couple of times from the directory I start in when I first log in, I can then navigate to the location I wish to move the file to: usr/local/jakarta/apache-tomcat-5.5.36/webapps/ I know that I need to use cp to copy the file and have tried variations on: cp ~/servers/apache-solr-3.6.2/example/webapps/solr.war usr/local/jakarta/apache-tomcat-5.5.36/webapps However, each time I get 'No such file or directory'. I have tried excluding the ~/ at the start, and I have tried specifying solr.war at the end of the command. Please excuse the newbie question, but I would really appreciate some advice on what I am doing wrong here.
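
    In case it helps, here is the same command with an absolute destination path (leading slash added), which is my best guess at what I'm missing:

        # note the leading / on the destination -- without it the path is relative to the current directory
        cp ~/servers/apache-solr-3.6.2/example/webapps/solr.war /usr/local/jakarta/apache-tomcat-5.5.36/webapps/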

    Read the article

  • Advice for an EC2 Architecture and Deployment Strategy

    - by Mark
    My company is currently migrating several websites and PHP web applications (standard LAMP stack) from three in-house servers to Amazon EC2. Because we had only three servers, we clustered several low-traffic websites with perhaps one high-traffic web application and served them from the same server. The server admin has pretty much copied the previous architecture wholesale onto the EC2 instances, simply upping the instance size to account for the highest-traffic client on each particular instance. This architecture might be okay if it weren't for deployment. Any time one of these sites/apps changes, it means redeploying the entire instance, along with the 30 sites/apps it hosts, instead of just updating one. How can we architect our cloud in a more modular fashion? Should each app get its own appropriately-sized instance? What is the best strategy for deployment in this type of situation?

    Read the article

  • Simple Central Storage for HA mail server

    - by jtnire
    Hi everyone, I will have 2 Postfix servers, one a backup of the other. What is the easiest way to provide central storage to both of these boxes? My infrastructure is very simple: just a lot of Xen hosts, so there is no SAN or anything, although each Xen host does have RAID 1. I don't mind mounting NFS shares on each of those mail servers, as long as the NFS server isn't a single point of failure. Is there such a thing as redundant NFS? Any help would be appreciated. Thanks

    Read the article

  • Sharing files between multiple computers?

    - by Koalatea
    At my school, we have 13 iMacs that we use to make our yearbook. Currently the school provides servers for us, but since we work with so many files (thousands of pictures, most of which are ~3 MB) it slows down far too much. Is there a way to better share files between our computers? We are on a wireless network, and the whole school shares the same servers; there are probably around 400 computers in the school. Is there a hardware fix I can do, something like buying an external drive and hooking only the yearbook computers up to it?

    Read the article

  • Terminal Server 2003 Login Issue - Insufficient system resources exist to complete the requested service

    - by LP
    Afternoon. We have three identical terminal servers running Windows Server 2003 SP2, with about 250 concurrent users logged on across them. We're running roaming profiles from a central server running Active Directory, and the profiles are also cached locally on each terminal server. When one user, and just that one user, tries to log in, she gets this error message (roughly translated from Swedish): "You could not be logged in because your profile could not be registered. Check that you're connected to the network or ask your administrator. Insufficient system resources exist to complete the requested service." Anyone have an idea about this? I'm stumped ... Best Regards LP

    Read the article

  • Linux server failover

    - by Lukasz
    I have two Linux servers (CentOS 6), both identically configured, connected to the same switch, with a direct link between them. I only have one external IP, which is assigned to eth0 on both servers (connected to the internet switch), with the interface shut down on server 2. How can I fail over to server 2 if server 1 dies? As stated, they are linked directly, so they can check each other's availability via ping/TCP/UDP. I toyed with Heartbeat, but the documentation seems to be non-existent, and I'm not sure how to bring up an interface and start some services if the other server dies.
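
    One alternative I've been sketching is keepalived, where a virtual IP floats between the two boxes; addresses and interface names below are placeholders:

        # /etc/keepalived/keepalived.conf on server 1
        # (server 2 is identical except: state BACKUP, priority 50)
        vrrp_instance VI_1 {
            state MASTER
            interface eth1               # the direct link, used for VRRP adverts
            virtual_router_id 51
            priority 100
            advert_int 1
            virtual_ipaddress {
                203.0.113.10 dev eth0    # the single external IP, raised on eth0
            }
        }

    Starting services on failover would still need notify scripts or similar, so treat this as the address half of the problem only.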

    Read the article

  • Keeping a folder of files in sync across 3 machines

    - by Wizzard
    Morning, Got 3 machines that have user content on them, which I need to keep in sync. This is a 3-way sync. Currently I run rsync, but we just don't handle deletes. I have looked at something like Gluster, but that seems a little over the top. Is there any other software out there to do a 3-way sync, or a good network file system? This is for web servers, so we don't want a slow / IO-hungry process. 3 servers... user content could be added to any one and needs to be moved to the other two.
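
    For reference, one direction of the current per-pair sync is essentially this (hosts and paths made up). Adding --delete would propagate removals, but in a 3-way mesh it can also wipe freshly added files if the passes run in the wrong order, which is exactly the part I don't trust:

        # one direction of the current sync; --delete mirrors removals too
        rsync -a --delete /srv/content/ web2:/srv/content/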

    Read the article

  • Windows services not starting automatically?

    - by Jeff Atwood
    We've had some nasty time sync problems on our Windows Server 2008 R2 servers lately. I traced this back to something very simple: the Windows Time Service was not started! The time can't possibly sync via NTP when the time service isn't running... The Windows Time Service was set to start "automatically" in the services control panel, which I double and triple checked. I also checked the event logs and I didn't see any service failures or anything like that. In fact, it looked a heck of a lot like the Windows Time Service never started up automatically after the weekly Windows Updates were installed and the servers were rebooted. (this is set to happen every Saturday at 7 PM.) The minute I started the Time Service, the time synced fine. So, then, the question: why would a service set to start "Automatically" ... not be started automatically? That seems sort of crazy to me.
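
    The workaround I'm testing in the meantime is switching to delayed start, in case it's a race at boot (W32Time is the service's real name):

        rem check current state and start mode
        sc query w32time
        sc qc w32time

        rem switch to delayed automatic start (note: the space after start= is required)
        sc config w32time start= delayed-auto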

    Read the article

  • Migration of physical servers to a virtual solution: what do I have to do?

    - by bibarse
    Hello, I'm new to this forum, so please forgive my cluelessness and my poor English. I have been a trainee at a company for one month, and my mission is to migrate 3 physical servers to a virtualization technology. The company develops e-learning software, so there is a lot of data such as videos, Flash and compressed (zip) files. Here is a quick inventory of the servers: OS: one Debian and two Red Hat; Apache, PHP/MySQL, Sendmail/Dovecot, Webmin with Virtualmin templates to create the web sites dynamically, because there is no sysadmin... The future provider will be responsible for securing, updating and creating the virtual machines (outsourcing), with Red Hat as the OS. So I would like help choosing a virtualization technology (I prefer KVM or Red Hat RHEV; VMware is expensive), evaluating the hardware needs (planning for 4 or 5 years of growth), and drawing up a good plan so nothing gets forgotten. Thank you for your responses.

    Read the article

  • Load Balancing and High Availability for Web Site

    - by nzgirl
    We're developing a database-driven (70%/30% read/write load) website using C#.NET, IIS and MS SQL Server 2008, to be hosted on Windows 2008. Due to contractual reasons, our setup has to be hosted on our own physical/virtual servers instead of a cloud solution at this stage. Could someone outline, or link to, some best practices that would provide both high availability (the priority at the moment) and eventually load balancing for our site? We're probably looking at some sort of 2-server mirrored SQL setup and 2 IIS web servers to start with. Thanks in advance.

    Read the article

  • Reverse Proxy Server SSL?

    - by valveLondon
    Context: We currently have an Apache web server in the DMZ set up as a reverse proxy and load balancer for two machines running Windows Server 2008 (IIS) inside. The Apache server has a genuine SSL certificate and serves up both HTTP and HTTPS; however, the balancer members in the load-balancing section are set to BalancerMember {https://server1} and {https://server2}. The IIS web servers have self-signed certificates in order to respond to the HTTPS requests. My question: do we need to forward requests from Apache (in the DMZ) to the inside using SSL? E.g. can the reverse proxy forward the requests using plain HTTP, and if so, why would I choose to forward them with SSL (how secure is the HTTP link between the DMZ and the inside)? In other words, can I totally disable SSL on my inside web servers?
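
    Concretely, the change under consideration is just swapping the scheme on the balancer members (host names hypothetical):

        # terminate SSL at Apache in the DMZ, forward plain HTTP to the inside
        <Proxy balancer://inside>
            BalancerMember http://server1.internal.example
            BalancerMember http://server2.internal.example
        </Proxy>
        ProxyPass / balancer://inside/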

    Read the article

  • Access Windows VPN DNS from Ubuntu

    - by user46427
    I am using Ubuntu 10.04 to access a Windows VPN. I connect to the VPN from Ubuntu, and when I open a Windows 7 virtual machine (VirtualBox), everything works great ... I can access local network drives, ping local servers, remote into local machines, etc. However, I can do none of this from Ubuntu. With the VPN connected, I cannot even ping anything within the VPN local network. I'm guessing it's a DNS issue that Windows is handling automatically but Ubuntu needs a setting somewhere to tell it to use the DNS servers of the VPN network? Any ideas? I'm a relative novice to Ubuntu, esp. VPN in Ubuntu. [EDIT] Actually, I'm almost positive it is DNS, because if I get the IP address from the Windows VM I can use Terminal Server Client to remote into a machine.

    Read the article

  • What causes Remote Desktop Services Manager to crash in Server 2008 R2?

    - by milkmood
    I have this consistent problem of RDSM crashing in Server 2008 R2. It is either really slow to open, sometimes never opens, or after it's been open and working properly for a bit, stops working, and forces an unload of the snap-in. It's done this since the deployment of this server, new hardware, new instance of S2k8. Domain Administrator login. I am using it to manage 3 Terminal Servers, the other two are S2k3. I've used it without issues on other 2008 servers.

    Read the article

  • Why can't we reach some (but not all) external web services via a VPN connection?

    - by Paul Haldane
    At work (UK university) we use a set of Windows servers running WS2008R2 and RRAS which offer a VPN service to students in our accommodation. We do this to associate network connections with individuals. Before they've connected to the VPN, all they can talk to is the stuff that's needed to set up the VPN and a local web site with documentation on how to connect. Medium term we'll probably replace this, but it's what we're using at the moment. VPN on the 2008 servers allocates the client a private (10.x) address. Access to external sites is through NAT on the campus routers (same as for any other directly connected client on a private address). Non-VPN connections aren't seeing this problem. Older servers run WS 2003 and ISA 2004; that setup works but has become unreliable under load. The big difference there was that we were allocating non-RFC1918 addresses to the clients (so no NAT required). The behaviour we're seeing is that once connected to the VPN, clients can reach local web sites (that is, sites on the campus network) but only some external sites. It seems (but this may be chance) that the sites we can reach are Google ones (including YouTube). We certainly have trouble reaching Microsoft's Office 365 service (which is a pain because that's where mail for most of our students is). One odd bit of behaviour is that clients can fetch (using wget on a Windows 7 client) http://www.oracle.com/ (which gets a 301 redirect) but hang when asked to fetch http://www.oracle.com/index.html (which is what the first URL redirects to). Access works reliably if we configure clients to use our local web proxies (Squid). My gut tells me that this is likely to be something in the chain dropping replies, either based on HTTP inspection or on the IP address in the reply. However, I'm puzzled about why we're seeing this with the VPN clients. The plan for tomorrow (when I'm back in the office) is to set up a web server on an external connection so that we can monitor behaviour at both ends of the conversation (hoping that the problem manifests itself with our test server). Any suggestions for things we should be looking at?
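
    For that test, the plan is simply to capture the server end while a VPN client fetches a page known to hang, along these lines (host name made up):

        # on the external test web server (Linux), capture the full conversation
        tcpdump -i eth0 -s 0 -w web-side.pcap port 80

        # meanwhile, from a VPN client:
        wget http://testbox.example.ac.uk/index.html

    Comparing what the server sent with what the client received should show where replies go missing.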

    Read the article

  • Allow Internet Access with Default Gateway on Windows 7 VPN Server

    - by Hakoda
    I have a Windows 7 box at home (which I'll refer to as Home-VPN) that runs a simple PPTP VPN server. I have a range of 2 IP addresses (192.168.1.10-192.168.1.11) to give out, although the server can only accept one concurrent connection. TCP port 1723 and GRE (IP protocol 47) are correctly forwarded to the server. IPv6 is disabled on both Home-VPN and the client. I set up Home-VPN just like this YouTube video: http://www.youtube.com/watch?v=1s5JxMG06L4 I can connect to it just fine, but I can't access the Internet when connected to Home-VPN; all outside web servers (e.g. google.com, mozilla.org, apple.com) are unreachable. I know I can uncheck "Use default gateway on remote network" on the client side under IPv4 settings, but that will route all my Internet traffic through my local connection rather than through the VPN, defeating the purpose of said VPN. Any ideas on how I can fix this?
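
    One avenue I've seen suggested is making Home-VPN itself route the clients out: enable IP forwarding (a registry flag, shown below) and turn on Internet Connection Sharing on the internet-facing adapter so the 192.168.1.x clients get NATed. I'm not certain this is the blessed combination for incoming PPTP connections, so treat it as a sketch:

        rem enable IP routing on Home-VPN (reboot required afterwards)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 1 /f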

    Read the article

  • Should I install an AV product on my domain controllers?

    - by mhud
    Should I run a server-specific antivirus, regular antivirus, or no antivirus at all on my servers, particularly my Domain Controllers? Here's some background about why I'm asking this question: I've never questioned that antivirus software should be running on all windows machines, period. Lately I've had some obscure Active Directory related issues that I have tracked down to antivirus software running on our domain controllers. The specific issue was that Symantec Endpoint Protection was running on all domain controllers. Occasionally, our Exchange server triggered a false-positive in Symantec's "Network Threat Protection" on each DC in sequence. After exhausting access to all DCs, Exchange began refusing requests, presumably because it could not communicate with any Global Catalog servers or perform any authentication. Outages would last about ten minutes at a time, and would occur once every few days. It took a long time to isolate the problem because it was not easily reproducible and generally investigation was done after the issue resolved itself.

    Read the article

  • How should failover work in an IIS cluster with Application Request Routing?

    - by username
    I have set up several servers with IIS and connected them to a load balancer: a server with IIS Application Request Routing installed. I created a server farm and added the two servers. Then I stopped IIS on the first server and tried to open my web site. It returned an error: 502 - Web server received an invalid response while acting as a gateway or proxy server. But if, instead of stopping IIS, I shut down the first server, I get a response from the next server, which is online. The question is: what should the expected failover behaviour be with ARR? Should it switch me to the next server if IIS is stopped but the server itself is still online?

    Read the article

  • CNAME lookup failed temporarily. (#4.4.3)

    - by klickverbot
    A friend of mine just told me that he can't send mail to accounts on one of my servers via the SMTP server provided by his ISP. The error message in the bounce he gets reads: Hi. This is the qmail-send program at aon.at. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: CNAME lookup failed temporarily. (#4.4.3) I'm not going to try again; this message has been in the queue too long. Any ideas what could be the reason for this? I have double-checked the DNS records for my domain and they seem perfectly fine, and from every other mail server I tested, delivery works flawlessly…
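
    The checks I ran from outside, for reference (domain is a placeholder). I've also read that older qmail's "CNAME lookup" is really a DNS ANY query, and that ANY responses too large for plain 512-byte DNS can trigger exactly this error, so the last query is worth eyeballing for size:

        # the records qmail cares about for routing
        dig +short MX example.org
        dig +short CNAME example.org

        # check how big the ANY response is (look at the MSG SIZE line)
        dig ANY example.org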

    Read the article

< Previous Page | 104 105 106 107 108 109 110 111 112 113 114 115  | Next Page >