Search Results

Search found 12072 results on 483 pages for 'x86 servers'.

Page 142/483

  • What is the "real" difference between a NAS and NFS? Or, why pick a NAS device over "mere" NFS?

    - by warren
    From an end-user perspective, what is the difference between a NAS device and using NFS exports from a file server? They seem to accomplish the same end result. The difference between a SAN and other file storage is related (in my experience) to how they are connected to the server infrastructure. However, the difference between a NAS, connecting over a standard ethernet port, and NFS (sharing storage off specific servers, also over the network), seems more nebulous. Is there a good reason to pick a NAS filer over just running NFS on servers?

    Read the article

  • How much does an iptables router slow down a connection?

    - by RayQuang
    Hi, I would like to know whether introducing a new gateway into my network will slow things down. The question may sound unclear, so here is an illustration. Before installing the gateway server: Main Router <=> switches <=> servers. After installing the gateway server: Main Router <=> iptables router <=> switches <=> servers. My question is: how much will this delay incoming/outgoing requests and file transfers? Thanks, RayQuang
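
    For context, the added latency of a Linux gateway is typically well under a millisecond as long as the ruleset is short and stateful; what matters most is the number of rules on the hot path. A minimal forwarding ruleset for such a gateway might look like the sketch below (the interface names eth0/eth1 are assumptions):

        # enable routing between the two interfaces
        sysctl -w net.ipv4.ip_forward=1
        # fast path: packets of established flows match the first rule
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # new connections from the LAN side (eth1) out toward the router (eth0)
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -P FORWARD DROP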

    Read the article

  • Will this SPF record restrict delivery of email for the original domain?

    - by user199421
    As part of the product we offer, we send emails on behalf of our clients. Because the emails don't come from an IP associated with the client, they are sometimes flagged as spam. We advised some of our clients to add an SPF record approving us to send emails on their behalf. We saw an immediate improvement in deliverability rates after making the change; however, one of our clients was notified by his hosting provider that the SPF record we suggested adding would "slightly restrict" all emails that don't come from our servers (including our client's own servers). The record we use is this: v=spf1 a mx include:ourdomain.com ~all. So my question is whether the warning we received about this is correct, and if so, why, and what can be done to solve it (allowing email to be sent both from the original domain and by us).
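
    For what it's worth, that record already authorizes the client's own servers: the a and mx mechanisms pass any host in the domain's A and MX records, the include adds the sender's servers, and ~all only soft-fails everything else. If the client also sends from a host not covered by a or mx, an ip4 mechanism can be added; a sketch of the resulting TXT record (the domain and address below are hypothetical):

        ; zone file entry -- clientdomain.com and 203.0.113.10 are placeholders
        clientdomain.com.  IN  TXT  "v=spf1 a mx ip4:203.0.113.10 include:ourdomain.com ~all"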

    Read the article

  • Can't connect to server from certain machines

    - by Joel Coel
    On a small college campus we have a VLAN set up for the computer labs. These machines get assigned IP addresses in the 192.168.7.xxx range. In the server room, all of the servers are on the default VLAN and assigned an IP address in the 10.1.1.xxx range. For the most part this works, but the lab machines are unable to connect to one of the servers. They can't even ping it. They can talk to other servers on the same switch as this server just fine. At first I thought it might be a VLAN issue, but I changed the server's port VLAN to match other known-working ports with no effect. Any ideas?
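
    A few hedged diagnostic steps that narrow this down (the addresses and interface name below are placeholders): check whether the lab machines' packets ever reach the server, and whether the server has a host firewall or a bad netmask/gateway that prevents replies to the other subnet:

        # from a lab machine: does the path die at the router or at the server?
        traceroute 10.1.1.5
        # on the server: do the lab machines' packets arrive at all?
        tcpdump -n -i eth0 net 192.168.7.0/24
        # on the server: host firewall rules, and the route back to the lab subnet
        iptables -L -n
        ip route get 192.168.7.20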

    Read the article

  • Safe to use high port numbers? (re: obscuring web services)

    - by sofakng
    I have a small home network and I'm trying to balance the need for security versus convenience. The safest way to secure internal web servers is to only connect using VPNs, but this seems overkill to protect a DVR's remote web interface (for example). As a compromise, would it be better to use very high port numbers? (e.g. five digits, up to 65535) I've read that port scanners typically only scan the first 10,000 ports, so using very high port numbers is a bit more secure. Is this true? Are there better ways to protect web servers? (i.e. web GUIs for applications)
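
    As a data point on the scanning claim: nmap's default covers roughly the 1,000 most common ports, but scanning the full range is a single flag away, so a high port is a speed bump rather than protection. A quick illustration (the hostname is a placeholder):

        nmap target.example.com       # default: ~1,000 most common TCP ports
        nmap -p- target.example.com   # all 65,535 TCP ports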

    Read the article

  • How to point a subdomain to a nameserver?

    - by vonconrad
    I've got an old crusty WHM/cPanel server which I'm trying to get rid of. I've got a new setup on shared hosting which is much cheaper in the long run. The problem is that there are a bunch of websites on the server whose domains I don't have access to. They're currently pointing to name servers of my domain (ns.mydomain.com), but the new provider has their own name servers (ns.provider.com) which I have to use instead. My initial idea was to set up a CNAME to point my name server to my provider's: ns.mydomain.com CNAME ns.provider.com, but I read in this question that this would be a bad idea. The accepted answer suggests using an A record instead, and I want to make sure how this would work. Assuming ns.provider.com has an IP address of 123.123.123.123, is it just a matter of doing ns.mydomain.com A 123.123.123.123? Is there any way the provider could block those requests as the name server domain technically doesn't belong to them?
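
    Assuming the accepted answer's approach, it is indeed just an A record, but the address should be confirmed first and re-checked if the provider ever renumbers that nameserver (123.123.123.123 is the question's own placeholder):

        # confirm the provider nameserver's current address
        dig +short ns.provider.com

        ; then, in the mydomain.com zone:
        ns.mydomain.com.   IN   A   123.123.123.123

    As for blocking: the provider generally has no practical way to refuse this. Resolvers simply send queries to that IP address, and a nameserver answers whoever asks about the zones it serves.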

    Read the article

  • Rsync to take the newest file. And a cron job?

    - by user1704877
    I have a log file on two different servers. The servers are under a load balancer so half the traffic goes to one server, and half the traffic goes to the other server. I need to take the newest log file from one machine and transfer that log file to the other machine. So if one log file is changed on one server, it gets updated on the other server. I think I need to use rsync. And do I also need to put it in a cron job?
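
    A minimal sketch, assuming SSH access between the two servers and a shared log path (all names below are placeholders): rsync's -u/--update flag skips files that are newer on the receiving side, so running the same job on both servers keeps the most recent copy of each file everywhere.

        # sync, skipping files the other side has newer versions of
        rsync -au /var/log/myapp/ otherserver:/var/log/myapp/

        # hypothetical crontab entry: run the sync every 5 minutes
        */5 * * * * rsync -au /var/log/myapp/ otherserver:/var/log/myapp/

    One caveat: this is whole-file, last-writer-wins replication. If both servers append to the same file between runs, one side's lines are lost; logging to per-server files (or merging) avoids that.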

    Read the article

  • Postfix cleanup daemon access control

    - by Flimzy
    Is there any way to control which hosts are permitted to connect to the cleanup daemon over TCP? Our 'master.cf' contains: 2526 inet n - - - 0 cleanup This is necessary because we have a cluster of SMTP servers running custom code, and they can all inject mail to the centralized postfix server via the cleanup daemon. However, we want to allow only our authorized servers to connect to the cleanup daemon. The current configuration allows any host to connect to port 2526. Clearly we can use iptables to restrict access, but is there a way to do this within postfix itself?
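
    As far as I know there is no per-client access control for the cleanup listener inside Postfix itself; two hedged mitigations are binding the listener to a specific address in master.cf and filtering at the firewall (all addresses below are placeholders):

        # master.cf: bind the cleanup listener to one interface instead of all
        192.0.2.1:2526 inet n - - - 0 cleanup

        # iptables: allow only the authorized SMTP cluster, drop everyone else
        iptables -A INPUT -p tcp --dport 2526 -s 192.0.2.10 -j ACCEPT
        iptables -A INPUT -p tcp --dport 2526 -s 192.0.2.11 -j ACCEPT
        iptables -A INPUT -p tcp --dport 2526 -j DROP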

    Read the article

  • Updating Applications in a Corporate Environment

    - by user145133
    I am very new to this subject and was hoping someone could shed some light on it. I am working on creating a corporate network that will obviously have multiple servers and multiple workstations. Let's say a new version of Adobe Flash comes out. I would think that you would want to test this update in a test environment before "pushing it out" to the servers and workstations. How do you go about controlling, testing and then pushing application updates out? (I am not talking about Windows updates.) Do you use a third-party sysadmin tool? Home-grown software? Any info will be greatly appreciated. :)

    Read the article

  • TFS 2012 or TFS Azure (Preview)

    - by Fore
    We want to migrate our current TFS 2010 solution, hosted today on one of our own servers, to TFS 2012 hosted somewhere else. We don't want to manage the servers any more and are therefore looking at alternatives. TFS Preview / Azure is one alternative, hosted in the cloud, but I'm not happy about forcing users to use a Live ID, and we don't have an AD. My second thought was to create an Azure virtual machine and install and host TFS 2012 there. Are there any downsides to this? Compared to the price of buying a VPS this is cheap, and Azure feels reliable. Do you have any other ideas?

    Read the article

  • Nginx load distribution and multi-domain SSL

    - by Steve Clark
    I'm researching the best methods for two new parts of our infrastructure, hopefully finding a single solution for both. 1) We're currently running a single application server, and we're going to add an additional application server and load-balance between the two. 2) We handle a few thousand domains across the application server(s), and we're looking to support SSL. The best method I've come across so far is using nginx for its load distribution, to serve requests to the application servers, and for its SSL support. If a request uses SSL, nginx accepts the request, terminates SSL, and proxies to Apache (the app servers). Now, that's all good, but I've yet to figure out how we can let nginx handle multiple domains using SSL. We're potentially looking at using UCC SSL certs, so we can support 150 domains on a single certificate, with each cert on a single IP. I'm new to all this (my experience is with physical load balancers and single domains on SSL), so any advice would be very much appreciated.
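
    A minimal sketch of that layout, assuming one nginx server block per UCC certificate, each bound to the IP the cert is issued for (every name, address, and path below is a placeholder):

        upstream app_servers {
            server 10.0.0.11:80;   # Apache app server 1
            server 10.0.0.12:80;   # Apache app server 2
        }

        server {
            listen 203.0.113.10:443 ssl;                       # IP dedicated to this cert
            server_name site1.example.com site2.example.com;   # names on the UCC cert
            ssl_certificate     /etc/nginx/ssl/ucc-group1.crt;
            ssl_certificate_key /etc/nginx/ssl/ucc-group1.key;
            location / {
                proxy_pass http://app_servers;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    With SNI-capable clients a single IP can carry multiple certificates, but one IP per cert, as above, is the route that works for every client.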

    Read the article

  • Who deleted my files?

    - by akalter
    I have some Linux servers. On two of them we run MySQL with a daily backup, but the backup scripts are different. I have read both scripts: one of them contains a "delete older files" step, while on the other server old files are being deleted, but not by the script. I am trying to discover what deletes my files, because I want to use the same script on both machines; the script with the deletion step also copies the files to another server, and I want that to happen on both servers. Does anyone have an idea what deleted my older backups? Thank you!
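
    Two hedged ways to track this down (the backup path and key name below are placeholders): first rule out the usual scheduled suspects such as cron jobs, logrotate, or tmpwatch, then set an audit watch so the next deletion is logged with the responsible process and user.

        # usual suspects: cron jobs and log/tmp cleaners
        crontab -l; ls /etc/cron.daily/ /etc/cron.d/
        grep -rn "delete\|mtime" /etc/cron* /etc/logrotate* 2>/dev/null

        # catch the culprit in the act with the audit framework
        auditctl -w /var/backups/mysql -p wa -k backup-delete
        ausearch -k backup-delete   # inspect after the next deletion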

    Read the article

  • Binding MySQL to run from the public or private LAN IP address - which one is faster

    - by Lamin Barrow
    We have 2 servers, both running at the same web host. We have bound MySQL to listen on the public IP address of the database server, and the web server connects to it via that public IP. Both servers are on the same private network. Currently, the DB connect call in our PHP script takes about 3 ms to connect to the MySQL database server. My question is: would MySQL interaction from the web server be faster if we bound it to listen on the private LAN address of the database server instead of the public IP? Or is it the same regardless, making no difference?
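
    Whether this helps depends on how the host routes traffic to the public address: if packets to the public IP leave the private segment (or pass through a firewall/NAT hop), the private address will shave latency; if they are short-circuited locally, it changes little. The switch itself is a small sketch (the address is a placeholder):

        # /etc/my.cnf on the database server
        [mysqld]
        bind-address = 10.0.0.2   # private LAN address instead of the public IP

        # then point the PHP connection string at 10.0.0.2 instead of the public IP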

    Read the article

  • All email directed to 3rd party vendor except for one specific domain. How?

    - by jherlitz
    So we set up a site-to-site VPN tunnel with another company. We then proceeded to set up a DNS zone on each other's DNS servers and entered each other's mail server name and IP, an MX record, and a WWW record. This allowed us to send emails to each other's mail servers through the site-to-site VPN. Recently the other company started using MX Logic to scan all outbound and incoming mail, so all outbound email is now directed to MX Logic. However, we still want email between us to travel across the site-to-site VPN tunnel. How can we specify that just the one domain should not be directed to MX Logic? Stumped on both ends and looking for help.
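
    The question doesn't say which MTA is in use, so as an assumption: if the outbound server were Postfix, a per-domain transport map overrides the default smarthost for just that domain (all names and addresses below are placeholders):

        # /etc/postfix/main.cf
        relayhost = [outbound.mxlogic.example]        # everything else goes to MX Logic
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport -- the partner's domain routed over the VPN instead
        partner.example.com    smtp:[10.8.0.5]

        postmap /etc/postfix/transport && postfix reload

    Exchange has the equivalent concept: a send connector whose address space is only the partner domain, taking precedence over the catch-all MX Logic connector.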

    Read the article

  • Advice for an EC2 Architecture and Deployment Strategy

    - by Mark
    My company is currently migrating several websites and PHP web applications (standard LAMP stack) from three in-house servers to Amazon EC2. Because we had only three servers, we clustered several low-traffic websites with perhaps one high-traffic web application, and served them from the same server. The server admin has pretty much copied the previous architecture wholesale onto the EC2 instances, simply upping the instance size to account for the highest traffic client that occupies that particular instance. This architecture might be okay if it wasn't for deployment. Any time one of these sites/apps changes, it means redeploying the entire instance, along with the 30 sites/apps it hosts, instead of just updating one. How can we architect our cloud in a more modular fashion? Should each app get its own appropriately-sized instance? What is the best strategy for deployment in this type of situation?
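
    One hedged observation: on EC2 the instance doesn't have to be the deployment unit. Even with many sites on one instance, deploying a single app in place avoids re-imaging everything (the paths and names below are placeholders):

        # update only the app that changed; the other sites on the instance are untouched
        cd /var/www/app1 && git pull origin master
        sudo service apache2 reload

    Separating the highest-traffic apps onto their own appropriately sized instances then becomes a sizing question rather than a deployment one.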

    Read the article

  • Moving a file using PuTTY

    - by Paul Trotter
    I am a newbie struggling to move a file on a Linux VPS using PuTTY. I can log in with a user in PuTTY, and at this point I can navigate to see the file I wish to move (~/servers/apache-solr-3.6.2/example/webapps/solr.war). By using cd .. a couple of times from the directory I begin at when I first log in, I can then navigate to the location I wish to move the file to: usr/local/jakarta/apache-tomcat-5.5.36/webapps/ I know that I need to use cp to copy the file and have tried variations on: cp ~/servers/apache-solr-3.6.2/example/webapps/solr.war usr/local/jakarta/apache-tomcat-5.5.36/webapps However, each time I get "No such file or directory". I have tried excluding the ~/ at the start, and I have tried specifying solr.war at the end of the command. Please excuse the newbie question, but I would really appreciate some advice on what I am doing wrong here.
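
    A hedged diagnosis: usr/local/... without a leading slash is a relative path, resolved from whatever directory the command is run in, hence "No such file or directory". With the leading slash it is absolute and works from anywhere:

        cp ~/servers/apache-solr-3.6.2/example/webapps/solr.war /usr/local/jakarta/apache-tomcat-5.5.36/webapps/
        # to move rather than copy, mv takes the same arguments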

    Read the article

  • Simple Central Storage for HA mail server

    - by jtnire
    Hi everyone, I will have 2 Postfix servers. One will be a backup of the other. What is the easiest method to provide central storage to both of these boxes? My infrastructure is very simple: just a lot of Xen hosts, so there is no SAN or anything. Each Xen host does have RAID 1, though. I don't mind mounting NFS shares on each of those mail servers, as long as the NFS server isn't a single point of failure. Is there such a thing as redundant NFS? Any help would be appreciated. Thanks
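
    One common pattern (an assumption, not from the question) is two NFS servers mirrored block-by-block with DRBD, with a floating IP that follows the active node; the mail servers mount the floating IP. A minimal resource sketch, with hypothetical hosts and devices:

        # /etc/drbd.d/mailstore.res
        resource mailstore {
            protocol C;               # synchronous: no acknowledged write is lost on failover
            device    /dev/drbd0;
            disk      /dev/xvdb1;
            meta-disk internal;
            on nfs1 { address 10.0.0.1:7789; }
            on nfs2 { address 10.0.0.2:7789; }
        }

    The NFS export then lives on the DRBD-backed filesystem, and Heartbeat/Pacemaker moves the IP and the export on failure.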

    Read the article

  • Sharing files between multiple computers?

    - by Koalatea
    At my school, we have 13 iMacs that we use to make our yearbook. Currently our school has some servers for us, but since we work with so many files (thousands of pictures, most of which are ~3 MB) everything slows down far too much. Is there a way to better share files between our computers? We are on a wireless network, the whole school shares the same servers, and there are probably around 400 computers in the school. Is there a hardware fix I can do? Something like buying an external drive and hooking only the yearbook computers up to it?

    Read the article

  • Keeping a folder of files in sync over 3 machines

    - by Wizzard
    Morning. Got 3 machines that have user content on them, which I need to keep in sync. This is a 3-way sync. Currently I run rsync, but we just don't handle deletes. I have looked at something like Gluster, but that seems a little over the top. Is there any other software out there to do a 3-way sync, or a good network file system? This is for web servers, so we don't want a slow / IO-hungry process. 3 servers; user content could be added to any one and needs to be propagated to the other two.
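
    One hedged option is csync2, which is built for exactly this case: small clusters of web servers, with delete propagation. A sketch with placeholder host names and paths:

        # /etc/csync2.cfg (the same file on all three hosts)
        group webcluster {
            host web1 web2 web3;
            key  /etc/csync2.key;
            include /var/www/usercontent;
        }

        # run on each host from cron (or an inotify trigger); syncs adds *and* deletes
        csync2 -x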

    Read the article

  • Linux server failover

    - by Lukasz
    I have two Linux servers (CentOS 6), both identically configured and connected to the same switch, with a direct link between them. I only have one external IP, which is assigned to eth0 on both servers (connected to the internet switch), with the interface shut down on server 2. How can I fail over to server 2 if server 1 dies? As stated, they are linked directly, so they can check each other's availability via ping/TCP/UDP. I toyed with Heartbeat, but the documentation seems to be non-existent; I'm not sure how to bring up an interface and start some services if the other server dies.
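
    A hedged alternative to Heartbeat is keepalived: its VRRP instance keeps the shared IP on the master and moves it (and runs a hook script) when the master stops answering. A sketch with placeholder values:

        # /etc/keepalived/keepalived.conf on server 1
        # (server 2 is identical except state BACKUP and a lower priority)
        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 150
            virtual_ipaddress {
                198.51.100.10          # the single external IP
            }
            notify_master "/usr/local/bin/start-services.sh"   # hypothetical hook script
        }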

    Read the article

  • Windows services not starting automatically?

    - by Jeff Atwood
    We've had some nasty time sync problems on our Windows Server 2008 R2 servers lately. I traced this back to something very simple: the Windows Time Service was not started! The time can't possibly sync via NTP when the time service isn't running... The Windows Time Service was set to start "automatically" in the services control panel, which I double and triple checked. I also checked the event logs and I didn't see any service failures or anything like that. In fact, it looked a heck of a lot like the Windows Time Service never started up automatically after the weekly Windows Updates were installed and the servers were rebooted. (this is set to happen every Saturday at 7 PM.) The minute I started the Time Service, the time synced fine. So, then, the question: why would a service set to start "Automatically" ... not be started automatically? That seems sort of crazy to me.
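
    One hedged explanation: on Windows 7 / Server 2008 R2 some services, W32Time among them, can be trigger-started, so "Automatic" in the services panel doesn't guarantee they stay running. A common workaround is Automatic (Delayed Start), which starts the service after the post-reboot rush instead of during it (the space after start= is required by sc):

        REM show any start/stop triggers defined on the service
        sc qtriggerinfo w32time
        REM start the service after the post-boot rush instead of during it
        sc config w32time start= delayed-auto
        w32tm /resync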

    Read the article

  • Terminal Server 2003 Login Issue - Insufficient system resources exist to complete the requested service

    - by LP
    Afternoon. We have three identical terminal servers running Windows Server 2003 SP2, with about 250 concurrent users logged on across them. We're running roaming profiles from a central server running Active Directory, with the profiles also cached locally on each terminal server. When one user, and just that one, tries to log in, she gets this error message (roughly translated from Swedish): "You could not be logged in because your principal could not be registered. Check that you're connected to the network or ask your administrator. Insufficient system resources exist to complete the requested service." Does anyone have an idea about this? I'm stumped... Best regards, LP

    Read the article

  • Migration of physical servers to a virtual solution: what do I have to do?

    - by bibarse
    Hello, I'm new to this forum, so please forgive my inexperience and my low English level. I have been a trainee at a company for one month, and my mission is to migrate 3 physical servers to a virtualization technology. The company produces e-learning software, so there is a lot of data such as videos, Flash files, and compressed archives (zip). Here is some inventory of the servers: OS: Debian and 2 Red Hat; Apache, PHP/MySQL, Sendmail/Dovecot, and Webmin with Virtualmin templates to create the web sites dynamically, because there is no sysadmin. The future provider will be responsible for securing, updating, and creating the virtual machines (outsourcing), with a Red Hat OS. So I would like your help choosing a virtualization technology (I prefer KVM / Red Hat RHEV; VMware is expensive), evaluating the hardware needs (allowing for 4 or 5 years of growth), and drawing up a good plan so as not to forget anything. Thank you for your responses.

    Read the article

  • Load Balancing and High Availability for Web Site

    - by nzgirl
    We're developing a database-driven (70%/30% read/write load) website using C#.NET, IIS, and MS SQL Server 2008, to be hosted on Windows 2008. For contractual reasons our setup has to be hosted on our own physical/virtual servers instead of a cloud solution at this stage. Could someone outline, or link to, some best practices that would provide both high availability (the priority at the moment) and eventually load balancing for our site? We're probably looking at some sort of mirrored 2-SQL-server system and 2 IIS web servers to start with. Thanks in advance.

    Read the article
