Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • Can someone correct my understanding of Windows 2008 DFS implementation?

    - by cwheeler33
    I have two Windows 2003 file servers using DoubleTake at the moment. They are in a 2003 AD domain, and each server has its own disk set. It is time to replace the servers, and I want to use Windows 2008 64-bit Enterprise. I was thinking of using DFS-R to replace DoubleTake. The part I'm not sure about is clustering: do I need shared storage if each server has a copy of the data? I want the data available under the same share name, so maybe I don't fully understand how DFS is set up. I currently have about 6TB of data and expect to grow by 3TB a year on these file servers. Any resources/books that could teach me would be good to know as well.

  • Help with proposed iSCSI SAN VMware implementation.

    - by obsidian
    We have four Dell servers (with plans to grow to four more) with 6 NICs each. They are running VMware ESXi 4.1. We would like to connect all of them to an Openfiler iSCSI SAN via HP ProCurve 1810G switches. Based on the design below, is there anything I should be concerned about, or anything unusual I should look out for, when making the iSCSI network configurations on the servers, switches and OpenFiler? Should I bond the connections on the servers or simply set them up for failover? The primary goal is to maximize IOPS. Thanks in advance.
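
    For what it's worth, on ESXi 4.1 the usual pattern is not NIC bonding but iSCSI port binding: one vmkernel port per physical NIC, each bound to the software iSCSI adapter, so VMware's multipathing (e.g. Round Robin, set per LUN) spreads IOPS across paths. A minimal sketch, assuming vmk1/vmk2 are your iSCSI vmkernel ports and vmhba33 is the software iSCSI adapter:

      # Bind one vmkernel port per physical NIC to the software iSCSI HBA
      esxcli swiscsi nic add -n vmk1 -d vmhba33
      esxcli swiscsi nic add -n vmk2 -d vmhba33
      # Confirm both ports are bound
      esxcli swiscsi nic list -d vmhba33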

  • How do you manage large web farms?

    - by Andrew Katz
    I have a quickly growing web farm running IIS 7 (30+ servers). All servers are identical copies of each other, and all servers are physical. We update the software about once a month, and the current process follows these steps:
    1. Disable the server in the pool on the F5 load balancer.
    2. Disable HTTP keep-alives in IIS so connections drop quickly.
    3. Change the default directory of the website to the new folder containing the new binaries.
    4. Test the server.
    5. Enable HTTP keep-alives.
    6. Enable the server in the F5 pool.
    7. Move on to server 2.
    Microsoft used to have Application Center, which was abandoned a while ago. They have made a second attempt with the Web Farm Framework, but this adds as much QA time testing the release package as it saves in deployment. Has anyone seen a commercial off-the-shelf application that is tailored for managing and deploying to large web farms? Thanks!

  • Debian Linux server hangs after a week or so

    - by Alex Flo
    I have two Debian Linux 6.0.4 servers with a strange behaviour: after 5, 7, or 10 days they hang. By this I mean the servers need to be restarted, and before that they stop answering ping. I've been struggling with this problem for a couple of months now; here are some thoughts on what I have tried, without being able to solve the problem. I changed the RAM on one server. These being two different servers, I doubt it is hardware-related, as a third identical server doesn't have this problem. I logged the server load, and when a server crashes the load is fine (quite low). I cannot find anything in the server logs; the logs are fine up until the server freezes. Unfortunately, I don't have access to the console. While I have years of admin experience, I have never encountered such an issue, and right now I have no idea where else to investigate. If you have an idea of what I could try in order to fix the problem, please share it with me :-)
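
    One hedged suggestion for a box with no console access: netconsole can stream kernel messages to another machine over UDP, so if the hang is actually a panic or OOM you at least capture the last messages. A sketch, where the target IP, interface and MAC are placeholders for your receiving box:

      # On a second server: listen for the kernel messages
      nc -u -l -p 6666          # flag syntax varies by netcat flavor
      # On the problem server: forward kernel messages out eth0 to 192.0.2.10
      modprobe netconsole netconsole=@/eth0,6666@192.0.2.10/00:11:22:33:44:55
      # Raise the console log level so all messages are forwarded
      dmesg -n 8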

  • How do you apply development practices like version control, testing and continuous integration/deployment to system administration?

    - by arex1337
    Imagine you're going to manage a number of servers with a number of different services used by a number of people. Now say you want to reconfigure or replace some software on one of those servers. Obviously you don't want to work on servers that are in production. If this were a code change, as a developer I would make the change on my local development machine, test it locally and commit the change to a version control system. The changes could then be deployed in a staging environment, tested further and finally deployed in a production environment. It would also be easy for me to roll back, if necessary. Generally, or specifically, how do you achieve this in system administration? (The first thing that comes to mind is to use virtual machines and put virtual machine images in version control, but I'm sure there is a lot of literature and clever solutions I'm not presently aware of.)
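
    One low-effort entry point, offered as a sketch rather than a full answer: etckeeper keeps /etc in a VCS and hooks into the package manager, so every config change and package run becomes a commit you can diff and revert, much like a code change. Configuration management tools (Puppet, cfengine, Chef) then give the staging-to-production promotion workflow described above.

      apt-get install etckeeper        # yum install etckeeper on RPM distros
      etckeeper init                   # put /etc under version control (git on recent Debian/Ubuntu)
      etckeeper commit "Baseline before reconfiguring postfix"
      # Review and roll back like any other repository
      cd /etc && git log --oneline
      git revert <bad-commit>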

  • Concrete uses of LDAP?

    - by ajsie
    I'm new to LDAP, and I wonder what some concrete examples of using LDAP are: things that are MUCH easier to do when you have 3-7 Linux computers in a small company network. One use that seems very important to me is configuring LDAP to handle system authentication; then you don't have to create the same accounts on every computer. Are there other things that are a MUST DO for a small network to save time? My small network consists of Apache servers and database servers. Also, should LDAP run on its own machine? I guess it's not good to put it on the Apache or database servers, since those are performance-sensitive.
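
    To make the authentication use case concrete: on each client it usually comes down to NSS and PAM deferring to the directory. A sketch of the NSS half, assuming an LDAP client library such as libnss-ldap is installed and pointed at your server (the PAM side varies by distro):

      # /etc/nsswitch.conf -- check local files first, then LDAP
      passwd: files ldap
      group:  files ldap
      shadow: files ldap

      # Quick test: an account that only exists in LDAP should now resolve
      getent passwd someldapuser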

  • SonicWALL NSA 240

    - by Adam
    Hi. We are looking into putting a hardware firewall into a data center to protect our rack of servers. We are using the servers for terminal services, and we have 2 x 1Gbps connections to the Internet. We have about 50 servers supporting about 250 users, which will soon grow to 500 users. We plan to purchase two hardware firewalls to provide HA. Do you think the SonicWALL NSA 240 with TotalSecure is a good match for this in terms of performance and protection (from spyware, viruses etc.), or is there a better purchase? (Maybe a WatchGuard X5 or X8?)

  • How to scale out OpenStreetMap data efficiently

    - by Pierre
    For over a year now, I've been running an in-house PostGIS server filled with OSM data, used both for Mapnik-based tile generation and Nominatim-based geocoding, and updated with daily replicates. This works pretty well. However, as usage is growing exponentially, I would like to achieve better reliability and performance by adding additional PostgreSQL servers, and I'm kind of lost. Since PostgreSQL doesn't seem to handle replication by itself, I would think about using a piece of middleware like PgPool-II to keep the servers in sync. But I'm afraid it would be more than necessary for this usage: a very high read-to-write ratio, where all writes are done at the same exact time every day. My questions are simple: what would you do to keep these servers in sync? And what is done for this at the OpenStreetMap Foundation, MapQuest, Mapbox or CloudMade? Thanks.
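
    One correction worth noting: since version 9.0, PostgreSQL does handle replication by itself (streaming replication plus hot standby), which suits this read-heavy workload well, since standbys can serve read-only Mapnik/Nominatim queries while the daily diff import runs on the master alone. A sketch of the key settings, with the hostname and replication user invented:

      # postgresql.conf on the master
      wal_level = hot_standby
      max_wal_senders = 3

      # pg_hba.conf on the master: allow the standbys to stream WAL
      host replication replicator 10.0.0.0/24 md5

      # recovery.conf on each standby (created from a base backup of the master)
      standby_mode = 'on'
      primary_conninfo = 'host=master.example.org user=replicator'

      # postgresql.conf on each standby
      hot_standby = on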

  • Code deploy system [closed]

    - by Turnaev Evgeny
    Currently we deploy code to servers in various ways:
    - a FreeBSD package
    - FreeBSD ports
    - some config files and static assets are just svn up'ed, and a symlink is switched to the newly updated folder
    The distribution of FreeBSD packages to target servers is done through a custom tool that uses ssh. I am looking for a code deploy system that will allow us to:
    - deploy several packages (FreeBSD or Linux) atomically (either deploy all of them to a server, or none)
    - keep a history of the last stable version, so in case of a bad deploy I can easily roll back to the last working version on all servers
    - ease deployment of config and static files, integrating them into the atomic deploy/rollback system
    It should work with FreeBSD or Linux (apt-get systems).
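
    The symlink method in the first list above already gives atomic-ish switching and cheap rollback if every release lands in its own directory first; tools like Capistrano automate exactly this layout across many hosts. A sketch of the core pattern, with invented paths:

      REL=/srv/app/releases/$(date +%Y%m%d%H%M%S)
      svn export http://svn.example/repo/trunk "$REL"   # or unpack packages into $REL
      ln -sfn "$REL" /srv/app/current                   # flip the live symlink to the new release
      # Rollback: point the symlink back at the previous release directory
      ln -sfn /srv/app/releases/20120101120000 /srv/app/current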

  • Can a MySQL slave be a master at the same time?

    - by mmattax
    I am in the process of migrating 2 DB servers (master and slave) to two new DB servers (master and slave):
      DB1 - master (production)
      DB2 - slave (production)
      DB3 - new master
      DB4 - new slave
    Currently I have the replication set up as DB1 -> DB2 and DB3 -> DB4. To get the production data replicated to the new servers, I'd like to get it "daisy chained" so that it looks like this: DB1 -> DB2 -> DB3 -> DB4. Is this possible? When I run show master status; on DB2 (the production slave), the binlog position never seems to change:
      +------------------+----------+--------------+------------------+
      | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
      +------------------+----------+--------------+------------------+
      | mysql-bin.000020 |       98 |              |                  |
      +------------------+----------+--------------+------------------+
    I'm a bit confused as to why the binlog position is not changing on DB2; ideally it will be the master to DB3.
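
    On the binlog question: a slave does not write statements received from its master into its own binary log unless log_slave_updates is enabled, which is exactly why DB2's position never moves. A sketch of the my.cnf change needed on DB2 (server-id values are examples; a mysqld restart is required):

      # /etc/my.cnf on DB2 -- let the slave also act as a master for DB3
      [mysqld]
      server-id         = 2          # must be unique on every server in the chain
      log-bin           = mysql-bin  # DB2 needs its own binlog to feed DB3
      log_slave_updates = 1          # also log changes replicated from DB1

    After the restart, point DB3 at DB2 with CHANGE MASTER TO as usual and the DB1 -> DB2 -> DB3 -> DB4 chain should flow.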

  • Replacing sick NTP server source and re-synching (with internal time currently 2 minutes late)

    - by l0c0b0x
    One of the external NTP servers we use as a source (currently the primary one) seems to not be responding to NTP queries. Unfortunately, on our core router (Cisco 6509), NTP hasn't failed over to the secondary external NTP server as expected. As a result, our core router, which is pretty much our main internal NTP source, is 2 minutes behind. I'm planning to fix the issue by making the currently working external NTP server the primary source. I'm wondering: how much will a 2-minute change affect my users and services? Especially since, these days, we rely heavily on certificate-based authentication. We're a Windows/Cisco shop. Internal NTP setup:
    - Core Router 1 (Cisco 6509): syncs with two external NTP servers (of which the primary is not responding to NTP queries)
    - Core Router 2: syncs with Core Router 1 (primary) and the working external server (secondary)
    - Other Cisco network devices: sync with Core Router 1 (primary) and Core Router 2 (secondary)
    - Domain controller(s): sync with Core Router 1
    - All Windows clients/servers: sync with the domain controllers

  • DFS-R + Volume Shadow Copy Service - Run VSS at all locations or one?

    - by pbarranis
    I have 4 servers, one at each office location, each sharing read/write copies of 1.5TB of data. Changes are replicated 24/7 between all 4 servers via DFS-R. All servers run Windows 2008 R2. I'm interested in implementing VSS (Volume Shadow Copy Service) on this data. I've read that DFS-R and VSS play nicely together, but I'm left with one unanswered question: should I turn on VSS at all locations or just one (the headquarters)? Can I run VSS at all 4 locations safely, or is it wiser to run it at just one location? Thanks!

  • httpd service keeps restarting after 15-20 mins

    - by niraj
    I have recently purchased a dedicated server which has 16GB RAM and a 1TB hard disk. It has cPanel, and CSF is installed for the firewall. I am mainly going to use it for a file hosting service. Since the day I moved in, my httpd service keeps restarting every 15-20 minutes. It becomes unresponsive after that, so I have to restart it manually. My httpd settings are:
      Start Servers = 5
      Minimum Spare Servers = 5
      Maximum Spare Servers = 10
      Server Limit = 20000
      Max Clients = 10000
      Max Requests Per Child = 10000
      Keep-Alive = On
      Keep-Alive Timeout = 5
      Max Keep-Alive Requests = Unlimited
      Timeout = 300
    top shows:
      top - 14:53:41 up 1 day, 23:39, 2 users, load average: 0.10, 0.14, 0.09
      Tasks: 1563 total, 1 running, 1562 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.7%us, 0.6%sy, 0.0%ni, 98.1%id, 0.2%wa, 0.0%hi, 0.5%si, 0.0%st
      Mem: 16303780k total, 16142048k used, 161732k free, 135264k buffers
      Swap: 8224760k total, 868k used, 8223892k free, 14136616k cached
    Please help me with this; it keeps happening.
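
    Those limits look like a likely culprit: with the prefork MPM, Server Limit = 20000 and Max Clients = 10000 lets Apache fork far more children than 16GB of RAM can hold, so the box thrashes or OOMs under a file-hosting load. A hedged sizing sketch; the per-child figure is an assumption, so measure yours with ps first:

      # httpd.conf sketch: cap MaxClients near usable RAM / average child size
      # e.g. ~14GB for Apache / ~35MB per child => roughly 400 children
      <IfModule prefork.c>
          StartServers          5
          MinSpareServers       5
          MaxSpareServers      10
          ServerLimit         400
          MaxClients          400
          MaxRequestsPerChild 10000
      </IfModule>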

  • Experience with MQ File Transfer Edition?

    - by mfinni
    We've got several processes that move files across servers: SFTP, FTP, SCP; Windows, Linux, AIX. There is a workflow component (usually requiring a control file with filenames and hash values to move a batch of related files). The action is often initiated on our servers to get the files, so we need to make sure they're done being written. We have some homegrown scripts to do this, but they don't always work properly, and troubleshooting, maintenance, and log review are not easy this way. There are a lot of servers, and our scripts don't have central logging or a dashboard/console/etc. We're looking into commercial products to do this. Has anyone used MQ File Transfer Edition? Another team in our company is using Aspera; does anyone have any thoughts on that, or other favored products? I have no idea what our budget is for this yet. Just trying to get a handle on the product space from the perspective of other admins.

  • A load-balancing scenario using HAProxy and keepalived shows no performance advantage

    - by chakoshi
    Hi, I am trying to set up a load-balanced web server scenario using two HAProxy load balancers and two Debian web servers, following this guide: http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-keepalived-on-debian-lenny. The setup is working, but the results of simple performance benchmarking are not what I expected. I used the Apache benchmark tool to send lots of requests to the servers (once directly against one of the web servers, and once through the load balancer) using the command "ab -n 1000000 -c 500 http://IP/index.html", but the test results show better performance for the single server without the load balancer. Can anyone tell me if I'm doing something wrong?
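
    A couple of things worth ruling out before blaming the topology: HAProxy's maxconn defaults to 2000, ab against one small static file mostly benchmarks the extra network hop rather than backend capacity, and the balancer only wins once a single web server is actually saturated. A sketch of the haproxy.cfg knobs involved, with invented addresses:

      global
          maxconn 20000            # defaults to 2000, which caps concurrency

      defaults
          mode http
          maxconn 19500
          option httpclose         # predictable connection handling for benchmarks

      listen web 0.0.0.0:80
          balance roundrobin
          server web1 10.0.0.11:80 check
          server web2 10.0.0.12:80 check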

  • Routing application traffic through specific interface

    - by UnicornsAndRainbows
    Hello all! First question here, so please go easy. I have a Debian Linux 5.0 server with two public interfaces. I would like to route outbound traffic from one instance of an application via one interface, and from the second instance through the second interface. There are some challenges:
    - both instances of the application use the same protocol
    - both instances of the application can access the entire internet (so I can't route based on destination network)
    - I can't change the code of the application
    I don't think a typical approach to load balancing all traffic will work well, because relatively few destination servers are accessed in the outbound traffic, and the traffic would really need to be distributed pretty evenly across those few servers. I could probably run two virtualized servers on the box and bind each of them to a different external IP, but I'm looking for a simpler solution, maybe using iproute or iptables? Any ideas for me? Thanks in advance, and I'm happy to answer any questions.
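
    If the two instances can run as different system users, the usual answer is iptables' owner match plus iproute2 policy routing: mark the second instance's packets by UID, then route marked traffic through the second interface. A sketch, where the addresses, interface names and the app2 user are placeholders:

      # Table 100 sends its default route out the second interface
      ip route add default via 198.51.100.1 dev eth1 table 100
      ip rule add fwmark 1 table 100

      # Mark all outbound packets generated by the second instance's user
      iptables -t mangle -A OUTPUT -m owner --uid-owner app2 -j MARK --set-mark 1

      # Rewrite the source address so replies come back via eth1
      iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 198.51.100.20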

  • Server monitoring for medium scale UNIX network

    - by nbartolomeo
    I'm looking for suggestions for a good monitoring tool, or tools, to handle a mixed Linux (RedHat 4-5) and HP-UX environment. Currently we are using Hobbit, which is working reasonably well, but it is becoming harder to keep track of which alerts are sent out for which servers. Features I'd like to see:
    - easy configuration of servers
    - the ability to monitor CPU, network, memory, and specific processes
    I've looked into Nagios, but from what I have seen it won't be easy to set up the configuration for all of our ~200 servers, and without installing a plugin into each agent I won't be able to monitor processes.
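
    On the Nagios concern: the per-host work shrinks considerably with hostgroups and templates, since each check is defined once against a group rather than per server; process checks do still need an agent (NRPE or SNMP) on each box, as with most tools. A sketch, assuming the stock generic-service template and invented host names:

      define hostgroup {
          hostgroup_name  linux-servers
          alias           RedHat and HP-UX boxes
          members         web01,web02,db01
      }

      define service {
          use                  generic-service
          hostgroup_name       linux-servers
          service_description  CPU Load
          check_command        check_nrpe!check_load
      }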

  • How to create a stub DNS zone to emulate my customer's production environment?

    - by Albert Widjaja
    Hi all, is it possible to emulate my customer's production environment inside my AD domain by creating the same domain inside my primary DNS server? Can I create a mycustomer.com DNS zone (stub) just for the sake of listing a few database servers and application servers, and then have the other DNS records, e.g. MX and NS, refer to the REAL entries, so that my Exchange Server email flow to mycustomer.com is unaffected? Because if I just create A records in my current domain for some of the servers, the FQDN is not exactly what I want. Thanks.

  • How do you manage updates without a staging environment: CentOS 6.3

    - by Gregg Leventhal
    I am managing about 20 servers, many of them virtual. They are almost all different in purpose, and none are clustered. I have a distributed LAMP stack, a few application servers, some build servers, and a few KVM hosts. They are mostly CentOS 6.3, with a few Ubuntu (unfortunately). I don't have the resources to set up a staging environment where I can have duplicates of my machines and test updates before rolling them out. I am taking file backups. What I want to know is how you approach updating your Linux systems. I assume you don't just run yum update, but then how are you choosing the packages worth updating? When (if ever) are you updating the kernel, etc.? How do you test updates without a staging environment? Snapshot and hope for the best?
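
    Without a staging tier, yum's transaction history at least gives a rollback story on the CentOS 6.3 boxes, and excluding the kernel keeps reboots deliberate. A sketch; the transaction ID is whatever yum history list reports:

      yum check-update                    # preview what would change
      yum update --exclude='kernel*'      # update everything except kernel packages
      yum history list                    # note the ID of the transaction just run
      yum history undo 42                 # roll it back (needs the old RPMs still in the repos)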

  • Ubuntu 12.04 host lookups extremely slow

    - by tubaguy50035
    I'm having issues with one of my servers taking a long time to look up host names. This is an Ubuntu 12.04 box, so I've tried following the new resolvconf directives. In my /etc/network/interfaces file, I defined my name servers like this:
      auto eth0
      iface eth0 inet static
          address someaddress
          netmask 255.255.255.0
          gateway 198.58.103.1
          dns-nameservers 74.14.179.5 72.14.188.5
    In my /etc/resolv.conf, I see these name servers, like this:
      # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
      # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
      nameserver 74.14.179.5
      nameserver 72.14.188.5
    On another box, I edited resolv.conf directly, as directed by my host's setup help files. It looks like this:
      domain members.linode.com
      search members.linode.com
      nameserver 72.14.179.5
      nameserver 72.14.188.5
      options rotate
    This second box has no issues with host name lookups and responds quite quickly. Could not having the domain and search directives make my lookups slow? By slow, I mean it's taking anywhere from 5 to 15 seconds to find the IP address of a host.
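
    Two things worth checking, offered as a sketch: under resolvconf the domain/search entries belong in the interfaces stanza (a dns-search line), and note that the working box lists 72.14.179.5 as its first nameserver while this one lists 74.14.179.5; an unreachable first server produces exactly this 5-seconds-per-lookup pattern, since 5 seconds is the glibc resolver's default timeout before it tries the next server.

      # /etc/network/interfaces -- extra resolvconf directive under iface eth0
          dns-search members.linode.com

      # Compare raw server latency with what libc actually sees
      dig @74.14.179.5 example.com | grep 'Query time'
      time getent hosts example.com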

  • Slow website load with CNAME, fast when using IP

    - by Nate Strandberg
    I set up two DNS servers on my network: ns1.byte-werx.com and ns2.byte-werx.com. I can ping the DNS servers and get a fairly good response time, and when I dig them I also get a fairly reasonable response, but any website I resolve through them is painfully slow (upwards of 20 seconds), verifiable by performing a tracert or attempting to access the URL in a browser. The DNS servers are running CentOS 6.3 and BIND9 with 500MB of memory (I figure that should be more than enough?). I have a reverse lookup zone (1.168.192) along with two website zones (www.byte-werx.com and www.stayhomedental.com). If I access the websites using their IP the pages load nearly instantly, so I do not believe the issue is with the hosting server; that is running Ubuntu Server 12.04 and Apache2 with 12GB of memory. Any thoughts? I do not have the named.conf file in front of me, but I can edit this post to include it if you feel it would be useful. Thanks for any advice!
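
    A sketch of how to localize where the 20 seconds goes, run from a client that sees the delay (the zone file path is an assumption):

      dig www.byte-werx.com @ns1.byte-werx.com | grep 'Query time'   # is the server itself slow?
      dig +trace www.byte-werx.com                                   # walk the delegation from the roots
      named-checkconf /etc/named.conf                                # syntax-check the BIND config
      named-checkzone byte-werx.com /var/named/byte-werx.com.zone    # and the zone file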

  • How can I check the location of perl and CPAN files?

    - by Rob
    I constantly have to set up new servers for an employer of mine, for an exact purpose of his, and as such they all have to be set up in exactly the same way. So I've created a script in PHP that I run from my own box to automatically send over all the relevant files, compile everything, run updates, and everything else. However, these brand-new servers come with perl, which is fine, but they have perl installed in different locations. This makes it a pain for me to copy over Config.pm for CPAN without going in and finding the location manually. Is there perhaps some command I'm unaware of that will hunt down the precise location? If it helps, the servers are usually CentOS 5.
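
    A few one-liners that will report the layout on each box without manual hunting, using nothing beyond core perl:

      which perl                            # path of the perl binary itself
      perl -e 'print "$_\n" for @INC'       # module search path on this machine
      perldoc -l CPAN                       # where CPAN.pm lives
      perl -V:sitelib                       # the site_perl directory, a likely home for CPAN's Config.pm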

  • SharePoint 2010 MySites - Simple explanation needed!

    - by Chris W
    I've been playing around with the 2010 beta for a couple of weeks, experimenting with topology options etc. I think I've got myself totally confused as to how it works, so if there are any SP experts out there who can explain things in simple terms, I'd appreciate it! I want to set up a farm with 3 servers providing the content and MySites. I presume the way to do this is to load balance or DNS round-robin traffic between the 3 servers. The bit where I'm confused is that the My Site Settings page asks for a specific My Site Host, hence all MySite traffic will be pushed to a single server even though we have 3 in the farm. If this host fails, I presume MySites will be unavailable. Is this right? How do I configure it so that access to MySites is load balanced across the 3 servers in the farm?

  • How long do uploaded files stay in the tmp folder in Linux Ubuntu?

    - by Jean-Nicolas Boulay Desjardins
    I am building a web application where my users will be able to upload files. After the files are uploaded, I need to send them to two other servers, after which they will be deleted from the server they were uploaded to. I am wondering: is it a good idea to keep the uploaded files in the /tmp folder for the time it takes to send them to the other two servers, or should I move them to another folder in case they get deleted? I am also wondering because I would like to know whether I have to build a cron script to get rid of the files that have been transferred to the other servers, so that I get my disk space back.
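
    For what it's worth, Ubuntu clears /tmp at boot (the TMPTIME setting in /etc/default/rcS governs age-based deletion), so a dedicated spool directory plus a small cron job is the safer pattern. A sketch with invented paths and a one-hour age limit:

      #!/bin/sh
      # /etc/cron.hourly/clean-uploads -- reclaim space from files already pushed
      # to the other two servers (assumes anything older than 60 min is transferred)
      find /var/spool/myapp/uploads -type f -mmin +60 -delete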

  • How to allow an internal server to accept remote connections not through RD Gateway

    - by Matt Ahrens
    So, I help administer a collection of servers running various Windows Server environments. We have an RD Gateway server, properly configured, to gatekeep for us. It does not have the other servers listed in its server farm category, though. I just added a refurbished server for a non-profit development environment that is sharing the rack space and port. I would like this server to be accessible via remote connection, but without requiring RD Gateway credentials (I cannot add the users of this development server to our gateway, since they do not work for the organization hosting the rack). Is there any way for me to add this dev server as an exception to which servers require RD Gateway clearance, or otherwise let users bypass RD Gateway credentials for this one machine? Thanks, and let me know if I am misinformed about how RD Gateway works or anything. I am still learning.
