Search Results

Search found 1815 results on 73 pages for 'percona xtradb cluster'.

Page 64/73 | < Previous Page | 60 61 62 63 64 65 66 67 68 69 70 71  | Next Page >

  • central apache log analysis of many hosts

    - by Jason Antman
    We have 30+ apache httpd servers, and are looking to perform analysis on the logs both for historical trending and near "real time" monitoring/alerting. I'm mainly interested in things like error rates (4xx/5xx), response time, overall request rate, etc., but it would also be very useful to pull out more compute-intensive statistics like unique client IPs and user agents per unit of time.

    I'm leaning towards building this as a centralized collector/server/storage, and am also considering the possibility of storing non-apache logs (i.e. general syslog, firewall logs, etc.) in the same system. Obviously a large part of this will probably have to be custom (at least the connection between pieces and the parsing/analysis we do), but I haven't been able to find much information on people who have done stuff like this, at least at shops smaller than Google/Facebook/etc. who can throw their log data into a hundred-node compute cluster and run Map/Reduce on it.

    The main things I'm looking for are:
    - All open source
    - Some way of collecting logs from apache machines that isn't too resource-intensive, and transports them relatively quickly over the network
    - Some way of storing them (NoSQL? key-value store?) on the backend, for a given amount of time (and then rolling them up into historical averages)
    - In the middle of this, a way of graphing in near-real-time (probably also with some statistical analysis on it) and hopefully alerting off of those graphs

    Any suggestions/pointers/ideas, to either "products"/projects or descriptions of how other people do this would be greatly helpful. Unfortunately, we're not exactly a new-age-y devops shop, lots of old stuff, homogeneous infrastructure, and strained boxes.
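
    For the collection/transport leg, a minimal sketch of one low-overhead approach, assuming rsyslog is available on the web servers (the collector hostname, facility and log path are placeholders, not part of the original question):

        # /etc/rsyslog.d/apache-ship.conf on each web server
        $ModLoad imfile
        $InputFileName /var/log/httpd/access_log
        $InputFileTag apache-access:
        $InputFileStateFile stat-apache-access
        $InputFileFacility local6
        $InputRunFileMonitor
        # forward everything on that facility to the central collector over TCP
        local6.* @@logcollector.example.com:514

    The same stanza can be repeated for error_log; storage, parsing and graphing would still have to live on the collector side.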

    Read the article

  • Missing drive space in Server 2003

    - by Tim Brigham
    I have two drives used for SQL backups which for the last week have been acting strange - the free space indicated by Windows is far off from what WinDirStat, etc. indicates. There should only be about 60 GB of drive space used and there is about 160. This would match the utilization if the last two backup files were still residing on disk. SQL Server is 2000, the OS is Server 2003 x64, running on a VMware 5.0 cluster. OSSEC and McAfee for this system show clean. My current plan is to temporarily attach one of these drives to another VM for analysis. Is there anything more I should be looking at? There were a lot of pages on the net when I was looking for documentation on this issue, but I haven't found this case described. EDIT: Unfortunately even a full reboot did not clear this behavior. I also used Process Explorer to look for open file handles. No dice.

    Read the article

  • Can Octopussy use messages other than syslog style?

    - by Lee Lowder
    I am currently exploring different options for a centralized log server. We use both Linux (Ubuntu 10.04 / 12.04, LTS for both) and Windows, though for this specific issue only Linux is relevant. I like the interface that Octopussy has and its feature list, but I am hesitant due to a few things. One of the biggest concerns I have is that it seems to be syslog only.

    The end goal is to have a centralized place for our devs and admins to be able to search through the logs generated by Apache, Tomcat and 70+ web apps spread out among a cluster, for both our prod and test environments. While I did see that Octopussy has support for plugins, I haven't been able to find any sort of plugin repo or in-depth guides as to what can be done with them. Does anyone know if plugins can be used to allow Octopussy to accept non-syslog messages? Specifically log4j-type log messages that may include multi-line stack traces and such. Also, is there a user community for this software, such as a mailing list or forum? I've been unable to locate any so far. Thank you.
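
    On the log4j side, getting application logs into any syslog-based collector is usually a configuration change rather than a plugin; a hypothetical log4j 1.x fragment (hostname and facility are placeholders), with the caveat that multi-line stack traces typically arrive as one syslog message per line:

        # log4j.properties: emit application logs as syslog messages
        log4j.rootLogger=INFO, SYSLOG
        log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
        log4j.appender.SYSLOG.syslogHost=loghost.example.com
        log4j.appender.SYSLOG.facility=LOCAL1
        log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
        log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n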

    Read the article

  • VMWare use of Gratuitous ARP REPLY

    - by trs80
    I have an ESXi cluster that hosts several Windows Server VMs and around 30 Windows workstation VMs. Packet captures show a high number of ARP replies of the form:
    - sender_ip: VM IP
    - sender_mac: VM virtual MAC
    - target_ip: 0.0.0.0
    - target_mac: Switch interface MAC
    The specific addresses aren't really a concern -- they're all legitimate and we're not having any problems with communications (most of the questions surrounding GARP and VMWare have to do with ping issues, a problem we don't have). I'm looking for an explanation of the traffic pattern in an environment that functions as expected. So the question is why would I see a high number of unsolicited ARP replies? Is this a mechanism VMWare uses for some purpose? What is it? Is there an alternative?
    EDIT: Quick diagram:
    [esxi]--[switch vlan]--[inline IDS]--[fw]--(rest of network)
    The IDS is complaining about these unsolicited ARPs. Several IDS vendors trigger on ARP replies without a prior request, or for ARP replies that have a target IP of 0.0.0.0. The target MAC in these replies is the VLAN interface on the switch. Capture points:
    - The IDS grabs the offending packets
    - The FW can see the same ones
    - A VM on the ESXi host does not see these, although there is an ARP request for a specific IP on the ESXi host that has source_ip=0.0.0.0 and source_mac=[switch vlan interface]
    I can't share the captures, unfortunately. Really I'm interested in finding out if this is normal for an ESXi deployment.
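
    To compare what each capture point actually sees, a quick filter sketch (the interface name is a placeholder; the offsets assume standard Ethernet/IPv4 ARP):

        # ARP replies only (opcode is the 2-byte field at offset 6 of the ARP header)
        tcpdump -n -e -i eth0 'arp[6:2] = 2'
        # narrow further to replies whose target protocol address is 0.0.0.0 (offset 24)
        tcpdump -n -e -i eth0 'arp[6:2] = 2 and arp[24:4] = 0'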

    Read the article

  • Restoring MBR, partition table, and boot sector of memory card without data loss ("USBC")

    - by Synetech
    Abstract
    I have a FAT32 memory card that when inserted into a computer causes Windows to prompt to format it. The card is definitely not supposed to be blank and has a bunch of files on it.
    Symptoms
    Using a hex-editor/disk-viewer, I examined the card and found that several sectors/clusters have been overwritten with something that has a signature of USBC at the start of the sector. Specifically, the master boot record (and partition table) is gone (hence Windows thinking the card is blank and needing to be formatted), as are the boot sectors (they have the USBC signature and a volume label of NO NAME and partition type of FAT32). Fortunately, it looks like both copies of the FAT are almost entirely intact (a few FAT entries at the start of a cluster here and there seem to be overwritten by USBC). The root directory is also nearly intact—I can see the volume label entry and subdirectory listings, but one sector is overwritten. (There are no more instances of USBC after the last one in the FAT2.)
    Hypothesis
    These observations seem to indicate some sort of virus that erases a few key filesystem structures, and then overwrites a few extra sectors here and there. Googling it seems to corroborate the idea of a virus, except that others report a file called USBC which does not apply here, and in fact, could not be possible since there is no filesystem to even see files. I cannot find any information about a virus with these symptoms, nor a removal tool. (I can't help but wonder if it is actually due to an autorun virus prevention tool.)
    Question
    I can likely fix the FAT corruption since they are mostly contiguous chains and maybe even the lost sector of the root directory, but does anyone know of a convenient way to restore or (re)create the MBR/partition table and boot sectors (without formatting or overwriting the data)?
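
    A non-destructive starting point, sketched under the assumption that the card shows up as a block device on a Linux box (the device name is a placeholder): image the card once, then run the repair tools only against the copy.

        # read the card into an image file, retrying bad areas a few times
        ddrescue -d -r3 /dev/sdX card.img card.map
        # search the image for the lost partition table and FAT32 boot sector
        testdisk card.img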

    Read the article

  • DRBD as a block device for XEN VM (Centos 5.3)

    - by SaberTooth
    Hi all, I have set up a DRBD resource between 2 server nodes - everything works correctly when doing sync tests between the two. (I want to create an HA cluster using DRBD, Xen and Heartbeat.) However, when I try to create a Xen VM with CentOS as the guest operating system, I get through to the partitioning screen on the install, but when I select a partitioning type the next screen gives me the following error: "An error has occurred - no valid devices were found on which to create new file systems. Please check your hardware for the cause of this problem." This is the first time attempting to create a setup like this and searching Google does not help much... my config files for DRBD and XEN:
    DRBD (just the section that is pertinent):
        on xennode0 {
            device /dev/drbd0;
            disk /dev/sda5;
            address X.X.X.X:7788;
            flexible-meta-disk internal;
        }
        on xennode1 {
            device /dev/drbd0;
            disk /dev/sda5;
            address X.X.X.X:7788;
            meta-disk internal;
        }
    XEN:
        kernel = "/boot/xeninstall/vmlinuz"
        ramdisk = "/boot/xeninstall/initrd.img"
        extra = "text"
        name = "VM"
        maxmem = 3000
        memory = 3000
        vcpus = 4
        on_poweroff = "destroy"
        on_reboot = "restart"
        on_crash = "restart"
        vfb = [ ]
        disk = [ "phy:/dev/drbd0,sda1,w", "tap:aio:/srv/xen/xenswap.img,sda2,w" ]
        vif = [ "mac=00:16:3e:11:67:ae,bridge=xenbr0" ]
        root = "/dev/sda1 ro"
    Thanks in advance!

    Read the article

  • Exceptional slowdown of robocopy copying from VM to DFS array

    - by user1588867
    I've got an old Win 2003 VM (VMware) on a blade cluster of VMs, and I'm moving a considerable amount of files from it to our new DFS array. There are two main folders with about 1.7 million and half a million smaller files (letters, memos, and other smaller files) respectively. Total size is ~420 GB and ~100 GB. We're using the GUI version of robocopy on the server to copy the files.

    We initiated a file copy about a month ago to test the process and found that it was taking around 4 hours for the larger folder. Now that I'm in the process of actually switching the files over, it has been taking 18-20 hours. Nothing has changed on the server side and nothing has changed in the settings of the copy (no logs, 1 retry with a wait of 1 second).

    Our intent is to shut off the share and force the copy over again to get all the files that have been left out of the copy due to being locked by users. I can't take a 20-hour outage to do that, though. Does anyone have any theories about what could be causing such a delay for robocopy compared to the previously shorter runs?
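
    One way to see where the time is going is to run the same copy from the command line with a log enabled; a sketch using the settings described above (source and destination paths are placeholders):

        robocopy \\oldvm\d$\files \\dfsroot\share\files /E /R:1 /W:1 /NP /TEE /LOG:C:\robocopy-files.log

    The log makes it easier to tell whether the slowdown is per-file overhead (retries, access denied, long directory enumeration) or raw throughput.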

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old. We have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS. We use XFS with LVM over that to create 100TB of useable storage. All of this works pretty well, but we are outgrowing these systems.

    Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small computer cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab).

    So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server with glusterfs running on top of that if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.
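
    For the do-it-ourselves option mentioned above, the rough shape of a per-server ZFS pool aggregated with GlusterFS would look something like this (pool, volume and host names are placeholders, disk names vary by platform, and the brick directory must exist on each node first):

        # on each storage server: one raidz2 pool and a filesystem to hold the brick
        zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh
        zfs create tank/brick
        mkdir -p /tank/brick/genomics
        # from one node: join the peers and build a replicated volume across the bricks
        gluster peer probe storage2
        gluster volume create genomics replica 2 storage1:/tank/brick/genomics storage2:/tank/brick/genomics
        gluster volume start genomics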

    Read the article

  • ssh hangs on "Last login" line

    - by Pavel H
    This happened for the first time three days ago - I ssh to the server, authenticate using a password, get the welcome message, but it remains hanging on the "Last login:..." line. The command line doesn't show and the server doesn't react to my input. Other services on the server keep working ok (apache, tomcat, database, ...). The box has out-of-band management, using which I was able to restart it. After the restart the ssh worked ok again and I didn't find anything suspicious in the logs.

    Three days later the same problem occurred on this box again, and newly on yet another server in the cluster - 100% the same symptoms. Both servers have an about 2 month old installation of Debian Squeeze (6.0.2) and the problem never occurred before despite frequent ssh-ing, so it should not be a problem of settings. We haven't been installing anything new for quite some time now. I also made sure there is enough disk space on both servers. Since it started to happen all of a sudden on two servers at about the same time, I suspect some bug may have been introduced via Debian updates, yet I haven't been able to find anyone with the same problem.

    Most similar issues I have found:
    - ssh freezes at the "Last Login" line - in our case everything worked fine until recently, so nothing related to settings should be our problem. Disk space checked; I couldn't check the memory, but I would expect something would be in the logs if the system had been running out of it.
    - Remote Fedora system unresponsive, odd but consistent behavior when trying to log in - problem with high load on the server; unlike in this case, nothing changes even if I wait for 10+ minutes.
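
    Next time it hangs, more detail can be captured from both ends; a sketch (hostname and debug port are placeholders):

        # client side: -vvv shows which stage stalls (after auth it is usually the PTY/shell step)
        ssh -vvv user@server1.example.com
        # server side, from the out-of-band console: a one-off debug sshd on a spare port
        /usr/sbin/sshd -d -p 2222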

    Read the article

  • Changing time intervals for vSphere performance monitoring, and is there a better way?

    - by user991710
    I have a set of experiments running on a cluster node which is running ESXi 5.1, and I want to monitor the resource consumption on the node itself. Specifically, I am currently running experiments on a subset of the VMs on the ESXi host and wish to monitor resource consumption on those specific VMs. Right now, since I'm using only a single ESXi host, I am using vSphere to access it and the performance reports. Ideally, I would like to get these reports for different time intervals. I can already get the charts for a time interval of 1h, but these are rather long-running experiments and something like 2h, 3h, ... would be preferable. However, I cannot seem to change the time interval in the Customize Performance Chart dialog. I am also running on a trial key at the moment. How can I change this interval? Do I need a standard license, or do I just need to turn off the VM (unlikely, but I haven't attempted it yet as these are long-running experiments)? Any help (or pointers to documentation which deals with the above -- I've already looked but did not find much) would be greatly appreciated.
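
    Independent of the chart intervals, longer windows can be captured from the ESXi shell with esxtop in batch mode; a sketch (sampling interval, sample count and datastore path are placeholders):

        # one sample every 60 seconds for 3 hours, written to CSV for offline analysis
        esxtop -b -d 60 -n 180 > /vmfs/volumes/datastore1/esxtop-3h.csv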

    Read the article

  • Deploying multiple identical copies of a virtual machine for compute tasks

    - by Reid
    I have a compute task which has a large number of library dependencies. I would like to deploy it on some of my company's large Linux clusters, where I do not have root. I could probably track down, compile, and install the right versions of all the libraries, but this looks to be quite tedious and would have to be repeated if I deployed it again somewhere else. On the other hand, it's pretty easy to install on current Ubuntu. This led me to wonder about a virtual machine approach.

    Could I put together a virtual machine which booted up, ran the computation (with parameters from and results to the host), and then shut down? In other words, I'd like a command like this that I could run on the host:

        $ ./run-vm --ram N --task /path/on/host/foo.sh --results /another/host/dir/

    This would boot the VM, run foo.sh, and put the (relatively small) results of the computation in /another/host/dir/. It's important to start up many instances of the VM simultaneously, both on a single node and multiple nodes of the cluster. So it would be nice if I didn't have to make many copies of the VM virtual disk and metadata. As the task instances are completely independent, the VMs would not need any network support once deployed, or any outside communications beyond reading and writing the host filesystem. Is this possible, and if so, how might I go about doing it? Are there assumptions I've made above which are bogus?
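
    One way this could look with plain QEMU (a sketch, assuming qemu is available on the cluster nodes and the guest kernel supports 9p; the image name and paths are placeholders): -snapshot keeps each instance's writes in a throwaway overlay so a single base image can be shared, and a virtfs export gives the guest access to a host results directory.

        qemu-system-x86_64 -m 2048 -snapshot \
          -drive file=ubuntu-task.qcow2,if=virtio \
          -virtfs local,path=/another/host/dir,mount_tag=results,security_model=none \
          -nographic
        # add -enable-kvm where /dev/kvm is accessible to the unprivileged user

    Inside the guest, the export mounts with "mount -t 9p -o trans=virtio results /mnt/results", and an init script can run the task and then power the VM off.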

    Read the article

  • How do you create an ssh key for the apache user on Redhat?

    - by Josh Smeaton
    As the question asks, how do I generate an ssh key for the user apache on Redhat? My use case is that we have a mercurial server running under the apache user. We also have several web servers clustered that we need to log on to manually and do pulls from. Ideally, what we'd like to do is have the mercurial server push all changes to all the webservers in the cluster. To do this, we want to use ssh, as setting up http mercurial servers on each of the web servers seems like too much work, and far too heavy. What I've tried to do is the following:

        > sudo mkdir /var/www/.ssh
        > sudo chown -R apache:nobody /var/www/.ssh
        > su - apache -c "ssh-keygen -t rsa"
        This account is currently not available.

    I found the above commands elsewhere, but I can only assume that Redhat has differences to whatever distro was used for the above. Is there a way I can generate an ssh-key for the apache user?
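
    "This account is currently not available." is the apache account's /sbin/nologin shell talking; overriding the shell just for the keygen command avoids editing /etc/passwd. A sketch, with the key path matching the .ssh directory created above:

        sudo mkdir -p /var/www/.ssh
        sudo chown apache /var/www/.ssh
        # -s forces a usable shell for this one command only; -N '' gives an empty passphrase
        sudo su -s /bin/bash apache -c "ssh-keygen -t rsa -f /var/www/.ssh/id_rsa -N ''"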

    Read the article

  • NFS on top of GFS2 - does it work?

    - by Matthew
    We're currently using a NoSQL derivative called Splunk to receive our data. The software supports something called "search head pooling", in which the job-dispatching engine is housed on several servers which share a common storage point. Originally our intention was to use a clustered filesystem like GFS2 because of low latency, stability, and ease of setup. We set up GFS2, and it's working with no issues. However, when trying to run the software, it's trying to create lock files and a bunch of other things that their support team can't quite explain. The ultimate feedback from them was that they only support NFS.

    Our network administration team heavily frowns on NFS (lack of stability, file lock issues, etc.). So I was thinking about the possibility of setting up NFS on each server in the cluster to act as a wedge layer between the GFS2 filesystem and the software. Basically, configure each server to export the GFS2 filesystem's mountpoint via NFS, and then tell each server to connect to that NFS share. That way we aren't introducing any single points of failure should a dedicated NFS server go down, but the vendor gets their "required" NFS share. I'm just brainstorming ways around it, so please tear this apart :)
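
    The wedge layer described above would look roughly like this on each cluster member (paths, export options and the fsid are placeholders; loopback NFS mounts of a cluster filesystem come with their own locking and memory-pressure caveats worth testing):

        # /etc/exports: export the GFS2 mountpoint to the local host only
        /mnt/gfs2  127.0.0.1(rw,sync,no_root_squash,fsid=77)
        # apply the export, then mount the loopback share where the vendor expects its storage
        exportfs -ra
        mount -t nfs -o vers=3 127.0.0.1:/mnt/gfs2 /opt/splunk-shared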

    Read the article

  • How do I tell Websphere 7 about a front end load balancer so that re-directs are handled correctly?

    - by TiGz
    On WebLogic 11g I can use the console to set the FrontendHost and FrontendPort on a server or on a cluster so that redirects are handled correctly and end up resolving to the front end load balancer instead of the local host. The MBeans associated with this on WebLogic are, for example:
    MBean Name: com.bea:Name=AdminServer,Type=WebServer,Server=AdminServer
    Attribute Name: FrontendHost
    Description: The name of the host to which all redirected URLs will be sent. If specified, WebLogic Server will use this value rather than the one in the HOST header. Sets the HTTP FrontendHost. Provides a method to ensure that the webapp will always have the correct HOST information, even when the request is coming through a firewall or a proxy. If this parameter is configured, the HOST header will be ignored and the information in this parameter will be used in its place.
    Type: java.lang.String
    Readable / Writable: RW
    How is the same thing achieved under WebSphere 7?
    Follow-up info: I have 2 use cases, actually. One is that I have a web app running under WebSphere on host A on port 9002 and a LB running on host B at port 80. When I visit the home page of the app via the LB at http://hostb/app, the app redirects my browser to http://hostb:9002/app and it 404's. I think this is WebSphere's fault, but I guess it could be the app's fault?
    The second is that the web app in question needs to send emails containing URLs that the customer can click on to get back into the web app - obviously this needs to be via the LB. On WebLogic the app uses MBeans to derive the LB URL, and I was hoping to use a similar mechanism on WebSphere.

    Read the article

  • Windows Server 2008 R2 Standard to Enterprise Problems

    - by boburob
    A few months ago I set up a Citrix XenApp cluster running on Windows Server 2008 R2 Standard Edition using the temporary 180-day license key. Recently the company bought a Windows Server 2008 R2 Enterprise DataCenter license. This means I need to upgrade the Windows edition from Standard to Enterprise. I attach the disk to the VM and start the upgrade process through XenCenter; it runs through all checks, unpacks all Windows files and seems to create a Windows Setup partition, then it reboots and tries to boot into this partition, and I get a blue screen telling me to CHKDSK the hard drive with the following error message: STOP: 0x0000007B

    As XenApp is already set up and working, I really do not want to go down the route of rebuilding this server (as I already had to do this once due to issues with XenApp). The server did have 8GB of RAM assigned to it; I have tried reducing this down to 2GB as I read this can cause an issue. Also, I can boot back into the Windows Server 2008 R2 Standard partition without any problems.

    UPDATE: I have managed to get round the urgency by re-arming the license, giving me another 180-day trial... but it would be nice to work out why this is happening!

    Read the article

  • apache chokes after 300 connections

    - by john titus
    We have an apache webserver in front of Tomcat hosted on EC2; the instance type is extra large with 34GB memory. Our application deals with a lot of external webservices, and we have a very lousy external webservice which takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes:

        ps -ef | grep httpd | wc -l  = 300

    I have googled and found numerous suggestions but nothing seems to work. The following are some configuration changes I have made, taken directly from online resources. I have increased the limits of max connections and max clients in both apache and tomcat. Here are the configuration details:

        //apache
        <IfModule prefork.c>
            StartServers        100
            MinSpareServers     10
            MaxSpareServers     10
            ServerLimit         50000
            MaxClients          50000
            MaxRequestsPerChild 2000
        </IfModule>

        //tomcat
        <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
                   connectionTimeout="600000" redirectPort="8443" enableLookups="false"
                   maxThreads="1500"
                   compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml"
                   compression="on"/>

        //Sysctl.conf
        net.ipv4.tcp_tw_reuse=1
        net.ipv4.tcp_tw_recycle=1
        fs.file-max = 5049800
        vm.min_free_kbytes = 204800
        vm.page-cluster = 20
        vm.swappiness = 90
        net.ipv4.tcp_rfc1337=1
        net.ipv4.tcp_max_orphans = 65536
        net.ipv4.ip_local_port_range = 5000 65000
        net.core.somaxconn = 1024

    I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure an m2.xlarge server should serve more requests than 300; probably I am going wrong with my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300 second delayed] webservice to respond. Please help.

    Read the article

  • Disk fragmentation when dealing with many small files

    - by Zorlack
    On a daily basis we generate about 3.4 million small JPEG files. We also delete about 3.4 million 90-day-old images. To date, we've dealt with this content by storing the images in a hierarchical manner. The hierarchy is something like this:

        /Year/Month/Day/Source/

    This hierarchy allows us to effectively delete days' worth of content across all sources. The files are stored on a Windows 2003 server connected to a 14-disk SATA RAID6. We've started having significant performance issues when writing to and reading from the disks. This may be due to the performance of the hardware, but I suspect that disk fragmentation may be a culprit as well. Some people have recommended storing the data in a database, but I've been hesitant to do this. Another thought was to use some sort of container file, like a VHD or something. Does anyone have any advice for mitigating this kind of fragmentation?

    Additional info: The average file size is 8-14KB. Format information from fsutil:

        NTFS Volume Serial Number :       0x2ae2ea00e2e9d05d
        Version :                         3.1
        Number Sectors :                  0x00000001e847ffff
        Total Clusters :                  0x000000003d08ffff
        Free Clusters :                   0x000000001c1a4df0
        Total Reserved :                  0x0000000000000000
        Bytes Per Sector :                512
        Bytes Per Cluster :               4096
        Bytes Per FileRecord Segment :    1024
        Clusters Per FileRecord Segment : 0
        Mft Valid Data Length :           0x000000208f020000
        Mft Start Lcn :                   0x00000000000c0000
        Mft2 Start Lcn :                  0x000000001e847fff
        Mft Zone Start :                  0x0000000002163b20
        Mft Zone End :                    0x0000000007ad2000
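
    Before changing the storage design, the fragmentation suspicion can be quantified; a sketch using the built-in Server 2003 defragmenter and the Sysinternals contig tool (the sample path follows the hierarchy above and is a placeholder):

        :: volume-level fragmentation analysis report
        defrag d: -a -v
        :: per-file fragment counts for one day's worth of images
        contig -a -s d:\2010\06\01\*.jpg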

    Read the article

  • Mod_pagespeed, Varnish and Apache cache issues after new code pushes

    - by WerkkreW
    I have a rather strange issue. In my environment we are running a load-balanced cluster of 8 apache servers with a master-master MySQL backend. In front of apache we have Varnish in the cache layer. We have been running Apache mod_pagespeed for several weeks now, and for the most part it has been working great. The issue arises when we do fresh code updates from Git and any/all of the JS/CSS assets change.

    Basically the problem appears to be twofold. One: after the code push we generally take the opportunity to flush varnish, restart apache, and restart varnish. In doing this, all of the mod_pagespeed combined/minified files are cleared out, ensuring that all of the new JS/CSS assets are fresh. The problem is, upon doing this the file names that mod_pagespeed creates change, but the old files appear to still be cached for many people client side, leading to very unexpected results. However, if we do not restart apache, the changes to the files may or may not appear client side due to the cached minified assets.

    The simple solution is to disable mod_pagespeed; however, I would rather not do that, as it has made a fairly large impact in performance. I feel as if there must be a better way to deal with the inconsistencies in cache between the client and server, to prevent having people go to great lengths or perform a large number of page refreshes to see a working page. I can provide configuration snippets if anyone needs them. If you would like to inspect the site, source, headers, or anything, try the following addresses: http://wellplayed.org http://wellplayed.org/tv Thanks in advance!
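
    A gentler sequence than restarting apache may be to invalidate both cache layers explicitly after a push; a sketch assuming the default ModPagespeedFileCachePath and Varnish 3+ (the path and URL pattern are placeholders):

        # ask mod_pagespeed to drop its rewritten/minified resources
        touch /var/cache/mod_pagespeed/cache.flush
        # ban the varnish objects that reference .pagespeed. resource URLs
        varnishadm "ban req.url ~ \.pagespeed\."

    Browser caches holding old HTML are a separate issue, governed by the cache lifetimes set on the pages themselves.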

    Read the article

  • Alias for Drupal "Sites" folder with Apache on Windows Server 2008

    - by sgtbeano
    I'm having to move a number of sites from a LAMP stack to a WAMP one, provided by Zend, and I've hit a problem. Our architecture is a number of load-balanced web servers which have their own local webapp drives which are kept in sync, with one server acting as the master copy. There is then a separate DFS share provided to all web servers from our pillar SAN. Usually a Drupal install under our LAMP cluster would have the main Drupal web app in a local HTDOCS mount for each server, and the SITES directory within Drupal would then be symlinked out to the DFS or NFS share so that there is a common FILES and TMP directory.

    The problem I'm having is that there seems to be no equivalent of symlinks on Win Server 2008; shortcuts have a .lnk at the end, making Apache see them as a distinct file. So I've tried using an Alias call in the vhost file like this:

        <Location /drupal-626/sites>
            Order deny,allow
            Allow from all
        </Location>
        Alias /drupal-626/sites "Z:\Path to alternate sites directory"

    The root for this test is http://main-domain-url/drupal-626/. Unfortunately this isn't working, so I'm wondering if any of you have a solution which would work? Many thanks for taking the time to read this.
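
    Windows Server 2008 does have real directory symlinks (distinct from .lnk shortcuts) via mklink, which may map more closely onto the LAMP setup than an Alias; a sketch with placeholder paths, noting that links pointing at remote shares may also need the remote-evaluation policy enabled:

        mklink /D "C:\path\to\htdocs\drupal-626\sites" "Z:\Path to alternate sites directory"
        :: allow local-to-remote and remote-to-remote symlink evaluation if the target is a share
        fsutil behavior set SymlinkEvaluation L2R:1 R2R:1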

    Read the article

  • Viability of Mac OS X 10.9 Time Machine Server in office environment

    - by user197609
    Currently we have about 20 Mac OS 10.9 MacBook Pros (almost all with SSDs) backing up to individual USB drives. I'd like to consolidate these to one Drobo Thunderbolt drive array attached to a Mac Mini server (running 10.9 Server) using Time Machine Server.

    My question is, will this scale to 20 users? Examples I have seen seem to be 5 or 6 users tops, and this isn't easy for me to test (I'd rather not ask everyone to back up to the array and then switch back to USB drives if it brings our network to its knees).

    My primary concern is saturating our gigabit network, as Time Machine backs up every hour for every machine, so there would usually be a couple of people backing up at any given time. We also have some people occasionally on our 802.11ac network and not on ethernet (usually connected via 802.11n until people upgrade to newer machines), but most of the time people are connected to our Thunderbolt displays, which have a gigabit ethernet connection on them. Our network topology is one 32-port gigabit switch with 5 smaller gigabit switches at each desk cluster. The Mac Mini server is connected directly to the top-level switch.

    Update: Failing information from someone who has done this in practice, I suppose my question is really around how switches work. If three or four people are backing up simultaneously, and then another two (different) users transfer a file between each other, will they be able to transfer the file at gigabit speeds?

    Read the article

  • Methods and practices for managing a network that has no internet connection

    - by FaultyJuggler
    Originally asked in Super User but realized this belongs here. Long story short, I am setting up a network with 32 servers of varying specs that will be used for testing and development. We will be using Red Hat Linux. We also do not have a router as of yet and were looking into making one of the servers act as our router/DHCP etc. The small cluster will be on an isolated network with no internet. I can use external hard drives and discs to transfer anything from external sources onto machines on the network, so this isn't a locked-down secure network; it just won't have a direct connection to the outside world.

    I've worked on such setups before, but always long after they were set up. So I'm reaching out to see what everyone knows as far as how groups have handled the initial setup and maintenance of such a situation. What is the best way to get them all configured and up to date? What are the best ways to automate updates, network-wide installs, etc.?

    Given that I have large multi-terabyte external hard drives that would be used to drop whatever files are needed onto a central server, how do I then distribute those files and install their contents? I've done perl scripting, and some teammates have played with puppet, so we aren't completely in the dark; I just wanted to avoid reinventing the wheel, since this is a common challenge.
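
    For the "drop files on a central server and install from there" step, one common pattern is a local yum repository built from the RPMs carried in on the external drives; a sketch with placeholder paths and hostname (gpgcheck is disabled here purely for brevity):

        # on the central server: turn a directory of RPMs into a repository
        createrepo /srv/mirror/rhel
        # /etc/yum.repos.d/lab-local.repo on every node:
        #   [lab-local]
        #   name=Local lab mirror
        #   baseurl=http://central.lab/mirror/rhel
        #   enabled=1
        #   gpgcheck=0
        yum clean all && yum update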

    Read the article

  • Global Email Forwarding with EXIM?

    - by Dexirian
    Been trying to find a solution to this for a while without success, so here I go:

    I was given the task to build a high-availability, load-balanced network cluster for our 2 Linux servers. I did some work and managed to get DNS + SQL + web folders + mail synchronisation going between both. Now I would like my server 2 to only do mail and server 1 to only do web hosting. I transferred all the accounts from 1 to 2 using the WHM built-in account transfer feature. I created 2 different rsync jobs that sync, update, and delete the files for mail and websites.

    Now, I was able to successfully transfer 1 mail account from 1 to 2, and server 2 works flawlessly. All I had to do was change the MX entries to point to the new server and bingo. My problem is that some clients have their mail software configured so that it points to oldserver.domain.com. I can't make the (A) entry of oldserver.domain.com point to the new server, for obvious reasons. I thought of using .forward files and adding them to the home directories of the concerned users, but that would be very difficult. So my question is: is there a way to configure exim so that it will only forward mail to the new server? I need to change all the users so they use their mail on server 2 without them doing anything. Thanks!

    EDIT: To clarify my problem - some clients have their mail pointing to oldserver.xyz instead of mail.oldserver.xyz. I want to know if I can do something to avoid modifying the clients' configuration. I would also like to know if there is a way to find out which clients aren't properly configured.
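
    On the exim side, the old server can hand every message for the hosted domains straight to the new mail server with a manualroute router placed ahead of the local delivery routers; a hypothetical stanza (the hostname is a placeholder, and on a WHM/cPanel box it would have to go through the distribution's own exim configuration mechanism rather than a hand-edited exim.conf):

        forward_to_newserver:
          driver = manualroute
          domains = +local_domains
          transport = remote_smtp
          route_list = * mail.newserver.example.com

    Router order matters: if this router sits before the local mailbox routers, nothing for those domains is delivered locally on the old box.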

    Read the article

  • Tungsten for MySQL: Online Schema Upgrade

    - by Jason
    In a recent presentation the Continuent folks claim that they support "daylight maintenance" or online schema upgrades. See Clustering for the Masses: A Gentle Introduction to Tungsten for MySQL, especially pages 22-28. Is anyone using Tungsten for MySQL in this way? It sounds too good to be true. I also wonder if the Community Edition supports all of the features discussed in the presentation. They say elsewhere that it is not crippleware but in their own productization table "Zero downtime upgrade" appears to only be available in the more advanced versions. So I'm skeptical. Community support seems rather non-existent so a commercial license with support is probably warranted (they do not disclose pricing). I have not contacted them directly yet as I prefer community vetting but this solution, despite its value proposition and power to make an admin's life easier just doesn't seem to get the kind of attention it might warrant. If not Tungsten for MySQL, how do you handle online schema upgrades? MySQL Cluster (NDBENGINE) is not well-suited to web applications. Cheers

    Read the article

  • Building a small server farm

    - by RayQuang
    Hi, I am planning to set up a tech startup company that will provide web application solutions. Eventually we hope to diversify into different areas such as possibly social media or other services. For now we plan on running a high-demand (from 1,000 to 10,000 users in the first year) website running the application. This includes a MySQL database backend, email, and development servers.

    My question is then: what type of server arrangement will work best? That is to say, should I have a small cluster of ultra-high-power machines (e.g. top-of-the-range Xeons with 12GB RAM), or would it be better to have more, less powerful servers, load balanced? Should I go for 1-2U rack-mounted servers, or would it be better to use tower servers for maintainability?

    Finally, I would also like to know what kind of Internet connection and router I would need. I currently have 10 Mbit down and barely 1 Mbit up, but soon our area will have a fiber optic connection with international speeds of up to 25 Mbit/sec. Thanks in advance, RayQuang

    UPDATE: Sorry, I forgot to mention it: the platform I will be using is PHP with the APC code cache, probably running on Debian.

    Read the article

  • Nginx as reverse proxy: how to properly configure gateway timeout?

    - by user1281376
    We have configured Nginx as a reverse proxy to an Apache server farm, but I'm running into trouble with the gateway timeouts. Our goal, in human-readable form, is: "Deliver a request within one second, but if it really takes longer, deliver anyway", which for me translates into "Try the first Apache server in the upstream for max 500ms. If we get a timeout / an error, try the next one, and so on until we finally succeed." Now our relevant configuration is this:

        location @proxy {
            proxy_pass http://apache$request_uri;
            proxy_connect_timeout 1s;
            proxy_read_timeout 2s;
        }
        [...]
        upstream apache {
            server 127.0.0.1:8001 max_fails=1 fail_timeout=10s;
            server 10.1.x.x:8001 max_fails=1 fail_timeout=10s backup;
            server 10.1.x.x:8001 max_fails=1 fail_timeout=10s backup;
            server 10.1.x.x:8001 max_fails=1 fail_timeout=10s backup;
        }

    The problem here is that nginx seems to misunderstand this as "Try to get a response from the whole upstream cluster within one second and deliver a 50X error if we don't - without any limit on how long to try any upstream server", which is obviously not what we had in mind. Is there any way to get nginx to do what we want?
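
    The directive that maps most directly onto "try the next one" is proxy_next_upstream; a sketch of how the location block could look (the status-code list is an assumption about which failures should trigger a retry, and the two commented-out capped-retry directives need nginx 1.7.5 or newer):

        location @proxy {
            proxy_pass http://apache$request_uri;
            proxy_connect_timeout 1s;
            proxy_read_timeout 2s;
            # on an error or timeout, move on to the next server in the upstream block
            proxy_next_upstream error timeout http_502 http_503 http_504;
            # newer nginx can also cap the total retry budget:
            # proxy_next_upstream_timeout 5s;
            # proxy_next_upstream_tries 3;
        }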

    Read the article
