Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

Page 110 of 594

  • Using runit and monit to run / monitor services

    - by murtaza52
    I am configuring some services to run on an Ubuntu server. I was going through the links below, where they use runit to run the services and monit to monitor them:
    http://rubyworks.rubyforge.org/manual/monit.html
    http://rubyworks.rubyforge.org/manual/runit.html
    1) The services are all started through monit. 2) Monit in turn starts them using runit.
    What is the advantage of the above setup, where the services are run using runit via monit? Why use runit in the middle, instead of starting them directly with monit?
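
    For reference, a minimal sketch of the setup being described, where monit only delegates start/stop to runit's sv command (service name and paths are hypothetical):

      #!/bin/sh
      # /etc/sv/myapp/run -- runit run script (path and binary are examples)
      exec 2>&1
      exec /usr/local/bin/myapp

      # /etc/monit/conf.d/myapp -- monit check that delegates to runit
      check process myapp with pidfile /etc/sv/myapp/supervise/pid
        start program = "/usr/bin/sv start myapp"
        stop program  = "/usr/bin/sv stop myapp"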

    Read the article

  • Recurring network issues at the same time every day

    - by Peter Turner
    Something has been happening on my company's network at 9:30 every day. I'm not the sysadmin, but he's not a ServerFault guy, so I'm not privy to every aspect of the network; I can ask questions if follow-up is needed. The symptoms are the following: sluggish network and download speed (I don't notice it, but others do), and 3Com phones start ringing without anyone on the other end. We've got ports exposed to the public for a web server, a few other ports for communicating with our clients for tech support, and a VPN. We've got a Cisco ASA blocking everything else. We've got a smallish network (less than 50 computers/VMs on at any time), an Active Directory server and a few VM servers. We host our own mail server too. I'm thinking the problem is internal, but what's a good way to figure out where it's coming from?

    Read the article

  • SNMP HOSTMIB.MIB not loading?

    - by user11860
    Forgive me if the answer is something glaringly obvious, but I just can't seem to get access to any OIDs under the HOST branch in SNMP. I've used an SNMP browser to inspect a few of my systems and none of them show a HOST branch under ISO.ORG.DOD.INTERNET.MGMT.MIB-2. Any thoughts as to why? I'm looking to monitor a few computers' hardware resources via SNMP, and unfortunately all such OIDs live under the missing HOST branch.
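
    For what it's worth, a quick way to check whether an agent exposes the HOST-RESOURCES-MIB subtree at all is to walk its numeric OID directly (address and community string are placeholders):

      snmpwalk -v2c -c public 192.0.2.10 1.3.6.1.2.1.25
      # or, if the MIB files are installed locally:
      snmpwalk -v2c -c public 192.0.2.10 HOST-RESOURCES-MIB::hrSystem

    If the walk returns nothing, the agent most likely does not implement that MIB at all, rather than the browser failing to load it.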

    Read the article

  • Innotop and Monit to kill thread using too much resources

    - by pocesar
    Instead of restarting the whole MySQL process, sometimes I just want to kill the offending thread instead of making everything go down. Usually the spike in CPU happens when a bot is crawling the first pages of pagination on my site (over 70,000 paginated results, 45 items per page). Is there a way I could do this automatically using monit and innotop? I couldn't find relevant information on Google, which is why I'm asking here. If these two tools aren't up to the task, which ones should I use?
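
    As a rough illustration of the kind of script a monitor could invoke (the threshold and query filter are only examples, and credentials are assumed to come from ~/.my.cnf):

      #!/bin/sh
      # Kill query threads that have been running longer than 300 seconds.
      mysql -N -e "SELECT id FROM information_schema.processlist
                   WHERE command = 'Query' AND time > 300" |
      while read id; do
          mysql -e "KILL $id"
      done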

    Read the article

  • Linux NetSec/IDS Bridge

    - by Blackninja543
    What I am looking to make is a Linux system that acts as a bridge. It simply forwards any data sent on one device over to the next device. It does not attempt to block incoming attacks or redirect any traffic. What it does do is perform an IDS role on the network: any suspicious activity is logged and reported. Snort would be one such piece of software, but I was wondering what other solutions and ideas the rest of the community has.
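
    A minimal sketch of such a transparent bridge with Snort listening on it (interface names are assumptions):

      brctl addbr br0
      brctl addif br0 eth0
      brctl addif br0 eth1
      ip link set eth0 up
      ip link set eth1 up
      ip link set br0 up
      # run Snort in IDS mode on the bridge interface
      snort -i br0 -c /etc/snort/snort.conf -D

    Suricata and Bro (now Zeek) are other commonly used IDS engines that can run on the same kind of bridge.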

    Read the article

  • Setting up apache vhost for Icinga

    - by DKNUCKLES
    It's been a while since I've worked with Apache so please be kind - I'm also aware of this question but it hasn't been much help to me. I'd like to set up a simple vhost with Apache for my Icinga instance. Icinga is up and running and I can access it at x.x.x.x/icinga; however, I would like to be able to access it externally as well as internally. I have set up the /etc/hosts file, and the following is my barebones vhost statement in httpd.conf:

      <VirtualHost *:80>
          ServerAdmin [email protected]
          DocumentRoot /usr/share/icinga
          ServerName icinga.domain.com
          ErrorLog logs/icinga.com-error_log
          CustomLog logs/dummy-host.example.com-access_log common
      </VirtualHost>

    I also have the following in my .htaccess file:

      <Directory>
          Allow From All
          Satisfy Any
      </Directory>

    An entry has been made for the instance in the Windows DNS server on my network, but when I try to access the site by URL I am greeted with an Internal Server Error. Reviewing the /var/log/icinga.com-error_log I see the following entry:

      [Thu Dec 13 16:04:39 2012] [alert] [client 10.0.0.1] /usr/share/icinga/.htaccess: <Directory not allowed here

    Can someone help me spot the error of my ways?
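
    The logged alert says that a <Directory ...> section is not allowed inside .htaccess; such blocks may only appear in the server configuration. One possible fix, sketched here rather than tested, is to drop the wrapper from .htaccess and put the directory block inside the vhost instead (names and paths are taken from the question; the log file names are examples):

      <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName icinga.domain.com
          DocumentRoot /usr/share/icinga
          <Directory /usr/share/icinga>
              Order allow,deny
              Allow from all
              Satisfy Any
          </Directory>
          ErrorLog logs/icinga.com-error_log
          CustomLog logs/icinga.com-access_log common
      </VirtualHost>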

    Read the article

  • Is the (Ubuntu) Linux file copying algorithm better than Windows 7's?

    - by Sarath
    Windows file copying has been a real mess ever since Windows Vista. Even though Microsoft claims to have improved the performance, from a user perspective it's not very visible. Even with a single file, the copy window spends too much time on 'Calculating', and sometimes the dialog remains active even after 100% completion. At the same time, I was backing up some files in Ubuntu Linux and felt it was really fast - it might just be a feeling caused by faster UI updates. I read an informative post by Jeff Atwood a few years back on Windows file copying. My specific questions are: Is (Ubuntu) Linux file copy performance better than Windows 7's? Do both Windows and Linux make use of multiple threads and a pipelining mechanism to improve the speed? If yes, which one is better?

    Read the article

  • nagiosgraph new services not showing

    - by Eleven-Two
    I am using Nagios Core with Nagiosgraph and had only enabled graphing for CPU usage for a while. This worked fine, but now I want to add some more services (for example memory usage). The new services are not working (no RRD data is generated). The Nagiosgraph site only says "no data available", and I get no error in the Apache log, nagiosgraph.log or nagiosgraph-cgi.log. The new services are standard services (nsclient++ MEMUSE for example), and of course they are included in the map file. If I execute the checks manually, they also show perfdata. I added the services by enabling the "graphed-service" use. Did I miss something?
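
    For context, a hedged sketch of how such a service definition typically looks when the graphed-service template is applied (host name and thresholds are placeholders; the template is usually where process_perf_data and the graphing action_url get set):

      define service {
          use                  generic-service,graphed-service
          host_name            winhost01
          service_description  Memory Usage
          check_command        check_nt!MEMUSE!-w 80 -c 90
      }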

    Read the article

  • How to monitor I/O svctm every 5 minutes using Nagios?

    - by sabya
    I want to collect samples of iostat's svctm and await every 5 minutes from all of my servers and store them in Nagios. I want the values for what happened during each 5-minute window, not since boot time (iostat's first report gives values since boot time). How can I do this in Nagios? EDIT: The tps should NOT be calculated as the number of transactions since reboot divided by uptime. What I want is the number of transfers in the last X minutes divided by X*60.
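
    A rough sketch of a check script along those lines: ask iostat for two extended reports 300 seconds apart and keep only the second one, which covers just that interval rather than the time since boot (device filtering and threshold logic are omitted):

      #!/bin/sh
      # Print only the second iostat report (the 5-minute interval sample).
      iostat -dx 300 2 | awk '/^Device/ {n++} n == 2'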

    Read the article

  • i5+SSD or i7+HDD: which gives better performance? [closed]

    - by Cas Sakal
    Which performs better for a developer on a notebook (running VS.NET and SQL Server, no gaming): (A) i5 540M + SSD (Intel), or (B) i7 720M + 7200 RPM HDD (Western Digital)? In short, can the performance difference between the i5 540M and the i7 720M be compensated for by using a solid-state drive instead of a hard drive? Thank you, cas

    Read the article

  • System Center Operations Manager (SCOM) role combinations

    - by KAPes
    We are evaluating the System Center Operations Manager 2007 R2 product and would like to know which roles can be combined onto a single server. For example, can the root management server and the reporting server be on one box or not? My environment is about 450 servers, mostly Exchange and Domain Controllers, plus a few OCS servers.

    Read the article

  • default page in pnp4nagios

    - by bluszcz
    I am using pnp4nagios along with Nagios. Everything seems to be integrated properly - I have "extra action" icons next to every host and service which link to the pnp4nagios graph. However, when I go to https://x.x.x.x/pnp4nagios/ it always changes the URL to https://x.x.x.x/pnp4nagios/graph?host=webhost01. How can I turn off this behaviour? I would like /pnp4nagios/ to show the collected graphs from all servers.

    Read the article

  • What would cause a query run from SSMS on the local box to be slower than from a remote box

    - by Racter
    When I run a simple query such as "Select Column1, Column2 from Table A" from within SSMS running on my production SQL Server, the results seem to take extremely long (45 min). If I run the same query from my dev system's SSMS connecting to the production SQL Server, the results return within a few seconds (<60 sec). One thing I have noticed is that if the system was just rebooted, performance is good for a bit. It is hard to pin down a time frame: I have had it start running slowly very quickly after a reboot, but at most it performed well for 20 minutes and then started acting up. Also, just restarting the SQL service does not resolve the issue or provide a temporary performance boost. Specs for the server are: Windows Server 2003 Enterprise Edition SP2, 4 x Intel Xeon 3.6 GHz, 6 GB system memory, Active/Active cluster, SQL Server 2005 SP2 (9.0.3239).

    Read the article

  • Amazon EC2 Elastic Load Balancing - strategy for zero downtime server restart

    - by Yoga
    I have 5 web servers (Apache/mod_perl) behind Amazon EC2 Elastic Load Balancing. When I deploy code to the web servers, I do this for each machine: shut down Apache, update the code, start the server again, and proceed to the next server. I think that when my server is shut down, ELB will not distribute requests to it, but what about the requests it is still serving? I think a better approach is: (1) stop accepting new requests from the ELB; (2) wait a while, and shut down the web server only after all requests have been answered; (3) update the code; (4) start the server again. But how do I perform (1) and (2) from my local server? Do I need to use the AWS API, or is there an easier way to do it? Thanks.
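
    Steps (1) and (2) above roughly correspond to deregistering the instance from the ELB, waiting for in-flight requests to drain, and registering it again afterwards. A hedged sketch with the AWS CLI (load balancer name, instance ID and drain time are placeholders):

      # take the instance out of rotation
      aws elb deregister-instances-from-load-balancer \
          --load-balancer-name my-elb --instances i-0abc123
      sleep 60   # give in-flight requests time to finish
      # ... stop Apache, deploy the new code, start Apache ...
      aws elb register-instances-with-load-balancer \
          --load-balancer-name my-elb --instances i-0abc123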

    Read the article

  • log execution of certain commands on linux

    - by jlsksr
    I have to maintain a system (Debian) on which several users are allowed to install programs, so I would like to log, for example, whenever anyone executes "apt-get install" or "apt-get purge", so that I can keep track of manually installed packages. I'm looking for a general way to achieve this; it's not just APT, but several programs/scripts etc. Any ideas? Edit: a Google search with a few different keywords brought up these:
    http://serverfault.com/questions/201221/how-to-log-every-linux-command-to-a-logserver
    http://stackoverflow.com/questions/15698590/how-to-capture-all-the-commands-typed-in-unix-linux-by-any-user
    http://sourceforge.net/projects/rootsh/
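
    On Debian, apt already records installs and removals in /var/log/apt/history.log and /var/log/dpkg.log; for the more general case of logging arbitrary commands, an auditd watch is one common approach (the key name is arbitrary):

      # log every execution of the watched binaries
      auditctl -w /usr/bin/apt-get  -p x -k pkg-mgmt
      auditctl -w /usr/bin/aptitude -p x -k pkg-mgmt
      # review the recorded executions later
      ausearch -k pkg-mgmt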

    Read the article

  • What kind of proxy acl rules should be applied?

    - by user42891
    I am trying to block sites in Squid based on this article. Assuming you want to block access to Yahoo (e.g. http://www.yahoo.co.jp, http://www.yahoo.com, http://www.yahoo.co.in), you would ideally want to block all of the above URLs; if I use a regular expression and search for something called "yahoo", it does seem to get blocked. We are just interested in applying rules that would be most commonly used across all companies, e.g. social networking sites (Facebook, Orkut), porn sites (e.g. sex), gaming sites (games), movie and song download sites, and sites where users can upload data (e.g. RapidShare). What would be a common set of effective rules for achieving the above?
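
    By way of illustration, a sketch of squid.conf ACLs along those lines (the domain lists and patterns are examples only; note that a bare url_regex such as "sex" will over-block, since it also matches strings like "middlesex"):

      acl social  dstdomain .facebook.com .orkut.com
      acl uploads dstdomain .rapidshare.com
      acl porn    url_regex -i sex
      acl games   url_regex -i games
      http_access deny social
      http_access deny uploads
      http_access deny porn
      http_access deny games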

    Read the article

  • Latency with regular alternations (pfSense, network)

    - by Tillebeck
    Any idea why the graphs look like this? These are two pfSense boxes, and I have not looked into where they ping to or whether it is related to the server they ping. The traffic graphs follow a normal 24-hour cycle and are not related to either of the latency graphs. Img 1: highly frequent alternation in latency, which just started a few days back; the peaks are not perfectly regular but vary from 40 minutes to 1 hour. Img 2: this is a different router on another internet connection; most of our routers show this kind of latency graph, so for our setup it is "normal". Please note that the second graph covers a whole week and not just a few hours.

    Read the article

  • SearchIndexer constantly running and drastically reducing laptop performance

    - by Sakthy
    I get the error below on my laptop, and it drastically reduces the performance of my machine because the Indexer is constantly running. Please identify a solution other than re-installation.

      Faulting application name: SearchIndexer.exe, version: 7.0.7600.16385, time stamp: 0x4a5bcdd0
      Faulting module name: TQUERY.DLL, version: 7.0.7600.16385, time stamp: 0x4a5bdb21
      Exception code: 0xc0000006
      Fault offset: 0x0002e5c2
      Faulting process id: 0xbe0
      Faulting application start time: 0x01cd0752bd78cce1
      Faulting application path: C:\Windows\system32\SearchIndexer.exe
      Faulting module path: C:\Windows\system32\TQUERY.DLL
      Report Id: 16ce8a2f-7346-11e1-840a-a92a5ee507c3
      EventID: 1000

    Read the article

  • Monitor status while using VNC

    - by kumar
    After connecting to a VNC server on my desktop (remotely) via VNC viewer, is it possible to know whether the monitor connected to the machine is switched on or not? Simply put: from the command line, how do you tell whether the monitor is on or off? Basically, I am a bit worried about privacy, as my monitor can be viewed by anyone while the machine is being accessed remotely. Any solution? Obviously there is an option to switch off the monitor when starting the VNC server on the remote side, but I am looking for a better way to control the monitor (if possible) remotely. Thanks!
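
    If the VNC server is attached to the real X display (e.g. x11vnc on :0) rather than a virtual framebuffer, and the monitor honours DPMS, its power state can be queried and forced from the command line; a rough sketch:

      # query the current DPMS / monitor power state of display :0
      xset -display :0 q | grep -i monitor
      # force the physical monitor off (and back on later)
      xset -display :0 dpms force off
      xset -display :0 dpms force on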

    Read the article

  • Automatically reboot Windows 8 if there is no internet connectivity

    - by GrapeCamel
    I have a media server located in a very inconvenient part of my house. Occasionally I have to reset my router, or it resets itself. The issue is that the PC then loses connectivity for some reason, and I am forced to walk outside, around the house, into the basement, over a bunch of toys and weights and boxes, to push a button to reboot it. I would love to have it check itself every 5-10 minutes and automatically reboot if it is unable to ping a given address/IP. Any ideas on how to accomplish this?
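
    One low-tech sketch, assuming a Task Scheduler job running every few minutes is acceptable: a batch file that reboots when a ping to a known address gets no reply (the target address is only an example):

      @echo off
      rem reboot if no echo reply (TTL=) comes back from the test address
      ping -n 1 8.8.8.8 | find "TTL=" >nul
      if errorlevel 1 shutdown /r /t 0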

    Read the article

  • mpstat on slackware 13.0 shows no utilization

    - by conartist6
    As the title says, the mpstat command, executed on Slackware 13.0, continuously shows almost no processor utilization of any sort. In fact, none of the output ever seems to change at all. The system is a dual-processor board with two hyperthreaded P4 Xeons. Any ideas?

      08:50:06 PM  CPU  %user  %nice  %sys  %iowait  %irq  %soft  %steal  %idle   intr/s
      08:50:06 PM  all   0.38   0.00  0.03     0.03  0.00   0.00    0.00  99.56  1510.46
      08:50:06 PM    0   0.50   0.00  0.05     0.10  0.00   0.01    0.00  99.33    11.90
      08:50:06 PM    1   0.32   0.00  0.03     0.01  0.00   0.00    0.00  99.64     0.00
      08:50:06 PM    2   0.38   0.00  0.03     0.01  0.00   0.00    0.00  99.58     0.00
      08:50:06 PM    3   0.29   0.00  0.02     0.00  0.00   0.00    0.00  99.68     0.00

    This is, literally, the only output I can get from the program. No values ever change.

    Read the article

  • UNIX tool to dump a selection of HTML?

    - by jldugger
    I'm looking to monitor changes on websites and my current approach is being defeated by a rotating top banner. Is there a UNIX tool that takes a selection parameter (id attribute or XPath), reads HTML from stdin and prints to stdout the subtree based on the selection? For example, given an html document I want to filter out everything but the subtree of the element with id="content". Basically, I'm looking for the simplest HTML/XML equivalent to grep.
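
    One possibility along those lines is xmllint from libxml2, which can parse HTML and evaluate an XPath expression against it (the URL and id are placeholders):

      # print only the subtree of the element with id="content"
      curl -s http://example.com/page.html |
          xmllint --html --xpath '//*[@id="content"]' - 2>/dev/null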

    Read the article

  • What's a good tool for checking from my own machine that my server is up?

    - by chico
    I'm looking for a good tool (web site or not) that I can use to do a simple check of whether my web server is accessible from outside the LAN (it's serving on a non-standard port). To give some context, I've gone through this problem: can't access my IP from outside. Even the tools I've found are not really working. Currently, to fetch the HTML I serve, I use an online bash tool and run:

      curl <my ip>:<my port> \
          | sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g; s/"/\&quot;/g; s/'"'"'/\&#39;/g'

    I'm looking for a simple tool that can display the HTML properly, or just show the raw text, without resorting to curl and sed HTML escaping.
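
    A somewhat simpler check, if all that matters is whether the server answers at all, is to ask curl for just the HTTP status code (it prints 000 when the connection fails):

      curl -s -o /dev/null -w '%{http_code}\n' http://<my ip>:<my port>/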

    Read the article

  • gmetric data submitted doesn't follow dmax value

    - by 580farm
    I have a custom script that queries a metrics port for an application I'm running and submits the parsed values to Ganglia via gmetric. The script runs every minute, so I submit the data to Ganglia using the following gmetric options:

      /usr/sbin/gmetric -g ec2 -s positive -t uint32 -d 600 -n "$NAME" -v $VALUE -x 60

    But for some reason there are still gaps in the graphed data. Is there something in my formatting that is preventing the dmax/TTL of the last metric received from being honored? Has anyone who does custom metric collection run into this problem before and can shed some insight or provide some tips on how best to correct it?

    Read the article

  • Monitor File Changes On Windows System

    - by user10487
    I am looking for a utility that can take a snapshot of the files in directories that I am interested in and then compare that snapshot to the current state of the system and show me any files that have been added, changed, or deleted. Does anyone know of solutions that provide this functionality? Thanks, Nate

    Read the article
