Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • Using hdparm for better performance on Web Servers

    - by Rishav
    I just heard about using hdparm to optimize the hard disk performance of a server. Is this common practice? What file systems do you use? I generally deploy on the second-to-last release of Ubuntu for stability reasons; do you use other filesystems, or distributed file systems, from the get-go? Do the hdparm settings change for different file systems? I haven't tried this yet, so how much difference do changes like this make?
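
    For context, a minimal sketch of the usual hdparm workflow, assuming the disk is /dev/sda: benchmark first, try a setting at runtime, then re-benchmark. Settings applied this way don't survive a reboot; on Ubuntu they are typically persisted in /etc/hdparm.conf:

        # Benchmark cached and buffered read speeds before changing anything
        hdparm -tT /dev/sda

        # Show current write-cache and filesystem readahead settings
        hdparm -W /dev/sda
        hdparm -a /dev/sda

        # Try a larger readahead at runtime, then re-run the benchmark to compare
        hdparm -a 1024 /dev/sda
        hdparm -tT /dev/sda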

    Read the article

  • Linux users traffic measurement

    - by Claudiu
    I want to measure the traffic (upload) generated by each user on a Linux system. Each user runs an rTorrent instance on a specified port. Users can also generate traffic through the FTP server (vsftpd). Is there a tool that can monitor traffic for a specified port and for FTP users?
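
    Not a full answer, but one hedged building block: iptables rules with no target act as pure counters, so per-port (for rTorrent) and per-UID (for locally generated traffic) accounting can be scripted. The port number and username below are assumptions:

        # Count upload traffic leaving one user's rTorrent port (assumed 51001)
        iptables -A OUTPUT -p tcp --sport 51001

        # Count all locally generated traffic for a given user by UID
        iptables -A OUTPUT -m owner --uid-owner alice

        # Read the per-rule byte counters (exact values with -x)
        iptables -L OUTPUT -v -n -x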

    Read the article

  • Flexible traffic & bandwidth monitor

    - by BrNathan
    I have looked around, but have not found anything to meet our needs. I need something that can log all connections & bandwidth consumption. We need it for analysis: by protocol, source IP (& MAC if possible), destination, etc. Ideally we are looking for something that can produce custom graphs & also uses MySQL. All connections go through one server on a bridged connection (2 network cards), so it is easy to pick up traffic. We are not concerned so much with internal LAN traffic as with what passes in & out to the firewall. Thanks for your suggestions. Update: I use Ubuntu 10.04
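
    Until a full package turns up, a hedged sketch of raw accounting on the bridge using iptables counters (the sysctl makes bridged traffic visible to iptables; ports are examples, and a logging pipeline such as ulogd would still be needed to get per-connection records into MySQL):

        # Make traffic crossing the bridge visible to iptables
        sysctl -w net.bridge.bridge-nf-call-iptables=1

        # Per-protocol counters on forwarded traffic
        iptables -A FORWARD -p tcp --dport 80
        iptables -A FORWARD -p tcp --dport 443
        iptables -A FORWARD -p udp

        # Inspect exact byte/packet counts per rule
        iptables -L FORWARD -v -n -x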

    Read the article

  • How to see if turbo boost is working on I7 860 CPU?

    - by Jan Derk
    I just built myself a new system with an Intel i7 860 CPU. When loading it with a single-threaded application like Super PI, CPU-Z shows 2.933 GHz as the speed. My understanding is that the i7 goes into Turbo Boost mode at up to 3.46 GHz on a single core. How can I check that? Is there a utility to monitor CPU speed per core?

    Read the article

  • Identical traffic

    - by Walter White
    Hi all, I am running an application server and logging all requests for later analysis. One interesting pattern I noticed last night: a visitor from Texas on FiOS shared identical traffic with a Blue Coat proxy in California. What would cause the traffic to be identical? For every request the visitor made, Blue Coat made one within milliseconds of it. If it is caching, why would there be identical requests? Wouldn't it go through the cache/proxy on their end, so that I would only see the proxied request? I'm just curious; it's an interesting pattern with similarities to a DDoS attack, but with far fewer resources. Is it possible that the visitor had malware on their computer? Any other ideas? Walter

    Read the article

  • server attack monitor

    - by Basit
    We've been getting some attacks on our server, I think, because the server goes down every day now. I want to monitor what is causing it to go down, and whether it's an attack from some site or a crawler doing it. Is there any tool for this? If not, what should I do to find out what is causing the problem? Edit: my server is Linux and I have the cPanel control panel. I haven't checked the logs and haven't done anything yet to see what is causing the problem; that's why I came here to ask how to find out. A guy from our host said it's the server RAM and told us to add more, but there aren't many sites on the server and not much load from those sites either, so I don't see what our 2 GB of RAM is getting used on. So I want to find out :/
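
    As a hedged first pass when the box is reachable again, a few standard commands narrow down memory exhaustion versus a flood (nothing here is cPanel-specific):

        # Load, memory, and any OOM-killer evidence
        uptime
        free -m
        dmesg | grep -i 'out of memory'

        # Top memory consumers right now
        ps aux --sort=-%mem | head

        # Connections per remote IP; one IP with hundreds is a crude flood indicator
        netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head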

    Read the article

  • How can I force a MySQL table to become corrupted?

    - by Rory McCann
    I have written a simple Nagios plugin that calls mysqlcheck (which checks for corrupted tables) and gives a warning if any are corrupt. However, none of my tables are corrupt right now, so I'm not 100% sure my plugin works. I have a dev server that's not mission-critical. How can I force one (or any) of the tables there to become corrupt so that I can test my Nagios alert? For the record, the server is Ubuntu Dapper and MySQL is version 5.0.
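
    A hedged recipe that works for MyISAM tables (InnoDB is much harder to corrupt on purpose): stop MySQL and clobber part of a disposable table's index file on disk. The database/table name and path below are assumptions:

        /etc/init.d/mysql stop

        # Overwrite the first kilobyte of the index file of test.junk (MyISAM)
        dd if=/dev/urandom of=/var/lib/mysql/test/junk.MYI bs=1024 count=1 conv=notrunc

        /etc/init.d/mysql start

        # mysqlcheck should now flag the table as corrupt
        mysqlcheck test junk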

    Read the article

  • What else is needed to get iptables to log into this file I created?

    - by anthony01
    I want to log iptables DROPs and intrusion attempts. First, I put --log-prefix "iptables: " at the end of every rule in my iptables rules file, but that doesn't work; it reports a syntax error. So where should that option go? (I would want it included in the saved rules file.) Secondly, I created a file iptables.conf within /etc/rsyslog.d/ and put the following inside it:

        :msg, startswith, "iptables: " -/var/log/iptables.log
        & ~

    I assume that at this stage I'm supposed to restart the rsyslog daemon. What else is needed to do what I'm attempting? Thanks a lot
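
    For what it's worth, --log-prefix is only accepted on rules whose target is LOG, so the usual pattern is a dedicated LOG rule immediately before each DROP; a hedged sketch (the port is just an example):

        # LOG is non-terminating, so the packet falls through to the DROP rule
        iptables -A INPUT -p tcp --dport 23 -j LOG --log-prefix "iptables: " --log-level 4
        iptables -A INPUT -p tcp --dport 23 -j DROP

        # Persist the rules, then reload rsyslog to pick up the new filter
        iptables-save > /etc/iptables.rules
        service rsyslog restart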

    Read the article

  • monitor network bandwidth via ssh

    - by ServerSideX
    I'm running a CentOS 6.4 server with cPanel. WHM (the admin panel) shows about 100 GB of bandwidth this month. However, the server's RTG shows 3.4 TB over the last 30 days, 121 GB in the past 24 hours alone. That doesn't make sense, and I'm trying to trace the cause. It's a shared web hosting server for approximately 300 domains. I would appreciate help tracking this down somehow. I use the CSF firewall and ConfigServer eXploit Scanner as well. Day http://s10.postimg.org/ti1qhj5mx/day.png Week http://s7.postimg.org/8ho8kds57/week.png
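
    A hedged live view over SSH often shows which IP or process accounts for the gap; the tools below are standard in the CentOS/EPEL repos, and eth0 is an assumption:

        # Per-connection bandwidth, with port numbers shown
        iftop -i eth0 -P

        # Per-process bandwidth, useful for spotting a rogue daemon
        nethogs eth0

        # Interactive per-port/protocol statistics over a sampling window
        iptraf-ng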

    Read the article

  • How do I determine the cause of a sustained spike in mysql queries/activity?

    - by mattmcmanus
    So this is more of an "I'm trying to learn how this works" question rather than a "there is a serious problem I can't figure out!" question. I'm setting up a VPS and have been tweaking and changing things here and there. I installed Munin recently (about two days ago) and yesterday I noticed a significant increase in MySQL activity, so now my curiosity is going crazy. How do I set up and access MySQL's query log? I have about 5 databases on the server and want to see which one is getting all the action. Is there anything else I can do to keep a better eye on what's going on? Here are the graphs. As you can tell, it's not that much activity at all, but I'm just curious about the change. The sites on the server right now do not get a lot of traffic. It's running a couple of Drupal sites, only one of which is live. The live one hasn't had a spike in traffic, and its last spike was 250 visitors, so it's barely a spike at all.
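
    On MySQL 5.1 and later the general query log can be toggled at runtime, which is a hedged, low-effort way to answer "which database is busy" (it logs every statement, so leave it on only briefly; the log file path is an assumption):

        mysql -e "SET GLOBAL general_log_file = '/var/log/mysql/general.log'"
        mysql -e "SET GLOBAL general_log = 'ON'"

        tail -f /var/log/mysql/general.log

        mysql -e "SET GLOBAL general_log = 'OFF'"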

    Read the article

  • How to monitor the temperature of a HP Procurve 3500 switch via SNMP

    - by Murali Suriar
    I am attempting to poll the temperature of an HP ProCurve 3500YL switch remotely using SNMP. Looking at this MIB, it appears that the following OIDs within the hpProcurveSysMib should provide the data I need:

        hpCpuTemperature         1.3.6.1.4.1.11.2.3.7.11.17.7.1.1.1.6
        hpPowerSupplyTemperature 1.3.6.1.4.1.11.2.3.7.11.17.7.1.1.1.7
        hpChassisTemperature     1.3.6.1.4.1.11.2.3.7.11.17.7.1.1.1.8

    However, whenever I attempt to access these OIDs, I receive the response:

        SNMPv2-SMI::enterprises.11.2.3.7.11.17.7.1.1.1.6 = No Such Object available on this agent at this OID

    Further investigation reveals that the switch in question does not appear to implement the parent hpProcurveSystem MIB:

        SNMPv2-SMI::enterprises.11.2.3.7.11.17.7.1.1 = No Such Object available on this agent at this OID

    Does anyone know of an alternative MIB implemented by the 3500 that will allow its temperature to be polled automatically?
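
    Short of knowing the right MIB in advance, a hedged way to see what the switch actually exposes is to walk the whole HP enterprise subtree and grep for sensor- or temperature-looking objects (community string and hostname are placeholders):

        # Dump everything under the HP enterprise OID, in numeric form
        snmpwalk -v2c -c public -On switch.example.com 1.3.6.1.4.1.11 > hp-walk.txt
        grep -i -e sensor -e temp hp-walk.txt

        # With the ProCurve MIB files installed, the same walk resolves names
        snmpwalk -v2c -c public -m ALL switch.example.com enterprises.11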

    Read the article

  • How can I see which applications have accessed a certain file within a given time period on Linux?

    - by Nikolaidis Fotis
    Is it possible on Linux to find out which applications have accessed a certain file in the last 24 hours? I've come up with a few possible approaches:

    - Watching lsof. It works, but it's constrained to watch's granularity.
    - inotify sounds good, but it provides no information about the application accessing the file.
    - auditd may be useful, but I haven't checked it yet.

    What ways can I see which applications have accessed a certain file within a given time period?
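
    Of the three, auditd records exactly this; a hedged sketch, with a hypothetical file path and key name:

        # Watch reads, writes and attribute changes, tagged with a search key
        auditctl -w /etc/myapp.conf -p rwa -k myapp-conf

        # Later: every access in the last day, with executable names resolved
        ausearch -k myapp-conf -ts yesterday -i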

    Read the article

  • Nagios3: Conditional operators for service checks?

    - by Dave
    I'm trying to set up Nagios to monitor my various hosts, using hostgroups to define 'machine roles', against which I run services that check the machines by role. However, I'd like conditional operators that would let me run a service check against the intersection of two hostgroups rather than their union, i.e. using &&, ||, or () operators. For example, imagine I have the following servers:

    www-eu: Linux WWW (Apache) server, in the EU
    www-us: Windows WWW (IIS) server, in the US (West Coast)
    ftp-eu: Linux FTP server, in the EU
    ftp-us: Windows FTP server, in the US

    I would want to create the following hostgroups:

    US-Servers: www-us, ftp-us
    EU-Servers: www-eu, ftp-eu
    WWW-Servers: www-us, www-eu
    FTP-Servers: ftp-us, ftp-eu

    Now say I'm interested in checking the HTTP response time of my web servers, that this particular Nagios service runs from the US (West Coast), and that I have a command called check_http_response_time. It checks the responsiveness of the HTTP server, taking an argument that defines the maximum response time before raising critical. My command might look like:

        check_http_response_time $HOSTNAME$ 50

    Traditionally, I can run my checks by specifying a list of hosts or hostgroups:

        define service{
            use                 local-service
            hostgroup_name      WWW-Servers   ; www-us, www-eu
            servicegroups       WWW Checks
            service_description Check HTTP Response Time
            check_command       check_http_response_time!50
        }

    However, with the above service definition, given that my Nagios service is in US West, I could reasonably expect my EU server to return critical. Really, I want a different threshold for each region (50 for US West, 200 for the EU). I would have to permute my services per host and set a custom threshold on each, or alternatively permute my hostgroups by role and region (i.e. WWW-Servers-EU) and run region-specific thresholds against those. Though the latter is better, both are much messier than I'd like. What I would love, and what this post is asking for, is a way to use hostgroups to perform an intersection using conditional logic, rather than a simple union. It might look like:

        define service{
            use                 local-service
            hostgroup_name      WWW-Servers && US-Servers
            servicegroups       WWW Checks
            service_description Check HTTP Response Time
            check_command       check_http_response_time!50
        }

    It would then run the check only against servers that are in both WWW-Servers and US-Servers; in my example, just www-us. The benefits of such a feature would be significant for large-scale Nagios configurations. Is this feature available? If it isn't, will it be available in the future? Is there an alternative way to accomplish this in the most recent Nagios version? Any tips/suggestions are most appreciated! Dave
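
    For what it's worth, stock Nagios 3 treats a comma-separated hostgroup_name as a union, and so far as I know has no intersection operator, so the workaround remains the one dismissed above: materialise each intersection as its own hostgroup. A hedged sketch in the same format as the definitions in the question:

        define hostgroup{
            hostgroup_name  WWW-US-Servers
            alias           Web servers in US West
            members         www-us
        }

        define service{
            use                 local-service
            hostgroup_name      WWW-US-Servers
            service_description Check HTTP Response Time
            check_command       check_http_response_time!50
        }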

    Read the article

  • an unknown ip on network

    - by Ahmed safan
    In our office we have many PCs, all of them with static IP addresses. We had a problem with one server with IP 192.168.1.10 dropping off the network occasionally. I unplugged the network cable from the server, pinged 192.168.1.10 from another host, and there was still a response. I searched all the PCs to see if any had that IP, but I didn't find one. I changed the server's IP to work around the problem, but I still find this rogue device using 192.168.1.10 on the network -- how can I figure out what it is? Could it be the IP of a virtual machine on someone's PC?
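
    A hedged way to identify the device is to grab its MAC address from the ARP cache and look up the vendor prefix; the first three octets (the OUI) often reveal whether it's a VM, a printer, or a particular NIC brand:

        # Populate the ARP cache, then read the MAC behind the IP
        ping -c 3 192.168.1.10
        arp -n 192.168.1.10

        # Example: an OUI of 00:0C:29 belongs to VMware virtual NICs

    On a managed switch, the MAC address table would additionally pin down the physical port the device hangs off.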

    Read the article

  • Stripping a spike from rrdtool when removespike.pl doesn't find any

    - by raccettura
    I know that when rrdtool (1.4 here) graphs network traffic and the host is restarted, a spike is a pretty normal thing to see. In the past I've just run the removespike.pl script hosted by the author, it strips the spike, and I'm good to go. The last few times I've rebooted, removespike.pl has found no spikes, but it's obvious that there are some. So my question is: how can I easily remove these spikes and get my graphs usable again? Right now they're so skewed they're meaningless.
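
    When the script comes up empty, a hedged manual fallback is to dump the RRD to XML, blank the absurd values by hand, and restore it (filenames are examples):

        # Always work on a copy
        cp traffic.rrd traffic.rrd.bak
        rrdtool dump traffic.rrd > traffic.xml

        # Edit traffic.xml, replace the spike's <v>...</v> values with NaN, then:
        rrdtool restore traffic.xml fixed.rrd
        mv fixed.rrd traffic.rrd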

    Read the article

  • Can someone explain RAID-0 in plain English?

    - by Edward Tanguay
    I've heard about and read about RAID throughout the years and understand it theoretically as a way to help e.g. server PCs reduce the chance of data loss, but now I am buying a new PC which I want to be as fast as possible, and I have learned that having two drives can considerably increase the perceived performance of your machine. In the question Recommendations for hard drive performance boost, the author says he is going to RAID-0 two 7200 RPM drives together. What does this mean in practical terms for me with Windows 7 installed? E.g. can I buy two drives, go into the device manager and "RAID-0 them together"? I am not a network administrator or a hardware guy, I'm just a developer who is going to have a computer store build me a super fast machine next week. I can read the Wikipedia page on RAID, but it is just way too many trees and not enough forest to help me build a faster PC:

        RAID-0: "Striped set without parity" or "Striping". Provides improved performance and additional storage but no redundancy or fault tolerance. Because there is no redundancy, this level is not actually a Redundant Array of Inexpensive Disks, i.e. not true RAID. However, because of the similarities to RAID (especially the need for a controller to distribute data across multiple disks), simple stripe sets are normally referred to as RAID 0. Any disk failure destroys the array, which has greater consequences with more disks in the array (at a minimum, catastrophic data loss is twice as severe compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 drive, the data is broken into fragments. The number of fragments is dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking, so any error is unrecoverable. More disks in the array means higher bandwidth, but greater risk of data loss.

    So in plain English, how can "RAID-0" help me build a faster Windows 7 PC that I am going to order next week?
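
    In practical terms, software striping on Windows 7 is done with dynamic disks (via the Disk Management console or diskpart), not Device Manager. A hedged sketch of a diskpart session, assuming the two drives are secondary data disks (disk 1 and disk 2) whose contents can be destroyed; note the Windows system volume itself can't live on a software stripe set, so striping the boot drive would need firmware/hardware RAID from the shop building the machine:

        diskpart
        DISKPART> select disk 1
        DISKPART> convert dynamic
        DISKPART> select disk 2
        DISKPART> convert dynamic
        DISKPART> create volume stripe disk=1,2
        DISKPART> format fs=ntfs quick
        DISKPART> assign letter=E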

    Read the article

  • Is there a small business router that shows bandwidth usage graphs in the admin panel?

    - by Robert Drake
    I support a large number of public libraries that are having their networks upgraded in response to a grant application. These libraries generally house between 6 and 15 computers and have little or no tech support, either onsite or contracted remotely. In order to justify current and future purchases, a number of the libraries have requested routers that can provide bandwidth usage graphs that they can show to their managing boards. Is there a small business router that displays traffic graphs in its administration web interface? The router needs to support DHCP and basic firewalling; no other features are required. Further, the reports just need to show overall trends; it is not necessary to break traffic down by IP, by protocol/application, or by time of day. They just need an overall week-to-week, month-to-month trend line. I'm familiar with MRTG/PRTG and other tools that collect SNMP data from the router, but the libraries don't have the expertise for the configuration. I've considered installing the Tomato firmware on some cheap home/home-office routers, but a commercial product that can simply be purchased would be significantly simpler, and the library boards would be much more likely to approve the purchase of a commercial product over a 'hacked' one. Any assistance would be appreciated.

    Read the article

  • Python Django sites on Apache+mod_wsgi with nginx proxy: highly fluctuating performance

    - by Halfgaar
    I have an Ubuntu 10.04 box running several dozen Python Django sites using mod_wsgi (embedded mode; the faster mode, if properly configured). Performance fluctuates wildly: sometimes fast, sometimes with several seconds of delay. The smokeping graphs are all over the place. Recently I also added an nginx proxy for the static content, in the hope it would cure the fluctuating performance, but even though it significantly reduced the number of requests Apache has to process, it didn't help with the main problem. Clicking around on the websites while running htop shows that some requests are almost instant, whereas others cause Apache to consume 100% CPU for a few seconds. I really don't understand where this fluctuation comes from. I have configured mpm_worker for Apache like this:

        StartServers          1
        MinSpareThreads      50
        MaxSpareThreads      50
        ThreadLimit          64
        ThreadsPerChild      50
        MaxClients           50
        ServerLimit           1
        MaxRequestsPerChild   0
        MaxMemFree         2048

    One server with 50 threads, max 50 clients. Munin and apache2ctl -t both show a consistent presence of workers; they are not destroyed and created all the time, yet it behaves as if they were. This tells me that once a sub-interpreter is created it should remain in memory, yet sites seem to have to reload all the time. I also have an nginx+gunicorn box, which performs quite well. I would really like to know why Apache is so erratic. This is a virtual host config:

        <VirtualHost *:81>
            ServerAdmin [email protected]
            ServerName example.com
            DocumentRoot /srv/http/site/bla
            Alias /static/ /srv/http/site/static
            Alias /media/ /srv/http/site/media
            WSGIScriptAlias / /srv/http/site/passenger_wsgi.py
            <Directory />
                AllowOverride None
            </Directory>
            <Directory /srv/http/site>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Ubuntu 10.04, Apache 2.2.14, mod_wsgi 2.8, nginx 0.7.65.

    Edit: I've put some code in the settings.py of one site that writes the date to a tmp file whenever it's loaded. I can now see that the site is not being randomly reloaded all the time, so Apache must be keeping it in memory. So that's good, except it doesn't bring me closer to an answer...

    Edit: I just found an error that might also be related to this:

        File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
            errread, errwrite)
        File "/usr/lib/python2.6/subprocess.py", line 1049, in _execute_child
            self.pid = os.fork()
        OSError: [Errno 12] Cannot allocate memory

    The server has 600 of 2000 MB free, which should be plenty. Is there a limit set on Apache or WSGI somewhere?
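
    One hedged avenue: mod_wsgi's daemon mode rather than embedded mode. Each site then gets a fixed pool of dedicated processes, Python execution no longer competes inside the MPM worker threads, and fork-time memory errors from the Apache children are at least isolated per site. A sketch of the per-vhost directives, with the process group name assumed:

        WSGIDaemonProcess examplesite processes=2 threads=15 maximum-requests=1000 display-name=%{GROUP}
        WSGIProcessGroup examplesite
        WSGIScriptAlias / /srv/http/site/passenger_wsgi.py

    The OSError itself comes from os.fork() inside a subprocess call in site code, so the process's address-space limits (ulimit -v) and the kernel's vm.overcommit_memory setting may also be worth a look.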

    Read the article

  • PNP4Nagios, nagiosgraph, separate Cacti, or something else for Nagios trending

    - by Matt
    I've been using Nagios for a while now and recently started using Cacti after being dissatisfied with the lack of scaling and lack of any GUI in MRTG. I'm interested in adding trending to my Nagios installation and wondered what was the best route to go. I've looked around a bit and have seen what's available, but there's not a lot of information around to differentiate them from each other. My Nagios install has about 250 hosts and 1100 service checks, but many of them are just simple network devices and there's only about 20 servers and 300 services associated with them. All servers but 2 are running Windows Server 2003. What are the main highlights of PNP4Nagios vs. nagiosgraph, or would I be better off using some sort of tool to convert the data to RRD form and just view it directly in Cacti? Is there a completely different direction I could go that would be even better? Please comment if you need any more information, I tend to be too wordy and tried to keep this question brief. Thanks!

    Read the article

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I am running a couple of Django apps in production. Ideally I would have addressed my current memory footprint issues by optimizing the apps, adding more RAM, or adding swap. But the problem is that I doubt there's much memory optimization I'd milk from optimizing the Django apps (the stack being open-source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer the option of swap! So, in the meantime (as I wait to secure resources for more RAM), I wish to mitigate the scenarios where the server runs out of memory, which currently leave me having to request a VPS restart (at that point, I can't even SSH into the box!). What I would love in a solution is the ability to detect when a process (or total system memory usage generally) exceeds a critical threshold (for now, say, when free RAM falls to 10%), which I've noticed happens after the VPS has been up for long and when traffic suddenly surges to some of the heavier apps (most are just staging apps anyway). I then wish to kill/restart the offending process(es), most likely Apache. Doing that manually in these situations has restored sane memory usage levels, which hints that one or more of the Django apps may have a memory leak. In brief:

    - Monitor overall system RAM usage.
    - When free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es); or, more simply, since my log analysis (using linux-dash) suggests Apache is usually the offender, just kill/restart Apache.
    - Rinse and repeat, as in the sketch below.
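
    A hedged cron-able watchdog along these lines; the threshold, script path, and the apache2 init script name are assumptions, and the free-column arithmetic (free + buffers + cache as a share of total) matches Wheezy's procps output:

        #!/bin/sh
        # Restart Apache when free memory (including reclaimable buffers/cache)
        # drops below THRESHOLD percent.
        THRESHOLD=10

        AVAIL=$(free | awk '/^Mem:/ {print int(($4+$6+$7)*100/$2)}')

        if [ "$AVAIL" -lt "$THRESHOLD" ]; then
            logger "mem-watchdog: only ${AVAIL}% memory available, restarting apache2"
            /etc/init.d/apache2 restart
        fi

    Run from cron every few minutes, e.g. */5 * * * * /usr/local/sbin/mem-watchdog.sh.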

    Read the article

  • Dependencies issue while installing Graphviz 2.28

    - by M. Saâd
    I want to install these packages for NagVis:

    graphviz-2.28.0-1.el6.i686.rpm
    graphviz-doc-2.28.0-1.el6.i686.rpm
    graphviz-gd-2.28.0-1.el6.i686.rpm
    graphviz-graphs-2.28.0-1.el6.i686.rpm
    graphviz-perl-2.28.0-1.el6.i686.rpm

    But while installing, I get this error:

        # rpm -ivh graphviz-2.28.0-1.el6.i686.rpm
        error: Failed dependencies:
            libgdkglext-x11-1.0.so.0 is needed by graphviz-2.28.0-1.el6.i686
            libglut.so.3 is needed by graphviz-2.28.0-1.el6.i686
            libgtkglext-x11-1.0.so.0 is needed by graphviz-2.28.0-1.el6.i686
            libgts-0.7.so.5 is needed by graphviz-2.28.0-1.el6.i686
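
    Rather than chasing each .so by hand, a hedged way out is to let yum resolve the dependencies; the provider package names below (gtkglext-libs, freeglut, gts) are assumptions about which EL6 packages ship those libraries, and yum provides can confirm them:

        # Let yum pull in dependencies while installing the local RPMs
        yum localinstall --nogpgcheck graphviz-*.el6.i686.rpm

        # Or confirm and install the library providers first, then retry rpm -ivh
        yum provides 'libglut.so.3'
        yum install gtkglext-libs freeglut gts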

    Read the article

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases

    I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end, as I've only recently moved to an SQL Server back end. I'm wondering how many of those queries I should save as stored procedures/views on SQL Server.

    About the system: the number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. It's always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon from Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easily. I'm inclined to move the queries to the back end, as they are beginning to take noticeable time and I figure that is the better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade; it makes more sense in my mind to upgrade one server rather than 30 PCs. Or am I being overzealous?

    Why my question isn't a duplicate: it seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. The crux of my question is "Even though my server is technically slower, is it better to have it doing more of the queries?" There are two ways people could answer this: (1) "I agree the server will be slower, but the extra benefits of such-and-such (like the less Access the better) mean you should move most queries to the server anyway" (or: "no, that doesn't outweigh the benefit, keep them in Access"); (2) "Actually the server will be faster, because of such-and-such." I'm hoping people can provide answers like these, and the question in the dupe link doesn't really provide either. OK, sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine against SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of Access) compared to someone giving some quick general guidance; and again, my question is looking for a lot more than an immediate performance benefit.

    Read the article

  • Web traffic/activity monitor

    - by DealingWithAConcernedMom
    Is there a good free monitor that will allow me to track my son's internet activity? I'm specifically looking for a history of websites visited and time spent on each site. I think he is visiting sites his mom would not approve of, but is erasing the history, etc.

    Read the article

  • Nagios - NagWin - Send notification with gmail

    - by Attila Bujáki
    I would like to send Nagios notifications using my Gmail account. I have already set up the hosts and services I want to monitor. What is the simplest way to accomplish this using NagWin on a Windows Server 2012 installation? As far as I know, I must change some of these configuration settings:

        # 'notify-host-by-email' command definition
        define command{
            command_name notify-host-by-email
            command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /bin/blat - -to $CONTACTEMAIL$ -f nagios@localhost -subject "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" -server ???
        }

        # 'notify-service-by-email' command definition
        define command{
            command_name notify-service-by-email
            command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /bin/blat - -to $CONTACTEMAIL$ -f nagios@localhost -subject "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" -server ???
        }

    What should I use as the SMTP server? Is it possible to send my notifications directly to the Gmail server?
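
    One hedged caveat before filling in -server: Gmail's SMTP endpoints require TLS plus authentication, which plain blat has historically not spoken on its own, so a common workaround is a local stunnel that wraps the TLS leg while blat talks plain SMTP to localhost. A sketch of the stunnel service section (stunnel.conf):

        ; stunnel.conf -- wrap Gmail's TLS so a plain SMTP client can use it
        [gmail-smtp]
        client = yes
        accept = 127.0.0.1:2525
        connect = smtp.gmail.com:465

    blat would then point at -server 127.0.0.1:2525 (or use a separate -port option, depending on the blat version), with its -u/-pw options supplying the Gmail credentials; whether the blat build shipped with NagWin supports SMTP AUTH is worth verifying first.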

    Read the article
