Search Results

Search found 1556 results on 63 pages for 'confidence interval'.


  • Log monitoring using Zabbix

    - by Supratik
    I am trying to monitor a log file using Zabbix 1.8.4. I created an item with the following details:
        Host: Zabbix server
        Description: logger_test
        Type: Zabbix agent (active)
        Key: log[/tmp/scribetest/test3/test3_current,error,,100]
        Type of Information: Log
        Update interval (in sec): 1
        Keep history (in days): 90
        Status: Active
        Applications: Log files
    I then created a trigger and attached it to the logger_test item with the following details:
        Name: logger_test_trigger
        Expression: {Zabbix server:log[/tmp/scribetest/test3/test3_current,error,,100].str(error)}=1
        Severity: disaster
    These settings work fine the first time, but on the next check the trigger shows ZBX_NOTSUPPORTED, and after that the item also shows a "not supported" message. Can you please tell me if I am doing anything wrong here?

    Read the article

  • Employee Tracking: Is there a similar software to Elance WorkView or oDesk "Team Room"?

    - by Kunal
    We are looking for a good commercial or free tool that can monitor all our remote employees and keep the reports centrally. We need something similar to Elance WorkView or the oDesk "Team Room". What these tools do is: take screenshots at random intervals, and track activity on the computer based on keystrokes (not a necessity). It doesn't necessarily need to track time, although that would be good to have; our aim is to monitor employees and make sure they're working - that's all. I'd give the oDesk Team Room 10/10, and I haven't been able to find a comparable tool. Can anyone suggest one? Thanks

    Read the article

  • How to confirm that an Nginx caching proxy is working

    - by Mark
    I am running nginx on port 80 and Apache on port 8080 on the same server. I have configured nginx so that it acts as a reverse proxy (I am not sure whether it's working or not) using this tutorial: http://tumblr.intranation.com/post/766288369/using-nginx-reverse-proxy. Steps I followed to verify the proxy: I opened the same page on two different machines within an interval of 5 seconds, but in the Apache access.log every request shows a 200 response code. Does that indicate that caching is not working? Also, the nginx access.log shows nothing.
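
    One way to check the cache directly, independent of the Apache logs, is to expose the cache status as a response header (for example by adding "add_header X-Cache-Status $upstream_cache_status;" to the proxy configuration, assuming proxy_cache is actually enabled) and then request the same page twice. A minimal sketch of the client side, with a placeholder URL:

        # Request the same URL twice and compare the cache-status header.
        # Assumes nginx adds:  add_header X-Cache-Status $upstream_cache_status;
        import urllib.request

        URL = "http://www.example.com/"   # placeholder; use a page served via the proxy

        for attempt in (1, 2):
            with urllib.request.urlopen(URL) as resp:
                status = resp.headers.get("X-Cache-Status", "(header not set)")
                print("request", attempt, "-> HTTP", resp.status, "X-Cache-Status:", status)
        # Expected with a working cache: MISS on the first request, HIT on the second.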

    Read the article

  • Excel Pivot Tables -- Divide Numerical Column Data into Ranges

    - by ktm5124
    Hi, I have an Excel spreadsheet with a column called "Time Elapsed" that stores the number of days it took to complete a task. I would like to make a pivot table out of this spreadsheet where I divide the "Time Elapsed" column into ranges, e.g. how many tasks took 0 to 4 days to complete, how many took 5 to 9 days, how many took 10 to 14 days, and how many took 15+ days. Do I have to create new columns in my spreadsheet dedicated to each interval (0 to 4, 5 to 9, etc.), or can I use some feature of pivot tables to separate my one "Time Elapsed" column into intervals? Thanks in advance.
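
    In Excel this is what the pivot table's numeric field grouping is for. For readers doing the same thing outside Excel, the equivalent bucketing can be sketched with pandas; the column name and ranges below come from the question, the data itself is made up:

        # Bucket a "Time Elapsed" column (days) into the ranges from the question.
        import pandas as pd

        df = pd.DataFrame({"Time Elapsed": [1, 3, 6, 8, 12, 20, 2, 15]})  # toy data

        bins = [0, 5, 10, 15, float("inf")]                # edges for 0-4, 5-9, 10-14, 15+
        labels = ["0-4 days", "5-9 days", "10-14 days", "15+ days"]
        df["Range"] = pd.cut(df["Time Elapsed"], bins=bins, labels=labels, right=False)

        print(df["Range"].value_counts(sort=False))        # number of tasks in each range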

    Read the article

  • Strange noise from my laptop when it is closed

    - by gotqn
    I am a fan of powerful PCs with huge wide-screen monitors, but I have bought a laptop as a gift for my parents. This is the description: a second-hand Lenovo (with a "ThinkPad" badge), Intel Core Duo CPU T6570 2.10 GHz, 64-bit Windows 7 Ultimate. The issue is that when the laptop is turned on and then closed, after an interval of time (I am not sure how long exactly, but I believe it is several hours) a strange noise comes from the laptop. The noise is something like the start-up noise of old computers. I first thought this was a hardware issue, but everything seems to work fine. Does anyone have an idea what can cause this type of noise, or what I can do to track the problem down? I am not sure if this is important, but the laptop is always connected to power. Note: I have just noticed that the sound is generated again when the laptop is closed and then opened. Note: this is a link to the audio - http://yourlisten.com/channel/content/16934332/WindowsStrangeNoise

    Read the article

  • tail -f updates slowly

    - by Cliff
    I'm not sure why, but on my MacBook Pro running Lion I get slow updates when I issue "tail -f" on a log file that is being written to. I used to use this command all the time at my last company, but that was typically on Linux machines. The only things I can think of that would slow the updates are buffering of the output and/or a different update interval on a Mac vs. Linux. I've tried with several commands, all of which write to stdout relatively quickly but give slow updates in the tail output. Any ideas? Update: I am merely running a Python script with a bunch of prints in it and redirecting the output to a file ("my output.log"). I expect to see updates in near real time, but that doesn't seem to be the case.
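
    One common cause in this situation is that Python block-buffers stdout when it is redirected to a file instead of a terminal, so output only reaches the file in large chunks and tail -f appears to lag. A minimal illustration of the workaround:

        # Writes a line every second; without flushing, redirected output appears in bursts.
        import sys, time

        for i in range(60):
            print("line", i, flush=True)   # flush=True pushes each line out immediately
            time.sleep(1)

        # Alternatives: run the script with "python -u script.py" (unbuffered),
        # or call sys.stdout.flush() after each print.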

    Read the article

  • TFTP PUT Failing Across Hosts

    - by Jason
    I have a TFTP server installed on a CentOS host. /etc/xinetd.d/tftp:

        service tftp
        {
            disable      = no
            socket_type  = dgram
            protocol     = udp
            wait         = yes
            user         = root
            server       = /usr/sbin/in.tftpd
            server_args  = -c -s /var/lib/tftpboot
            per_source   = 11
            cps          = 100 2
            flags        = IPv4
        }

    If I try to PUT a file from a remote host to the host running the TFTP server, I get "Transfer Timed Out"; however, it does create the file in /var/lib/tftpboot, but the file is empty. If I tftp from the TFTP server to itself (localhost) and PUT a file, it works fine. I have verified that SELinux is disabled and iptables is turned off. I can connect from the remote hosts with no issue - it just seems to be the PUT I have an issue with:

        [root@SVR01 TEST]# tftp 10.100.2.15
        tftp> status
        Connected to 10.100.2.15.
        Mode: netascii Verbose: off Tracing: off Literal: off
        Rexmt-interval: 5 seconds, Max-timeout: 25 seconds
        tftp>

    Read the article

  • Cron Script to Delete Folder Contents Every 5 Minutes on Media Temple

    - by Brian Iannone
    I'm not familiar with server-side scripting, but I'm currently using a PHP application on Media Temple to cache JPEGs from four webcams hosted on a server located in the middle of the Indian Ocean. (Hence my reason for caching them in the US.) The webcams are updated every five minutes. The PHP application stores the cached images in http://static.rigic.co/cache/. I would like to create a cron script to automatically delete the contents of "cache" (not the folder itself; just the files inside) at a regular interval.
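
    A minimal sketch of such a cleanup job, in Python plus an example crontab line; the filesystem path is a placeholder, since only the public URL of the cache is given above:

        #!/usr/bin/env python3
        # Deletes the files inside the cache directory, leaving the directory itself in place.
        # CACHE_DIR is hypothetical; replace it with the real path behind static.rigic.co/cache/.
        import os

        CACHE_DIR = "/home/example/domains/static.rigic.co/html/cache"  # placeholder path

        for entry in os.scandir(CACHE_DIR):
            if entry.is_file():
                os.remove(entry.path)

        # Example crontab entry to run it every 5 minutes:
        # */5 * * * * /usr/bin/python3 /path/to/clear_cache.py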

    Read the article

  • Using a PCIe card and onboard graphics together

    - by dave
    I'm currently using my PCI Express card to run two monitors and my onboard graphics to run another. It works fine despite people saying it won't. I use a gadget to monitor my PCIe card, but I would also like to monitor my onboard GPU. I play a game called Eve Online, and when I'm running my accounts I can only get 20 fps. Before, when I was using just my PCIe card, I was getting 60+. To solve this, I set the display option "interval" to "immediate" (it was set to "one" before). My computer handles it with no problem at all, but I was always told this was impossible. I was hoping someone could explain how and why this is working. Thanks! My setup is: i7 2600K, 16 GB RAM, XFX HD6970 2GB.

    Read the article

  • Can expire_logs_days be less than 1 day in MySQL?

    - by Scott
    So... yesterday I received an "after the fact" email about a campaign that has started for one of the services I run. Now the DB server is getting hammered, hard, to the tune of about 300 MB/min in binary logging for replication. As you can imagine, this is chewing up space at a fairly tremendous rate. My normal 7-day expiry of binary logs just isn't cutting it. I've resorted to truncating the logs to just the last 4 hours (after verifying that replication is up to date with mk-heartbeat) with:

        PURGE MASTER LOGS BEFORE DATE_SUB(NOW(), INTERVAL 4 HOUR);

    I'm just running that from cron every few hours to weather the storm, but it made me question the minimum value for expire_logs_days. I haven't come across a value of less than 1, but that doesn't mean it isn't possible. http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_expire_logs_days gives the type as numeric, but doesn't indicate whether it expects integers.
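
    For the interim purging, a small wrapper keeps the retention window in one place; the sketch below assumes the mysql-connector-python package, placeholder credentials, and a cron entry like the one in the trailing comment:

        # Purge binary logs older than a configurable number of hours.
        # Sketch only: host/user/password are placeholders; assumes mysql-connector-python.
        import mysql.connector

        RETENTION_HOURS = 4

        conn = mysql.connector.connect(host="localhost", user="admin", password="secret")
        cur = conn.cursor()
        cur.execute(
            "PURGE MASTER LOGS BEFORE DATE_SUB(NOW(), INTERVAL {} HOUR)".format(int(RETENTION_HOURS))
        )
        conn.close()

        # Cron example, every 2 hours:  0 */2 * * * /usr/bin/python3 /path/to/purge_binlogs.py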

    Read the article

  • My laptop (HP/Compaq 2510p) running Ubuntu 10.04 LTS keeps losing the WLAN connection

    - by Ernelli
    I am using Wicd and can successfully connect to my ADSL router (Thomson TG787) using WPA-PSK. But at regular intervals I lose the ability to connect to the Internet. I can ping the gateway, and can actually ping servers on the Internet, but I cannot connect to them using HTTP (tested with both Firefox and wget). I would suspect the router, except that the problem does not show up when running Windows XP on the same computer, and also, when the problem arises, a simple disconnect/connect in Wicd solves it, which does not involve the router (except for the DHCP request). I have searched the Ubuntu forums without luck; most of the problems described relate to specific network drivers or other issues. Does anyone have the same experience with Linux/Ubuntu and WLAN?

    Read the article

  • Copying files between servers by creation time

    - by driftux
    My bash scripting knowledge is very weak, which is why I'm asking for help here. What is the most efficient bash script, performance-wise, to find and copy files from one Linux server to another using the specifications described below? I need a bash script which finds only new files created on server A, in directories named "Z", within the interval from 0 to 10 minutes ago, and then transfers them to server B. I think it can be done by building a command and executing it for each new file found: "scp /X/Y.../Z/file root@hostname:/X/Y.../Z/". If the script finds no such remote path on server B, it should continue with the next file whose directory does exist. Files should be copied with their permissions, group, owner and creation time preserved. X/Y... are various directory paths. I want to set up a cron job to execute this script every 10 minutes, so performance is very important in this case. Thank you.
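
    One possible shape for this, sketched in Python around scp (note that Linux filesystems generally do not expose a creation time, so modification time is used as a stand-in; the source root and remote host are placeholders):

        # Copy files modified in the last 10 minutes from directories named "Z"
        # to the same path on a remote host, preserving times and modes via "scp -p".
        import os
        import subprocess
        import time

        SRC_ROOT = "/X/Y"              # placeholder for the top of the directory tree
        REMOTE = "root@hostname"       # placeholder remote host
        WINDOW = 10 * 60               # 10 minutes, in seconds

        cutoff = time.time() - WINDOW

        for dirpath, dirnames, filenames in os.walk(SRC_ROOT):
            if os.path.basename(dirpath) != "Z":
                continue
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) >= cutoff:
                    # check=False: if the remote directory is missing, just move on.
                    subprocess.run(["scp", "-p", path, "{}:{}".format(REMOTE, path)], check=False)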

    Read the article

  • Thunderbird 3.0.2 stops automatically checking for email

    - by Chris
    Since upgrading to Thunderbird 3.0.2 on Mac OS X 10.6, Thunderbird will intermittently stop checking for messages automatically. Frequently I am presented with the error "This folder is being processed" when manually checking for mail. Restarting Thunderbird fixes the problem, but I have noticed that hours can go by where auto-checking isn't working, and restarting shows there were dozens of messages waiting to be delivered in that downtime. Occasionally I will see "indexing 1 of 2 messages - 0% complete" in the status bar, and it will sit there for a long time. Actions I've taken to fix it: deleted all .msf files in all accounts, removed the News & Blogs account, and staggered the "automatically check for mail" interval for each account. Worth noting: after waking up the MacBook, Thunderbird usually needs to be restarted to resume auto-checking for mail. Has anyone experienced this kind of trouble?

    Read the article

  • Finding throughput of CPU and hard drive on Solaris

    - by Jim
    How do I find the throughput of the CPU and the hard disk on an OpenSolaris machine, using mpstat or iostat? I'm having a hard time identifying the throughput, if it is given at all in the commands' output. For example, in mpstat there is very little explanation as to what the columns mean. I've been using the syscl column divided by the time interval to find the throughput, but to be honest I have no idea what a system call truly is. I'm trying to analyze a hard drive and CPU while writing a file to the disk, and when at rest.

    Read the article

  • Configuring memcached for a particular scenario

    - by pradeepchhetri
    I have a web application which queries an OpenTSDB server (which in the backend uses an HBase cluster) for the data points of different metrics, and I plot those metrics using the dygraphs JavaScript graphing library. Since getting all the data points of the past day from OpenTSDB for a single metric already takes nearly 2 seconds, my application, which plots nearly 25 metrics, is becoming very slow. In order to reduce this latency, I am thinking of using the memcached module of PHP 5 to cache all the queries. But I have a few questions regarding memcached. Is there any way I can configure memcached to keep updating its cache in the background by running some command-line queries at a particular interval? And is there any way I can configure memcached to always reply to a query from the cache instead of first updating it? My application just plots data points for the past day, so missing some data points is not that critical.
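
    memcached itself has no built-in scheduled refresh, so the usual pattern is an external job that re-populates the keys on an interval while the web application only ever reads from the cache. A rough sketch of such a refresher (written in Python rather than PHP; the pymemcache client, the OpenTSDB URL and the metric names are all assumptions):

        # Background refresher: periodically fetch each metric from OpenTSDB and
        # overwrite the cached copy, so the web app can always answer from the cache.
        import time
        import urllib.request

        from pymemcache.client.base import Client

        TSDB = "http://opentsdb.example:4242/api/query?start=1d-ago&m=sum:{metric}"  # placeholder
        METRICS = ["sys.cpu.user", "sys.mem.free"]          # placeholder metric names
        REFRESH_SECONDS = 300

        cache = Client(("127.0.0.1", 11211))

        while True:
            for metric in METRICS:
                with urllib.request.urlopen(TSDB.format(metric=metric)) as resp:
                    datapoints = resp.read()                # raw JSON from OpenTSDB
                cache.set("tsdb:" + metric + ":1d", datapoints)  # no expiry: readers never miss
            time.sleep(REFRESH_SECONDS)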

    Read the article

  • timeout duration on Linux

    - by user1319451
    I'm trying to run a command for 5 hours and 10 minutes. I found out how to run it for 5 hours, but I'm unable to run it for 5 hours and 10 minutes. timeout -sKILL 5h mplayer -dumpstream http://82.201.100.23:80/slamfm -dumpfile slamfm.mp3 runs fine. But when I try timeout -sKILL 5h10m mplayer -dumpstream http://82.201.100.23:80/slamfm -dumpfile slamfm.mp3 I get this error: timeout: invalid time interval `5h10m'. Does anyone know a way to run this command for 5 hours and 10 minutes and then kill it?
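
    GNU timeout takes a single number with at most one unit suffix, so the interval has to be expressed in one unit: 5 h 10 min = 310 minutes = 18600 seconds, i.e. "310m" or "18600". If a shell one-liner is not a requirement, the same effect can be sketched in Python (this assumes mplayer is on the PATH):

        # Run the stream dump and kill it after 5 h 10 min = 5*3600 + 10*60 = 18600 seconds.
        import subprocess

        cmd = ["mplayer", "-dumpstream", "http://82.201.100.23:80/slamfm",
               "-dumpfile", "slamfm.mp3"]
        try:
            subprocess.run(cmd, timeout=5 * 3600 + 10 * 60)   # 18600 s
        except subprocess.TimeoutExpired:
            pass  # subprocess.run kills the child process when the timeout expires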

    Read the article

  • Keep IMAP messages locally when deleted remotely

    - by user74328
    I use my email from my phone and my computer via IMAP. I want to set something up so that if I delete a message via my phone, my computer will still keep the message locally. For example, assume I leave my computer on with a synchronization interval of 5 minutes. I want to be able to send something from my phone, wait 5 minutes to be sure my computer has downloaded the item from the Sent folder, then delete it from the IMAP Sent folder via my phone, but have the computer at home keep its copy. Is this possible with any readily available email clients out there? I have Thunderbird and Outlook at the moment, but I would be willing to learn a new interface for this feature. How can I accomplish this?
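
    If no client setting turns up, one workaround is a small script that periodically archives the IMAP Sent folder into a local mbox file before anything is deleted on the server. The sketch below uses Python's standard imaplib and mailbox modules, with placeholder host, credentials and folder name (and it does not skip messages it has already archived):

        # Copy everything currently in the IMAP Sent folder into a local mbox file,
        # so deleting messages on the server later does not remove the local copies.
        import imaplib
        import mailbox

        HOST, USER, PASSWORD = "imap.example.com", "user@example.com", "secret"  # placeholders
        FOLDER = "Sent"                       # folder name varies between providers

        archive = mailbox.mbox("sent-archive.mbox")

        imap = imaplib.IMAP4_SSL(HOST)
        imap.login(USER, PASSWORD)
        imap.select(FOLDER, readonly=True)    # readonly: never modify the server copy

        _, data = imap.search(None, "ALL")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            archive.add(msg_data[0][1])       # raw message bytes

        archive.flush()
        imap.logout()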

    Read the article

  • Configuration of an alert on Windows Server 2003

    - by Ferre06
    I'm trying to configure an alert on low disk space in Windows Server 2003; I already followed this step-by-step tutorial from Microsoft. I try to execute a .bat file I created, located in the home folder of the user I'm using. I set the alert to trigger when the free space falls below 6 GB, and the disk does have less than 6 GB free; the "Sample data interval" is the default (5 seconds). The problem is that the alert isn't triggered. One more thing: the user set for the alert isn't the built-in Administrator account, but it does have administrative privileges. Thanks in advance

    Read the article

  • Solaris kstat sdX disk nread counter value decreasing

    - by mykhal
    I get strange disk I/O nread (bytes read) counter values (from kstat) on Solaris. Here is an example of the nread value for the sd6 disk, collected at a 30-second interval (command: kstat -n sd6):

        768579416 768579416 768579416 768579416 768579416 768579416 768579416
        768496080 768496080 768496080 768496080 768496080 768496080 768496080
        768496080 768530896 768530896 768447560 768447560 768447560

    One would assume that a cumulative read-bytes counter can't decrease, i.e. the delta between samples can't be negative. I wonder what can cause this situation and whether there is more reliable disk I/O data available. Some info about the system:

        machine:~ # uname -a
        SunOS machine 5.10 Generic_127112-11 i86pc i386 i86pc
        machine:~ # cat /etc/release
                Solaris 10 11/06 s10x_u3wos_10 X86
                Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
                Use is subject to license terms.
                Assembled 14 November 2006
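
    If the goal is a usable byte rate despite the occasional backwards jump, one option is to compute deltas between samples and discard negative ones (treating them as counter resets). A rough sketch; the kstat statistic path used here is an assumption and may need adjusting for this system:

        # Sample the nread counter with "kstat -p" and skip samples where the counter
        # went backwards, instead of reporting a negative number of bytes read.
        import subprocess
        import time

        STAT = "sd:6:sd6:nread"   # assumed statistic path for the sd6 disk
        INTERVAL = 30

        def read_nread():
            out = subprocess.run(["kstat", "-p", STAT], capture_output=True, text=True)
            # "kstat -p" prints "module:instance:name:statistic<TAB>value"
            return int(out.stdout.split()[-1])

        prev = read_nread()
        while True:
            time.sleep(INTERVAL)
            cur = read_nread()
            delta = cur - prev
            if delta >= 0:
                print("read", delta, "bytes in the last", INTERVAL, "seconds")
            else:
                print("counter went backwards - sample skipped")
            prev = cur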

    Read the article

  • SQL Server 2005 transactional replication breaks before the configured number of retries

    - by ti2
    We have a SQL Server 2000 Standard database with some tables being replicated (continuous transactional replication) to dozens of SQL Server 2005 Express and MSDE computers. Step 2 of the replication agent job ("Run agent") is configured by default to retry every 1 minute, 10 times, if a problem occurs. Because the client machines get shut down at night (they are POS machines), we changed the number of retries to 5760 (4 days), so replication would not be broken at night and would not need to be restarted manually. But the problem is that every other day we have at least one machine with broken replication, with this error:

        The process could not connect to Subscriber 'POS986'.
        NOTE: The step was retried the requested number of times (5760) without succeeding. The step failed.

    It seems that SQL Server is not respecting the number of retries or the interval between retries as we configured them. PS: I did restart the replication job after changing the number of retries from 10 to 5760.

    Read the article

  • Network location is not on the Domain

    - by Kyle Brandt
    I have a computer joined to the domain, but it doesn't view the network "location" as being part of the domain. I have tried removing and rejoining the domain, and this doesn't help. Other computers on the same network don't have this problem. I have also tried several different icons, including both the train and the airplane, which doesn't seem to make a difference. At least using nslookup, the server seems to have connectivity with the DCs in the same site. There also seem to be some errors that suggest a NULL domain:

        Computer: OR-WEB05.ds.stackexchange.com
        Description: NtpClient was unable to set a domain peer to use as a time source because of failure in establishing a trust relationship between this computer and the '' domain in order to securely synchronize time. NtpClient will try again in 3473457 minutes and double the reattempt interval thereafter. The error was: The trust relationship between this workstation and the primary domain failed. (0x800706FD)

    Read the article

  • External monitor turning black intermittently

    - by coding crow
    I have installed an external monitor (Dell ST2220L, 21.5 inch) on my laptop (Sony Vaio). I am using a DVI-D cable for the connection. Since the laptop does not have a DVI-D port, I have attached a DVI-D-to-HDMI adapter on the laptop end and plugged the cable into the laptop's HDMI port. I have switched off the laptop display, set the screen resolution on the external Dell monitor to 1920 x 1080, adjusted the colors in Windows 7, and adjusted brightness and contrast on the monitor itself. The problem is that the monitor goes blank intermittently for 1-2 seconds and then turns on again, at random intervals. What could be the reason for this, and how do I get rid of the problem?

    Read the article

  • Nagios state transition and event handler issue

    - by Dattatray
    We are using Nagios to check for duplicate processes.

        define service {
            use                  local-service
            host_name            xxx
            service_description  xxx Duplicate Processes
            check_interval       1
            max_check_attempts   1
            contact_groups       admins
            event_handler        restart-dependent-processes
            check_command        check_procs_duplicate!2!3!2!2!2
        }

    check_procs_duplicate checks whether there are any duplicate processes and returns the state, e.g. CRITICAL. The event handler kills the duplicate processes and their dependent processes, then starts one instance of the process and of each dependent process. At the end of this, Nagios again checks whether there are any duplicate processes and sets the state accordingly (OK/WARNING/CRITICAL). The event handler takes some time to start the processes, and if someone manually starts the process during that window, the state will remain CRITICAL. At the next interval, Nagios will again check for duplicate processes and will again find CRITICAL. The event handler will not get executed now, since both the previous and current states are CRITICAL. Any pointers about how to fix this issue?
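
    For reference, a typical event-handler wrapper receives the state through the $SERVICESTATE$, $SERVICESTATETYPE$ and $SERVICEATTEMPT$ macros (assuming the event_handler command definition passes them as arguments) and only acts on hard CRITICAL states. The sketch below is illustrative only; the restart command is a placeholder for the real kill/restart logic:

        #!/usr/bin/env python3
        # Skeleton of a Nagios event handler. Nagios invokes it with the macros
        # $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$ as arguments.
        import subprocess
        import sys

        state, state_type, attempt = sys.argv[1], sys.argv[2], sys.argv[3]

        if state == "CRITICAL" and state_type == "HARD":
            # Placeholder: kill duplicates and restart the process and its dependents.
            subprocess.run(["/usr/local/bin/restart-dependent-processes"], check=False)

        # Note: Nagios does not re-run the handler while the service stays in the same
        # HARD state, which is why a service that remains CRITICAL (e.g. after a manual
        # start during the restart window) does not trigger the handler on the next check.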

    Read the article

  • Drools flow architecture, Drools flow events with AND join nodes

    - by Shoukry K
    I have been evaluating a number of frameworks, including jBPM and Drools Flow, for my application requirements. Many opinions seem to lean towards Drools Flow as it is more flexible, knowledge oriented, easier to integrate with business rules, etc. The application is a sort of email campaign manager, where different customers can sign in, prepare (design) and launch email campaigns. The application should be able to do the following:

    1- Receive a list of email addresses, send emails to each of these addresses starting from a certain date and during a certain time interval of the day, do some custom actions, and then wait for reply emails.
    2- If a reply email is received, then depending on the response text of the email and the time the email was received, certain actions need to happen, web service calls need to take place, and error handling for these calls is needed.
    3- The application will manage and run many different campaigns (different customers and different flows for each customer) at any point in time.

    The first question is: is Drools Flow the way to go about this? My main concerns are scalability, suspending and resuming flows, long waits, and flow management. As you can see from the requirements:

    There is a scheduling part: certain flows need to run at a certain point in time, and they need to be suspended and then resumed. For example, start sending emails on Dec 1st 2010 and send emails only in the time interval between 08:00 and 17:00 GMT. By then all subscribers might have been sent emails, but it might not be the case; the process needs to resume on Dec 2nd and send a second batch. However, certain users have already received emails, and they should be able to continue at different stages of the flow.

    There are long wait states: days or even weeks. I need to persist, suspend/resume and terminate flows (manage flows).

    External events: this is where I got stuck first. I tried to put together a simple flow (see the screenshot at http://img46.imageshack.us/img46/9620/workflowwithevents.png): there is a start node connected to an action node, which is connected to a join. An event node is connected to a second action node, which is connected to the join. The join is an AND join; after the join there is an action and then the end node. Here is the sample code I am using to launch the flow:

        KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        builder.add(ResourceFactory.newClassPathResource("campaign.rf",
                CampaignsDroolsPoc.class), ResourceType.DRF);
        if (builder.hasErrors()) {
            KnowledgeBuilderErrors errors = builder.getErrors();
            Iterator<KnowledgeBuilderError> iterator = errors.iterator();
            while (iterator.hasNext()) {
                System.out.println(iterator.next().toString());
            }
        }
        KnowledgeBase base = KnowledgeBaseFactory.newKnowledgeBase();
        base.addKnowledgePackages(builder.getKnowledgePackages());
        final StatefulKnowledgeSession ksession = base.newStatefulKnowledgeSession();
        // KnowledgeRuntimeLoggerFactory.newConsoleLogger(ksession);
        ksession.getWorkItemManager().registerWorkItemHandler("Log",
                new SendSMSWorkItemHandler());
        ProcessInstance startProcess = ksession.startProcess("flow");
        System.out.println("Signaling event");
        startProcess.signalEvent("ev1", "ev1");
        System.out.println("Signaled");
        ksession.fireUntilHalt();

    I am noticing that the event gets triggered and the action node connected to the event gets triggered, but things seem to get stuck at the join. The flow does not continue past the AND join, and the action following the join does not get triggered. I also went through the Drools Flow documentation and all the example code, but I didn't find anything there. In addition, any hints about how to architect and implement the solution would be great.

    Read the article
