Search Results

Search found 3701 results on 149 pages for 'cost threshold for parall'.

Page 74/149 | < Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81 | Next Page >

  • What makes C faster than Python?

    - by Chris
    I know this is probably a very obvious answer and that I'm exposing myself to less-than-helpful snarky comments, but I don't know the answer, so here goes. If Python compiles to bytecode at runtime, is it just that initial compiling step that takes longer? If that's the case, wouldn't that just be a small upfront cost (i.e., if the code is running over a long period of time, do the differences between C and Python diminish?)
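
    One quick way to test the premise that the compile step is the cost (a minimal sketch; script.py stands in for any Python file you have around):

        python -m py_compile script.py   # compile once; the bytecode is cached (script.pyc, or __pycache__/ on Python 3)
        time python script.py            # first run: compile (if stale) + execute
        time python script.py            # later runs: reuse the cached bytecode, so what remains is execution

    If the timings barely differ, the compile step isn't where the time goes.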

    Read the article

  • Windows CE 6.0 time setting in registry being overridden

    - by JaminSince83
    I have asked this question on Stack Overflow, but it's probably better suited here. I have a Motorola MC3190 mobile barcode-scanning device with Windows CE 6.0. I want the device to sync its date/time on boot with our domain controller, using a registry file that I have created. The registry file below gets me close to what I require:

        REGEDIT4

        [HKEY_LOCAL_MACHINE\Services\TIMESVC]
        "UserProcGroup"=dword:00000002
        "Flags"=dword:00000010
        "multicastperiod"=dword:36EE80
        "threshold"=dword:5265C00
        "recoveryrefresh"=dword:36EE80
        "refresh"=dword:5265C00
        "Context"=dword:0
        "Autoupdate"=dword:1
        "server"="NAMEOFMYSERVER"
        "ServerRole"=dword:0
        "Trustlocalclock"=dword:0
        "Dll"="timesvc.dll"
        "Keep"=dword:1
        "Prefix"="NTP"
        "Index"=dword:0

        [HKEY_LOCAL_MACHINE\nls]
        "DefaultLCID"=dword:00000809

        [HKEY_LOCAL_MACHINE\nls\overrides]
        "LCID"=dword:00000809

        [HKEY_LOCAL_MACHINE\Time]
        @="UTC"
        "TimeZoneInformation"=hex:\
          00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
          00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00

        [HKEY_LOCAL_MACHINE\Time Zones]
        @="UTC"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Clock]
        "AutoDST"=dword:00000000

    It gets the correct date and shows the time zone correctly; however, the time is always 5 hours behind, on Eastern Standard Time, which is really annoying. I have researched this heavily, and this question has been asked before here. As you will see, I have copied what it suggests, but it doesn't work. Something is overriding the time, and I don't understand enough about it to resolve this. I cannot find any other setting that would set the time correctly. Any help would be greatly appreciated.

    Read the article

  • Nagios returns "No output returned from plugin" for running-process checks

    - by user56291
    I have a Nagios server and a bunch of Nagios clients that I currently monitor. All the clients are set up with the NRPE configuration below. The check_users, check_load... metrics are successfully displayed on the Nagios interface, but check_nginx and check_server_proxy are displayed as "Unknown" - (No output returned from plugin). As far as I understood, Nagios simply runs the ps command and looks for either the argument strings or the name of the command to verify whether the service is running. Also, with the -c flag, one can give Nagios a threshold to determine the output (i.e. -c 1 returns 'OK' if it finds at least 1 process).

    nrpe_local.cfg:

        ######################################
        # Do any local nrpe configuration here
        ######################################
        allowed_hosts=127.0.0.1,10.0.2.181
        command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
        command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
        command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10%
        command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
        command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
        command[check_swap]=/usr/lib/nagios/plugins/check_swap -w 50% -c 25%
        command[check_server_proxy]=/usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"
        command[check_nginx]=/usr/lib/nagios/plugins/check_procs -c 1:30 -C nginx

    nagios_server.cfg:

        ...
        define host{
                use             generic-host    ; Name of host template to use
                host_name       plum
                alias           plum
                address         10.0.2.88
                check_command   check-host-alive-by-ssh
        }
        ...
        # Check api-proxy-server
        define service{
                use                     generic-service
                host_name               plum
                service_description     check api proxy service
                check_command           check_nrpe!check_server_proxy
        }
        define service{
                use                     generic-service ; Name of service template to use
                host_name               plum
                service_description     CHECK_NGINX
                check_period            24x7
                max_check_attempts      3
                normal_check_interval   5
                retry_check_interval    3
                check_command           check_nrpe!check_nginx
                notifications_enabled   1
        }

    Also, when I run the command on the Nagios client:

        /usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"

    I get the desired output:

        PROCS OK: 1 process with args 'api-v1/server.js'

    I would really appreciate any pointers that might help me work out why the NRPE command does not return the desired output on the Nagios server panel.
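
    One quick way to isolate where the output gets lost (assuming the stock plugin path) is to run the same checks through NRPE from the Nagios server itself, since that shows exactly what the server receives:

        /usr/lib/nagios/plugins/check_nrpe -H 10.0.2.88 -c check_server_proxy
        /usr/lib/nagios/plugins/check_nrpe -H 10.0.2.88 -c check_nginx

    If these print the same PROCS OK line, the problem is on the Nagios server side; if they come back empty, it's NRPE or plugin permissions on the client.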

    Read the article

  • Finding all IP ranges belonging to a specific ISP

    - by Jim Jim
    I'm having an issue with a certain individual who keeps scraping my site in an aggressive manner, wasting bandwidth and CPU resources. I've already implemented a system which tails my web server access logs, adds each new IP to a database, keeps track of the number of requests made from that IP, and then, if the same IP goes over a certain threshold of requests within a certain time period, blocks it via iptables. It may sound elaborate, but as far as I know there exists no pre-made solution designed to limit a certain IP to a certain amount of bandwidth/requests. This works fine for most crawlers, but an extremely persistent individual is getting a new IP from his/her ISP's pool each time they're blocked. I would like to block the ISP entirely, but don't know how to go about it. Doing a whois on a few sample IPs, I can see that they all share the same "netname", "mnt-by", and "origin/AS". Is there a way I can query the ARIN/RIPE database for all subnets using the same mnt-by/AS/netname? If not, how else could I go about getting every IP belonging to this ISP? Thanks.
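
    For what it's worth, the routing registries can be queried by origin AS, which yields every prefix the ISP announces (a sketch; AS12345 stands in for whatever ASN the whois lookups showed):

        # Route objects registered with that origin AS (RADB mirrors RIPE, ARIN and friends)
        whois -h whois.radb.net -- '-i origin AS12345' | grep '^route:'

    The resulting prefixes can be fed straight into iptables, with the caveat that blocking a whole ISP also blocks its legitimate users.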

    Read the article

  • Is there any way to customize the Windows 7 taskbar auto-hide behavior? Delay activation? Timer?

    - by calbar
    I'm becoming increasingly frustrated with the way Windows 7 handles showing a hidden taskbar. It's incredibly over-eager to pop out and obscure what I'm really trying to interact with, requiring me to move the mouse away, wait for it to auto-hide again, then resume what I was doing, but more deliberately. After closely examining the behavior, it appears that a hidden taskbar "peeks out" from the edge by 2 or 3 pixels, and slowly moving your mouse into this area activates it; you don't even need to touch the edge of the screen. I would love it if there were a way to customize or change this behavior. Ideally, the taskbar would only pop out if you are actively "pushing" the edge of the screen it is hidden on, so activation would only occur once you've reached the screen's edge and continue to move the mouse past a customizable threshold. Alternatively, a simple activation delay would suffice as well: the taskbar pops out only if the mouse remains in that 2-3 pixel area (i.e. on the taskbar) for longer than a customizable amount of time, which would be only a fraction of a second. Often the cursor simply "careens" off the edge of the screen while trying to focus on something nearby. Anyway, if there are any registry settings or utilities that can achieve either of these effects, that would be great! Thanks for your help.

    Read the article

  • o2cb thinks ocfs2 cluster is still online, and refuses to shut down

    - by Kendall
    I have a handful of openSUSE 11.2 servers that utilize OCFS2 volumes. I've noticed that o2cb can't figure out when the OCFS2 cluster is actually mounted. For example, when I try to shut down o2cb after stopping OCFS2, o2cb refuses to shut down because it thinks OCFS2 is still up!

    After stopping OCFS2 I try to stop o2cb:

        hamguy:/dev/disk/by-label # /etc/init.d/o2cb stop
        Stopping O2CB cluster ocfs2: Failed
        Unable to stop cluster as heartbeat region still active

    So I check the status:

        hamguy:/dev/disk/by-label # /etc/init.d/o2cb status
        Driver for "configfs": Loaded
        Filesystem "configfs": Mounted
        Stack glue driver: Loaded
        Stack plugin "o2cb": Loaded
        Driver for "ocfs2_dlmfs": Loaded
        Filesystem "ocfs2_dlmfs": Mounted
        Checking O2CB cluster ocfs2: Online
          Heartbeat dead threshold = 31
          Network idle timeout: 30000
          Network keepalive delay: 2000
          Network reconnect delay: 2000
        Checking O2CB heartbeat: Active

    And double-check OCFS2:

        hamguy:/dev/disk/by-label # /etc/init.d/ocfs2 status
        Configured OCFS2 mountpoints: /u/conf /u/logs /u/backup /u/client /u/data /u/mdata

    OCFS2 is clearly down, while o2cb clearly thinks otherwise. The versions of OCFS2 and o2cb are:

        kendall@hamguy:~> rpm -qa | grep ocfs2
        ocfs2console-1.4.1-25.6.x86_64
        ocfs2-tools-o2cb-1.4.1-25.6.x86_64
        ocfs2-tools-1.4.1-25.6.x86_64

        kendall@hamguy:~> rpm -qa | grep o2cb
        ocfs2-tools-o2cb-1.4.1-25.6.x86_64

    What causes this, and is there a way around it? If I try to reboot the machine, it will just sit there forever until you physically power-cycle it, which is obviously a bit of a problem. Any insight is appreciated, thank you.

    Kendall
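
    In case it helps anyone hitting the same wall: a stray heartbeat region can be queried and stopped by hand with ocfs2_hb_ctl before retrying the shutdown (a sketch; /dev/sdb1 is a placeholder for whichever device backs the region):

        mount -t ocfs2                  # confirm nothing is actually mounted
        ocfs2_hb_ctl -I -d /dev/sdb1    # show the reference count on the region
        ocfs2_hb_ctl -K -d /dev/sdb1    # drop the stray heartbeat reference
        /etc/init.d/o2cb stop           # then retry the shutdown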

    Read the article

  • Group Policy GPO not 'seen' at client

    - by fukawi2
    I have a new OU (natorg.local\NATO\Users) that I am trying to apply GP to. I have created a new user in this OU, and linked these 3 GPOs to it:

        DESKTOP - Folder Redirection (AppData)
        DESKTOP - Folder Redirection (Desktop)
        DESKTOP - Folder Redirection (Documents)

    Hopefully the names are sufficient to suggest what they do. The settings are under User Settings, so there is no loopback processing required (if my understanding is correct). GP Modelling for the user and the specific computer says that the GPOs will/should be applied; however, on the client, gpresult doesn't even appear to see the GPOs, under either "Applied" or "Not Applied":

        USER SETTINGS
        --------------
        CN=Amir,OU=Users,OU=NATO,DC=natorg,DC=local
        Last time Group Policy was applied: 25/06/2012 at 11:07:13 AM
        Group Policy was applied from: svr-addc-01.natorg.local
        Group Policy slow link threshold: 500 kbps

        Applied Group Policy Objects
        -----------------------------
        LAPTOPS - Power Settings
        WSUS - Set Server Address
        OUTLOOK - Auto Archive
        SECURITY - Lock Screen After Idle
        Default Domain Policy
        DESKTOP - Regional Settings
        NETWORK - Proxy Configuration
        NETWORK - IE General Config
        OFFICE - Trusted Locations
        OFFICE - Increase Privacy
        OUTLOOK - Disable Junk Filter
        DESKTOP - Disable Windows Error Reporting
        DESKTOP - Hide Language Bar
        NETWORK - Disable Skype
        DESKTOP - Disable Thumbs.db Creation
        WSUS - Set Server Address

        The following GPOs were not applied because they were filtered out
        -------------------------------------------------------------------
        Local Group Policy
            Filtering: Not Applied (Empty)
        NETWORK - Google Chrome Configuration
            Filtering: Not Applied (Empty)
        SYSTEM - Event Log Configuration
            Filtering: Not Applied (Empty)
        SECURITY - Local Administrator Password
            Filtering: Not Applied (Empty)
        NETWORK - Disable Windows Messenger
            Filtering: Not Applied (Empty)
        SECURITY - Audit Policy
            Filtering: Not Applied (Empty)
        WSUS - Automatic Install
            Filtering: Not Applied (Empty)
        NETWORK - Firewall Configuration
            Filtering: Not Applied (Empty)
        DESKTOP - Enable Offline Files
            Filtering: Not Applied (Empty)

    I haven't altered permissions on the GPOs at all, and there is no WMI filtering. As I said, GP Modelling says that they should be applied, and gpresult on the client correctly identifies the user as being in the correct OU (CN=Amir,OU=Users,OU=NATO,DC=natorg,DC=local). There are two 2008 R2 DCs and one 2003 DC, the domain is at 2003 level, and the client is Windows XP SP3. Can anyone suggest why these GP objects would be "invisible" to the client?

    Read the article

  • OS X 10.6 bizarre login bug: making the alternative "Others..." appear. Why does this happen?

    - by bjornl
    I am studying at NUS in Singapore, and they have a Mac-equipped computer lab here at school. All users (students) have our own personal accounts that we use to log in to the computers. Sometimes when you approach a computer to log in, the only account offered is "thinkmac", which is the school's administrator account, I presume. Other computers show the "thinkmac" account as well as "Others...", where you can input your own login credentials. One day I sat down at a computer and there was only the "thinkmac" option. I was about to get up and find another one when the guy sitting next to me said: "Just click 'thinkmac' - the computer will ask for your password - then hit escape to get back to the login screen. Repeat until 'Others...' appears." So: if you click any user account, hit ESC to get taken back to the login screen, and repeat this 5-10 times, eventually the "Others..." option will appear. Why is this? Is there an internal counter that keeps track of how many times you have clicked any given user account, and after a certain threshold it displays "Others..."? What is the logical reasoning behind this?

    Read the article

  • Will disabling hyperthreading improve performance on our SQL Server install

    - by Sam Saffron
    Related to: Current wisdom on SQL Server and Hyperthreading. Recently we upgraded our Windows 2008 R2 database server from an X5470 to an X5560. In theory both CPUs have very similar performance; if anything, the X5560 is slightly faster. However, SQL Server 2008 R2 performance has been pretty bad over the last day or so, and CPU usage has been pretty high. Page life expectancy is massive and we are getting an almost 100% cache hit rate for pages, so memory is not a problem.

    When I ran:

        SELECT * FROM sys.dm_os_wait_stats ORDER BY signal_wait_time_ms DESC

    I got:

        wait_type                    waiting_tasks_count  wait_time_ms  max_wait_time_ms  signal_wait_time_ms
        ---------------------------  -------------------  ------------  ----------------  -------------------
        XE_TIMER_EVENT               115166               2799125790    30165             2799125065
        REQUEST_FOR_DEADLOCK_SEARCH  559393               2799053973    5180              2799053973
        SOS_SCHEDULER_YIELD          152289883            189948844     960               189756877
        CXPACKET                     234638389            2383701040    141334            118796827
        SLEEP_TASK                   170743505            1525669557    1406              76485386
        LATCH_EX                     97301008             810738519     1107              55093884
        LOGMGR_QUEUE                 16525384             2798527632    20751319          4083713
        WRITELOG                     16850119             18328365      1193              2367880
        PAGELATCH_EX                 13254618             8524515       11263             1670113
        ASYNC_NETWORK_IO             23954146             6981220       7110              1475699

        (10 row(s) affected)

    I also ran:

        -- Isolate top waits for server instance since last restart or statistics clear
        WITH Waits AS (
            SELECT wait_type,
                   wait_time_ms / 1000. AS [wait_time_s],
                   100. * wait_time_ms / SUM(wait_time_ms) OVER() AS [pct],
                   ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS [rn]
            FROM sys.dm_os_wait_stats
            WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE',
                'SLEEP_TASK','SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR','LOGMGR_QUEUE',
                'CHECKPOINT_QUEUE','REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH',
                'BROKER_TASK_STOP','CLR_MANUAL_EVENT','CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE',
                'FT_IFTS_SCHEDULER_IDLE_WAIT','XE_DISPATCHER_WAIT','XE_DISPATCHER_JOIN'))
        SELECT W1.wait_type,
               CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
               CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
               CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
        FROM Waits AS W1
        INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
        GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
        HAVING SUM(W2.pct) - W1.pct < 95; -- percentage threshold

    And got:

        wait_type            wait_time_s  pct    running_pct
        CXPACKET             554821.66    65.82  65.82
        LATCH_EX             184123.16    21.84  87.66
        SOS_SCHEDULER_YIELD  37541.17     4.45   92.11
        PAGEIOLATCH_SH       19018.53     2.26   94.37
        FT_IFTSHC_MUTEX      14306.05     1.70   96.07

    That shows huge amounts of time spent synchronizing queries involving parallelism (high CXPACKET). Additionally, and anecdotally, many of these problem queries are being executed on multiple cores (we have no MAXDOP hints anywhere in our code). The server has not been under load for more than a day or so. We are experiencing a large amount of variance in query executions; typically many queries appear to be slower than they were on our previous DB server, and CPU is really high. Will disabling hyperthreading help reduce our CPU usage and increase throughput?
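
    Given that CXPACKET dominates, capping parallelism is a cheaper experiment than a BIOS change; a sketch via sqlcmd, assuming sysadmin rights - the values are illustrative starting points, not recommendations:

        sqlcmd -S . -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -S . -Q "EXEC sp_configure 'max degree of parallelism', 4; RECONFIGURE;"
        sqlcmd -S . -Q "EXEC sp_configure 'cost threshold for parallelism', 25; RECONFIGURE;"

    Both settings take effect without a restart, so they are easy to back out if throughput doesn't improve.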

    Read the article

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I am running a couple of Django apps in production. Ideally, I would have tried to address my current memory footprint issues by optimizing the apps, adding more RAM, or augmenting with swap. But the problem is that I doubt there's much memory optimization I'd milk from optimizing the Django apps (the stack being open source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer the option to use swap! So, in the meantime (as I wait to secure more resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory and I have to request a VPS restart (as at that point I can't even SSH into the box!). What I would love in a solution is the ability to detect when a process (or, generally, total system memory usage) exceeds a certain critical amount (for now, say, free RAM falling to 10%) - which I've noticed occurs after the VPS has been up for long, and when traffic to some of the heavier apps suddenly spikes (most are just staging apps anyway). I then wish to kill/restart the offending process(es) - most likely Apache. Doing this manually in these situations has restored sane memory usage levels, a hint that possibly one or more of the Django apps has a memory leak. In brief:

    1. Monitor overall system RAM usage.
    2. When free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es) - or, simpler, if we assume from my current log analysis (using linux-dash) that Apache is often the offender, just kill/restart it.
    3. Rinse and repeat...
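
    A rough sketch of such a watchdog, run from cron every minute - the 10% threshold and the apache2 init script are assumptions to adjust:

        #!/bin/bash
        # Restart Apache when "available" memory (free + page cache) drops below
        # THRESHOLD_PCT percent. Wheezy-era kernels lack MemAvailable, so
        # MemFree + Cached is used as an approximation.
        THRESHOLD_PCT=10
        read -r total avail <<< "$(awk '/^MemTotal/ {t=$2} /^MemFree/ {f=$2} /^Cached/ {c=$2} END {print t, f+c}' /proc/meminfo)"
        pct=$(( avail * 100 / total ))
        if [ "$pct" -lt "$THRESHOLD_PCT" ]; then
            logger -t memwatch "only ${pct}% memory available, restarting apache2"
            /etc/init.d/apache2 restart
        fi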

    Read the article

  • Batch deletion of smaller files from a group of files via the Unix command line

    - by artlung
    I have a large number (more than 400) of directories full of photos, and I want to keep only the larger sizes of these photos. Each directory has 31 to 66 files in it: thumbnails and larger versions, plus a file called example.jpg. I dispatched the example.jpg files easily with:

        rm */example.jpg

    I initially thought it would be easy to delete the thumbnails too, but the problem is that they are not consistently named. The typical pattern was photo1.jpg and photo1s.jpg, so I did:

        rm */photo*s.jpg

    but it turned out that some of the files named photoXs.jpg were actually the larger versions, not the smaller. Argh. So what I want to do is scan each directory by file size and delete (or move) the thumbnails. I initially thought I'd just ls -R every file, extract the size of each, and keep those under a threshold. The problem? In one directory the large version is 1.1 MB and the thumb is 200 KB; in another the large is 200 KB and the small 30 KB. Even worse, the files really are mostly named photo1.jpg, so simply putting them all in the same folder, sorting by size, and deleting in groups would not work without renaming first, and if possible I'd prefer to keep them in their folders. I was almost resolved to doing this all manually, but then thought I'd ask here. How would you do this task?
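
    A sketch of the pairwise approach, assuming the photoN.jpg / photoNs.jpg pairing holds (GNU stat; dry-run with echo in front of rm first):

        # For each photoN.jpg / photoNs.jpg pair, delete whichever file is smaller,
        # leaving every photo in its own directory.
        for a in */photo*[0-9].jpg; do
            b="${a%.jpg}s.jpg"
            [ -f "$b" ] || continue
            if [ "$(stat -c%s "$a")" -lt "$(stat -c%s "$b")" ]; then
                rm -- "$a"    # the "s" file is the larger one in this pair
            else
                rm -- "$b"
            fi
        done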

    Read the article

  • IBM Server Config questions

    - by Joel Coel
    I have a few questions on a potential server setup. First, the situation: last year we bought an IBM x3500 server with two Xeon E5410s, 9GB RAM, and 6 HDDs. The original intent for this server was to replace the old Exchange e-mail server. It was brought in and set up, and shortly afterwards we switched to Gmail. Shortly after that my predecessor left for greener pastures, and finally I was hired. So this nice server is now sitting (mostly) idle. This year I have budget again for one server, and of course I want to put this other server to work.

    I've been thinking about the best use for the two servers, and I think I finally have a plan: use them as a pair of VM hosts. I will set up each server with the same 8 VMs, but divide up the load so that only 4 are active per physical host. That means I've normally got 2GB RAM + 2 cores per guest. I've done some load testing to pick out which servers to convert to virtual, and chose them so that each host would be capable of handling the entire set of 8 by itself in a pinch, with 1 core and 1GB RAM each, though it would be very taxed doing so. This should take our data center from 13 total servers down to 7. The "servers" I'm replacing are mostly re-purposed desktops, so I'm more than happy to be able to do this.

    Now it's time to go shopping for the new server. I'd like my two hosts to match as closely as possible, so I'm looking at IBM again. It also helps that we have some educational matching-grant money from IBM that I need to use to help pay for this system (we're a small private college). So finally (if you're not bored already), we come to my questions:

    1. Am I missing anything big or obvious in this plan? I'm a little worried about network performance, since the VM hosts will only have 4 NICs total where 8 used to be, but I don't think it will be a problem. Is there anything else like this I might be overlooking? Am I making it too complicated?

    2. IBM no longer has a good analog to last year's server. If I want to match the performance (8 cores, 9GB RAM, 1333MHz front-side bus, 6 spindles), I have to spend quite a bit more than we paid last year: $2K+, or nearly a 33% cost increase, for only a marginal increase in performance. The alternatives that stay in budget are to take a hit on the FSB down to 800MHz or to cut the number of cores in half, neither of which is attractive. The main cost culprit is the processor: IBM no longer offers the E5410. It's listed as a part, but not available in any of the server configs I've looked at. I'm considering getting the cheapest 800MHz-FSB dual-core Xeon I can configure and then buying the E5410s separately. That's still an extra $350 I wasn't counting on, but better than $2K. I want to know what others think of this: will it work, or will I end up with the wrong motherboard or some other issue? Am I missing a simple way to configure the server I really want?

    3. I don't really intend to do this, but one option to save some money is to omit the redundant power supply. Since my redundancy plan for these systems is to switch over to a completely different host, the extra power supply isn't strictly necessary. That said, it's still very helpful to avoid even short downtimes while I switch over VMs. Has anyone done this?

    Read the article

  • AD User Passwords expiring without any notifications?

    - by scooter133
    We set up password policies in Active Directory to expire people's passwords after so many days. Well, it looks like the time has come for the expiration of the passwords, and people are getting locked out... There has been no warning that user passwords were about to expire. They just come in to work and they cannot log in, the phones no longer connect, nothing. Reset the password and all is good. Some of the users are locked out, though most are not; they just cannot log in. When setting the password expiration, I didn't see anything about warning the users of the impending expiration. It seems like it used to warn you 15 days or so before a password would expire. Clients range from WinXP, Vista, and Win7 to Server 2008 R2 Remote Desktop Services. How can I make sure my users are warned of the expiration?

    Resultant Set of Policy for a user who was not prompted:

        Account Policies/Password Policy
        Policy                                          Setting                     Winning GPO
        Enforce password history                        10 passwords remembered     Default Domain Policy
        Maximum password age                            270 days                    Default Domain Policy
        Minimum password age                            0 days                      Default Domain Policy
        Minimum password length                         4 characters                Default Domain Policy
        Password must meet complexity requirements      Disabled                    Default Domain Policy
        Store passwords using reversible encryption     Disabled                    Default Domain Policy

        Account Policies/Account Lockout Policy
        Policy                                          Setting                     Winning GPO
        Account lockout duration                        20 minutes                  Default Domain Policy
        Account lockout threshold                       5 invalid logon attempts    Default Domain Policy
        Reset account lockout counter after             15 minutes                  Default Domain Policy

        Local Policies/Audit Policy
        Policy                                          Setting                     Winning GPO
        Audit account logon events                      Failure                     Default Domain Policy
        Audit account management                        Success, Failure            Default Domain Policy
        Audit directory service access                  Success, Failure            Default Domain Policy
        Audit logon events                              Failure                     Default Domain Policy
        Audit policy change                             Success, Failure            Default Domain Policy
        Audit privilege use                             Failure                     Default Domain Policy

        Local Policies/Security Options - Interactive Logon
        Policy                                                                  Setting     Winning GPO
        Interactive logon: Prompt user to change password before expiration    7 days      Default Domain Policy

    Read the article

  • Formatting an external HDD stuck at 70%

    - by mahmood
    My external HDD, a 250GB WD (USB-powered), seems to have a problem: whenever I try to copy some files, it gets stuck partway through the copy. I decided to format it, so I used the Windows format tool (not the quick format), but at nearly 70% it got stuck. Then I tried a low-level format with lowlevel. Again it got stuck at 70%. I've concluded that the HDD has bad sectors. So is there any tool that will mark the bad sectors and bypass them? It is not very reasonable to throw away 250GB because of some bad sectors! P.S. I saw a similar topic, but there was no conclusion there either. The SMART data is:

        Attribute                                   Raw value   Value   Threshold   Status
        Read Error Rate                             50          200     51          OK
        Spin-Up Time                                3275        154     21          OK
        Start/Stop Count                            2729        98      0           OK
        Reallocated Sectors Count                   0           200     140         OK
        Seek Error Rate                             0           100     51          OK
        Power-On Hours (POH)                        1057        99      0           OK
        Spin Retry Count                            0           100     51          OK
        Recalibration Retries                       0           100     51          OK
        Power Cycle Count                           1385        99      0           OK
        Power-off Retract Count                     425         200     0           OK
        Load/Unload Cycle Count                     12974       196     0           OK
        Temperature                                 43          43      0           OK
        Reallocation Event Count                    0           200     0           OK
        Current Pending Sector Count                23          200     0           Degradation
        Uncorrectable Sector Count                  0           100     0           OK
        UltraDMA CRC Error Count                    6           200     0           OK
        Write Error Rate/Multi-Zone Error Rate      0           100     51          OK

    It seems that the most important line is:

        Current Pending Sector Count                23          200     0           Degradation

    Any ideas on that?
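
    If an ext filesystem is acceptable for the drive, the classic route is badblocks (a sketch; /dev/sdb1 is a placeholder - double-check the device name before running anything destructive):

        badblocks -sv /dev/sdb1    # read-only scan; prints the bad block numbers
        mkfs.ext4 -c /dev/sdb1     # format, testing and skipping bad blocks (-cc for a slower read-write test)

    That said, a non-zero Current Pending Sector Count usually means the drive is actively degrading, so treat it as temporary storage at best.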

    Read the article

  • NTDS Replication Warning (Event ID 2089)

    - by Chris_K
    I have a simple little network with 3 AD servers in 2 sites. Site A has Win2k3 SP2 and Win2k SP4 servers; site B has a single Win2k3 SP2 server. All have been in place for at least 3 years now. Just last week I started getting Event 2089 "not backed up" warnings (example below) on both of the Win2k3 servers. I understand what the message means - no need to send me links to the TechNet article explaining it. I'll improve my backups. What I'm more curious about is: why did I just start getting this message now? Why haven't I been getting it for the past 3 years?!?

    Perhaps this is related: I recently decommissioned a few other sites and AD controllers (there used to be 3 more sites, each with its own controller). Don't worry, I did proper dcpromo exercises and made sure we didn't lose anything. But could shutting those down be related to why I get this error now? This won't keep me awake at night, but I am curious as to what changed...

        Event Type:     Warning
        Event Source:   NTDS Replication
        Event Category: Backup
        Event ID:       2089
        Date:           3/28/2010
        Time:           9:25:27 AM
        User:           NT AUTHORITY\ANONYMOUS LOGON
        Computer:       RedactedName
        Description:
        This directory partition has not been backed up since at least the following
        number of days.
        Directory partition: DC=MyDomain,DC=com
        'Backup latency interval' (days): 30
        It is recommended that you take a backup as often as possible to recover from
        accidental loss of data. However if you haven't taken a backup since at least
        the 'backup latency interval' number of days, this message will be logged every
        day until a backup is taken. You can take a backup of any replica that holds
        this partition. By default the 'Backup latency interval' is set to half the
        'Tombstone Lifetime Interval'. If you want to change the default 'Backup
        latency interval', you could do so by adding the following registry key.
        'Backup latency interval' (days) registry key:
        System\CurrentControlSet\Services\NTDS\Parameters\Backup Latency Threshold (days)
        For more information, see Help and Support Center at
        http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • check_snmp warning & critical thresholds with negative values

    - by Oesor
    I'm querying some signal-level values measured in dBm, and the SNMP host on the remote device reports them as negative values, e.g. -90 dBm. However, check_snmp seems to be incapable of dealing with negative numbers as part of its threshold values. If I specify the values as part of a collection of OIDs, it accepts the syntax but converts the SNMP value to positive, thus always generating a WARNING/CRITICAL result:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::AverageReceiveSNR.0,DEVICE-MIB::CurrentNoiseFloor.0 -w 10:,~:-85 -c 15:,~:-80 -vvvv
        /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::AverageReceiveSNR.0 DEVICE-MIB::CurrentNoiseFloor.0
        DEVICE-MIB::AverageReceiveSNR.0 = INTEGER: 25
        DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
        Processing line 1
          oidname: DEVICE-MIB::AverageReceiveSNR.0
          response: = INTEGER: 25
        Processing line 2
          oidname: DEVICE-MIB::CurrentNoiseFloor.0
          response: = INTEGER: -97
        SNMP CRITICAL - 25 *97* | DEVICE-MIB::AverageReceiveSNR.0=25 DEVICE-MIB::CurrentNoiseFloor.0=97

    If I run it with a single OID, it gives me an error that the format is incorrect:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -w ~:-85 -c ~:-80 -vvvv
        Range format incorrect

    And if I run it with no thresholds defined, it works properly and returns the right value. This makes the graphs correct, but it will never generate a notification when the value is out of range:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -vvvv
        /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::CurrentNoiseFloor.0
        DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
        Processing line 1
          oidname: DEVICE-MIB::CurrentNoiseFloor.0
          response: = INTEGER: -97
        SNMP OK - -97 | DEVICE-MIB::CurrentNoiseFloor.0=-97

    What am I doing wrong here? How would I, for example, generate a CRITICAL when the noise floor is -80 dBm or higher, a WARNING when it's between -85 and -80 dBm, and an OK when it's -85 dBm or lower? Do I have to write my own SNMP plugins when dealing with negative values?
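
    A workaround sketch, pending a proper fix: wrap the query in a tiny plugin that applies the dBm thresholds itself (the community string and thresholds are illustrative - match them to your device):

        #!/bin/bash
        # check_noise_floor: CRITICAL >= -80 dBm, WARNING >= -85 dBm, else OK.
        OID='DEVICE-MIB::CurrentNoiseFloor.0'
        val=$(snmpget -v1 -c public -Ovq 192.168.1.100 "$OID")
        if [ "$val" -ge -80 ]; then
            echo "NOISE CRITICAL - ${val} dBm"; exit 2
        elif [ "$val" -ge -85 ]; then
            echo "NOISE WARNING - ${val} dBm"; exit 1
        fi
        echo "NOISE OK - ${val} dBm | noise_floor=${val}"; exit 0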

    Read the article

  • What does this mean: "SATP VMW_SATP_LOCAL does not support device configuration"?

    - by Jason Tan
    Can anyone tell me what this means in ESXi 5.1?

        SATP VMW_SATP_LOCAL does not support device configuration

    I've googled it and get a lot of results, but so far all the pages that contain the string are discussing other matters. The storage array is an HDS HUS-VM and the hosts are HP BL460c G8 blades with Flex Fabric and Flex Fabric VCs, which I am in the process of commissioning and would like to start on the right foot - i.e. error- and warning-free!

        naa.600508b1001c56ee3d70da65f071da23
           Device Display Name: HP Serial Attached SCSI Disk (naa.600508b1001c56ee3d70da65f071da23)
           Storage Array Type: VMW_SATP_LOCAL
           Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
           Path Selection Policy: VMW_PSP_FIXED
           Path Selection Policy Device Config: {preferred=vmhba0:C0:T0:L1;current=vmhba0:C0:T0:L1}
           Path Selection Policy Device Custom Config:
           Working Paths: vmhba0:C0:T0:L1
           Is Local SAS Device: true
           Is Boot USB Device: false

    This is the same LUN:

        ~ # esxcli storage core device list -d naa.60060e80132757005020275700000016
        naa.60060e80132757005020275700000016
           Display Name: HITACHI Fibre Channel Disk (naa.60060e80132757005020275700000016)
           Has Settable Display Name: true
           Size: 204800
           Device Type: Direct-Access
           Multipath Plugin: NMP
           Devfs Path: /vmfs/devices/disks/naa.60060e80132757005020275700000016
           Vendor: HITACHI
           Model: OPEN-V
           Revision: 5001
           SCSI Level: 2
           Is Pseudo: false
           Status: degraded
           Is RDM Capable: true
           Is Local: false
           Is Removable: false
           Is SSD: false
           Is Offline: false
           Is Perennially Reserved: false
           Queue Full Sample Size: 0
           Queue Full Threshold: 0
           Thin Provisioning Status: unknown
           Attached Filters: VAAI_FILTER
           VAAI Status: supported
           Other UIDs: vml.020001000060060e801327570050202757000000164f50454e2d56
           Is Local SAS Device: false
           Is Boot USB Device: false
        ~ #

    Read the article

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats. The Tomcats connect to various databases, and we balance the load to them with mod_proxy_balancer. Currently we are receiving 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% (of what I estimate they can handle). In a few weeks there is an event happening, and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time to Tomcat gets above some threshold, serve an error page. This means that users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly - instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense, so I was looking for suggestions on how I could achieve this. Thanks.
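
    mod_proxy can't watch average latency natively, but a per-request timeout plus a friendly 503 page approximates the fail-fast behaviour; a sketch (the balancer name, path, and values are placeholders to merge into the existing configuration):

        cat <<'EOF' > /etc/apache2/conf.d/fail-fast.conf
        # Give up on a tomcat after 5s instead of queueing indefinitely,
        # and serve a static "busy" page for the resulting 503s.
        ProxyPass /app balancer://tomcats/ timeout=5
        ErrorDocument 503 /busy.html
        EOF
        apache2ctl graceful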

    Read the article

  • Exchange Mail Flow

    - by Tuck918
    Hello. I have a question. We have one Exchange 2003 server and two Exchange 2007 servers. Almost all of our mailboxes are on 2007, but we do still have one shared mailbox, a Unity mailbox, and a journaling mailbox on 2003. Public folders have been set to replicate to 2007. I have set up a send connector on 2007 with a cost of 1, and the receive connectors on 2007 have Anonymous Users checked. On 2003 there are two connectors: the Internet email connector and the connector that connects 2003 to 2007. We have a spam-filtering device that email goes through before it is handed off to Exchange, and it is set to send email to one of our Exchange 2007 servers.

    Here is my question/problem: even though the spam-filtering device is set to forward email to Exchange 2007, somehow all of our email still goes through the Exchange 2003 server before it finally hits the users' mailboxes on the Exchange 2007 server. How can I change it so that all email goes directly to Exchange 2007 and never routes through Exchange 2003, both inbound and outbound?

    I would also like to add: in the EMC under Org > Hub > Send Connector there are two connectors. One is the "Internet Connector" from the 2003 box and the other is the new one I created. The address space on the 2003 one is set to a cost of 2, with no smart hosts, and the 2003 box is listed as the source server. The other send connector has an address space cost of 1, no smart host, and the two Exchange 2007 servers listed as the source servers. In the EMC under Server > Hub, my two Exchange 2007 servers are listed. Each one has two receive connectors, both set up the same way. The Default receive connector has Anonymous Users checked. The other receive connector is labeled "Client"; I am not sure what it does or why it's there, and Anonymous Users are not checked on it. No smart hosts are configured on 2003.

    Additional details: currently we have 3 Exchange servers - one Exchange 2003 server and two Exchange 2007 servers. The Exchange 2003 server is the acting "bridgehead" server, and all email routes through it, inbound and outbound. We want to decommission this server and use our two Exchange 2007 servers as our mailbox servers. All of our user mailboxes are already on one of the Exchange 2007 boxes, and we want to move what's left on the Exchange 2003 box to our other Exchange 2007 box. Both Exchange 2007 servers are currently CAS, HT, and MB servers. The spam-filtering device sits between our Exchange servers and the firewall and is configured to send messages to one of the Exchange 2007 servers, but when we look at the message headers we can see that messages are still being routed through the Exchange 2003 box. We want to bypass Exchange 2003 in the routing process, as it is dying and starting to have major issues, so every time it goes down our email is down. Is there possibly some sort of AD routing link/site link issue going on?

    Read the article

  • SBD killing both cluster nodes when there are even small SAN network problems

    - by Wieslaw Herr
    I am having problems with STONITH SBD in an openais-based cluster. Some background: the active/passive cluster has two nodes, node1 and node2, configured to provide an NFS service to users. To avoid problems with split-brain, they are both configured to use SBD. SBD uses two 1MB disks available to the hosts via a multipath Fibre Channel network.

    The problems start if something happens to the SAN network. For example, today one of the Brocade switches got rebooted and both nodes lost 2 out of 4 paths to each disk, which resulted in both nodes committing suicide and rebooting. This, of course, was highly undesirable, because (a) there were paths left, and (b) even if the switch were out for only 10-20 seconds, a reboot cycle of both nodes takes 5-10 minutes and all NFS locks are lost.

    I tried increasing the SBD timeout values (to 10s+ values; dump attached at the end), but a "WARN: Latency: No liveness for 4 s exceeds threshold of 3 s" hints that something isn't working as I would expect it to.

    Here is what I would like to know:

    (a) Is SBD working as it should, killing nodes when 2 paths are still available?
    (b) If not, is the attached multipath.conf correct? The storage controller we use is an IBM SVC (IBM 2145); should there be any specific configuration for it (as in multipath.conf.defaults)?
    (c) How should I go about increasing the timeouts in SBD?

    Attachments: multipath.conf and sbd dump (http://hpaste.org/69537)
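
    On (c): the timeouts live in the SBD disk header, so they are dumped and re-created rather than edited in place (a sketch; the device path is a placeholder, msgwait should stay roughly twice the watchdog timeout, and the cluster's stonith-timeout must exceed msgwait):

        sbd -d /dev/mapper/sbd0 dump                # show the current header timeouts
        sbd -d /dev/mapper/sbd0 -1 30 -4 60 create  # re-create with watchdog=30s, msgwait=60s
                                                    # (run with the cluster stopped; create wipes the header)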

    Read the article

  • What is the optimum way to secure a company-wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company, and generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with the customer, which then goes to a competitor. The administration of such a matrix is a nightmare, because (1) the matrix is based on departments and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting wiki content to non-super-secret material, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site racking up 2000 views in two days was reported). Again, this is not ideal, because a high view count does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.

    Read the article

  • Recommend a free temperature-monitoring utility for cores + video card, on Vista?

    - by smci
    Looking for your recommendations for a free temperature-monitoring utility for my PC (Core 2) and graphics card, on Vista. (Question reposted with the hyperlinks now that I have 10 reputation.) I don't want all the geeky details; I don't overclock, and I don't see the need to mess with my fan speeds or motherboard settings. I just want something fairly basic to help with troubleshooting intermittent overheats on the video card and/or mobo:

    - Must run on Windows Vista (yes, don't laugh).
    - Ideally displays temperature when minimized to the toolbar, and/or automatically alerts me when the temperature on either a core or the video card exceeds a threshold.
    - Ideally measures the temperature of the video card and system as well, not just the cores. HDD temperature is not necessary, I think.
    - Logging is nice; graphs are also nice.
    - Portability to Linux and Mac is nice.

    Apparently Everest is the best paid option, but I'm not prepared to spend $40. I found the following free options, but no head-to-head at-a-glance comparison:

    - CoreTemp (only does cores, not the video card?)
    - Open Hardware Monitor (nice graphs, displays when minimized to toolbar, no alerts)
    - RealTemp (has alerts, works minimized, lightweight install)
    - HWMonitor from CPUID (no alerts; CNET: "[free version is] simple but effective")
    - CPUCool (not free: 21-day trialware, then $18)
    - SpeedFan from Almico (too geeky, detail overload; CNET: "most users won't be able to make head or tail of the data this utility provides")
    - Motherboard Monitor (CNET: not recommended, requires expert knowledge of your mobo, dangerous)
    - Intel Thermal Analysis Tool (only does cores, not the video card? has logging)

    Useful discussions I found: hardwarecanucks.com, superuser.com 1, 2, forums.techarena.in

    (Update: I downloaded RealTemp 3.60 and it meets all my needs; the customizable alert temperature is great. Open Hardware Monitor seems to be the other one that mostly meets my needs, except for alerts, but it is portable. I tried SpeedFan but the interface is very cluttered - too much unnecessary detail; it needs a Basic/Advanced mode and a revamp of the interface. The answer to my underlying issue is the nVidia GeForce LE 7500 video card, which runs very hot.)

    Read the article

  • Using different SSD types (not only SATA-based) as a system drive

    - by Hubert Kario
    Currently I have a ThinkPad X61s and want to make it both a bit faster and a bit more power-efficient. For that reason I thought that adding an SSD would make the most sense. Unfortunately, for financial reasons, an SSD of over 200GB capacity is out of reach for me (not only would it be worth more than the rest of the laptop, but I currently have a 500GB drive in the machine, so even such a drive would be something of a downgrade). During preliminary testing with a cheap Transcend 4GB Class 6 card (14MiB/s streaming, 9MiB/s random read) I saw boot times reduced by half, so putting just the OS on one would already be an improvement. Unfortunately, my system is now about 11GiB in size, so anything less than 16GB would be constraining.

    In this laptop I can connect additional drives in at least 5 different ways:

    - using a SATA-ATA converter caddy in the X6 UltraBase
    - using the internal mini PCIe slot
    - using the integrated SDHC slot
    - using the CardBus (a.k.a. PCMCIA or PC Card) slot
    - using USB

    Thankfully, because I use only Linux on this PC, bootability is irrelevant: I can put the /boot partition on the internal HDD and / on any of the above-mentioned flash devices (as I already did for the SDHC test). From what I was able to research, and from my own experience, those options come with rather big downsides or other problems.

    SATA-ATA caddy: It has three downsides. I would have to carry the UltraBase with me at all times (it's not really inconvenient, but those grams do add up) and couldn't disconnect it when I want to disconnect the battery. It makes the bay unusable for the optical drive and occasional quick access to other hard drives. And the only caddies I could buy have rather flaky controllers in them, so putting my OS on one would hamper stability.

    Internal mini PCIe slot: This would be an ideal solution, if only I could find real PCIe SSDs, not devices that talk only SATA or ATA over a PCIe mechanical connection (the ones used in the Dell Mini or Asus EEE). Theoretically Samsung did release such devices, but I couldn't find them in retail anywhere.

    Integrated SDHC slot: A nice solution with a single drawback: the fastest 16GB SDHC card on the market can only do around 35MiB/s read and 15MiB/s write, while still costing as much as a normal 40GB SATA SSD that's 10 times faster. Not really cost-effective.

    CardBus (a.k.a. PCMCIA or PC Card) slot: Those cards are much faster than the SDHC option (there are ones that do well over 50MiB/s read in benchmarks), and from what I could find the PCMCIA controller in my laptop supports UDMA, so it should be able to deliver comparable speeds. They still cost about as much as SD cards, but at least they provide streaming performance comparable to my current HDD.

    USB: That's the worst option. Not only is it limited to 20-30MiB/s by the interface itself, the drive would stick out of the laptop, so it's a big no-no.

    The question: as such, I think that going the "CF in a CardBus adapter" route will be the best option. My question is: has anyone tried using CF cards in CardBus adapters as system drives with Linux, on ThinkPad laptops or laptops in general? What was the real-world performance? I don't have any CF cards, so I can't check how well suspend/resume works, or whether it's easy to make it work in the initramfs (I'm using Arch Linux, and the SD card was trivial: add 3 modules on a single config line and rebuild the initramfs), so any tips/gotchas on this are welcome as well.

    Read the article
