Search Results

Search found 4432 results on 178 pages for 'fail'.


  • "Windows detected a hard drive" issue in Windows 7 x64

    - by Jasiu
    I upgraded from a 60GB OCZ Vertex 2 SSD to a 120GB OCZ-Agility3. I cloned the drive from the Vertex to the new Agility. Everything seemed to have gone well and I have not had any problems, but in the past month I have gotten this error. I downloaded the OCZToolboxMP and ran the SMART utility and don't see anything wrong:

    SMART READ DATA
    ModelNumber: OCZ-AGILITY3
    Serial Number: OCZ-Y1945X77438P4NU6
    WWN: 5-e8-3a-97 ebea5ba76
    Revision: 10
    Attributes List
    1: SSD Raw Read Error Rate - Normalized Rate: 70 total ECC and RAISE errors
    5: SSD Retired Block Count - Reserve blocks remaining: 100%
    9: SSD Power-On Hours - Total hours power on: 968
    12: SSD Power Cycle Count - Count of power on/off cycles: 28
    171: SSD Program Fail Count - Total number of Flash program operation failures: 0
    172: SSD Erase Fail Count - Total number of Flash erase operation failures: 0
    174: SSD Unexpected power loss count - Total number of unexpected power loss: 11
    177: SSD Wear Range Delta - Delta between most-worn and least-worn Flash blocks: 0
    181: SSD Program Fail Count - Total number of Flash program operation failures: 0
    182: SSD Erase Fail Count - Total number of Flash erase operation failures: 0
    187: SSD Reported Uncorrectable Errors - Uncorrectable RAISE errors reported to the host for all data access: 4145
    194: SSD Temperature Monitoring - Current: 30 High: 30 Low: 30
    195: SSD ECC On-the-fly Count - Normalized Rate: 120
    196: SSD Reallocation Event Count - Total number of reallocated Flash blocks: 100
    201: SSD Uncorrectable Soft Read Error Rate - Normalized Rate: 120
    204: SSD Soft ECC Correction Rate (RAISE) - Normalized Rate: 120
    230: SSD Life Curve Status - Current state of drive operation based upon the Life Curve: 100
    231: SSD Life Left - Approximate SSD life remaining: 100%
    241: SSD Lifetime writes from host - lifetime writes 893 GB
    242: SSD Lifetime reads from host - lifetime reads 968 GB

    Does anyone have any ideas of what might be wrong, and how I can go about fixing it? Please let me know if there is other information I can provide. Thanks for your help. System: Windows 7 x64 SP1, AMD Phenom II X4 940, 8GB RAM.

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails, the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine, and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI):
    - All NFS-based VMs remain up and working perfectly through the failover and after.
    - All VMs running on iSCSI eventually report an error about not being able to write to a particular block, then an error about journaling not working, and then the file system goes read-only.
    To get the VMs working again I have to do the following:
    1. Force shutdown of the "broken" VMs.
    2. Detach the iSCSI SR.
    3. Re-attach the iSCSI SR.
    4. Boot the VM on a different server (5 in my pool).
    If I don't boot on a different server I get this error: "Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")". The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail (the settings in question are sketched below). In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
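    For reference, a sketch of the open-iscsi timeout settings that typically govern this behavior; the values below are illustrative assumptions, not the configuration actually tested above:

    # /etc/iscsid.conf (sketch; values are examples, tune for your SAN)
    # How long queued I/O waits for a dead session to recover before being
    # failed up to the filesystem (the layer that then remounts read-only):
    node.session.timeo.replacement_timeout = 15
    # NOP-Out ping interval/timeout used to detect a dead connection:
    node.conn[0].timeo.noop_out_interval = 5
    node.conn[0].timeo.noop_out_timeout = 5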

    Read the article

  • Ensuring a repeatable directory ordering in Linux

    - by Paul Biggar
    I run a hosted continuous integration company, and we run our customers' code on Linux. Each time we run the code, we run it in a separate virtual machine. A frequent problem is that a customer's tests will sometimes fail because of the directory ordering of their code checked out on the VM. Let me go into more detail. On OS X, the HFS+ file system ensures that directories are always traversed in the same order. Programmers who use OS X assume that if it works on their machine, it must work everywhere. But it often doesn't work on Linux, because Linux file systems do not offer ordering guarantees when traversing directories. As an example, consider two files, a.rb and b.rb, where a.rb defines MyObject and b.rb uses MyObject. If a.rb is loaded first, everything will work. If b.rb is loaded first, it will try to access an undefined constant MyObject, and fail. But worse than this, it doesn't always just fail. Because directory traversal on Linux is not deterministic, the order will differ between machines. Sometimes the tests pass and sometimes they fail, which is the worst possible result. (The usual application-level fix is to impose an order before loading, as sketched below.) So my question is: is there a way to make file system ordering repeatable? Some flag to ext4, perhaps, that says it will always traverse directories in some order? Or maybe a different file system that has this guarantee?
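    A minimal sketch of that application-level workaround, in C# for concreteness (the same one-liner exists in any language's standard library): never consume directory entries in the order the kernel returns them; impose an ordinal sort first.

    using System;
    using System.IO;
    using System.Linq;

    class DeterministicTraversal
    {
        static void Main()
        {
            // Directory.EnumerateFiles, like readdir() on Linux, promises no
            // particular order, so impose a fixed, locale-independent one:
            var files = Directory.EnumerateFiles(".", "*.rb")
                                 .OrderBy(f => f, StringComparer.Ordinal);

            foreach (var file in files)
                Console.WriteLine(file); // load/require in this stable order
        }
    }

    The ordinal comparer matters: a culture-aware sort can itself differ between machines with different locales, which would reintroduce the same flakiness by another route.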

    Read the article

  • Sun T5120: Memory issue with 8GB DIMMs, "Unsupported memory configuration"

    - by watain
    I'm trying to upgrade the RAM in a Sun T5120 server, replacing 2GB DIMMs (Sun P/N: 501-7953-01) with 8GB DIMMs (Sun P/N: 511-1262-01). When bringing up the host system, I get the following errors on the ILOM:

    -> show faulty
    Target              | Property               | Value
    --------------------+------------------------+---------------------------------
    /SP/faultmgmt/0     | fru                    | /SYS/MB
    /SP/faultmgmt/0/    | timestamp              | Dec 14 15:29:42
     faults/0           |                        |
    /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU3 Forced fail
     faults/0           |                        | (IBIST)
    /SP/faultmgmt/0/    | timestamp              | Dec 14 15:29:28
     faults/1           |                        |
    /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU2 Forced fail
     faults/1           |                        | (IBIST)
    /SP/faultmgmt/0/    | timestamp              | Dec 14 15:29:13
     faults/2           |                        |
    /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU1 Forced fail
     faults/2           |                        | (IBIST)
    /SP/faultmgmt/0/    | timestamp              | Dec 14 15:28:59
     faults/3           |                        |
    /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU0 Forced fail
     faults/3           |                        | (IBIST)
    /SP/faultmgmt/1     | fru                    | /SYS
    /SP/faultmgmt/1/    | timestamp              | Dec 14 15:29:42
     faults/0           |                        |
    /SP/faultmgmt/1/    | sp_detected_fault      | Dec 14 15:29:42 ERROR:
     faults/0           |                        | Unsupported memory
                        |                        | configuration

    As you can see, the only error message I get is "Unsupported memory configuration". Note that I'm absolutely sure that I placed the DIMMs in the correct slots. Might this issue be related to the voltage of the DIMMs? Any suggestions on how to troubleshoot this? It seems similar to the one explained at "Inserted disabled" while upgrading Sun Sparc t5120 memory. However, the given link http://docs.sun.com/source/820-4445-10/chapter1.html seems to point to a page that no longer exists...

    Read the article

  • Are there alternatives to Sysinternals ADInsight?

    - by mmcglynn
    I had been using ADInsight from Sysinternals to trace Active Directory calls from my workstation, but the application has failed. Where previously the Active Directory events were traced and logged, now the window remains blank, whether the application is in capture mode or not. I have run as Administrator, rebooted, and downloaded a new version; none of those actions has returned the program to a functional state. The Sysinternals forums don't offer much hope, since this tool is known to fail often. Is there a tool that has similar functionality?
    Questions:
    - Does the tool fail when run from another workstation with your account? Yes
    - Does it fail from your (and/or) another workstation using someone else's account? Yes
    - Is there anything in the event log of your workstation? No

    Read the article

  • Install IIS on Windows 7

    - by rad
    I have tried many ways to install IIS in my Windows 7 Professional environment, and I always get an error:
    - With the Web Platform Installer: "Fatal Error during installation"
    - With Windows Features in the Control Panel: "An error has occurred. Not all of the features were successfully changed"
    - With the command line (http://forums.iis.net/t/1152513.aspx): "Fatal Error during installation"
    In the log file I have IIS.log:

    [04/10/2012 16:13:25] "C:\Windows\SysWOW64\inetsrv\aspnetca.exe" /install /basic 2.0.50727.0
    [04/10/2012 16:13:25] < !!FAIL!! RegOpenKeyEx(HKLM\SYSTEM\CurrentControlSet\Services\ASP.NET_2.0.50727\Names) result=0x80070002
    [04/10/2012 16:13:25] < !!FAIL!! SetAccessToPerfCountersKeys() result=0x80070002
    [04/10/2012 16:13:25] < !!FAIL!! Installation failure, result=0x80070002

    If I uninstall/reinstall IIS, I get some errors but my site runs correctly; then after a Windows restart the install is rolled back and I lose my installation. Any help?

    Read the article

  • How to stop a driver from running - self-protected and rootkit-hidden

    - by Aristos
    I have this serious problem: for the first time, I cannot stop a program from running. Something on one laptop runs as a legacy system driver, is self-protecting, and hides its service like a rootkit. Anything I try to remove it with fails. When a program or anti-rootkit tool tries to remove the hidden registry setting to make it stop, I get this error: "a device attached to the system is not functioning". So, any idea that can help me stop it from running, or even delete it at startup? My one limitation is that the hard drive is in a laptop, so I cannot remove it and attach it to another machine. This program does not let me touch the registry and does not let me touch the file. Moving the file on boot fails to delete it, RootRepeal fails to delete it, RootkitRevealer from Sysinternals fails to reveal it! Everything fails. Do you have any experience of this, or any suggestion for how to stop this driver from running?

    Read the article

  • Belarc Advisor (Store Passwords using Reversible Encryption)

    - by Steve
    Hi, I'm using Belarc Advisor to examine my PC. Part of BA is a security benchmark summary, which examines components of Windows security and provides a benchmark rating. Two items are marked as Fail:
    - Store Passwords using Reversible Encryption
    - Password History Size
    I have opened the Local Security Settings tool from Control Panel > Administrative Tools and ensured that the "Store passwords using reversible encryption" setting is enabled. Also, I've set the password history to a number. So I'm a bit miffed about the Fail marks. Any idea why they appear? Any clues how I can pass them? Thanks, Steve.

    Read the article

  • How to analyze a scenario where a bug didn't get caught and adjust development workflow to prevent similar errors

    - by durron597
    I had a bug that was really difficult to track down, because all the unit tests were green but the production application didn't work properly. Here's what happened:
    - I had a filter class that set my application to ignore data that was not in some specified time windows.
    - The unit test, which seemed thorough to me, was green. Additionally, my integration tests also produced results as expected.
    - Production, however, did not work.
    As a result of the first two bullets, this problem was very difficult to find. It turned out that my test dates were using my time zone (America/Chicago) but the production data was providing dates in UTC, which I did not realize, and the logic of the filter wasn't correct for UTC dates. (I was using Joda-Time DateTime objects; a sketch of the trap follows below.) Where did my workflow break down?
    - Did I fail to produce a spec that said the logic needed to handle dates in any time zone?
    - Did I fail to thoroughly consider all cases at the unit test level?
    - Did I fail to ensure the integration test was sufficiently similar to production?
    - Something else?
    What changes can I make to my workflow to better prevent this sort of mistake in the future? How can I more effectively debug a problem when there is an issue in production but not in testing?
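    To make the trap concrete, here is a minimal C# analogue (the original used Joda-Time in Java; the class and values here are illustrative assumptions): a time window built from local wall-clock times silently admits a UTC instant that is actually hours outside it, because the raw comparison ignores the zone.

    using System;
    using System.Globalization;

    class TimeZoneTrap
    {
        static void Main()
        {
            // The window the developer intended, written as local wall-clock times:
            var windowStart = new DateTime(2012, 6, 1, 9, 0, 0, DateTimeKind.Local);
            var windowEnd   = new DateTime(2012, 6, 1, 17, 0, 0, DateTimeKind.Local);

            // A production sample, delivered as UTC ("...Z"):
            var sample = DateTime.Parse("2012-06-01T09:30:00Z",
                                        CultureInfo.InvariantCulture,
                                        DateTimeStyles.AdjustToUniversal);

            // DateTime comparisons ignore Kind, so this reports "in the window"
            // even though 09:30 UTC is 04:30 in America/Chicago:
            Console.WriteLine(windowStart <= sample && sample <= windowEnd); // True

            // Normalizing both sides to one zone exposes the mismatch
            // (the result depends on the machine's zone; west of UTC it prints False):
            Console.WriteLine(windowStart.ToUniversalTime() <= sample
                              && sample <= windowEnd.ToUniversalTime());
        }
    }

    Running the test suite once on a machine (or CI job) pinned to UTC would have surfaced the bug before production did.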

    Read the article

  • @CodeStock 2012 Review: Leon Gersing ( @Rubybuddha ) - "You"

    "YOU"Speaker: Leon GersingTwitter: @Rubybuddha Site: http://about.me/leongersing I honestly had no idea what I was getting in to when I sat down in to this session. I basically saw the picture of the speaker and knew that it would be a good session. I was completely wrong; it was the BEST SESSION of CodeStock 2012.  In fact it was so good, I texted another coworker attending the conference to get over and listen to Leon. Leon took on the concept of growth in the software development community. He specifically referred David Hansson in his ability to stick to his beliefs when the development community thought that he was crazy for creating Ruby on Rails. If you do not know this story Ruby on Rails is one of the fastest growing web languages today. In addition, he also touched on the flip side of this argument in that we must be open to others ideas and not discard them so quickly because we all come from differing perspectives and can add value to a project/team/community. This session left me with two very profound concepts/quotes: “In order to learn you must do it badly in front of a crowed and fail.” - @Rubybuddha I can look back on my career so far and say that he is correct; I think I have learned the most after failing, especially when I achieved this failure in front of other. “Experts must be able to fail.” - @Rubybuddha I think we can all learn from our own mistakes but we can also learn from others. When respected experts fail it is a great learning opportunity for the entire team as well as the person who failed. When expert admit mistakes and how they worked through them can be great learning tools for other developers so that they know how to avoid specific scenarios and if they do become stuck in the same issue they will know how to properly work their way out of them.

    Read the article

  • Naming a class that processes orders

    - by p.campbell
    I'm in the midst of refactoring a project. I've recently read Clean Code and want to heed some of the advice within, with particular interest in the Single Responsibility Principle (SRP). Currently there's a class called OrderProcessor in the context of a manufacturing product order system. This class currently performs the following routine every n minutes:
    - check the database for newly submitted and unprocessed orders (via a Data Layer class already, phew!)
    - gather all the details of the orders
    - mark them as in-process
    - iterate through each to:
      - perform some integrity checking
      - call a web service on a 3rd-party system to place the order
      - check the status return value of the web service for success/fail
      - email somebody if the web service returns fail
    - constantly log to a text file on each operation or possible fail point
    I've started by breaking this class out into new classes like:
    - OrderService - poor name. This is the one that wakes up every n minutes
    - OrderGatherer - calls the DL to get the orders from the database
    - OrderIterator (? seems too forced or poorly named)
    - OrderPlacer - calls the web service to place the order
    - EmailSender
    - Logger
    I'm struggling to find good names for each class and to implement SRP in a reasonable way. How could this class be separated into new classes with discrete responsibilities? (A sketch of one possible decomposition follows below.)
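    One hedged way to carve it up, sketched in C# (the names and types here are illustrative assumptions, not prescribed): keep the scheduler-facing class dumb and inject one small collaborator per responsibility, so each piece is named for the single thing it does and can be tested alone.

    using System.Collections.Generic;

    // Hypothetical stand-ins for the real domain objects.
    public class Order { public int Id; }
    public class PlacementResult { public bool Succeeded; public string Message; }

    // One interface per responsibility from the list above.
    public interface IOrderSource     { IReadOnlyList<Order> FetchUnprocessed(); }      // DB read + mark in-process
    public interface IOrderValidator  { bool IsValid(Order order); }                    // integrity checks
    public interface IOrderPlacer     { PlacementResult Place(Order order); }           // 3rd-party web service call
    public interface IFailureNotifier { void Notify(Order order, PlacementResult r); }  // email on failure
    public interface IActivityLog     { void Record(string message); }                  // text-file logging

    // Pure orchestration: no policy of its own, so it stays easy to name and test.
    public class OrderProcessingRun
    {
        private readonly IOrderSource source;
        private readonly IOrderValidator validator;
        private readonly IOrderPlacer placer;
        private readonly IFailureNotifier notifier;
        private readonly IActivityLog log;

        public OrderProcessingRun(IOrderSource source, IOrderValidator validator,
                                  IOrderPlacer placer, IFailureNotifier notifier,
                                  IActivityLog log)
        {
            this.source = source; this.validator = validator;
            this.placer = placer; this.notifier = notifier; this.log = log;
        }

        public void Execute()
        {
            foreach (var order in source.FetchUnprocessed())
            {
                if (!validator.IsValid(order))
                {
                    log.Record($"Order {order.Id} failed validation");
                    continue;
                }
                var result = placer.Place(order);
                log.Record($"Order {order.Id}: {result.Message}");
                if (!result.Succeeded) notifier.Notify(order, result);
            }
        }
    }

    The "wakes up every n minutes" part then lives in whatever timer or scheduler hosts Execute(), which dissolves the awkward OrderIterator name entirely.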

    Read the article

  • Is there an NVIDIA driver (for a 7-series card) that will actually work for 12.10?

    - by DS13
    I see many similar topics on this, but I've tried all their suggestions and nothing has worked.
    ISSUE: I do a clean install of Ubuntu 12.10. It boots fine with the "nouveau" graphics driver, but graphics are very slow and choppy. The three other driver options in Ubuntu (official NVIDIA drivers) all result in a variation of the black screen on boot-up. There is NO access to a command line/GUI in any way whatsoever (I tried every option recommended out there, but the system is unusable at this stage). I can only reinstall and try different drivers, and I only ever get one shot at it.
    QUESTIONS: Does anyone know of an NVIDIA driver that will actually work with an NVIDIA GeForce 7350 LE? Or a 7-series card in general? This is my second computer, and I'm just trying to get a working install of Ubuntu on it. I don't want to put much money into it, as I have seen Ubuntu run great on much older/less capable machines. I've got a decent Intel processor (2.3GHz), 2GB of RAM, a 320GB hard drive, 32-bit architecture, and there is no other OS installed. It appears the graphics card is holding me back. Should I just buy a cheap non-NVIDIA graphics card as a replacement?
    TRIED SO FAR:
    - all drivers available in Ubuntu *all fail
    - manual install of some different NVIDIA drivers *all fail
    - also tried installing the generic kernel; the NVIDIA driver doesn't work in 12.10 *no difference
    - every method suggested to at least get a command line after switching to an NVIDIA driver *all fail

    Read the article

  • Is there a PROPRIETARY driver (NVIDIA or ATI) that actually works with 12.10?

    - by DS13
    NOTE: I see many similar topics on this, but I've tried all their suggestions and nothing has worked. THE MAIN DIFFERENCE SEEMS TO BE: I always get a black screen with a blinking cursor, while others seem to get through the boot-up and see distorted graphics or just their wallpaper.
    ISSUE: I do a clean install of Ubuntu 12.10. It boots fine with the "nouveau" graphics driver, but graphics (even just menus) are very slow, choppy, and distorted. The three other driver options in Ubuntu (official NVIDIA drivers) all result in a variation of the black screen on boot-up. There is NO access to a command line/GUI in any way whatsoever (I tried every option recommended out there, but the system is unusable at this stage). I can only reinstall and try different drivers, and I only ever get one shot at it.
    QUESTIONS:
    - Does anyone know of a PROPRIETARY driver that will actually work on 12.10 with an NVIDIA or ATI card?
    - Should I just buy a newer graphics card as a replacement?
    MORE INFO: This is my second computer, and I'm just trying to get a working install of Ubuntu on it. I don't want to put much money into it, as I have seen Ubuntu run great on much older/less capable machines. I've got a decent-ish Core 2 Duo Intel processor (2.13GHz), 2GB of RAM, a 320GB hard drive, 32-bit architecture, and there is no other OS installed. It appears the graphics card (NVIDIA GeForce 7350 LE) is holding me back.
    TRIED SO FAR:
    - all drivers available in Ubuntu *all fail
    - manual install of some different NVIDIA drivers *all fail
    - also tried installing the generic kernel; the NVIDIA driver doesn't work in 12.10 *no difference
    - tried installing 12.04 *same results
    - every method suggested to at least get a command line after switching to an NVIDIA driver *all fail
    UPDATE: Re-tried everything above with a new NVIDIA GeForce 210: same results for everything.
    UPDATE #2: Re-tried everything above with a new AMD Radeon HD 6450, installed the proprietary driver from Ubuntu's "Software Sources" menu, and EVERYTHING NOW WORKS. See "answer" below for a summary.

    Read the article

  • Why does C# exit when calling the Ada elaboration routine using debug?

    - by erict
    I have a DLL created in Ada using GPS. I am dynamically loading it and calling it successfully both from Ada and from C++. But when I try to call it from C#, the program exits on the call to the elaboration init. What am I missing? The exact same DLL is perfectly happy getting called from C++ and Ada.
    Edit: If I start the program without debugging, it also works with C#. But if I run it with the debugger, then it exits on the call to ElaborationInit. There are no indications in any of the Windows event logs. If the Ada DLL is Pure and I skip the elaboration init call, the DLL function itself is called correctly, so it has something to do with the elaboration.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Runtime.InteropServices;

    namespace CallingDLLfromCS
    {
        class Program
        {
            [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
            public static extern IntPtr LoadLibrary(string dllToLoad);

            [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
            public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName);

            [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
            public static extern bool FreeLibrary(IntPtr hModule);

            [UnmanagedFunctionPointer(CallingConvention.StdCall)]
            delegate int AdaCallable2_dlgt(int val);
            static AdaCallable2_dlgt fnAdaCallable2 = null;

            [UnmanagedFunctionPointer(CallingConvention.StdCall)]
            delegate void ElaborationInit_dlgt();
            static ElaborationInit_dlgt ElaborationInit = null;

            [UnmanagedFunctionPointer(CallingConvention.StdCall)]
            delegate void AdaFinal_dlgt();
            static AdaFinal_dlgt AdaFinal = null;

            static void Main(string[] args)
            {
                int result;
                bool fail = false; // assume the best

                IntPtr pDll2 = LoadLibrary("libDllBuiltFromAda.dll");
                if (pDll2 != IntPtr.Zero)
                {
                    // Note the @4 is because 4 bytes are passed. This can be further
                    // reduced by the use of a DEF file in the DLL generation.
                    IntPtr pAddressOfFunctionToCall = GetProcAddress(pDll2, "AdaCallable@4");
                    if (pAddressOfFunctionToCall != IntPtr.Zero)
                    {
                        fnAdaCallable2 = (AdaCallable2_dlgt)Marshal.GetDelegateForFunctionPointer(
                            pAddressOfFunctionToCall, typeof(AdaCallable2_dlgt));
                    }
                    else
                        fail = true;

                    pAddressOfFunctionToCall = GetProcAddress(pDll2, "DllBuiltFromAdainit");
                    if (pAddressOfFunctionToCall != IntPtr.Zero)
                    {
                        ElaborationInit = (ElaborationInit_dlgt)Marshal.GetDelegateForFunctionPointer(
                            pAddressOfFunctionToCall, typeof(ElaborationInit_dlgt));
                    }
                    else
                        fail = true;

                    pAddressOfFunctionToCall = GetProcAddress(pDll2, "DllBuiltFromAdafinal");
                    if (pAddressOfFunctionToCall != IntPtr.Zero)
                        AdaFinal = (AdaFinal_dlgt)Marshal.GetDelegateForFunctionPointer(
                            pAddressOfFunctionToCall, typeof(AdaFinal_dlgt));
                    else
                        fail = true;

                    if (!fail)
                    {
                        ElaborationInit.Invoke();
                        // ^^^^^^^^^^^^^^^^^^^^^ FAILS HERE
                        result = fnAdaCallable2(50);
                        Console.WriteLine("Return value is " + result.ToString());
                        AdaFinal();
                    }
                    FreeLibrary(pDll2);
                }
            }
        }
    }

    Read the article

  • Making sense of S.M.A.R.T

    - by James
    First of all, I think everyone knows that hard drives fail a lot more than the manufacturers would like to admit. Google did a study indicating that certain raw data attributes reported in a drive's S.M.A.R.T. status can have a strong correlation with its future failure:
    "We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in reallocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabilities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever."
    Seagate seems to be trying to obscure this information about its drives by claiming that only its software can accurately determine the status of the drive; and by the way, that software will not tell you the raw data values for the S.M.A.R.T. attributes. Western Digital has made no such claim to my knowledge, but its status-reporting tool does not appear to report raw data values either. I've been using HDTune and smartctl from smartmontools to gather the raw data values for each attribute, and I've found that indeed I am comparing apples to oranges for certain attributes. For example, most Seagate drives report many millions of read errors, while Western Digital drives show 0 read errors 99% of the time. I've also found that Seagate reports many millions of seek errors while Western Digital always seems to report 0. Now for my question: how do I normalize this data? Is Seagate producing millions of errors while Western Digital is producing none? Wikipedia's article on S.M.A.R.T. says that manufacturers have different ways of reporting this data. Here is my hypothesis: I think I found a way to normalize (is that the right term?) the data. Seagate drives have an additional attribute that Western Digital drives do not have (Hardware ECC Recovered). When you subtract the read error count from the ECC Recovered count, you'll probably end up with 0. This seems to be equivalent to Western Digital's reported "Read Error" count: Western Digital only reports read errors that it cannot correct, while Seagate counts up all read errors and tells you how many of those it was able to fix. I had a Seagate drive where the ECC Recovered count was less than the read error count, and I noticed that many of my files were becoming corrupt. This is how I came up with my hypothesis. The millions of seek errors that Seagate produces are still a mystery to me. Please confirm or correct my hypothesis if you have additional information. (The subtraction I describe is restated as code after the listing below.)
    Here is the SMART status of my Western Digital drive just so you can see what I'm talking about:

    james@ubuntu:~$ sudo smartctl -a /dev/sda
    smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
    Home page is http://smartmontools.sourceforge.net/

    === START OF INFORMATION SECTION ===
    Device Model:     WDC WD1001FALS-00E3A0
    Serial Number:    WD-WCATR0258512
    Firmware Version: 05.01D05
    User Capacity:    1,000,204,886,016 bytes
    Device is:        Not in smartctl database [for details use: -P showall]
    ATA Version is:   8
    ATA Standard is:  Exact ATA specification draft version not indicated
    Local Time is:    Thu Jun 10 19:52:28 2010 PDT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED

    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           0
      3 Spin_Up_Time            0x0027 179   175   021    Pre-fail Always  -           4033
      4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           270
      5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
      7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
      9 Power_On_Hours          0x0032 098   098   000    Old_age  Always  -           1468
     10 Spin_Retry_Count        0x0032 100   100   000    Old_age  Always  -           0
     11 Calibration_Retry_Count 0x0032 100   100   000    Old_age  Always  -           0
     12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           262
    192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           46
    193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           223
    194 Temperature_Celsius     0x0022 105   102   000    Old_age  Always  -           42
    196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
    197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
    198 Offline_Uncorrectable   0x0030 200   200   000    Old_age  Offline -           0
    199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
    200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           0
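    A minimal sketch of the hypothesis as arithmetic, with made-up raw values (nothing here comes from a vendor spec; it only restates the subtraction described above):

    using System;

    class SmartNormalization
    {
        static void Main()
        {
            // Hypothetical Seagate raw values: every read error counted,
            // alongside how many of them ECC managed to fix.
            long seagateRawReadErrors = 12605265;
            long seagateEccRecovered  = 12605265;

            // The normalization: what ECC could NOT fix should be comparable
            // to the 0 a WD drive reports directly as Raw_Read_Error_Rate.
            long unrecovered = seagateRawReadErrors - seagateEccRecovered;
            Console.WriteLine($"WD-comparable read error count: {unrecovered}"); // 0 on a healthy drive
        }
    }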

    Read the article

  • Old School Wizardry Tip: Batch File Comments

    - by jkauffman
    Johnny, the Endangered Keyboard-Driven Windows User
    Some of my proudest, obscure Windows tricks are losing their relevance. I know I’m not alone. Keyboard shortcuts are going the way of the dodo. I used to induce fearful awe by slapping Ctrl+Shift+Esc in front of the lowly, pedestrian Windows users. No Windows key on the keyboard? No problem: Ctrl+Esc. No menu key on the keyboard? Shift+F10. I am also firmly planted in the habit of closing windows with the Alt+Space menu (Alt+Space, C); and I harbor a brooding, slow-growing list of programs that fail to support this correctly (that means you, Paint.NET). Every time a new version of Windows comes out, the support for some of these minor time-saving habits gets pared out. Will I complain publicly? Nope, I know my old ways should be axed to conserve precious design energy. In fact, I disapprove of fierce un-intuitiveness for the sake of alleged productivity. Like vim, for example. If you approach a program after being away for 5 years, having to recall encyclopedic knowledge is a flaw. The RTFM disciples have lost. Anyway, some of the items in my arsenal of goofy time-saving tricks are still relevant today. I wanted to draw attention to one that’s stood the test of time.
    Remember Batch Files?
    Yes, it’s true, batch files are fading faster than the world of print. But they're not dead yet. I still run into some situations where I opt to use batch files. They are still relevant for build processes, or just various development workflow tools. Sure, there’s PowerShell, but there’s that stupid Set-ExecutionPolicy speed bump standing in your way; can you really spare the time to A) hunt down that setting on all machines affected and/or B) make futile efforts to convince your coworkers/boss that the hassle was worth it? When possible, I prefer the batch file wild card. And whenever I return to batch files, I end up researching some of the unintuitive aspects such as parameters, quote handling, and ERRORLEVEL. But I never have to remember to use “REM” for comment lines, because there’s a cleaner way to do them!
    Double Colon For Eye-Friendly Comments
    Here is a very simple batch file, with pretty much minimal content:

    @ECHO OFF
    SETLOCAL
    REM This is a comment
    ECHO This batch file doesn’t do much

    If you code on a daily basis, this may be more suitable to your eyes:

    @ECHO OFF
    SETLOCAL
    :: This is a comment
    ECHO This batch file doesn’t do much

    Works great! I imagine I find it preferable due to the similarity to comments in other situations: // or ; or #
    I’ve often made visual pseudo-line breaks in my code, and this colon-based syntax works wonders:

    @ECHO OFF
    SETLOCAL
    :: Do stuff
    ECHO Doing Stuff
    ::::::::::::::::::::::::::::
    :: Do more stuff
    ECHO This batch file doesn’t do much

    Not only is it more readable, but there’s a slight performance benefit: the batch file engine sees this as an invalid line label and immediately reads the following line. Use that fact to your advantage if this trick leads you into heated nerd debate.
    Two Pitfalls to Avoid
    Be aware that there are a couple of situations where this hack will fail you. It most likely won’t be a problem unless you’re getting really sophisticated with your batch files.
    Pitfall #1: Inline comments

    @ECHO OFF
    SETLOCAL
    IF EXIST C:\SomeFile.txt GOTO END ::This will fail
    :END

    Unfortunately, this fails. You can only have whitespace to the left of your comments.
    Pitfall #2: Code Blocks

    @ECHO OFF
    SETLOCAL
    IF EXIST C:\SomeFile.txt (
        :: This will fail
        ECHO HELLO
    )

    Code blocks, such as if statements and for loops, cannot contain these comments. This is ultimately due to the fact that entire code blocks are processed as a single line. I originally learned this from Rob van der Woude’s site. He goes into more depth about the behavior of the pitfalls as well, if you are interested in further details. I hope this trick earns you serious geek rep!

    Read the article

  • Social Search: Looking for Love

    - by Mike Stiles
    For marketers and enterprise executives who have placed a higher priority on and allocated bigger budgets to search over social, it might be time to notice yet another shift that’s well underway. Social is search.
    Search marketing was always more of an internal slam-dunk than other digital initiatives. Even a C-suite that understood little about the new technology world knew it’s a good thing when people are able to find you. Google was the new Yellow Pages. Only with Google, you could get your listing first without naming yourself “AAAA Plumbing.” There were wizards out there who could give your business prominence in front of people who were specifically looking for what you offered. Other search giants like Bing also came along to offer such ideal matchmaking possibilities.
    But what if the consumer isn’t using a search engine to find what they’re looking for? And what if the search engines started altering their algorithms so that search placement manipulation was more difficult? Both of those things have started to happen. Experian Hitwise’s numbers show that visits to the major search engines in the UK dropped 100 million through August. Search engines are far from dead, or even challenged. But more and more, the public is discovering the sites and brands they need through advice they get via social, not search.
    You’ll find the worlds of social and search increasingly co-mingling as well. Search behemoths Google and Bing are including Facebook and Google+ in their engines. Meanwhile, Facebook and Twitter have done some integration of global web search into their platforms.
    So what makes social such a worthwhile search entity for brands? First and foremost, the consumer has demonstrated a behavior of acting on recommendations from social connections. A cry in the wilderness like, “Anybody know any good catering companies?” will usually yield a link (and an endorsement) from a friend such as “Yeah, check out Just-Cheese-Balls Catering.” There’s no such human-driven force/influence behind the big search engines. Facebook’s Mark Zuckerberg and others call it “Friend Mining.” It is, in essence, searching for answers from friends’ experiences as opposed to faceless code. And Facebook has all of those friends’ experiences already stored as data. eMarketer says search is an $18 billion business, and investors are really into it. So no shock Facebook’s ready to leverage its social graph into relevant search.
    What do you do about all this as a brand? For one thing, it’s going to lead to some interesting paid marketing opportunities around the corner, including Sponsored Stories bought against certain queries, inserting deals into search results, capitalizing on social search results on mobile, etc. Apart from that, it might be time to stop mentally separating social and search in your strategic planning and budgeting. Courting your fans on social will cumulatively add up to more valuable, personally endorsed recommendations for your company when a consumer conducts a search on social. Fail to foster those relationships, fail to engage, fail to provide knock-em-dead customer service, fail to wow them with your actual products and services… and you’ll wind up with the visibility you deserve in social search results.

    Read the article

  • Maven Multi-Module builds not honoring failsafe-maven-plugin?

    - by Mike Cornell
    I recently discovered that Hudson was not the problem; it was Maven itself, as the multi-module build was causing the build failure, not Hudson. I just hadn't noticed where the issue actually existed. Leaving the original question here. I'm using the failsafe-maven-plugin to run some integration tests. The difference between failsafe and surefire is that failsafe allows failures and does not fail the build. On my nightly builds there are occasions when a service the integration tests use might be down. In normal builds, the failsafe plugin would let the build continue, since the integration tests are allowed to fail. However, Hudson does not seem to respect this: it stops the build and produces rain. I tried to turn the failsafe tests off on nightly builds using -DskipITs. This appears to fail since I'm in a multi-module build. Any ideas on how to get Maven to respect that these tests can fail even though they're part of a specific module? (A pom sketch follows the project structure below.) The project structure is as follows:
    -parent
     \-jar
     \-jar (where integration tests run)
     \-war
     \-ear
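    For reference, a hedged sketch of wiring -DskipITs through explicitly in the pom.xml of the module where the integration tests run; the plugin coordinates and the property default are assumptions, adjust to the project's actual plugin version:

    <!-- give the property a default so the reactor builds cleanly without -DskipITs -->
    <properties>
      <skipITs>false</skipITs>
    </properties>

    <!-- in <build><plugins> of the integration-test module -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <configuration>
        <!-- mvn -DskipITs=true now skips just these tests, even mid-reactor -->
        <skipITs>${skipITs}</skipITs>
      </configuration>
    </plugin>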

    Read the article

  • PHPUnit Selenium captureScreenshotOnFailure does not work?

    - by user342775
    I am using PHPUnit 3.4.12 to drive my Selenium tests. I'd like to be able to get a screenshot taken automatically when a test fails. This should be supported, as explained at http://www.phpunit.de/manual/current/en/selenium.html#selenium.seleniumtestcase.examples.WebTest2.php

    class WebTest
    {
        protected $captureScreenshotOnFailure = true;
        protected $screenshotPath = 'C:\selenium';
        protected $screnshotUrl = 'http://localhost/screenshots';

        public function testLandingPage($selenium)
        {
            $selenium->open("http://www.example.com");
            $selenium->fail("fail");
            ...
        }
    }

    As you can see, I am making the test fail, and in theory when it does it should take a screenshot and put it in C:\selenium, as I am running the Selenium RC server on Windows. However, when I run the test it will just give me the following:

    [root@testbox selenium]$ sh run
    PHPUnit 3.4.12 by Sebastian Bergmann.
    F
    Time: 8 seconds, Memory: 5.50Mb
    There was 1 failure:
    1) WebTest::testLandingPage
    fail
    /home/root/selenium/WebTest.php:32
    FAILURES!
    Tests: 1, Assertions: 0, Failures: 1.

    I do not see any screenshot in C:\selenium. I can, however, get a screenshot with $selenium->captureScreenshot("C:/selenium/image.png"); Any ideas or suggestions most welcome. Thanks

    Read the article

  • Environment variable names with parentheses, like %ProgramFiles(x86)%, in PowerShell?

    - by jwfearn
    How does one get the value of an environment variable whose name contains parentheses in a PowerShell script? To complicate matters, some variable names contain parentheses while others have similar names without parentheses. For example (using cmd.exe):

    C:\>set | find "ProgramFiles"
    CommonProgramFiles=C:\Program Files\Common Files
    CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
    ProgramFiles=C:\Program Files
    ProgramFiles(x86)=C:\Program Files (x86)

    We see that %ProgramFiles% is not the same as %ProgramFiles(x86)%. My PowerShell code is failing in a weird way because it's ignoring the part of the environment variable name after the parentheses. Since this happens to match the name of a different, but existing, environment variable, I don't fail; I just get the right value of the wrong variable. Here's a test function in PowerShell to illustrate my problem:

    function Do-Test
    {
        $ok = "C:\Program Files (x86)"        # note space between 's' and '('
        $bad = "$Env:ProgramFiles" + "(x86)"  # uses %ProgramFiles%

        $bin32 = "$Env:ProgramFiles(x86)"     # LINE 6, I want to use %ProgramFiles(x86)%
        if ( $bin32 -eq $ok )
        {
            Write-Output "Pass"
        }
        elseif ( $bin32 -eq $bad )
        {
            Write-Output "Fail: %ProgramFiles% used instead of %ProgramFiles(x86)%"
        }
        else
        {
            Write-Output "Fail: some other reason"
        }
    }

    And here's the output:

    PS> Do-Test
    Fail: %ProgramFiles% used instead of %ProgramFiles(x86)%

    Is there a simple change I can make to line 6 above to get the correct value of %ProgramFiles(x86)%?
    NOTE: In the text of this post I am using batch file syntax for environment variables as a convenient shorthand. For example, %SOME_VARIABLE% means "the value of the environment variable whose name is SOME_VARIABLE". If I knew the properly escaped syntax in PowerShell, I wouldn't need to ask this question.

    Read the article

  • convincing C# compiler that execution will stop after a member returns

    - by Sarah Vessels
    I don't think this is currently possible, or even necessarily a good idea, but it's something I was thinking about just now. I use MSTest for unit testing my C# project. In one of my tests, I do the following:

    MyClass instance;
    try
    {
        instance = getValue();
    }
    catch (MyException ex)
    {
        Assert.Fail("Caught MyException");
    }
    instance.doStuff(); // Use of unassigned local variable 'instance'

    To make this code compile, I have to assign a value to instance either at its declaration or in the catch block. However, Assert.Fail will never, to the best of my knowledge, allow execution to proceed past it, hence instance will never be used without a value. Why is it, then, that I must assign a value to it? If I change the Assert.Fail to something like throw ex, the code compiles fine; I assume because the compiler knows the exception will disallow execution from proceeding to a point where instance would be used uninitialized (a workaround built on that observation is sketched below). So is it a case of runtime versus compile-time knowledge about where execution will be allowed to proceed? Would it ever be reasonable for C# to have some way of saying that a member, in this case Assert.Fail, will never allow execution after it returns? Maybe that could be in the form of a method attribute. Would this be useful or an unnecessary complexity for the compiler?
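    A common workaround, sketched below with stub types so the example is self-contained (the stubs are assumptions, not MSTest's real implementation): keep the Assert.Fail for the test report, then rethrow, so the compiler's definite-assignment analysis sees the catch block as terminating.

    using System;

    static class Assert
    {
        // Stand-in for MSTest's Assert.Fail; the real one throws internally,
        // but its signature gives the compiler no way to know that.
        public static void Fail(string message) { throw new Exception(message); }
    }

    class MyException : Exception { }
    class MyClass { public void DoStuff() { } }

    class Demo
    {
        static MyClass GetValue() { return new MyClass(); }

        static void Main()
        {
            MyClass instance;
            try
            {
                instance = GetValue();
            }
            catch (MyException)
            {
                Assert.Fail("Caught MyException");
                throw; // unreachable at runtime, but proves to the compiler
                       // that control never leaves this catch block normally
            }
            instance.DoStuff(); // now definitely assigned
        }
    }

    As a footnote, much later C# versions added an attribute along the lines the question imagines: [DoesNotReturn] in System.Diagnostics.CodeAnalysis, which the compiler's flow analysis honors.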

    Read the article

  • Failing SATA HDD

    - by DaveCol
    I think my HDD is fried... Could someone confirm or help me restore it? I was using a hardware RAID 1 configuration [2 x 160GB SATA HDD] on a CentOS 4 installation. All of a sudden I started seeing bad sectors on the second HDD, which stopped being mirrored. I have removed the RAID array and have tested with SMART, which showed the following error:

    187 Unknown_Attribute 0x003a 001 001 051 Old_age Always FAILING_NOW 4645

    I have no clue what this means, or if I can recover from it. Could someone give me some ideas on how to fix this, or what HDD to get to replace it? Complete SMART report:

    Smartctl version 5.33 [i686-redhat-linux-gnu] Copyright (C) 2002-4 Bruce Allen
    Home page is http://smartmontools.sourceforge.net/

    === START OF INFORMATION SECTION ===
    Device Model:     GB0160CAABV
    Serial Number:    6RX58NAA
    Firmware Version: HPG1
    User Capacity:    160,041,885,696 bytes
    Device is:        Not in smartctl database [for details use: -P showall]
    ATA Version is:   7
    ATA Standard is:  ATA/ATAPI-7 T13 1532D revision 4a
    Local Time is:    Tue Oct 19 13:42:42 2010 COT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    See vendor-specific Attribute list for marginal Attributes.

    General SMART Values:
    Offline data collection status:  (0x82) Offline data collection activity
                                            was completed without error.
                                            Auto Offline Data Collection: Enabled.
    Self-test execution status:      (   0) The previous self-test routine completed
                                            without error or no self-test has ever
                                            been run.
    Total time to complete Offline
    data collection:                 ( 433) seconds.
    Offline data collection
    capabilities:                    (0x5b) SMART execute Offline immediate.
                                            Auto Offline data collection on/off support.
                                            Suspend Offline collection upon new command.
                                            Offline surface scan supported.
                                            Self-test supported.
                                            No Conveyance Self-test supported.
                                            Selective Self-test supported.
    SMART capabilities:            (0x0003) Saves SMART data before entering
                                            power-saving mode.
                                            Supports SMART auto save timer.
    Error logging capability:        (0x01) Error logging supported.
                                            General Purpose Logging supported.
    Short self-test routine
    recommended polling time:        (   2) minutes.
    Extended self-test routine
    recommended polling time:        (  54) minutes.

    SMART Attributes Data Structure revision number: 10
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x000f 100   253   006    Pre-fail Always  -           0
      3 Spin_Up_Time            0x0002 097   097   000    Old_age  Always  -           0
      4 Start_Stop_Count        0x0033 100   100   020    Pre-fail Always  -           152
      5 Reallocated_Sector_Ct   0x0033 095   095   036    Pre-fail Always  -           214
      7 Seek_Error_Rate         0x000f 078   060   030    Pre-fail Always  -           73109713
      9 Power_On_Hours          0x0032 083   083   000    Old_age  Always  -           15133
     10 Spin_Retry_Count        0x0013 100   100   097    Pre-fail Always  -           0
     12 Power_Cycle_Count       0x0033 100   100   020    Pre-fail Always  -           154
    184 Unknown_Attribute       0x0032 038   038   000    Old_age  Always  -           62
    187 Unknown_Attribute       0x003a 001   001   051    Old_age  Always  FAILING_NOW 4645
    189 Unknown_Attribute       0x0022 100   100   000    Old_age  Always  -           0
    190 Unknown_Attribute       0x001a 061   055   000    Old_age  Always  -           656408615
    194 Temperature_Celsius     0x0000 039   045   000    Old_age  Offline -           39 (Lifetime Min/Max 0/22)
    195 Hardware_ECC_Recovered  0x0032 070   059   000    Old_age  Always  -           12605265
    197 Current_Pending_Sector  0x0000 100   100   000    Old_age  Offline -           1
    198 Offline_Uncorrectable   0x0000 100   100   000    Old_age  Offline -           0
    199 UDMA_CRC_Error_Count    0x0000 200   200   000    Old_age  Offline -           62

    SMART Error Log Version: 1
    ATA Error Count: 4645 (device log contains only the most recent five errors)
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
    Powered_Up_Time is measured from power on, and printed as
    DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
    SS=sec, and sss=millisec. It "wraps" after 49.710 days.

    Error 4645 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours)
      When the command that caused the error occurred, the device was active or idle.
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      40 51 00 7b 86 b1 ea  Error: UNC at LBA = 0x0ab1867b = 179406459
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      c8 00 02 7b 86 b1 ea 00      00:38:52.796  READ DMA
      ec 03 45 00 00 00 a0 00      00:38:52.796  IDENTIFY DEVICE
      ef 03 45 00 00 00 a0 00      00:38:52.794  SET FEATURES [Set transfer mode]
      ec 00 00 7b 86 b1 a0 00      00:38:49.991  IDENTIFY DEVICE
      c8 00 04 79 86 b1 ea 00      00:38:49.935  READ DMA

    Error 4644 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours)
      When the command that caused the error occurred, the device was active or idle.
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      40 51 00 7b 86 b1 ea  Error: UNC at LBA = 0x0ab1867b = 179406459
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      c8 00 04 79 86 b1 ea 00      00:38:41.517  READ DMA
      ec 03 45 00 00 00 a0 00      00:38:41.515  IDENTIFY DEVICE
      ef 03 45 00 00 00 a0 00      00:38:41.515  SET FEATURES [Set transfer mode]
      ec 00 00 7b 86 b1 a0 00      00:38:49.991  IDENTIFY DEVICE
      c8 00 06 77 86 b1 ea 00      00:38:49.935  READ DMA

    Error 4643 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours)
      When the command that caused the error occurred, the device was active or idle.
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      40 51 00 7b 86 b1 ea  Error: UNC at LBA = 0x0ab1867b = 179406459
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      c8 00 06 77 86 b1 ea 00      00:38:41.517  READ DMA
      ec 03 45 00 00 00 a0 00      00:38:41.515  IDENTIFY DEVICE
      ef 03 45 00 00 00 a0 00      00:38:41.515  SET FEATURES [Set transfer mode]
      ec 00 00 7b 86 b1 a0 00      00:38:41.513  IDENTIFY DEVICE
      c8 00 06 77 86 b1 ea 00      00:38:38.706  READ DMA

    Error 4642 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours)
      When the command that caused the error occurred, the device was active or idle.
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      40 51 00 7b 86 b1 ea  Error: UNC at LBA = 0x0ab1867b = 179406459
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      c8 00 06 77 86 b1 ea 00      00:38:41.517  READ DMA
      ec 03 45 00 00 00 a0 00      00:38:41.515  IDENTIFY DEVICE
      ef 03 45 00 00 00 a0 00      00:38:41.515  SET FEATURES [Set transfer mode]
      ec 00 00 7b 86 b1 a0 00      00:38:41.513  IDENTIFY DEVICE
      c8 00 06 77 86 b1 ea 00      00:38:38.706  READ DMA

    Error 4641 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours)
      When the command that caused the error occurred, the device was active or idle.
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      40 51 00 7b 86 b1 ea  Error: UNC at LBA = 0x0ab1867b = 179406459
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      c8 00 06 77 86 b1 ea 00      00:38:41.517  READ DMA
      ec 03 45 00 00 00 a0 00      00:38:41.515  IDENTIFY DEVICE
      ef 03 45 00 00 00 a0 00      00:38:41.515  SET FEATURES [Set transfer mode]
      ec 00 00 7b 86 b1 a0 00      00:38:41.513  IDENTIFY DEVICE
      c8 00 06 77 86 b1 ea 00      00:38:38.706  READ DMA

    SMART Self-test log structure revision number 1
    Num  Test_Description  Status                    Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Short offline     Completed without error   00%        15131            -
    # 2  Short offline     Completed without error   00%        15131            -

    SMART Selective self-test log data structure revision number 1
     SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
        1        0        0  Not_testing
        2        0        0  Not_testing
        3        0        0  Not_testing
        4        0        0  Not_testing
        5        0        0  Not_testing
    Selective self-test flags (0x0):
      After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.

    Read the article

  • Windows 2008 R2 Task Scheduler Failure

    - by Jonathan Parker
    I have an application (.exe) which I am running via a scheduled task on Windows Server 2008 R2. The task runs fine, but when the .exe returns a non-zero exit code, the task still reports success when it should fail. I get this message: Task Scheduler successfully completed task "\CustomerDataSourceETL - Whics", instance "{a574f6b4-2614-413c-8661-bc35eaeba7cd}", action "E:\applications\CCDB-ETL\CustomerDataSourceETLConsole.exe" with return code 214794259. How can I get Task Scheduler to detect that the return code is non-zero and fail the task?

    Read the article
