Search Results

Search found 17972 results on 719 pages for 'always on'.


  • Is the use of explicit ' == true' comparison always bad? [closed]

    - by Slomojo
    Possible Duplicate: Make a big deal out of == true? I've been looking at a lot of code samples recently, and I keep noticing the use of... if( expression == true ) // do something... and... x = ( expression == true ) ? x : y; I've tended to always use... x = ( expression ) ? x : y; and... if( expression ) // do something... Where == true is implicit (and obvious?) Is this just a habit of mine, and I'm being picky about the explicit use of == true, or is it simply bad practice?
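
    To illustrate why this can occasionally be more than a style preference, here is a minimal sketch in Python (illustrative values, not from the question): the two spellings agree for genuine booleans, but an explicit == true silently turns a truthiness test into an equality test for merely "truthy" values.

        # Minimal sketch (Python used only for illustration; the question itself is language-agnostic).
        flag = True
        assert (flag == True) == bool(flag)   # identical outcome for a real bool

        count = 2
        if count:             # truthiness test: taken, because 2 is truthy
            print("count is truthy")
        if count == True:     # equality test: NOT taken, because 2 != True (True equals 1)
            print("count equals True")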

    Read the article

  • Windows: what is the difference between DEP always on and DEP opt-out with no exceptions?

    - by Peter Mortensen
    What is the difference between DEP always on ("/NoExecute=AlwaysOn" in boot.ini) and DEP opt-out ("/NoExecute=OptOut" in boot.ini) with no exceptions? "no exceptions" = empty list of programs for which DEP does not apply. DEP = Data Execution Prevention (hardware). One would expect it to work the same way, but it makes a difference for some applications. E.g. for all versions of UltraEdit 14 (14.2). It crashes at startup for DEP always on, at least on Microsoft Windows XP Professional Edition x64 edition. (2010-03-11: this problem has been fixed with UltraEdit 15.2 and later.) Update 1: I think this difference is caused by the backdoors that Microsoft has put into hardware DEP for OptOut, according to Fabrice Roux (see below). In the case of IrfanView, for which Steve Gibson observed the same difference as I did for UltraEdit (see below), the difference is caused by a non-DEP-aware EXE packer (ASPack) that Microsoft coded a backdoor for. Is there a difference between Windows XP, Windows Vista and Windows 7? Is there a difference between 32-bit and 64-bit versions of Windows? Sources: From [http://blog.fabriceroux.com/index.php/2007/02/26/hardware_dep_has_a_backdoor?blog=1], "Hardware DEP has a backdoor" by Fabrice Roux. 2007-02-26. "IrfanView was not using any trick to evade DEP ... Microsoft just coded a backdoor used only in OPTOUT. Basically Microsoft checks the executable header for a section matching one of the 3 strings. If one of these strings is found, DEP will be turned OFF for this application by Windows. ... 'aspack', 'pcle', 'sforce'" From [http://www.grc.com/sn/sn-078.htm], by Steve Gibson. "I can’t find any documentation on Microsoft’s site anywhere, because we’re seeing a difference between always-on and opt-out. That is, you would imagine that always-on mode would be the same as opting out if you weren’t having any opt-out programs. It turns out it’s not the case. For example ... the IrfanView file viewer ... runs fine in opt-out mode, even if it has not been opted out. But it won’t launch, Windows blocks it from launching ... in always-on mode." From [http://www.grc.com/sn/sn-083.htm], by Steve Gibson. "... IrfanView ... won’t run with DEP turned on. It’s because it uses an EXE packer, an executable compression program called ASPack. And it makes sense that it wouldn’t because naturally an executable compressor has got to decompress the executable, so it allocates a bunch of data memory into which it decompresses the compressed executable, and then it runs it. Well, it’s running a data allocation, which is exactly what DEP is designed to stop. On the other hand, UPX, which is actually the leading and most popular EXE compressor, it’s DEP-compatible because those guys realized, hey, when we allocate this memory, we should mark the pages as executable."
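
    As an aside, the section-name check that Fabrice Roux describes can be reproduced roughly as follows. This is a hedged sketch, not Microsoft's actual code: it assumes the third-party Python module pefile, and the UltraEdit path is only a placeholder.

        # Rough sketch: does this EXE carry one of the section names ('aspack', 'pcle',
        # 'sforce') that the OptOut backdoor reportedly looks for? Requires 'pefile'.
        import pefile

        BACKDOOR_MARKERS = (b"aspack", b"pcle", b"sforce")

        def has_optout_backdoor_section(path):
            pe = pefile.PE(path)
            for section in pe.sections:
                name = section.Name.rstrip(b"\x00").lower()
                if any(marker in name for marker in BACKDOOR_MARKERS):
                    return True
            return False

        # Placeholder path; point it at whatever executable you want to inspect.
        print(has_optout_backdoor_section(r"C:\Program Files\UltraEdit\uedit32.exe"))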

    Read the article

  • What popular "best practices" are not always best, and why?

    - by SnOrfus
    "Best practices" are everywhere in our industry. A Google search on "coding best practices" turns up nearly 1.5 million results. The idea seems to bring comfort to many; just follow the instructions, and everything will turn out fine. When I read about a best practice - for example, I just read through several in Clean Code recently - I get nervous. Does this mean that I should always use this practice? Are there conditions attached? Are there situations where it might not be a good practice? How can I know for sure until I've learned more about the problem? Several of the practices mentioned in Clean Code did not sit right with me, but I'm honestly not sure if that's because they're potentially bad, or if that's just my personal bias talking. I do know that many prominent people in the tech industry seem to think that there are no best practices, so at least my nagging doubts place me in good company. The number of best practices I've read about are simply too numerous to list here or ask individual questions about, so I would like to phrase this as a general question: Which coding practices that are popularly labeled as "best practices" can be sub-optimal or even harmful under certain circumstances? What are those circumstances and why do they make the practice a poor one? I would prefer to hear about specific examples and experiences.

    Read the article

  • WD1000FYPS hard drive is marked 0 MB in 3ware (and no SMART)

    - by osgx
    After a reboot my SATA 1TB WD1000FYPS (previously it was marked "Drive error") is marked 0 MB in the 3ware web GUI. Complete message: Available Drives (Controller ID 0) Port 1 WDC WD1000FYPS-01ZKB0 0.00 MB NOT SUPPORTED [Remove Drive] SMART gives me only Device Model and ATA protocol version 1 (not 7-8 as it should be for SATA). What does it mean? Just before the reboot, when it was marked only with "Device Error", SMART was: Device Model: WDC WD1000FYPS-01ZKB0 Serial Number: WD-WCASJ1130*** Firmware Version: 02.01B01 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Sun Mar 7 18:47:35 2010 MSK SMART support is: Available - device has SMART capability. SMART support is: Enabled SMART overall-health self-assessment test result: PASSED SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0003 188 186 021 Pre-fail Always - 7591 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 229 5 Reallocated_Sector_Ct 0x0033 199 199 140 Pre-fail Always - 3 7 Seek_Error_Rate 0x000e 193 193 000 Old_age Always - 125 9 Power_On_Hours 0x0032 078 078 000 Old_age Always - 16615 10 Spin_Retry_Count 0x0012 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0012 100 253 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 77 192 Power-Off_Retract_Count 0x0032 198 198 000 Old_age Always - 1564 193 Load_Cycle_Count 0x0032 146 146 000 Old_age Always - 164824 194 Temperature_Celsius 0x0022 117 100 000 Old_age Always - 35 196 Reallocated_Event_Count 0x0032 199 199 000 Old_age Always - 1 197 Current_Pending_Sector 0x0012 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0 What can be wrong with it? Can it be restored? PS: the new SMART output is === START OF INFORMATION SECTION === Device Model: WDC WD1000FYPS-01ZKB0 Serial Number: [No Information Found] Firmware Version: [No Information Found] Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 1 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon Mar 8 00:29:44 2010 MSK SMART is only available in ATA Version 3 Revision 3 or greater. We will try to proceed in spite of this. SMART support is: Ambiguous - ATA IDENTIFY DEVICE words 82-83 don't show if SMART supported. Checking for SMART support by trying SMART ENABLE command. Command failed, ata.status=(0x00), ata.command=(0x51), ata.flags=(0x01) Error SMART Enable failed: Input/output error SMART ENABLE failed - this establishes that this device lacks SMART functionality. A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options. PPS: There was a rapid growth of "192 Power-Off_Retract_Count" before it died. The drive was used in RAID, with several drives from the same factory packaging box (close IDs). The hard drives were placed identically. Rapid means almost linear growth from 300 to 1700 in 6-7 hours. Maximum temperature was 41 °C (thanks to munin's SMART monitoring).
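
    A small, hedged sketch of how one could watch a single raw SMART value (such as the Power-Off_Retract_Count mentioned in the PPS) over time with smartctl, in Python; the device path and polling interval are placeholders, and smartctl typically needs root:

        # Poll one SMART attribute's raw value so a sudden climb is visible before the drive drops out.
        import subprocess
        import time

        def read_raw_value(device, attribute):
            out = subprocess.run(["smartctl", "-A", device],
                                 capture_output=True, text=True, check=False).stdout
            for line in out.splitlines():
                fields = line.split()
                if len(fields) >= 10 and fields[0].isdigit() and fields[1] == attribute:
                    return int(fields[9])   # RAW_VALUE is the last column of the attribute table
            return None

        while True:
            print(time.strftime("%H:%M:%S"), read_raw_value("/dev/sdb", "Power-Off_Retract_Count"))
            time.sleep(600)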

    Read the article

  • 5-year-old Ubuntu system, always dist-upgraded => OK. However, some tasks remain

    - by knb
    I have a PC with a current Ubuntu distribution installed. I've upgraded many times since 5.10. It always went well; however, some tools or features were kind of left behind in an unsatisfactory state: grub to grub2 - is it really necessary to switch the boot loader to grub2 at some point? Upgrading this scares me a bit. I still have ext3 devices - is it worth upgrading to ext4? Should I wait for btrfs? hibernation and suspend - they only worked in 5.10; since 6.04 they have been messed up. Should I really care? Any chance to repair this myself, simply by cleanup or hacking config files? It is a desktop PC after all, so energy-saving functionality is not really needed. I am using VMware Workstation 6.5 and the latest kernel that supports it is 2.6.32. This is my default kernel now, ignoring 2.6.35. Am I missing anything important in the new kernel now?

    Read the article

  • Why is the link between my switch and my router always negotiating half-duplex mode?

    - by Massimo
    I have a Cisco 2950 switch which has one of its ports connected to an Internet router provided by my ISP; I have no access to the router configuration, but I manage the switch. If I leave all switch ports with their default setup (auto-negotiation of speed and duplex mode), this link always connects at 100 MBit/s, but in half-duplex mode. I've tried replacing the cable, and also moving the link to another switch port: the result is always the same. A different device connected to the same port (or to any switch port, really) shows no problem at all. It could be guessed that someone configured the router to only connect in half-duplex mode... BUT, here's the catch: if I manually force the switch port to full-duplex mode (duplex full in the interface configuration), the link goes up, stays up and is completely stable. So: The connection is not forced to half-duplex mode by the router, otherwise it would not connect at all if I force the switch end to full-duplex. There is no actual link problem, otherwise the full-duplex connection would not go up or would at least show some errors. But if I leave the port free to auto-negotiate, it always connects in half-duplex mode. Why?
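
    For reference, a minimal sketch of the workaround described above on the switch side (Cisco IOS); FastEthernet0/1 is a placeholder for whichever port actually faces the ISP router:

        ! Force the router-facing port instead of relying on auto-negotiation.
        configure terminal
         interface FastEthernet0/1
          speed 100
          duplex full
         end
        show interfaces FastEthernet0/1 status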

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.   When a memory-allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example the sort operator that executes in parallel divides the memory across all tasks assuming even distribution of rows. Common memory-allocating queries are those that perform Sort operations or Hash Match operations like Hash Join, Hash Aggregation or Hash Union.   In reality, how often are column values evenly distributed? Think about an example: are employees working for your company distributed evenly across all the Zip codes or mainly concentrated in the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally or are a few products hot-selling items?   One of my customers tested the below example on a 24-core server with various MAXDOP settings and here are the results:
    MAXDOP 1: CPU time = 1185 ms, elapsed time = 1188 ms
    MAXDOP 4: CPU time = 1981 ms, elapsed time = 1568 ms
    MAXDOP 8: CPU time = 1918 ms, elapsed time = 1619 ms
    MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
    MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
    MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
    MAXDOP 0: CPU time = 2809 ms, elapsed time = 2721 ms - all 24 cores.
    In the above test, when the data was evenly distributed, the elapsed time of the parallel query was always higher than that of the serial query.   Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].   Well, you get the point; let’s see an example.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.   Let’s update the Employees table with 49 out of 50 employees located in Zip code 2001. update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1 update Employees set Zip = 2001 where EmployeeID % 50 != 1 go update statistics Employees with fullscan go   Let’s create the temporary table #FireDrill with all possible Zip codes. drop table #FireDrill go create table #FireDrill (Zip int primary key) insert into #FireDrill select distinct Zip from Employees update statistics #FireDrill with fullscan go  Let’s execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --First serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 1) go The query took 1011 ms to complete.   The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624.  No Sort Warnings in SQL Server Profiler.  Now let’s execute the query in parallel with MAXDOP 0. 
    --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 0) go The query took 1912 ms to complete.  The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624.  The estimated number of rows is the same in the serial and parallel plans. The parallel plan has slightly more memory granted due to additional overhead. The Sort properties show that the rows are unevenly distributed over the 4 threads.   Sort Warnings in SQL Server Profiler.   Intermediate Summary: The reason for the higher duration with the parallel plan was the sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. Now let’s update the Employees table and distribute employees evenly across all Zip codes.   update Employees set Zip = EmployeeID / 400 + 1 go update statistics Employees with fullscan go  Let’s execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 1) go   The query took 751 ms to complete.  The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707.  No Sort Warnings in SQL Server Profiler.   Now let’s execute the query in parallel with MAXDOP 0. --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 0) go The query took 661 ms to complete.  The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707.  The Sort properties show that the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.    Intermediate Summary: When employees were distributed unevenly, concentrated in 1 Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows that uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.     Some of you might conclude from the above execution times that a parallel query is not faster even when there is no spill. Below you can see that when we are joining a limited number of Zip codes, the parallel query will be faster since it can use Bitmap Filtering.   Let’s update the Employees table with 49 out of 50 employees located in Zip code 2001. 
    update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1 update Employees set Zip = 2001 where EmployeeID % 50 != 1 go update statistics Employees with fullscan go  Let’s create the temporary table #FireDrill with limited Zip codes. drop table #FireDrill go create table #FireDrill (Zip int primary key) insert into #FireDrill select distinct Zip       from Employees where Zip between 1800 and 2001 update statistics #FireDrill with fullscan go  Let’s execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 1) go The query took 989 ms to complete.  The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.  Now let’s execute the query in parallel with MAXDOP 0. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 0) go The query took 1799 ms to complete.  The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594.  Sort Warnings in SQL Server Profiler.    The estimated number of rows is the same in the serial and parallel plans. The parallel plan has slightly more memory granted due to additional overhead.  Intermediate Summary: The reason for the higher duration with the parallel plan, even with a limited number of Zip codes, was the sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.   Now let’s update the Employees table and distribute employees evenly across all Zip codes. update Employees set Zip = EmployeeID / 400 + 1 go update statistics Employees with fullscan go Let’s execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 1) go The query took 250 ms to complete.  The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8.  No Sort Warnings in SQL Server Profiler.  Now let’s execute the query in parallel with MAXDOP 0.  --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 0) go The query took 85 ms to complete.  The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707.  No Sort Warnings in SQL Server Profiler.    
    Here you see the parallel query is much faster than the serial query, since SQL Server is using Bitmap Filtering to eliminate rows before the hash join.   Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.   Summary: One has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel.   I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.   Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.   Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
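
    A small, hedged addition (not from the original article): while one of the queries above is running, the memory grant and the degree of parallelism can be watched from another session with a standard DMV; only the columns relevant to the discussion are shown.

        -- Sketch: compare requested vs. used memory for currently executing memory-consuming queries.
        select session_id, dop, requested_memory_kb, granted_memory_kb, used_memory_kb, max_used_memory_kb
        from sys.dm_exec_query_memory_grants;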

    Read the article

  • Is there a way to make NoScript always allow .pdf files?

    - by Ben
    I'm using Firefox with NoScript to stop the bad stuff. I've also told Acrobat Reader to load .pdf files in its own window instead of inside the browser (because sometimes it locks up, and then I would have to restart the browser). However, whenever I come across a .pdf file, I always get a new tab completely covered by the NoScript box. Then, I can click anywhere in that page, and NoScript asks me if I'm sure I want to allow it. Then, Acrobat Reader is launched in its own window, but the Firefox tab remains, and I have to close it. It seems like NoScript is getting in the way of Acrobat's attempt to just open the file without making a new tab. Is there a way to tell NoScript to always allow .pdf files (or any other suggestion to make that annoying blank tab go away by itself)?

    Read the article

  • Firefox 16: Setting to always bookmark to the Toolbar via shortcut?

    - by Echt Einfach TV
    The question sounds easy but I could not find a solution using a Google search. Maybe you have an idea: I am searching for the Firefox setting that always bookmarks to the toolbar. If I press CTRL+D, the default "Folder" is "Bookmarks Menu". I'd like it to always be "Bookmarks Toolbar", without the need to change it every time. Screenshot: http://i.imgur.com/wzxt5.png PS: I know that you can use the mouse and drag the icon from the URL bar to the toolbar. But since I use the keyboard more and more, as it is just faster, this is not an option.

    Read the article

  • Making sense of S.M.A.R.T

    - by James
    First of all, I think everyone knows that hard drives fail a lot more than the manufacturers would like to admit. Google did a study that indicates that certain raw data attributes that the S.M.A.R.T status of hard drives reports can have a strong correlation with the future failure of the drive. We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in reallocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabilities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever. Seagate seems like it is trying to obscure this information about their drives by claiming that only their software can accurately determine the status of their drives, and by the way, their software will not tell you the raw data values for the S.M.A.R.T attributes. Western Digital has made no such claim to my knowledge, but their status reporting tool does not appear to report raw data values either. I've been using HDtune and smartctl from smartmontools in order to gather the raw data values for each attribute. I've found that indeed... I am comparing apples to oranges when it comes to certain attributes. I've found for example that most Seagate drives will report that they have many millions of read errors while Western Digital 99% of the time shows 0 for read errors. I've also found that Seagate will report many millions of seek errors while Western Digital always seems to report 0. Now for my question. How do I normalize this data? Is Seagate producing millions of errors while Western Digital is producing none? Wikipedia's article on S.M.A.R.T status says that manufacturers have different ways of reporting this data. Here is my hypothesis: I think I found a way to normalize (is that the right term?) the data. Seagate drives have an additional attribute that Western Digital drives do not have (Hardware ECC Recovered). When you subtract the Read error count from the ECC Recovered count, you'll probably end up with 0. This seems to be equivalent to Western Digital's reported "Read Error" count. This means that Western Digital only reports read errors that it cannot correct while Seagate counts up all read errors and tells you how many of those it was able to fix. I had a Seagate drive where the ECC Recovered count was less than the Read error count and I noticed that many of my files were becoming corrupt. This is how I came up with my hypothesis. The millions of seek errors that Seagate produces are still a mystery to me. Please confirm or correct my hypothesis if you have additional information.
Here is the smart status of my western digital drive just so you can see what I'm talking about: james@ubuntu:~$ sudo smartctl -a /dev/sda smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: WDC WD1001FALS-00E3A0 Serial Number: WD-WCATR0258512 Firmware Version: 05.01D05 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Thu Jun 10 19:52:28 2010 PDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0027 179 175 021 Pre-fail Always - 4033 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 270 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 098 098 000 Old_age Always - 1468 10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 262 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 46 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 223 194 Temperature_Celsius 0x0022 105 102 000 Old_age Always - 42 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0
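
    A trivial sketch of the hypothesis above, with made-up numbers rather than real drive data: treat the Seagate figure that is comparable to Western Digital's "Read Error" count as the raw read-error total minus the Hardware ECC Recovered total.

        def uncorrected_read_errors(raw_read_errors, ecc_recovered):
            # Per the hypothesis: only errors that ECC could not fix should count.
            return raw_read_errors - ecc_recovered

        seagate_raw_read_errors = 23450112   # hypothetical raw value
        seagate_ecc_recovered = 23450112     # hypothetical raw value
        print(uncorrected_read_errors(seagate_raw_read_errors, seagate_ecc_recovered))  # 0, i.e. comparable to WD's 0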

    Read the article

  • Why is clip space always referred to as "homogeneous clip space"?

    - by Nathan Ridley
    I've noticed in almost everything I've read so far that the term "clip space" is prepended with the word "homogeneous". Now I understand that it roughly means "all the same", but I don't understand why there is the express need to say "homogeneous clip space". When is clip space not homogeneous and why do we need to differentiate? And for that matter, what exactly does it mean that we're calling it "homogeneous clip space"? Homogeneous in relation to what? In what way are the vertices "all the same"?
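
    For context, a hedged sketch of what the word refers to, assuming OpenGL-style conventions: the vertex shader outputs a 4-component homogeneous coordinate, clipping is performed against the w component, and only afterwards does the perspective divide produce ordinary 3D (normalized device) coordinates.

        v_{\text{clip}} = (x, y, z, w), \qquad -w \le x, y, z \le w \ \text{(inside the clip volume)}, \qquad v_{\text{ndc}} = \left(\tfrac{x}{w}, \tfrac{y}{w}, \tfrac{z}{w}\right)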

    Read the article

  • How to force browsers to always reload xslt files?

    - by bitmask
    Related: Apache: How can I force the browser to reload CSS files? I'm building an xml page (on an apache2) that is supposed to be translated to xhtml by the browser, so my server also serves a main.xslt which is used as stylesheet by the xml file, similar to the scenario with the css files in the linked question. However, none of the tricks provided in that answer, nor in related questions on SO, solves the issue for Opera. While Firefox responds to F5 by fetching not only the xml file but also the xslt file, Opera only reloads the xml file. I tried both setting the Last-Modified HTTP header via an .htaccess file and using the expires module of apache2. This is what my .htaccess looks like right now: AddType text/xsl;charset=utf-8 .xslt ExpiresByType text/xsl "modification plus 1 second" Header set Last-Modified "Wed, 08 Jan 2000 23:11:55 GMT" #Header set Last-Modified "Wed, 08 Jan 2020 23:11:55 GMT" If I open the xsl myself and manually reload it, the xml presentation is updated as well, but this is tedious for development. Note: There is no PHP or any kind of scripting involved. Everything is static.
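
    One more hedged variant to try (assuming mod_expires and mod_headers are enabled): serve the stylesheet with an explicit no-cache policy instead of back-dating Last-Modified, so conforming browsers revalidate it on every request. Whether Opera honours this for its xml-stylesheet fetches is exactly what would need testing.

        <FilesMatch "\.xslt$">
            ExpiresActive On
            ExpiresDefault "access plus 0 seconds"
            Header set Cache-Control "no-cache, must-revalidate"
        </FilesMatch>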

    Read the article

  • Always use dtexec.exe to test performance of your dataflows. No exceptions.

    - by jamiet
    Earlier this evening I posted a blog post entitled Investigation: Can different combinations of components effect Dataflow performance? where I compared the performance of three different dataflows all working to the same overall goal. I wanted to make one last point related to the results but I thought it warranted a blog post all of its own. Here is a screenshot of one of the dataflows that I was testing: Pretty complicated, I'm sure you'll agree. Now, when I executed this dataflow in the test it was executing in ~19 seconds; however, in that case I was executing using the command-line tool dtexec. I also tried executing inside the BIDS development environment and in that case it took much longer – 139 seconds. That's more than seven times as long. The point I want to make is very simple. If you are testing your dataflows for performance please use dtexec. Nothing else will suffice. @Jamiet
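
    For anyone following the advice, a minimal sketch of such a run (the package path is a placeholder; /F points dtexec at a package stored on the file system):

        dtexec /F "C:\SSISPackages\MyDataflow.dtsx"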

    Read the article

  • How to convince programmers that 'being in the zone' [coding] isn't always beneficial for the project?

    - by hawkeye
    In this book review (http://books.slashdot.org/story/11/06/13/1251216/Book-Review-The-Clean-Coder?utm_source=slashdot&utm_medium=twitter), Chapter 4 talks about the coding process itself. One of the hardest statements the book makes here is to stay out of "the zone" when coding. Bob asserts that you lose parts of the big picture when you go down to that level. While I may struggle with that assertion, I do agree with his next statement that debugging time is expensive, so you should avoid having to do debugger-driven development whenever possible. He finishes the chapter with examples of pacing yourself (walking away, taking a shower) and how to deal with being late on your projects (remembering that hope is not a plan, and being clear about the impact of overtime) along with a reminder that it is good to both give and receive help, whether it be small questions or mentoring others. They talk about how 'being in the zone' can actually be detrimental to the project. How do you convince your team members that this is the case?

    Read the article

  • Why does opening Ubuntu's default wallpaper (warty-final-ubuntu.png) in Image Viewer always fail?

    - by Kush
    Every time I try to open the Ubuntu default wallpaper (in any version since 8.04, with which I started), named "warty-final-ubuntu.png", I get the following error. I also reported a bug for this more than a year ago, but it is still unresolved. Also, I don't get why the default wallpaper is still named "warty-final-ubuntu.png" instead of having the code-name prefix of the release the wallpaper actually belongs to, e.g. "precise-final-ubuntu.png" and so on. General thoughts: lots of community effort goes into the development of this marvelous distribution, but we still fail to fix such silly issues, which directly/indirectly affects the number of new adopters.

    Read the article

  • Two <select> always next to each other inside a <td>? [closed]

    - by Radek
    I have two selects inside a td and I want to make sure that they are next to each other at all times, while the td's width is just the width of these two selects, not more. The thing is that the values to be displayed in the selects change based on data. <td> <select name="db2.rfthdd"> <option value="WEI">WEI</option> <option value="SCOTSdatabase">SCOTSdatabase</option> </select> <select id="db2.rfttimestamp"> <option value="20110302122831">2011-03-02-122831</option> <option value="20110302122442">2011-03-02-122442</option> </select> </td>
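
    A hedged sketch of one way to get that behaviour (the class name is made up, and this is not from the original post): white-space: nowrap keeps the two selects on one line, and the width: 1% trick is the usual way to shrink a table cell to its content.

        <style>
          td.select-pair { white-space: nowrap; width: 1%; }
          td.select-pair select { vertical-align: middle; }
        </style>
        <td class="select-pair">
          <select name="db2.rfthdd"> ... </select>
          <select id="db2.rfttimestamp"> ... </select>
        </td>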

    Read the article

  • Should I always be checking every neighbor when building voxel meshes?

    - by Raven Dreamer
    I've been playing around with Unity3d, seeing if I can make a voxel-based engine out of it (a la Castle Story, or Minecraft). I've dynamically built a mesh from a volume of cubes, and now I'm looking into reducing the number of vertices built into each mesh, as right now, I'm "rendering" vertices and triangles for cubes that are fully hidden within the larger voxel volume. The simple solution is to check each of the 6 directions for each cube, and only add the face to the mesh if the neighboring voxel in that direction is "empty". Parsing a voxel volume is BigO(N^3), and checking the 6 neighbors makes it BigO(7*N^3), which is still BigO(N^3). The one thing this results in is a lot of redundant calls, as the same voxel will be polled up to 7 times, just to build the mesh. My question, then, is: Is there a way to parse a cubic volume (and find which faces have neighbors) with fewer redundant calls? And perhaps more importantly, does it matter (as BigO complexity is the same in both cases)?
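
    One common answer, sketched below in Python with made-up names ('volume' is assumed to be any indexable 3D structure of truthy/falsy cells): sweep the volume once and test only the +x, +y and +z neighbour of each cell, emitting a face for whichever side of that boundary is solid. Each shared boundary is then examined once instead of twice; the asymptotic cost stays O(N^3).

        def solid(volume, size, x, y, z):
            # Cells outside the volume count as empty, so the outer shell still gets faces.
            if 0 <= x < size and 0 <= y < size and 0 <= z < size:
                return bool(volume[x][y][z])
            return False

        def boundary_faces(volume, size):
            faces = []  # (cell, axis, points_toward_positive_axis)
            steps = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
            for x in range(-1, size):          # start at -1 so the -x/-y/-z shell faces are seen too
                for y in range(-1, size):
                    for z in range(-1, size):
                        here = solid(volume, size, x, y, z)
                        for axis, (dx, dy, dz) in enumerate(steps):
                            neighbour = solid(volume, size, x + dx, y + dy, z + dz)
                            if here and not neighbour:
                                faces.append(((x, y, z), axis, True))
                            elif neighbour and not here:
                                faces.append(((x + dx, y + dy, z + dz), axis, False))
            return faces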

    Read the article

  • Booting Ubuntu 12.04 in Unity, the Window Theme & Icon Theme reverted back to, and are always locked to, the Ubuntu default

    - by Antonio
    Last year I set up Faience-Ocre as my Icon Theme and Adwaita Cupertino L Unity as my Window Theme (the GTK+ theme was left unchanged at Adwaita (default)). It worked perfectly well until 3 days ago, when on starting my PC I saw the Ubuntu default showing up for the Window & Icon Theme. I noticed that at start-up the disk access LED is not lit continuously as before but at moments stops reading for a few seconds (up to 15 s) and then completes the disk reading process. When all was working well this LED would light up continuously. Another thing is that GNOME applications are not working as well as previously: Nautilus and Gedit now don't use the global menu in the system bar but a local window menu. (Screenshots: Nautilus - Nemo before the incident / Nautilus - Nemo now ...) I did open dconf to check the desktop settings in org-gnome-desktop-wm-preferences and everything is looking good. When I change the settings in the app Advanced Settings in the Theme folder I see the respective value changing in dconf. However, there is no change on my desktop. It looks like it's crippled and GNOME related. [Update 1]: I have the same defect as referenced @ ubuntu theme suddenly changed to default and it's not coming back! instead of my GTK theme I get a classic, Windows-95-like grayish theme ... However, one of the solutions mentioned, http://www.webupd8.org/2011/06/fix-ubuntu-linux-mint-theme-changing-to.html, is not working at all, even with a 20 s to 60 s delay.

    Read the article

  • Why are the packages found with apt-get always horribly out of date?

    - by Andrew
    Whenever I use the package manager, it can only ever find really old versions of stuff. Example: sudo apt-get update sudo apt-get install postgresql The best it can do is version 8.4 (3 years out of date). Trying to get a later version, I get: $ sudo apt-get install postgresql-9.1 Reading package lists... Done Building dependency tree Reading state information... Done E: Couldn't find package postgresql-9.1 I experience the same issue whenever I use the package manager, so I usually just download and build things from source. How can I make it find up-to-date software?

    Read the article
