Search Results

Search found 12829 results on 514 pages for 'hard'.

Page 19/514 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • Intuitive view of what's using the hard drive so much on Windows 7?

    - by Aren Cambre
    Sometimes my hard drive usage is near 100%, and I have no idea what is causing it. Are there any utilities that can help diagnose excessive hard drive usage and have as intuitive of an interface as Task Manager's Processes tab, which I can sort by CPU usage? I am aware of using procmon, of adding columns to Task Manager's Processes tab like I/O Read Bytes and I/O Write Bytes, and using Resource Monitor's Disk tab. Too often, these don't give me useful information or clearly identify a single process that is hogging the disk.
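
    If the built-in views don't pinpoint the culprit, a quick cross-check from a script can help. The sketch below assumes the third-party psutil package is installed (pip install psutil) and simply ranks processes by cumulative read/write bytes; it is a rough snapshot of totals since each process started, not a per-second rate like Resource Monitor shows.

        import psutil

        # Rank processes by total bytes read + written since they started.
        # Note: these are cumulative counters, not current throughput.
        totals = []
        for proc in psutil.process_iter(["name"]):
            try:
                io = proc.io_counters()
                totals.append((io.read_bytes + io.write_bytes, proc.pid, proc.info["name"]))
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                continue   # some system processes can't be queried without admin rights

        for total, pid, name in sorted(totals, reverse=True)[:10]:
            print(f"{total / 1024**2:10.1f} MiB  pid={pid:<6} {name}")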

    Read the article

  • 4K sectors transition: Why are hard drives moving to 4096 byte sectors, vs. 512 byte sectors?

    - by Chris W. Rea
    I've noticed that some Western Digital hard drives are now sporting 4K sectors, that is, the sectors are larger: 4096 bytes vs. the long-standing standard of 512 bytes. So: What's the big deal with 4K sectors? Is it marketing hype, or a real advantage? Why should somebody building a new PC care, or not, about 4K sectors? Why is this transition taking place now? Why didn't it happen sooner? Are there things to look out for when buying a 4K sector hard drive? e.g. incompatibility? Anything else we should know about 4K sectors?
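
    One concrete thing to check, whatever the answers turn out to be, is partition alignment: Advanced Format drives that emulate 512-byte sectors take a read-modify-write penalty when a partition does not start on a 4096-byte boundary, which is why the old Windows XP default start sector of 63 is a problem while the Vista/7 default of 2048 is fine. A tiny check, assuming the partition start is known in 512-byte sectors:

        def is_4k_aligned(start_sector, logical_sector_size=512):
            """True if a partition starting at this LBA lies on a 4096-byte boundary."""
            return (start_sector * logical_sector_size) % 4096 == 0

        print(is_4k_aligned(63))    # False: classic Windows XP default, misaligned
        print(is_4k_aligned(2048))  # True: Vista/7 default (1 MiB boundary), aligned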

    Read the article

  • How do I fix "Setup did not find any hard disk drives installed in your computer" error during Win XP Pro install?

    - by CT.
    I just bought a nettop. It came with WinXP Home. I first installed Win 7 on it. I wasn't that happy with the performance so I decided to go back to XP. I am using an external dvd drive and a Win XP Pro disc. I boot from the dvd drive and during the install get this error: Setup did not find any hard disk drives installed in your computer. Make sure any hard disk drives are powered on and properly connected to your computer, and that any disk-related hardware configuration is correct. This may involve running a manufacturer-supplied diagnostic or setup program. Setup cannot continue. To quit Setup, press F3. This is the nettop in question: http://www.newegg.com/Product/Product.aspx?Item=N82E16883103228

    Read the article

  • My portable hard drive with USB 3.0 doesn't work when connected to my laptop, but it works properly over USB 2.0

    - by Mohammad Hasan Esfahanian
    I have a Western Digital My Passport Essential portable hard drive with USB 3.0, model WDBACY5000ABK-EESN. Until about two or three months ago it worked very well when connected to my laptop's USB 3.0 port. Now, when I connect it to that port, the system does not detect any hard drive, yet it works properly when plugged into a USB 2.0 port. I connected it to another laptop with a USB 3.0 port and had the same problem. I tested my laptop's USB 3.0 ports with a USB 3.0 flash drive and they are healthy, so I'm sure they are working. I even reinstalled Windows because of this issue, but it still did not work. What can I do? Thanks in advance.

    Read the article

  • Unable to detect windows hard drive while running Ubuntu 12.04 from USB

    - by eapen jacob
    I am completely new to Ubuntu. I experimented with Ubuntu 12.04 by running it from a USB drive, in order to recover files from my hard disk. History: my laptop is an IBM R60 running Windows 7. Suddenly it gave me an error stating "error 2100 - Hard drive initialization error". I have read the forums, and most of them suggested that I remove and replace my HDD, but that did not work. One site suggested trying Ubuntu to recover the files. I booted my system from USB, and once Ubuntu came up I chose "Try Ubuntu". It came up fine and I was able to surf and do other things, etc. However, I was unable to access the files on the hard disk, and "Attached Devices" is grayed out. 1. Is there any way to gain access to my hard disk to recover the files? How do I navigate to search for them? 2. Is it simply not possible if the hard disk itself is not working? Is that why I'm unable to find the drive? I know this is a very novice question, but I'm hoping someone can help me out. Thank you, Eapen

    Read the article

  • What's the best tool to use to automatically backup selected folders from Windows to my external hard drive?

    - by PhD
    I have a 1TB external hard drive and I'd like to schedule periodic backups of my Windows "Libraries" to the external drive. I'd prefer a tool that detects which files have changed and transfers them periodically, instead of my having to do it manually. Is there a way in Windows 7 to do this automatically? If not, what are some external tools (preferably free) that I can use for this? EDIT: I've used Windows Backup and I find it restrictive for detecting changes and backing up automatically; that's all I'm aware of. My WD hard drive came with an application for this, but it no longer works and wasn't very good anyway. So I'd like to know what my options are.
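
    To see roughly what such tools do under the hood, here is a minimal one-way mirror that copies a file only when it is missing on the external drive or its size/modification time differs. This is only an illustrative sketch, not a recommendation: the paths are hypothetical, and built-in or dedicated tools (robocopy, FreeFileSync, etc.) also handle locked files, deletions, and retries, which this ignores.

        import os, shutil

        def mirror(src_root, dst_root):
            """Copy files from src_root to dst_root when missing or changed (size/mtime)."""
            for dirpath, _dirnames, filenames in os.walk(src_root):
                rel = os.path.relpath(dirpath, src_root)
                target_dir = os.path.join(dst_root, rel)
                os.makedirs(target_dir, exist_ok=True)
                for name in filenames:
                    src = os.path.join(dirpath, name)
                    dst = os.path.join(target_dir, name)
                    s = os.stat(src)
                    if (not os.path.exists(dst)
                            or os.stat(dst).st_size != s.st_size
                            or os.stat(dst).st_mtime < s.st_mtime):
                        shutil.copy2(src, dst)   # copy2 preserves timestamps

        # Hypothetical paths: adjust to your Libraries folders and external drive letter.
        mirror(r"C:\Users\PhD\Documents", r"E:\Backup\Documents")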

    Read the article

  • Why is my HDD waking up from standby?

    - by Pablo
    My hard drives, connected to an Ubuntu server, produce the following log entries exactly every 5 minutes:

    Nov 1 14:10:50 localhost kernel: [ 1602.884936] ata2.00: hard resetting link
    Nov 1 14:10:51 localhost kernel: [ 1603.226804] ata2.01: hard resetting link
    Nov 1 14:10:52 localhost kernel: [ 1604.274533] ata2.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Nov 1 14:10:52 localhost kernel: [ 1604.274548] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Nov 1 14:10:52 localhost kernel: [ 1604.356669] ata2.00: configured for UDMA/133
    Nov 1 14:10:52 localhost kernel: [ 1604.375247] ata2.01: configured for UDMA/133
    Nov 1 14:10:52 localhost kernel: [ 1604.375265] ata2: EH complete

    I don't think this is related to hard drive failure, because it happens for ALL connected hard drives and ONLY when I set spindown_time = 12 in /etc/hdparm.conf. The reason I added that value is to put the disks into standby after 60 seconds, and that part works (checked with hdparm -C). My first thought was that smartd was running and spinning the drives up, but I couldn't find it in ps -aux | grep smart. Additionally, iostat shows that nothing accessed those drives, since Blk_read and Blk_wrtn remain unchanged. I also killed all processes that might be touching the HDDs (e.g. Samba). So I guess the problem is with hdparm alone... I have no clue where that 5-minute value hides.
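
    For reference on how the spindown_time number maps to a delay (this is the hdparm -S encoding as I understand it; double-check your hdparm man page): values 1-240 are multiples of 5 seconds, and 241-251 are 1-11 units of 30 minutes, so 12 does indeed mean 60 seconds. A tiny sketch:

        def spindown_to_seconds(value):
            """Rough decoding of hdparm -S / spindown_time values (see man hdparm)."""
            if value == 0:
                return None                       # 0 disables the spindown timeout
            if 1 <= value <= 240:
                return value * 5                  # multiples of 5 seconds
            if 241 <= value <= 251:
                return (value - 240) * 30 * 60    # 30-minute units
            raise ValueError("special or vendor-defined value")

        print(spindown_to_seconds(12))   # -> 60 seconds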

    Read the article

  • HDD not detected after installing Ubuntu 12.04!

    - by Arthur
    After installing Ubuntu 12.04, which I'm using right now and which works great, my extra hard drive was detected the first time I turned on my computer, but it no longer shows up in my Home Folder. When I run Disk Utility it says the drive has a few errors, but when I try to fix them it says the hard drive is busy and it cannot do anything else. I've unmounted and remounted it, but nothing happens. Do you know what could be going on? The first time I could see the hard drive it had all of my files on it, and I don't know whether deleting the partition will format the whole hard drive or just the Ubuntu files. THANKS in advance! BY THE WAY, I'm new to Ubuntu... :S

    Read the article

  • Is there a limit on the number of files in a folder on an external hard drive?

    - by tfs
    I have a FAT32 external hard drive where I keep backups downloaded from a web server. I have a directory with 30 subdirectories. One of the subdirectories contains 21381 files, and when I try to copy more files into this directory I get error 0x80070052. However, it is possible to copy one more file into this directory (only one) if I make its name shorter (8 characters instead of its original 22). How do I solve this problem? Right now I cannot synchronize the external hard disk with the server files, which is very important to me.
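
    A plausible explanation (an assumption, not something confirmed for this particular drive) is FAT32's per-directory cap of roughly 65,536 directory entries: each file consumes one short 8.3 entry plus one long-filename entry per 13 characters of its name, so the effective file limit drops as names get longer, and error 0x80070052 ("the directory or file cannot be created") is the usual symptom. A rough illustration of the arithmetic:

        import math

        MAX_DIR_ENTRIES = 65536   # FAT32 per-directory entry cap (approximate; see the FAT spec)

        def entries_per_file(name_length):
            """One short 8.3 entry plus one long-filename entry per 13 UTF-16 characters."""
            return 1 + math.ceil(name_length / 13)

        for name_len in (8, 22, 40):
            per_file = entries_per_file(name_len)
            print(name_len, "chars ->", per_file, "entries/file,",
                  MAX_DIR_ENTRIES // per_file, "files max")

        # 22-character names give roughly 65536 // 3 = 21845 files per directory,
        # which is in the same ballpark as the 21381 files reported above.

    If that is the cause, splitting the backups across more subdirectories (or using a filesystem such as NTFS or exFAT) avoids the cap.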

    Read the article

  • What process would make selling a hard drive that previously held sensitive data justifiable? [closed]

    - by user12583188
    Possible Duplicate: Securely erasing all data from a hard drive. My personal collection includes an increasing number of relatively new drives, put on the shelf only because of upgrades. In the past I have never sold hard drives with used machines, for fear of having the encrypted password databases stored on them compromised, but as their numbers grow I find myself more tempted to do so (because of the money I know they're worth on the used market). What tools exist to make recovering data from these drives difficult enough that selling them could be justified? Put another way: what tools or methods exist to make any attempt at recovering previously stored data impractical? I assume it is always possible to recover data from a drive that is in working order. I also assume there are methods for preventing recovery, given that there is a program called DBAN, and a feature in Mac OS X that deals with permanently deleting data from a disk.
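
    For illustration only, here is a minimal sketch of the single-pass overwrite that tools such as DBAN automate: write over the whole device from start to end. The device path is hypothetical, running it destroys everything on that disk, and for modern drives the drive's built-in ATA Secure Erase (which hdparm can trigger) or full-disk encryption before disposal are generally considered more thorough.

        import os

        DEVICE = "/dev/sdX"        # hypothetical device node -- triple-check before running!
        CHUNK = 4 * 1024 * 1024    # write 4 MiB at a time

        def overwrite_once(path):
            """Overwrite a block device with zeros, one chunk at a time."""
            with open(path, "rb+") as dev:
                size = dev.seek(0, os.SEEK_END)   # block devices report their size via seek
                dev.seek(0)
                zeros = b"\x00" * CHUNK
                remaining = size
                while remaining > 0:
                    n = min(CHUNK, remaining)
                    dev.write(zeros[:n])
                    remaining -= n
            return size

        # overwrite_once(DEVICE)   # deliberately left commented out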

    Read the article

  • Windows 7 not detecting external hard drive, but Ubuntu detects it. Why?

    - by unlimit
    I have a 500 GB Toshiba external hard drive. Yesterday Windows 7 stopped detecting it, although I do see it listed under the "Safely remove hardware and eject media" icon on the taskbar. I then tried the same external hard drive on Ubuntu, and it was detected just fine. Ubuntu and Windows 7 are dual-booted on the same laptop. Can someone tell me why this is happening? Am I missing a driver in Windows 7? Additional info: this drive has worked perfectly fine in the past and I have never formatted it. It just stopped working in Windows yesterday.

    Read the article

  • On a failing hard drive, I am able to view data but unable to copy it - why?

    - by Tom
    I have a 2.5" external hard drive that is failing. It's not making the expected 'clicking' noise that most hard drives and I am able to view the data, but I am unable to actually retrieve the data. I attempted to use SpinRite in order to access the data on the drive, but it didn't like the external drive. When I view the drive's property page, the drive shows that it's used space is at 100% and that it has 0 bytes available; however, the progress indicator under the drive icon in Windows Explorer shows that it's roughly 50% full (which is correct). When I attempt to run Windows' "Error Checking" tool and attempt to "scan for an attempt recovery of bad sectors," the tool begins to run then immediately closes with no error message. I am able to browse the contents of the drive using Windows Explorer. When I begin to try copying any given single file, the copy process begins, an indicator starts, and then the copy fails with no real error message. The Disk Management page in Computer Management under Control Panel also shows this drive has being 'Healthy.' I dropped the drive off at a data recovery store and they said that "The data seems to be intact, but an internal failure is preventing any information from being retrieved." They offered to provide me references to a data recovery specialist. I've also attempted to run CHKDSK on the drive (with and without arguments) but it returns the following error: The type of the filesystem is RAW. CHKDSK is not available for RAW drives. Before going the route of more expensive data recovery, I'm wondering if these symptoms sound familiar to anyone? Other questions... I'm willing to continue trying tools such as TestDisk and/or PhotoRec (as the majority of the data that I'd like to salvage are photos) but how long I should expect either tool to run given approximately 400GB of data? I'm also comfortable using Linux so I welcome any suggestions for utilities or tools and strategies with which you've had success.

    Read the article

  • Hard drive not correctly recognized on a new Windows 7 installation, but works correctly on Windows XP

    - by david
    I'm having problems configuring a hard disk in a brand new, clean Windows 7 installation. System specs: Hard disk: WD VelociRaptor WD6000HLHX (600 GB, 10000 RPM) Motherboard: Gigabyte Z77X-UD3H BIOS SATA mode set to AHCI (not RAID), with disk connected to SATA0 (6 Gb/s port). Windows 7 Enterprise SP1 64-bit The disk is recognized by the BIOS and is correctly identified, with the name and size correctly reported. Windows recognizes the disk itself and reports the device is functioning correctly, but it doesn't appear in Explorer. Disk Management shows the drive, but incorrectly states that it is uninitialized and has no partitions. If I try to initialize the drive, I get an error saying that "the system cannot find the file specified" (what file?). Before connecting the drive to the new machine, I partitioned and formatted it under Windows XP SP2, creating 2 partitions (MBR, not GPT) and copying over a boatload of data. However, none of this data appears under Windows 7. If I put the disk back into the Windows XP machine, I can access the disk and all of its data. Is it possible to get Windows 7 to correctly recognize the disk without having to erase it and start over? If so, how do I do so? I checked this question, which seems to cover the same issue, but it didn't help.

    Read the article

  • How can I write directly to my Zune HD hard drive?

    - by iamgoat
    When syncing photos to the Zune HD, the software resizes them down to a much lower resolution, which means I cannot load a high-resolution picture onto it (a comic book) and zoom in to read it. This defeats the whole purpose of having a zoom feature. There is a registry hack you can make to get the Zune to display under My Computer; then, if you killed the Zune process while it was syncing, you'd be able to access it like a hard drive and copy files to it. It seems the more recent firmware and/or Zune software versions now prevent this. How can I treat it like an HDD and copy files to it? I simply want to take my original pictures folder and copy it over the low-resolution versions the Zune software resized them to. An alternative would be to remove the hard drive and see if I can connect it to a computer directly, but I just got this and don't want to disassemble it yet. Note to Microsoft: why do you allow me to set the encoding quality of music, but not photos?

    Read the article

  • How can I wipe my iPod classic and fix any bad sectors on the hard drive without killing it?

    - by Sam Meldrum
    My iPod never finishes syncing and only syncs audio, not pictures or video. Any ideas as to how I can fix it? My iPod classic 160GB worked well for a couple of years. I used to sync a lot of photos at full resolution to it, but this recently stopped working after I moved to Windows 7. iTunes is on the latest version (9.1.1.12), the iPod software is up to date (1.1.2), and Windows 7 is fully up to date and patched. The symptoms are that the iPod will start to sync, all audio (music and podcasts) will sync successfully, but the sync then just appears to continue indefinitely. The iTunes message says "Syncing iPod. Do not Disconnect." and the sync never completes; I have left it trying for days. I have tried resetting the iPod using the Restore button, whereupon it restarts the sync from default options and again syncs audio, but nothing else. I suspect that something has gone wrong on the hard drive: either a bad sector or some corrupt data. Is there a process I can go through to fix this, e.g. SpinRite or a format? If so, how do I go about formatting an iPod, and will it be recognised as an iPod and work as normal after the format? Any advice on what to try next is much appreciated. Update: I have eliminated problems with the files, PC, and iTunes, as they sync fine to other iPods. I have also eliminated the cable by trying different cables that work with other iPods. What I'd really like to know is whether there is any way to more fundamentally wipe the iPod safely, attempt to repair any bad sectors on the hard drive, and then start from scratch. Has anyone ever managed this?

    Read the article

  • Can an OS be copied from one hard drive to another and still boot?

    - by AlexMorley-Finch
    Background: My computer gets stuck on the make-and-model screen after the BIOS screen, a.k.a. the Toshiba screen. After some research I've realized that the problem is the hard drive. I'm using an old 250 GB model that used to be used for backup purposes; however, I loaded Windows 7 Ultimate onto it. This hard drive has trouble getting up to full RPM and therefore cannot boot correctly until it has warmed up, meaning that my PC needs to be restarted several times before it boots (once it took me 13 reboots to get the PC on!). From my research it's either that or a lack of power, and I've tried multiple PSUs. Question: I have my OS and all my files on this 250 GB HDD. If I were to literally open Explorer and copy EVERYTHING (including hidden files, obviously) from this 250 GB drive to a spare 500 GB drive I've got knocking about, will it boot if I just copy everything? I cannot be bothered to load another OS onto my PC, so if there is a way I can just copy the existing one over from one HDD to another and have it boot normally, that would be epic! I've heard about HDD cloning software, but before I purchase and/or download such software, I need to know whether I can just copy the OS over through Windows Explorer.

    Read the article

  • Windows 7 does not detect my new hard drive?

    - by jasondavis
    I just built a really nice new PC. Some specs: Intel i7-930 CPU, ASRock Extreme X58 motherboard with SATA 3 and USB 3.0, 12 GB of G.Skill DDR3 RAM, an 80 GB Intel G2 solid-state drive for Windows 7 and other programs, Windows 7 Pro 64-bit, and two 1 GB graphics cards for 4-monitor support. Those are the main components. Today my new 1 TB Western Digital hard drive arrived, which I plan to use for data to preserve the life of my SSD (hopefully). I hooked up its SATA power input and then connected its SATA data cable to a SATA 2 port on my motherboard. I booted Windows 7, went into Computer, and the drive is not showing up with my other drives. I rebooted and checked again; no luck. I then shut down the PC, opened the case back up, and checked my connections, and they all look good. I booted up again and can see that the new HDD is on and spinning. I then went into my BIOS settings to see if it registers there, and it DOES! It shows I have a 1 TB WD hard drive on SATA 2 port 6. So I am at a loss as to why it is not showing up as an option in Windows; Windows acts as if the drive is not there. Please help.

    Read the article

  • Connect USB hard drive to wireless router on RJ45 port? Possible?

    - by lawphotog
    Just a quick story behind this: I am trying to set up a wireless networked hard drive at home, but my wireless router doesn't take USB, so I am considering a few options. First I considered getting something like a WD My Cloud. My router is an old one provided by my service provider and only has 10/100 Ethernet, while the WD My Cloud has a Gigabit interface, so unless I change to a new router, data transfer will be slow. Upgrading the router is therefore a must if I want fast transfer speeds. I also already own an external hard drive with a USB 3.0 interface, so if I get a router like the Netgear D6300 I can have a decent-speed wireless shared drive at home and use my existing HDD instead of a WD My Cloud. But that router isn't cheap, so I am saving up for it. In the meantime I found out that USB-to-RJ45 adaptors exist. I read the reviews; some say it works for them and for some it doesn't, but they didn't really say what they were trying to do, so I'm confused. If I bought an adaptor like this, could I connect my existing HDD (USB) to my existing router (RJ45) and use it as a shared drive for data transfer? I know it will be slow, as the adaptor only supports USB 2.0 and 10/100 Ethernet, but that's fine since it's temporary until I get my new router.

    Read the article

  • Hard Drive problem: is it the SATA controller or the HDD itself?

    - by Drooling_Sheep
    I have a Samsung 1.5TB hard drive hooked up to an ECS H55H-I mini-ITX motherboard, with XBMC 10 (a modified Ubuntu 10.04) installed for use as an HTPC. The hard drive encounters occasional errors during normal use, which cause it to be remounted read-only. I have updated the BIOS on the motherboard, changed the SATA cable and moved it to different ports on the motherboard, and installed and re-installed the OS (including different versions of XBMC and generic Ubuntu), all to no avail. I recently ran tests with both badblocks -sv and smartctl -t long; both reported no errors. This makes me think the motherboard or SATA controller is probably the issue. Does anyone know of any further tests I can do to help narrow this down? The processor is a Core i3; I forget the model number, but it's one of the 32nm ones with on-package graphics. There's no discrete video card or optical drive. The power supply is a 150W Rosewill (pretty sure) that came with the case.

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers, which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately cause code to be reusable or not, regardless of the language or methodology.

    But it did get me thinking… we always used to say that as part of our mantra for why object-oriented programming was so great. With polymorphism, inheritance, encapsulation, etc., we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer of many years now, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be that I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand for all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It's not that I don't like reuse – it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet it can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse! Furthermore, code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse: standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times where I want to kick off a thread to handle a task, and when I want to rein that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.
    Some of them are standard OO principles, and some are kind of home-grown litmus tests:

    - Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    - Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and is itself very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    - Open-Closed Principle (OCP) – This class is inherently closed to change.
    - Small and Stable Problem Domain – This method only cares about System.Threading.Thread.
    - All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
    - Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go", already unit tested (important!), and available through a standard release cycle (very important!).

    Okay, all seems good there; now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }

                public string Name { get; set; }

                public Address HomeAddress { get; set; }

                public int Age { get; set; }

                public DateTime LastUsed { get; set; }

                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...

            }
        }

    Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let's examine those same tests:

    - Single Responsibility Principle (SRP) – The reasons for these classes to change are strongly dependent on what the definition of an account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    - Stable Dependencies Principle (SDP) – This method depends on the data model beneath it, which is largely dependent on the business definition of an account, which can be inherently unstable.
    - Open-Closed Principle (OCP) – This class is not really closed for modification. Every time the account definition changes, you'd need to modify this class.
    - Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    - All-or-None Usage – What if your view of the account encompasses data from 3 different sources, but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in the chunks people tend to ask for? Neither is really a great solution.
    - Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate it as needed.

    It's no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc).

    As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't "smell" right. We ended up having session after session of design meetings to try to find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: "Reuse is hard..." And that's when I realized that while reuse is an awesome goal and we should strive to make code maintainable, oftentimes you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.

    Think about the times you've worked in a company where, in the design session, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, "yes, this is the best design, I can extend it easily!" only to find out the business requirements change COMPLETELY, in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult, many times ends up being a futile exercise, and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?

    - Generic Utility Classes – These tend to be small classes that assist in a task and have no business context whatsoever.
    - Implementation Abstraction Frameworks – Home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc).
    - Simplification and Uniformity Frameworks – To some extent this is similar to an abstraction framework, but there may be one chosen provider and a development-shop mandate to perform certain complex items in a certain way. Or, perhaps, to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop's method of encryption).

    And what are less reusable?

    - Application and Business Layers – These tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency. They may be reused across applications, but they are also very volatile.
    - Entities and Data Access Layers – These tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what's the big lesson? Reuse is hard. In fact it's damn hard. And much of the time I'm not convinced we should focus too hard on it. If you're designing a utility or framework, then by all means design it for reuse. But you must also really set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste the effort on reusability for something that will most likely be obsolete in a year or two anyway.

    Read the article

  • Why are there hard faults when my RAM is not 100% used?

    - by Vilx-
    I've got 2GB of RAM and the resource monitor shows that it's only used about 75%. However there are some apps (NetBeans, Visual Studio) that every once in a while start making a lot of hard faults (up to and over 2000/min), thus predictably slowing down to a crawl. How is this so? The memory usage during these "fits" doesn't change. Perhaps it also includes memory mapped files or something?

    Read the article

  • How to upgrade the hard drive in a MacBook Pro?

    - by John McC
    I have a MacBook Pro running Snow Leopard that I want to upgrade to a new, larger hard drive. I also have a current Time Machine backup on an external USB drive, and an external SATA case that I can put a 2.5" drive in. What's the best procedure for transferring the existing installation to the new drive?

    Read the article

  • Hard Drives: Always on or spin up/down as needed?

    - by Terminal Frost
    The specific application is a NAS that hosts media content and is frequently accessed during the day. My NAS was probably cycling on and off around 5 times a day so I decided to not allow it to spin down. I'm thinking this will be better for the drive, but I am not sure. I am wondering, however, if there is any concrete information out there as to what causes more detrimental wear on a hard disk: Spinning 24/7 or cycling on and off as needed?

    Read the article
