Search Results

Search found 22380 results on 896 pages for 'hard drive failure'.


  • How do brand laptop manufacturers restrict hard disk drives?

    - by user176705
    I'm curious to know: when I bought a brand new laptop, there were restrictions on creating or changing the HDD partitions beyond the following layout: C:\ drive (main partition + OS drive), NTFS, 400 GB; Recovery drive, NTFS, 15 GB; Tools drive, FAT32, 2 GB; System drive, NTFS, 0.3 GB. My questions are: How do manufacturers restrict HDDs? What is the term for these restrictions? Can this be applied to desktop PCs? Is it possible for an end-user to modify the restrictions?

    Read the article

  • Re: # 47209 How to copy an Existing HD to a new one and have it be bootable?

    - by user281151
    Help please! My backup method of choice is to clone my "working" drive to another identical drive. I have 2 windows drives and I clone my working one to the other one once per month. No problem - each will boot if I select it. Now with the lack of future support for XP, I am getting familiar with Ubuntu 14.04 LTS. I have it on one drive and I have a second identical drive that I want to be able to clone it to once/month. Not as necessary to do this with Ubuntu as with windows, I know, but I'm anal. So I have followed #47209 MestreLion's procedure with just the two Ubuntu drives "on line". I.e., boot my "working" drive with Live CD, use Gparted to be sure I know what's what, open terminal and enter and execute the dd command, Go to bed till the clone is done, shut down the computer, disconnect the input/source drive, boot up using BIOS to select the remaining output drive. The drive starts fine but all is not OK. It puts up a screen that says I'm on a Guest Session and asks for a password. Well, for one thing I have my Ubuntu set up to start without a password being entered. I have one, of course, I put it in but it isn't accepted. I can't get by this Guest Session screen. I am fine, of course. I can disconnect this drive, hook up my "main" ubuntu drive and all the rest, and go on with my business. But I don't have the desired "emergency backup" drive working where I could jump on and use it immediately if I needed it. Can someone give me some guidance here?? What (else) do I need to do. Love Ubuntu but learning. Thanks, Wes.
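
    For reference, a minimal sketch of the clone step described above, assuming the source disk is /dev/sda and the target is /dev/sdb (device names are examples only; confirm them with lsblk or GParted first, because dd overwrites the target without asking):

        # identify the disks before doing anything destructive
        sudo lsblk -o NAME,SIZE,MODEL

        # raw clone, source -> target, with both drives unmounted
        sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

        # flush caches before shutting down and swapping cables
        sync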

    Read the article

  • HD Tune warning for "Reallocated Event Count" with a new/unused drive. How serious is that?

    - by Developer Art
    I've just looked at the health status of my old 2.5-inch 500 GB Fujitsu drive with the popular "HD Tune" utility. It shows a warning for the "Reallocated Event Count" attribute. How serious is that? The thing is that the drive is practically new. I pulled it out of a new laptop over a year ago and haven't used it since. Right now it only has 53 "Power On" hours, which sounds about right since I only had it running a few evenings overnight before switching it for something faster. Does this warning indicate that the drive is likely to fail some time in the future? I'm somewhat perplexed since the drive is effectively unused. What is more, I have arranged with somebody to buy this drive off me, since I don't really need it. It is 12.5 mm thick (with 3 platters), meaning it doesn't fit into an external enclosure, which makes it quite useless to me. Can I give away the drive with a clear conscience, or should I cancel the deal? In other words, can the drive be used safely for years to come, or is it better to throw it away? I'm running a sector test now to see if there are any real problems. Will post the results as soon as they're available.
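
    A sketch of reading the same attributes with smartmontools, which shows the raw counts alongside the normalized values that HD Tune's warning is based on (the device name is an example):

        # full attribute table; check ID 5 Reallocated_Sector_Ct and
        # ID 196 Reallocated_Event_Count -- the RAW_VALUE column is what matters
        smartctl -A /dev/sda

        # overall health verdict and the self-test log
        smartctl -H -l selftest /dev/sda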

    Read the article

  • Optimizing sorting container of objects with heap-allocated buffers - how to avoid hard-copying buffers?

    - by Kache4
    I was making sure I knew how to do the op= and copy constructor correctly in order to sort() properly, so I wrote up a test case. After getting it to work, I realized that the op= was hard-copying all the data_. I figure if I wanted to sort a container with this structure (its elements have heap allocated char buffer arrays), it'd be faster to just swap the pointers around. Is there a way to do that? Would I have to write my own sort/swap function?

        #include <deque>
        //#include <string>
        //#include <utility>
        //#include <cstdlib>
        #include <cstring>
        #include <iostream>
        //#include <algorithm> // I use sort(), so why does this still compile when commented out?

        #include <boost/filesystem.hpp>
        #include <boost/foreach.hpp>

        using namespace std;
        namespace fs = boost::filesystem;

        class Page {
        public:
            // constructor
            Page(const char* path, const char* data, int size)
                : path_(fs::path(path)), size_(size), data_(new char[size]) {
                // cout << "Creating Page..." << endl;
                strncpy(data_, data, size);
                // cout << "done creating Page..." << endl;
            }

            // copy constructor
            Page(const Page& other)
                : path_(fs::path(other.path())), size_(other.size()), data_(new char[other.size()]) {
                // cout << "Copying Page..." << endl;
                strncpy(data_, other.data(), size_);
                // cout << "done copying Page..." << endl;
            }

            // destructor
            ~Page() { delete[] data_; }

            // accessors
            const fs::path& path() const { return path_; }
            const char* data() const { return data_; }
            int size() const { return size_; }

            // operators
            Page& operator = (const Page& other) {
                if (this == &other) return *this;
                char* newImage = new char[other.size()];
                strncpy(newImage, other.data(), other.size());
                delete[] data_;
                data_ = newImage;
                path_ = fs::path(other.path());
                size_ = other.size();
                return *this;
            }

            bool operator < (const Page& other) const { return path_ < other.path(); }

        private:
            fs::path path_;
            int size_;
            char* data_;
        };

        class Book {
        public:
            Book(const char* path) : path_(fs::path(path)) {
                cout << "Creating Book..." << endl;
                cout << "pushing back #1" << endl;
                pages_.push_back(Page("image1.jpg", "firstImageData", 14));
                cout << "pushing back #3" << endl;
                pages_.push_back(Page("image3.jpg", "thirdImageData", 14));
                cout << "pushing back #2" << endl;
                pages_.push_back(Page("image2.jpg", "secondImageData", 15));

                cout << "testing operator <" << endl;
                cout << pages_[0].path().string() << (pages_[0] < pages_[1]? " < " : " > ") << pages_[1].path().string() << endl;
                cout << pages_[1].path().string() << (pages_[1] < pages_[2]? " < " : " > ") << pages_[2].path().string() << endl;
                cout << pages_[0].path().string() << (pages_[0] < pages_[2]? " < " : " > ") << pages_[2].path().string() << endl;

                cout << "sorting" << endl;
                BOOST_FOREACH (Page p, pages_)
                    cout << p.path().string() << endl;
                sort(pages_.begin(), pages_.end());
                cout << "done sorting\n";
                BOOST_FOREACH (Page p, pages_)
                    cout << p.path().string() << endl;

                cout << "checking datas" << endl;
                BOOST_FOREACH (Page p, pages_) {
                    char data[p.size() + 1];
                    strncpy((char*)&data, p.data(), p.size());
                    data[p.size()] = '\0';
                    cout << p.path().string() << " " << data << endl;
                }
                cout << "done Creating Book" << endl;
            }

        private:
            deque<Page> pages_;
            fs::path path_;
        };

        int main() {
            Book* book = new Book("/some/path/");
        }
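
    One way to sidestep the deep copies, sketched below on the assumption that the Page class stays exactly as written above, is to sort smart pointers to pages rather than the pages themselves: std::sort then only shuffles pointers and never invokes Page's copy constructor or operator=. (Another route, not shown, is adding a member swap()/free swap() that exchanges the data_ pointers, or move semantics if C++11 is available.)

        // Sketch: sort shared_ptr<Page> instead of Page objects (Boost-era C++, matching the question).
        #include <algorithm>
        #include <deque>
        #include <boost/shared_ptr.hpp>

        typedef boost::shared_ptr<Page> PagePtr;   // Page exactly as defined in the question

        // comparison that dereferences the pointers, so ordering is still by path
        static bool pageLess(const PagePtr& a, const PagePtr& b) { return *a < *b; }

        void sortPages(std::deque<PagePtr>& pages) {
            // only the pointers move around; the heap buffers never get copied
            std::sort(pages.begin(), pages.end(), pageLess);
        }

        // usage inside something like Book:
        //   std::deque<PagePtr> pages;
        //   pages.push_back(PagePtr(new Page("image1.jpg", "firstImageData", 14)));
        //   sortPages(pages);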

    Read the article

  • Windows and file system abstraction - how much does it matter where something comes from?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Bootcamp driver package includes file system drivers (right term?) that enable Windows to access the Mac OS HFS+ partition. AFAIK it's a read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that lead to all sorts of non-solutions on Google, one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine. My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well since I was wondering: Does the file system of a network share matter? Does the client system need to understand the file system of the share host or is that abstracted away in SMB?

    Read the article

  • Why is it bad to map network drives in Windows?

    - by Beeblebrox
    There has been some spirited discussion within our IT department about mapping network drives. In particular, it has been said that mapping network drives is A Bad Thing and that adding DFS paths or network shares to your (Windows Explorer/Libraries) Favourites is a far better solution. Why is this the case? Personally I find the convenience of z:\folder to be better than \\server\path\folder, particularly with the command line and scripting (of course I'm not talking about hard-coded links, naturally!). I have tried searching for pros and cons of mapped network drives, but I haven't seen anything other than 'should the network go down, the drive will be unavailable'. But this is a limitation of any network-accessed storage... I have also been told that mapped network drives poll the network when the network resource is unavailable, however I haven't found more information on this. Wouldn't this still be an issue with other network access mechanisms (that is, mapped Favourites) whenever Windows tries to enumerate the file system (for example, when a file/folder picker dialog is opened)? -- Do network drives poll the network any more than a Windows Explorer library/favourite?

    Read the article

  • Why can't I mount an image hosted on a read-only HFS+ partition via Boot Camp?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Boot Camp driver package includes file system drivers that enable Windows to access the Mac OS HFS+ partition. It's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that lead to all sorts of non-solutions on Google, one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine. My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well since I was wondering: Does the file system of a network share matter? Does the client system need to understand the file system of the share host or is that abstracted away in SMB?

    Read the article

  • Formatting a memory stick with two partitions?

    - by Marius
    I have a 16GB memorystick which used to have a Linux partition. It therefore has two partitions; 2GB FAT32 and 14GB linux boot drive. The linux part stopped working, so I decided to reinstall it. But windows can't see that partition. I tried formatting the whole disk, but I can only format one partition (the FAT32). There seems to be no way to combine the two partitions into one big one, and there seems to be no way for windows to partition the large part of the memorystick to put Linux on it. In the windows partition manager, windows sees the large unused partition, and it lets me delete it. But once I have deleted it, I'm not allowed to format it. Also I cannot delete or resize the small partition. So, to summarize: I have a memorystick with two partitions. Windows only sees one of them, and won't let me use the other one. I would like to combine the two partitions so I can install Linux on the memory stick again.
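
    For what it's worth, the Disk Management GUI often refuses to touch non-Windows partitions on removable media, while the diskpart command line usually can. A hedged sketch of wiping the stick and recreating a single FAT32 partition (disk 1 is only an example; run list disk first and make absolutely sure the selected disk is the 16 GB stick, because clean erases every partition on it; on XP's older diskpart, do the format from Explorer afterwards instead):

        C:\> diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> clean
        DISKPART> create partition primary
        DISKPART> format fs=fat32 quick
        DISKPART> assign
        DISKPART> exit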

    Read the article

  • USB Device Not Recognized (Mac)

    - by Nargis
    Unfortunately, my Mac Pro has also made one of my USB storage devices inoperable. I lost the data on that USB device, but other devices, such as another USB stick and a USB keyboard, are unaffected. A friend of mine usually triggers this problem by having at least two devices plugged in - typically thumb drives/USB flash drives; once the second flash drive is plugged in, it becomes unrecognized. I have only two USB ports, and at first I thought a port was loose when I connected two USB devices. But later I found that these hidden files (".Spotlight-V100", ".TemporaryItems", ".Trashes", and "._.Trashes") are created by Mac OS. Before that USB device became unrecognized I had deleted these files, and my friend had done the same. Now I don't want to risk the next USB device becoming unrecognized, so I won't delete any hidden system files inside the flash drives. But I really want to know why these problems happened. Can I delete these hidden files when the drive is only connected to a virtual machine (Vista)? I used to delete all the useless hidden files from my USB flash drives. Any suggestions on how to prevent this, or alternative ways to fix the problem without data loss, would be much appreciated.
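
    As an aside, a commonly suggested way to keep Mac OS X from recreating the Spotlight index folders on a removable drive (rather than deleting them after the fact) is to disable indexing for that one volume. A sketch, assuming the stick mounts as /Volumes/USBSTICK (the volume name is an example):

        # turn Spotlight indexing off for this volume only
        sudo mdutil -i off /Volumes/USBSTICK

        # remove the existing index and leave a marker telling Spotlight to skip the volume
        sudo rm -rf /Volumes/USBSTICK/.Spotlight-V100
        touch /Volumes/USBSTICK/.metadata_never_index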

    Read the article

  • Connecting to a network drive programmatically and caching credentials

    - by Chris Doggett
    I'm finally set up to be able to work from home via VPN (using Shrew as a client), and I only have one annoyance. We use some batch files to upload config files to a network drive. Works fine from work, and from my team lead's laptop, but both of those machines are on the domain. My home system is not, and won't be, so when I run the batch file, I get a ton of "invalid drive" errors because I'm not a domain user. The solution I've found so far is to make a batch file with the following: explorer \\MACHINE1 explorer \\MACHINE2 explorer \\MACHINE3 Then manually login to each machine using my domain credentials as they pop up. Unfortunately, there are around 10 machines I may need to use, and it's a pain to keep entering the password if I missed one that a batch file requires. I'm looking into using the answer to this question to make a little C# app that'll take the login info once and login programmatically. Will the authentication be shared automatically with Explorer, or is there anything special I need to do? If it does work, how long are the credentials cached? Is there an app that does something like this automatically? Unfortunately, domain authentication via the VPN isn't an option, according to our admin. EDIT: If there's a way to pass login info to Explorer via the command line, that would be even easier using Ruby and highline.
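
    For reference, credentials for a non-domain client can be handed to Windows once per server from the command line with net use, which authenticates against a UNC path (with or without mapping a letter) and keeps the connection cached for subsequent batch files in the same session; the machine names, share, and password below are placeholders:

        @echo off
        rem authenticate once per machine; no drive letter needed
        net use \\MACHINE1\IPC$ MyP@ssw0rd /user:DOMAIN\myuser
        net use \\MACHINE2\IPC$ MyP@ssw0rd /user:DOMAIN\myuser

        rem or map a letter if a script expects one
        net use Z: \\MACHINE3\configs MyP@ssw0rd /user:DOMAIN\myuser /persistent:no

        rem drop the connections when finished
        rem net use * /delete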

    Read the article

  • autoconf libtool library linker path incorrect (need drive-letter) for MinGW ld.exe in Cygwin

    - by Tam Toucan
    I use autoconf, and when the target is MinGW I was using the -mno-cygwin flag. This has been removed, so I'm trying to use the MinGW tool chain. The problem is that the linker isn't finding my libraries:

        /bin/sh ../../../libtool --tag=CXX --mode=link mingw32-g++ -g -Wall -pedantic -DNOMINMAX -D_REENTRANT -DWIN32 -I /usr/local/include/w32api -L/usr/local/lib/w32api -o testRandom.exe testRandom.o -L../../../lib/Random -lRandom
        libtool: link: mingw32-g++ -g -Wall -pedantic -DNOMINMAX -D_REENTRANT -DWIN32 -I /usr/local/include/w32api -o .libs/testRandom.exe testRandom.o -L/usr/local/lib/w32api -L/home/Tam/src/3DS_Games/lib/Random -lRandom
        D:\cygwin\opt\MinGW\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ld.exe: cannot find -lRandom

    To link this from the command line using the MinGW linker, the -L path needs the drive letter, i.e.

        mingw32-ld testRandom.o -LD:/home/Tam/src/3DS_Games/lib/Random -lRandom

    works. The -L path is generated from the Makefile.am's, which have

        LDADD = -L$(top_builddir)/lib/Random -lRandom

    However, I can't find how to set top_builddir to a relative path or to start it with the drive letter (my autoconf skills are weak). As a temporary "solution" I have removed the use of libtool. I could hack a $(DRIVE_LETTER) in front of every -L option, but I'd like to find something better.
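
    One hedged way to get a drive-letter path onto the link line without dropping libtool is to convert the build directory at configure time with Cygwin's cygpath (its -m option produces the mixed C:/... form) and substitute the result; TOP_BUILDDIR_WIN below is an invented variable name for illustration, and the sketch assumes configure always runs under Cygwin:

        # configure.ac -- sketch
        top_builddir_unix=`pwd`
        TOP_BUILDDIR_WIN=`cygpath -m "$top_builddir_unix"`
        AC_SUBST([TOP_BUILDDIR_WIN])

        # Makefile.am -- use the substituted drive-letter path instead of the relative one
        # LDADD = -L$(TOP_BUILDDIR_WIN)/lib/Random -lRandom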

    Read the article

  • Querying the size of a drive on a remote server is giving me an error

    - by Testifier
    Here's what I have:

        public static bool DriveHasLessThanTenPercentFreeSpace(string server)
        {
            long driveSize = 0;
            long freeSpace = 0;

            var oConn = new ConnectionOptions {Username = "username", Password = Settings.Default.SQLServerAdminPassword};
            var scope = new ManagementScope("\\\\" + server + "\\root\\CIMV2", oConn);

            //connect to the scope
            scope.Connect();

            //construct the query
            var query = new ObjectQuery("SELECT FreeSpace FROM Win32_LogicalDisk where DeviceID = 'D:'");

            //this class retrieves the collection of objects returned by the query
            var searcher = new ManagementObjectSearcher(scope, query);

            //this is similar to using a data adapter to fill a data set - use an instance of the
            //ManagementObjectSearcher to get the collection from the query and store it
            ManagementObjectCollection queryCollection = searcher.Get();

            //iterate through the object collection and get what you're looking for - in my case
            //I want the FreeSpace of the D drive
            foreach (ManagementObject m in queryCollection)
            {
                //the FreeSpace value is in bytes
                freeSpace = Convert.ToInt64(m["FreeSpace"]);

                //error happens here!
                driveSize = Convert.ToInt64(m["Size"]);
            }

            long percentFree = ((freeSpace / driveSize) * 100);

            if (percentFree < 10)
            {
                return true;
            }
            return false;
        }

    This line of code is giving me an error:

        driveSize = Convert.ToInt64(m["Size"]);

    The error says: ManagementException was unhandled by user code - Not found. I'm assuming the query to get the drive size is wrong. Please note that I AM getting the freeSpace value at the line:

        freeSpace = Convert.ToInt64(m["FreeSpace"]);

    So I know the query IS working for freeSpace. Can anyone give me a hand?
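
    For reference, a sketch of the likely fix: the WQL query only selects FreeSpace, so the returned management objects carry no Size property (hence "Not found"); selecting both columns, and doing the percentage math in floating point, addresses both that error and the integer division that would otherwise round the percentage down to zero. A hypothetical rewrite of the core of the method above:

        // Sketch: request both properties and avoid integer division.
        var query = new ObjectQuery(
            "SELECT FreeSpace, Size FROM Win32_LogicalDisk WHERE DeviceID = 'D:'");

        var searcher = new ManagementObjectSearcher(scope, query);
        foreach (ManagementObject m in searcher.Get())
        {
            long freeSpace = Convert.ToInt64(m["FreeSpace"]);   // bytes
            long driveSize = Convert.ToInt64(m["Size"]);        // bytes

            double percentFree = (double)freeSpace / driveSize * 100.0;
            return percentFree < 10.0;
        }
        return false;   // no matching drive found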

    Read the article

  • Boost Netbook Speed with an SD Card & ReadyBoost

    - by Matthew Guay
    Looking for a way to increase the performance of your netbook? Here's how you can use a standard SD memory card or a USB flash drive to boost performance with ReadyBoost.

    Most netbooks ship with 1GB of RAM, and many older netbooks shipped with even less. Even if you want to add more RAM, often they can only be upgraded to a max of 2GB. With ReadyBoost in Windows 7, it's easy to boost your system's performance with flash memory. If your netbook has an SD card slot, you can insert a memory card into it and just leave it there to always boost your netbook's memory; otherwise, you can use a standard USB flash drive the same way. Also, you can use ReadyBoost on any desktop or laptop; ones with limited memory will see the most performance increase from using it.

    Please Note: ReadyBoost requires at least 256MB of free space on your flash drive, and also requires minimum read/write speeds. Most modern memory cards or flash drives meet these requirements, but be aware that an old card may not work with it.

    Using ReadyBoost

    Insert an SD card into your card reader, or connect a USB flash drive to a USB port on your computer. Windows will automatically see if your flash memory is ReadyBoost capable, and if so, you can directly choose to speed up your computer with ReadyBoost.

    The ReadyBoost settings dialog will open when you select this. Choose "Use this device" and choose how much space you want ReadyBoost to use.

    Click OK, and Windows will set up ReadyBoost and start using it to speed up your computer. It will automatically use ReadyBoost whenever the card is connected to the computer.

    When you view your SD card or flash drive in Explorer, you will notice a ReadyBoost file the size you chose before. This will be deleted when you eject your card or flash drive.

    If you need to remove your drive to use elsewhere, simply eject as normal. Windows will inform you that the drive is currently being used. Make sure you have closed any programs or files you had open from the drive, and then press Continue to stop ReadyBoost and eject your drive.

    If you remove the drive without ejecting it, the ReadyBoost file may still remain on the drive. You can delete this to save space on the drive, and the cache will be recreated when you use ReadyBoost next time.

    Conclusion

    Although ReadyBoost may not make your netbook feel like a Core i7 laptop with 6GB of RAM, it will still help performance and make multitasking even easier. Also, if you have, say, a memory stick and a flash drive, you can use both of them with ReadyBoost for the maximum benefit. We have even noticed better battery life when multitasking with ReadyBoost, as it lets you use your hard drive less. SD cards and thumb drives are relatively cheap today, and many of us have several already, so this is a great way to improve netbook performance cheaply.

    Read the article

  • help: cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that has contained my 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation).

    I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid.

    I also tried to install on two older 250GB seagate drives in the same computer. During every attempt I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). Installation takes about 30 minutes on the 250GB drives, and about 60 minutes on the 3TB drive - not sure why. When I install on the 250GB drives, the install process finishes, the computer reboots (after the install DVD is removed), but I get a grub error 15.

    It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this:

        8GB swap
        8GB /boot ext4
        3TB / ext4

    But I've also tried the following, just in case it matters:

        100MB boot efi
        8GB swap
        8GB /boot ext4
        3TB / ext4

    Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior.

    Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine.

    I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic).

    After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen:

        Loading Operating System ...
        Boot from CD/DVD :
        Boot from CD/DVD :
        error: unknown filesystem
        grub rescue

    Note: I have two DVD burners in the system, hence the duplicate line above. The same install and reboot on the 250GB drives generates "grub error 15". Sigh. Any ideas?

        ==========
        motherboard == gigabyte 990FXA-UD7
        CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz
        RAM == 8GB of DDR3 in 2 sticks (matched pair)
        HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04)
        HDD == seagate 1TB SATA3 @ 7200 rpm (current install 64-bit v10.04)
        GPU == nvidia GTX-285
        ??? == no overclocking or other funky business
        USB == external seagate 2TB HDD for making backups
        DVD == one bluray burner (SATA)
        DVD == one DVD burner (SATA)

    The current ubuntu 64-bit v10.04 system boots and runs fine on a seagate 1TB.
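
    As background for the 3 TB case: a disk larger than 2 TiB needs a GPT partition table, and GRUB on a BIOS (non-EFI) board then needs a small unformatted "bios_grub" partition to embed its core image into; without it, installs commonly end up at exactly this kind of grub rescue prompt. A hedged sketch of preparing the disk from the live DVD before running the installer (the device name and sizes are examples; adjust to the actual disk, and if the board really boots via EFI an EFI System Partition is needed instead):

        # from a terminal on the live DVD -- assumes the 3TB disk is /dev/sda
        sudo parted /dev/sda
        (parted) mklabel gpt
        (parted) mkpart bios_grub 1MiB 3MiB
        (parted) set 1 bios_grub on
        (parted) mkpart swap linux-swap 3MiB 8GiB
        (parted) mkpart boot ext4 8GiB 16GiB
        (parted) mkpart root ext4 16GiB 100%
        (parted) quit
        # then choose "Something else" in the installer and assign /boot, / and swap
        # to the partitions created above, leaving the bios_grub partition untouched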

    Read the article

  • Install Oracle Driver and TNS for Windows XP?

    - by David.Chu.ca
    I am building a box with Windows XP and some applications. One application requires a connection to an Oracle database on a remote server. I have installed OracleXEClient.exe from the Oracle download site. The installation does install the "Oracle Provider for OLE DB" driver. My problem is that I still cannot make connections to the remote Oracle db. The test I have done is to create a UDL file with an Oracle Provider for OLE DB connection. The error message is:

        ---------------------------
        Microsoft Data Link Error
        ---------------------------
        Test connection failed because of an error in initializing provider.
        ORA-12154: TNS:could not resolve the connect identifier specified

    I think I may be missing tnsnames.ora on the box. I can find this file on another box where the Oracle connection works fine. I am not sure what package I should install (from Oracle) so that the default tnsnames.ora is installed along with the related files, and the path for locating the TNS file is set up.
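
    For reference, a minimal tnsnames.ora entry of the kind the OLE DB provider resolves the connect identifier against; the alias, host, port, and service name below are placeholders. The file normally lives under %ORACLE_HOME%\network\admin, and a TNS_ADMIN environment variable pointing at the directory containing it is honoured as well:

        # tnsnames.ora (sketch; replace the alias and address values)
        MYDB =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )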

    Read the article

  • Get-ChildItem fails to connect in SQLSERVER drive

    - by Norman Kelm
    I'm having some trouble with the SQLSERVER PSDRIVE. See the error below. I only have named instances on my PC, both 2005 and 2008. Added the SQL snap-ins. The PC is named YODA. The SQL instance is SQL2008. Navigate to the Databases folder for YODA\SQL2008; you can see the path below. dir -name spits out a connection error trying to connect to YODASQL2008\DEFAULT when it should be trying to connect to YODA\SQL2008. Then it outputs the db name, which is Twitter in this case. Is there something missing from my config?

    Output:

        PS SQLSERVER:\SQL\YODA\SQL2008\Databases> dir -name
        Get-ChildItem : SQL Server PowerShell provider error: Could not connect to 'YODASQL2008\DEFAULT'. [Failed to connect to server YODASQL2008. -- A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)]
        At line:1 char:4
        + dir <<<< -name
            + CategoryInfo          : OpenError: (SQLSERVER:\SQL\...tabases\Twitter:SqlPath) [Get-ChildItem], GenericProviderException
            + FullyQualifiedErrorId : ConnectFailed,Microsoft.PowerShell.Commands.GetChildItemCommand
        Twitter

    Repeats with the error for every database.

    Thanks,
    Norman

    Read the article

  • Remote NX login to Ubuntu, Gnome can't mount CD/DVD drive

    - by T.J. Crowder
    Even though I'm sitting next to it, I log into my Ubuntu 10.04 LTS system via NX Free Edition from another system at the moment (this is temporary, not worth buying a KVM for). Curiously, though, when I do that Gnome's auto-mounting fails for CD/DVD media (I haven't tried other kinds) with a "Not Authorized" error. For instance here's what happens when I put the Ubuntu 10.04 LTS installation CD in:

    This does not happen if I log into it locally (not via NX) with the same user account. When using NX, I can mount the media if I go to mount directly:

        tjc@midnight:~$ sudo mkdir /media/dvd
        tjc@midnight:~$ sudo mount -r -t iso9660 /dev/sr0 /media/dvd
        tjc@midnight:~$ ls /media/dvd
        autorun.inf  casper  dists  install  isolinux  md5sum.txt  pics  pool  preseed  README.diskdefines  ubuntu  wubi.exe

    ...which, along with the "not authorized" error, suggests some kind of permissions problem to me (doh). What I find odd is that the same user is involved in both cases (local and via NX). I'm new to Ubuntu on the desktop (used it and other distros on servers for years), so I'm afraid I don't know how this auto-mounting is happening. I think it's handled by the gvfs package and its daemon, but that's about as far as I got (and perhaps I've taken a left turn even getting that far). Although I can work around it with mount, does anyone know how I might get auto-mounting to work?
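
    A plausible direction, offered as an assumption rather than a confirmed fix: on Ubuntu 10.04 removable-media mounting goes through udisks and PolicyKit, and PolicyKit only grants the mount action to sessions it considers local and active, which a remote NX session is not. A local-authority override for the affected user (the file name is arbitrary; the path follows the polkit-1 convention) would look something like this:

        # /etc/polkit-1/localauthority/50-local.d/55-allow-nx-mounts.pkla  (sketch)
        [Allow NX session to mount removable media]
        Identity=unix-user:tjc
        Action=org.freedesktop.udisks.filesystem-mount;org.freedesktop.udisks.drive-eject
        ResultAny=yes
        ResultInactive=yes
        ResultActive=yes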

    Read the article

  • Can't copy files with 'additional permissions' to ext4 drive -- files that have @ after permissions

    - by 99miles
    I am copying files from Snow Leopard to a mounted ext4 share via Samba; the share is on a Fedora machine. Some files cannot be copied, and give this error: "The operation can't be completed because you don't have permission to access some of the items." I've noticed that the files that can't be copied have an @ at the end of their permissions when I do 'ls -l' on the command line. For example, I can copy the second file but not the first:

        -rwxrwxrwx@ 1 miles staff 1448 May 14 22:55 test.txt
        -rw-r--r-- 1 miles staff 136 Apr 5 17:06 image.psd.zip

    From what I've found, the @ means the file has 'additional properties'. Does anyone know how I can resolve this issue so I can copy the files to the fileshare? Thanks!
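
    For reference, the @ flag in ls -l on Mac OS X marks extended attributes, which the Samba/ext4 destination may refuse. A hedged sketch of inspecting and stripping them before copying (file names and the destination path are examples; cp -X and the xattr tool both ship with Snow Leopard):

        # list which extended attributes a file carries
        ls -l@ test.txt
        xattr test.txt

        # strip all extended attributes from the file in place
        xattr -c test.txt

        # or copy without extended attributes / resource forks in the first place
        cp -X test.txt /Volumes/fileshare/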

    Read the article

  • Why is this LTO4 Tape-drive not working

    - by Tim Haegele
        # modprobe mptsas
        # dmesg
        [ 4274.796796] scsi target7:0:0: mptsas: ioc1: delete device: fw_channel 0, fw_id 0, phy 0, sas_addr 0x50050763124b29ac
        [ 4274.939579] mptsas 0000:01:00.0: PCI INT A disabled
        [ 4280.934531] Fusion MPT SAS Host driver 3.04.12
        [ 4280.934552] mptsas 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [ 4280.934692] mptbase: ioc2: Initiating bringup
        [ 4281.490183] ioc2: LSISAS1064E B3: Capabilities={Initiator}
        [ 4281.490203] mptsas 0000:01:00.0: setting latency timer to 64
        [ 4293.555274] scsi8 : ioc2: LSISAS1064E B3, FwRev=011e0000h, Ports=1, MaxQ=277, IRQ=16
        [ 4293.574906] mptsas: ioc2: attaching ssp device: fw_channel 0, fw_id 0, phy 0, sas_addr 0x50050763124b29ac
        [ 4293.576471] scsi 8:0:0:0: Sequential-Access IBM ULTRIUM-HH4 B6W1 PQ: 0 ANSI: 6
        [ 4293.578549] st 8:0:0:0: Attached scsi tape st0
        [ 4293.578550] st 8:0:0:0: st0: try direct i/o: yes (alignment 512 B)
        [ 4293.578577] st 8:0:0:0: Attached scsi generic sg5 type 1

        # mt -f /dev/st0 status
        mt: /dev/st0: rmtopen failed: Input/output error

        # dd if=/dev/zero of=/dev/nst0 bs=1024 count=10
        dd: opening `/dev/nst0': Input/output error

    I am running debian squeeze 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux. Server is Fujitsu TX140 with Controller Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS. Tape + Hardware is new.
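
    Not a definitive diagnosis, but since the kernel log above clearly attaches st0 (and sg5 as the matching generic device), a few generic checks are worth running; all commands below assume the devices shown in that log, and sg_inq/sg_requests come from the sg3-utils package:

        # make sure a cartridge is actually loaded and ready
        mt -f /dev/nst0 load
        mt -f /dev/nst0 status

        # ask the drive itself whether it is reporting an error at the SCSI level
        sg_inq /dev/sg5
        sg_requests /dev/sg5

        # retry through the non-rewinding device with a sensible tape block size
        dd if=/dev/zero of=/dev/nst0 bs=64k count=100
        mt -f /dev/nst0 rewind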

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, HP LTO2 with 200/400 GB cartridges. The st driver reports it like this: scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D I can store and retrieve files like a charm using tar, both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawless. However, what I really would like to backup are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like dd if=/dev/VG/LV bs=64k of=/dev/st0 to achieve this, but there seem to be various problems associated with this approach. Firstly, I would like to be able to store more than 1 LV on a single tape. Now I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes. Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable, I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this? Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device: mbuffer -t -m 10M -p 80 -f -o $TAPE So I've tried this: dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0 Note the added "-d 64k" to account for the 64k block size of the tape. However, reading data back from a tape written in this way never seems to yield any useful results - dd has been running for ages now, and managed to transfer only 361M of data from the tape. What's wrong here?
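
    A hedged sketch of one common pattern for the first two points: write the small XML description as its own tape file, snapshot the LV so the image is consistent while the VM keeps running, and stream the snapshot to the non-rewinding device through mbuffer; restoring reads the files back in the same order with a matching block size. LV names, sizes, and file names are examples:

        # 1) metadata as its own (tiny) tape file
        tar -b 128 -cf /dev/nst0 vm-metadata.xml

        # 2) snapshot so the LV contents stay consistent during the dump
        lvcreate --snapshot --size 5G --name LV_snap /dev/VG/LV

        # 3) stream the snapshot to tape through mbuffer; -s 64k matches the tape block size
        dd if=/dev/VG/LV_snap bs=64k | mbuffer -m 512M -p 80 -s 64k -o /dev/nst0

        lvremove -f /dev/VG/LV_snap
        mt -f /dev/nst0 weof     # filemark, ready for the next LV

        # to restore, position to the right file number and reverse the pipe:
        #   mt -f /dev/nst0 asf 1
        #   mbuffer -s 64k -i /dev/nst0 | dd of=/dev/VG/LV bs=64k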

    Read the article

  • Disk drive disappear in Windows Explorer

    - by Stan
    OS: WinXP SP3

    When pressing Win+E or opening Windows Explorer, under My Computer there are usually several disk drives. But somehow they have just disappeared. If I type c:\ in the address bar, the disk re-appears in the tree, though. How do I get them back? Thanks.
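
    One common cause worth checking (a guess, not a diagnosis): the Explorer "NoDrives" policy hides drive letters from My Computer while still allowing access by typed path, which matches the behaviour described. A sketch of checking for it and clearing it:

        rem look for a NoDrives value in either policy hive
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives
        reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives

        rem if present and nonzero, remove it, then restart Explorer or log off/on
        reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives /f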

    Read the article

  • Linux machine can't find its tape drive

    - by Kyle Hodgson
    I have an older HP NetServer LPr with what is apparently a Symbios SCSI card connecting to a Quantum SuperLoader 3 that is DLT based. From time to time, we seem to lose the connection to the autoloader. It's usually due to flaky power, but not totally sure why; sometimes when this happens the Autoloader's LED's are orange and it needs to be power cycled. The annoying workaround currently is to reboot the machine. As it is our production VPN and DNS server in addition to being our backup server, this is less than optimal. In Debian (Sarge) is there not some command one can type to get the card to notice that it has the autoloader connected again?

        dcr1:/proc# grep -i symbios /proc/pci
        SCSI storage controller: LSI Logic / Symbios Logic 53c895 (rev 1).
        dcr1:/proc# uname -a
        Linux dcr1 2.4.27-3-686 #1 Tue Dec 5 21:03:54 UTC 2006 i686 GNU/Linux
        dcr1:/proc# mt status
        mt: /dev/tape: No such device
        dcr1:/proc# ls -l /dev/tape
        lrwxrwxrwx 1 root root 8 2007-02-07 16:01 /dev/tape -> /dev/st0
        dcr1:/proc#

    That mt status command will show the actual st0 status when things are working correctly. The No such device message is usually the second clue that we need to reboot - the first clue is usually that the backups didn't run.
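
    A hedged sketch of the usual reboot-free approach on a 2.4 kernel: drop and re-add the device through /proc/scsi/scsi so the SCSI and st drivers re-probe the autoloader after it has been power-cycled. The four numbers are host, channel, id, and lun; the values below are examples and need to match what cat /proc/scsi/scsi reports (or reported) for the DLT drive:

        # see which host/channel/id/lun the tape was on (if it is still listed)
        cat /proc/scsi/scsi

        # remove the stale entry, then ask the kernel to probe that address again
        echo "scsi remove-single-device 0 0 5 0" > /proc/scsi/scsi
        echo "scsi add-single-device 0 0 5 0" > /proc/scsi/scsi

        # the st driver should reattach it; verify
        dmesg | tail
        mt -f /dev/st0 status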

    Read the article
