Search Results

Search found 25196 results on 1008 pages for 'hard drive cache'.

Page 89/1008

  • HTC Android device mounted as USB drive is read-only unless I'm root

    - by Ian Dickinson
    When I connect my HTC Incredible S to my Ubuntu 10.10 system as a USB drive, the device seems to mount OK, but is read-only unless I access it as root. For example, if I run nautilus, I can't drag and drop files to the SD-card in the phone, but if I run sudo nautilus I can. I have USB debug support set on the phone (Applications > Development > USB debugging) and I have added a rule for the device in /etc/udev/rules.d/51-android.rules on my Ubuntu system. Any suggestions as to how I can mount the drive so that I can copy content to the SD card without needing to sudo?

    Update: Following advice from waltinator, I added the following line to my /etc/fstab:

        UUID=3537-3834 /media/usb1 vfat rw,user,noexec,nodev,nosuid,noauto

    However, the Android device is still being auto-mounted on /media/usb1 with uid and gid root.

    Update 2: syslog output:

        Nov 21 23:38:40 rowan-15 usbmount[4352]: executing command: mount -tvfat -osync,noexec,nodev,noatime,nodiratime /dev/sdd1 /media/usb1
        Nov 21 23:38:40 rowan-15 usbmount[4352]: executing command: run-parts /etc/usbmount/mount.d
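    The syslog lines show usbmount, not the fstab entry, doing the actual mounting, so one avenue (a hedged sketch rather than a confirmed fix for this device) is to pass ownership options through usbmount's own configuration; the FS_MOUNTOPTIONS value and the uid of 1000 below are assumptions to adapt to your system:

        # find your desktop user's uid (commonly 1000 on a single-user Ubuntu install)
        id -u

        # in /etc/usbmount/usbmount.conf, have usbmount mount vfat devices owned by that user
        sudo sed -i 's|^FS_MOUNTOPTIONS=.*|FS_MOUNTOPTIONS="-fstype=vfat,uid=1000,gid=1000,umask=002"|' /etc/usbmount/usbmount.conf

        # unplug and replug the phone, then confirm the options that were applied
        mount | grep /media/usb1

    If usbmount honours the new options, files written through nautilus should no longer need sudo.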

    Read the article

  • Ubuntu installation does not recognize drive partitioning

    - by Woltan
    I have a 1TB drive and installed Windows 7 on a 128GB partition. When I now try to install Ubuntu 11.04, it does not recognize the Windows partition but offers the complete 1TB drive to install Ubuntu on instead. However, in the Ubuntu Disk Utility the Windows partitions are recognized. What do I need to do in order for Ubuntu to recognize the Windows 7 partition and install Ubuntu as a dual boot?

    Response to comments: The following commands were executed and the results are shown below.

        fdisk -l

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x34a38165

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          13      102400    7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              13       16318   130969600    7  HPFS/NTFS

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x14a714a6

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1       60801   488384001   83  Linux

        parted -l

        Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
        Error: /dev/sr0: unrecognised disk label
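    The fdisk warning is the telling part: the disk carries GPT data even though Windows set it up with an MBR partition table, and a stray or leftover GPT is a known reason the 11.04 installer treats the drive as empty. A hedged sketch of how you might confirm and remove the stray GPT with FixParts from the gdisk package (availability in the 11.04 repositories is an assumption; back up before rewriting any partition data):

        sudo apt-get install gdisk
        # gdisk reports whether it found an MBR, a GPT, or both on the disk
        sudo gdisk -l /dev/sda
        # fixparts can delete a stray GPT while leaving the existing MBR partitions alone
        sudo fixparts /dev/sda

    After writing the cleaned MBR from fixparts, re-running the installer should show the existing Windows partitions.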

    Read the article

  • Repair ext4 filesystem on USB drive

    - by phineas
    Yet another filesystem question. I wanted to use a USB drive that I hadn't mounted for a month or so and was surprised that Ubuntu was unable to mount it. I looked it up in the Disk Utility and it said it had discovered a device of 17 MB instead of 2 GB. The hardware looks intact, so I hope for the best for repairing the ext4 filesystem. I followed the instructions from "HOWTO: Repair a broken Ext4 Superblock in Ubuntu", but I wasn't successful.

        # fsck.ext4 -v /dev/sdb
        e2fsck 1.42.5 (29-Jul-2012)
        ext2fs_open2: Bad magic number in super-block
        fsck.ext4: Superblock invalid, trying backup blocks...
        fsck.ext4: Bad magic number in super-block while trying to open /dev/sdb

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193

    Filesystem blocks are invalid; however, when I run the recommended solution to try the alternate superblock, I get the following output:

        # e2fsck -b 8193 /dev/sdb
        e2fsck 1.42.5 (29-Jul-2012)
        e2fsck: Invalid argument while trying to open /dev/sdb

    plus the same error message as in the paragraph above. Any ideas how to recover the drive? Thank you very much!

    Edit: testdisk won't help. I'm still stunned why the tools only discover 17 MB.
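    One thing worth double-checking before assuming the superblocks are gone (a hedged suggestion, not a confirmed diagnosis): both commands above target the whole device /dev/sdb, but if the filesystem was created on a partition such as /dev/sdb1, no superblock will ever be found on the bare device. A dry run of mke2fs also prints where the backup superblocks for a filesystem of that size would live, giving candidate block numbers for e2fsck:

        # check whether there is a partition table and a partition to target instead of the raw device
        sudo fdisk -l /dev/sdb

        # -n makes mke2fs a dry run: it writes nothing, but prints the backup superblock locations
        sudo mke2fs -n /dev/sdb1

        # then try each reported backup block in turn, for example 32768
        sudo e2fsck -b 32768 /dev/sdb1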

    Read the article

  • How to move Ubuntu 12.04 to another drive

    - by Maksim
    How can I move my Ubuntu install to another drive? I know about Clonezilla, but the problem is that the destination drive is smaller than the source one, and GParted can't copy and paste a partition unless the destination space comes after the last existing partition. I tried dpkg --selected-packages and apt-clone. The first did not install all my packages and removed existing ones, so now I don't have full Unity or all of my packages; the second just fails while configuring a package. Before trying either, I had copied my /etc over to the new system.

    My partition tables:

    Destination (gpt):

        1   1049kB   106MB    105MB   fat32   EFI System   ???????????
        2   106MB    12,1GB   12,0GB  ext4
        3   12,1GB   66,3GB   54,2GB  ext4

    Source (msdos):

        1   1049kB   12,0GB   12,0GB  primary  ext4   ???????????
        2   12,0GB   492GB    480GB   primary  ext4
        3   492GB    500GB    8107MB  primary  linux-swap(v1)

    GPT is not working with this Ubuntu, which uses GRUB 1.99. I don't know why, but my laptop can't boot any device in UEFI mode (it just shows a black screen), even though Ubuntu detects UEFI on a fresh install.
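    Because the target is smaller than the source, a file-level copy sidesteps the size restriction that blocks Clonezilla and GParted. Below is a hedged sketch of an rsync-based move; the device names, mount point and excluded paths are assumptions to adapt, and it assumes the new disk will boot in BIOS/MBR mode since UEFI booting is reported not to work on this laptop:

        # mount the prepared root partition of the new drive (example device name)
        sudo mount /dev/sdb2 /mnt/newroot

        # copy the running system, preserving permissions, ACLs, xattrs and hard links,
        # while skipping virtual filesystems and removable mounts
        sudo rsync -aAXH --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /mnt/newroot/

        # note the new partition's UUID and put it into /mnt/newroot/etc/fstab
        blkid /dev/sdb2

        # reinstall GRUB onto the new disk from a chroot so it boots on its own
        for d in dev proc sys; do sudo mount --bind /$d /mnt/newroot/$d; done
        sudo chroot /mnt/newroot grub-install /dev/sdb
        sudo chroot /mnt/newroot update-grub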

    Read the article

  • ubuntu 12.04 installer does not recognize drive partitions

    - by endless forms
    I recently purchased a new HP Pavilion HPE desktop running Windows 7. I am trying to install a dual-boot system with 12.04. However, when I run the LiveCD I only get as far as the "Install" window where you can select the partitions for your drives. On the bottom where it says "device for boot loader installation" I have "/dev/sda" and cannot select any other devices. All the options to change the drives are greyed out, most likely because there are no drives in the window.

    I partitioned my largest drive using the tools within Windows, then booted into the CD, but nothing shows up. I then used GParted to change the new space from unallocated to an ext2 partition, and still nothing shows up. The installer does not recognize anything, but when I go into an Ubuntu session and use the Disk Utility manager I can see the partitions I made. Anything I do has to be done outside of the installer.

    I have no files on this new computer, so this is the perfect time to install a parallel OS. I would like to avoid completely reinstalling Windows, however. I've been over the forums many times, but all the answers I've found have not worked for me. I also tried flagging the new, empty partition as boot, but that screwed Windows up. Also, the WUBI installer hits the same point and quits. I know that the disk itself is fine because I just made another dual-boot system on a Gateway PC. This makes me think something within this computer is preventing the installer from "seeing" the drives. Any help would be much appreciated!

    Edit in response: The main part of the partitioning window shows no partitions; everything is blank. There is no way to add partitions, and all the buttons are useless. I've tried defragging my drive multiple times, and I also used the same disk to dual-boot another PC with no problems, so it's not the disk, it's definitely the computer.
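    On HP desktops that shipped with Windows 7, one frequent cause of exactly this symptom is leftover fake-RAID (BIOS RAID) metadata on the disk: the live session's Disk Utility shows the partitions, but the installer's partitioner defers to dmraid and then shows an empty window. A hedged sketch of how to check for and clear such metadata from the live session; only erase it if you are certain the machine is not really using BIOS RAID:

        sudo apt-get install dmraid
        # list any RAID metadata signatures found on the disks
        sudo dmraid -r
        # if stale metadata is reported for /dev/sda and no real array exists,
        # erase it so the installer's partitioner can see the partitions again
        sudo dmraid -rE /dev/sda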

    Read the article

  • How to mount a network drive?

    - by Relik
    Ok so I'm trying to set up a home file server. I'm thinking about just setting it up as an FTP server, for no particular reason other than I'm familiar with FTP and Samba tends to be very frustrating. Basically the set-up I'm going for is to be able to create multiple user accounts for the server and restrict or allow access to specific folders for each user. FTP is the only way (that I know of) to accomplish a set-up like that.

    My question is how I can mount an FTP server as a drive in Ubuntu so that all my applications can access it just like any other drive or folder. An example would be downloading 12.10 via torrent when it comes out: I would like to be able to tell Transmission to download the file straight to my FTP server. I know how to do this in Windows, it's actually very easy, but I can't figure it out in Ubuntu. I have tried using the "connect to server" option in Nautilus, and it works, but it doesn't give me the result I want: most applications don't see the folder, while others can.

    Also, I am open to options other than FTP if anyone has any suggestions. I've looked into FreeNAS but that doesn't seem to allow me to control the user accounts the way I want to. Then, after all is said and done, I would still need a way to mount the shares as a drive in Ubuntu. The ability to mount network drives in Windows is one of my favourite features, and seeing how Ubuntu is now my daily OS and has been for about 4 years, I really need a way to accomplish the same thing in Ubuntu. Also a GUI would be preferable, seeing as there will be multiple people using this server; I would like it to be as easy as possible.

    EDIT: this link here seems to be almost exactly what I'm wanting to do; if I could find a GUI that can do this I'll be almost set. Then I would just need to find a way to hide specific folders from certain users.
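    For making an FTP account look like an ordinary folder to every application, one commonly used option is curlftpfs, a FUSE filesystem that mounts an FTP server at a local path. A hedged sketch; the server address, credentials and mount point are placeholders:

        sudo apt-get install curlftpfs

        # one-off mount: everything under ~/ftpserver then behaves like a local folder
        mkdir -p ~/ftpserver
        curlftpfs -o uid=$(id -u),gid=$(id -g) ftp://username:password@192.168.1.10 ~/ftpserver

        # unmount when finished
        fusermount -u ~/ftpserver

    sshfs works the same way over SSH, which may be worth considering if per-user folder restrictions end up being easier to manage with system accounts than with an FTP daemon.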

    Read the article

  • Lenovo ThinkPad W530 problem to activate the optical/DVD drive

    - by Marko Apfel
    Problem: Sometimes my notebook shows the optical drive as powered off, and the hint shown there does not change this state.

    Solution: Looking in the Device Manager reveals the next problem, so open the properties via a right mouse click. This gives you the hint to remove the drive first: "Windows cannot use this hardware device because it has been prepared for "safe removal", but it has not been removed from the computer. (Code 47)" You can see the full message either by dragging the mouse over the hidden part or by pressing the "Properties" button. So we unplug and reinsert the UltraBay. If you think the system is now working, you are wrong: the system now believes that the UltraBay is unplugged. You can verify this by refreshing the view in the Device Manager; our device is no longer there. Now comes the trick: unplug the UltraBay and reinsert it a second time. After this, with a disc inside, you can hear that the motor really starts and we have a working device again. What a difficult birth...

    Read the article

  • Why doesn't Firefox redownload images already on a page?

    - by vvo
    Hello, I just read this article: https://developer.mozilla.org/en/HTTP_Caching_FAQ

    There's a Firefox behavior (and of some other browsers too, I guess) I'd like to understand: if I take any webpage and try to insert the same image multiple times via JavaScript, the image is only downloaded ONCE, even if I specify all the headers needed to say "do not ever use cache" (see the article). I know there are workarounds (like adding query strings to the end of URLs, etc.), but why does Firefox act like that? If I say that an image must not be cached, why is the image still taken from cache when I try to re-insert it? Also, what cache is used for this? (I guess it's the memory cache.) Is the behavior the same for dynamic script inclusion, for example? THE ANSWER IS NO :) I just tested it, and with the same headers a JS script will be re-downloaded each time you append it to the DOM.

    PS: I know you're wondering WHY I need to do that (appending the same image multiple times and forcing a re-download), but this is the way our app works. Thank you.

    The good answer is: Firefox will store images for the current page load in the memory cache even if you specify that it doesn't have to cache them. You can't change this behavior, but it's odd because it's not the same for JavaScript files, for example. Could someone explain, or link to a document describing, how the Firefox cache WORKS?

    Read the article

  • Serialization for memcached

    - by Ram
    I have this huge domain object (say, parent) which contains other domain objects. It takes a lot of time to "create" this parent object by querying a DB (OK, we are optimizing the DB), so we decided to cache it using memcached (with NorthScale, to be specific). I have gone through my code and marked all the classes (I think) as [Serializable], but when I add the object to the cache, I see a SerializationException getting thrown in my VS.NET output window:

        var cache = new NorthScaleClient("MyBucket");
        cache.Store(StoreMode.Set, key, value);

    This is the exception:

        A first chance exception of type 'System.Runtime.Serialization.SerializationException' occurred in mscorlib.dll

    So my guess is that I have not marked all classes as [Serializable]. I am not using any third-party libraries and can mark any class as [Serializable], but how do I find out which class is failing when the cache is trying to serialize the object?

    Edit1: casperOne's comments made me think. I was able to cache these domain objects with the Microsoft Cache Application Block without marking them [Serializable], but not with NorthScale memcached. It makes me think there might be something to do with their implementation, but just out of curiosity, I am still interested in finding where it fails when trying to add the object to memcached.

    Read the article

  • ubuntu doesn't boot without flash drive

    - by Kasisnu
    I just installed Ubuntu 11.04 onto this netbook; I had to use a flash key. During the install I tried putting Ubuntu on a separate partition, but it kept showing a 'no root file system is defined' error. I didn't really know how to fix that, so I decided to install it alongside Windows. I have a Windows 7 installation which works perfectly fine. So the installation goes through perfectly, I give both OSs 40 GB of space, the computer restarts and NOTHING. The computer boots directly into Windows. During the install it said I was supposed to be prompted at boot, and nothing happened. Ubuntu partitioned the C: drive, but this partition doesn't show up in Windows. If I boot using the flash drive, it shows the partition with the Ubuntu installation. I tried reinstalling, but now I don't even get the prompt asking me to install Ubuntu. Really confused...
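    The symptom (Windows boots directly, and Ubuntu only appears when the flash drive is plugged in) usually means GRUB ended up on the USB stick instead of on the internal disk's MBR. A hedged sketch of reinstalling it from the live session; the device and partition names are assumptions you would confirm with fdisk first:

        sudo fdisk -l                      # identify the internal disk (often /dev/sda) and the Ubuntu root partition
        sudo mount /dev/sda5 /mnt          # mount the installed Ubuntu root (example partition)
        sudo grub-install --boot-directory=/mnt/boot /dev/sda
        # on older GRUB builds the flag is --root-directory=/mnt instead
        sudo reboot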

    Read the article

  • What's the best way to cache a growing database table for html generation?

    - by McLeopold
    I've got a database table which will grow in size by about 5000 rows an hour. For a key that I would be querying by, the result set will grow by about 1 row every hour. I would like a web page to show the latest rows for a key, 50 at a time (this is configurable). I would like to try to implement memcache to keep database activity low for reads.

    If I run a query and create a cache result for each page of 50 results, that works until a new entry is added. At that time, the page of latest results gets the new result and the oldest result drops off. This cascades down the list of cached pages, forcing me to update every cached result. It seems like a poor design. I could build the cache pages backwards; then for each page requested I would get the latest 2 pages and truncate to the proper length of 50. I'm not sure if this is good or bad. Ideally, the mechanism I use to insert a new row would also know how to invalidate the proper cache results. Has someone already solved this problem in a widely accepted way? What's the best method of doing this?

    EDIT: If my understanding of the MySQL query cache is correct, it invalidates at table-level granularity. Given that I have about 5000 updates before a query on a key would need to be invalidated, it seems the database query cache would not help. MS SQL caches execution plans and frequently accessed data pages, so it may do better in this scenario. My query is not a simple TOP N against a single table: one version has joins to several tables and another has sub-selects. Also, since I want to cache the generated HTML table, I'm wondering if a cache at the web server level would be appropriate? Is there really no benefit to any type of caching? Is the best advice really to just let a website query go through all the layers and hit the database on every request?

    Read the article

  • How can I test caching and cache busting?

    - by Nathan Long
    In PHP, I'm trying to steal a page from the Rails playbook (see 'Using Asset Timestamps' here): "By default, Rails appends assets' timestamps to all asset paths. This allows you to set a cache-expiration date for the asset far into the future, but still be able to instantly invalidate it by simply updating the file (and hence updating the timestamp, which then updates the URL as the timestamp is part of that, which in turn busts the cache). It's the responsibility of the web server you use to set the far-future expiration date on cache assets that you need to take advantage of this feature. Here's an example for Apache:"

        # Asset Expiration
        ExpiresActive On
        <FilesMatch "\.(ico|gif|jpe?g|png|js|css)$">
            ExpiresDefault "access plus 1 year"
        </FilesMatch>

    If you look at the source of a Rails page, you'll see what they mean: the path to a stylesheet might be "/stylesheets/scaffold.css?1268228124", where the numbers at the end are the timestamp when the file was last updated. So it should work like this:

        The browser says 'give me this page'.
        The server says 'here, and by the way, this stylesheet called scaffold.css?1268228124 can be cached for a year - it's not gonna change.'
        On reloads, the browser says 'I'm not asking for that css file, because my local copy is still good.'
        A month later, you edit and save the file, which changes the timestamp, which means that the file is no longer called scaffold.css?1268228124 because the numbers change.
        When the browser sees that, it says 'I've never seen that file! Give me a copy, please.' The cache is 'busted.'

    I think that's brilliant. So I wrote a function that spits out stylesheet and javascript tags with timestamps appended to the file names, and I configured Apache with the statement above. Now: how do I tell if the caching and cache busting are working?

    I'm checking my pages with two plugins for Firebug: YSlow and Google Page Speed. Both seem to say that my files are caching: "Add expires headers" in YSlow and "leverage browser caching" in Page Speed are both checked. But when I look at the Page Speed Activity, I see a lot of requests and waiting and no 'cache hits'. If I change my stylesheet and reload, I do see the change immediately. But I don't know if that's because the browser never cached in the first place or because the cache is busted. How can I tell?
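    One way to answer "is it really being cached?" without relying on the Firebug plugins is to look at the raw responses. A hedged sketch using curl; the host and asset path are placeholders. The first request should show the far-future Expires/Cache-Control headers, and the conditional request shows whether the server answers 304 when a client revalidates:

        # inspect the caching headers the server actually sends for the timestamped asset
        curl -sI "http://localhost/stylesheets/scaffold.css?1268228124" | egrep -i 'expires|cache-control|last-modified|etag'

        # simulate a browser revalidating its copy; "304 Not Modified" means the cached copy is still usable
        curl -sI -H "If-Modified-Since: $(date -R -d '-1 day')" \
             "http://localhost/stylesheets/scaffold.css?1268228124" | head -n 1

    A genuine cache hit produces no request at all, so the clearest confirmation is reloading the page and seeing no new asset lines appear in the Apache access log.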

    Read the article

  • Recovering a website

    - by Jessica
    I found my website in the Wayback Machine a few months ago, but today I've tried again and now it tells me it can't find robots.txt. My old webhost stopped paying for their servers back in August without any notice. I was going to do a backup the day it happened. Is there a way just to find the text? I have the old IP, images, but nothing else. None of the big search engines have caches anymore, and I already looked in the cache of three of my Macs with nothing to be found.
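    If the Wayback Machine's normal browsing view refuses to render the site (the "can't find robots.txt" message), its lower-level APIs can still reveal whether captures of the text exist. A hedged sketch; example.co.uk stands in for the old domain, and whether these endpoints return anything for that site is of course not guaranteed:

        # ask whether any snapshot is archived near a given date
        curl -s "http://archive.org/wayback/available?url=example.co.uk&timestamp=20120801"

        # list captured URLs for the domain via the CDX index, which may report captures even when playback is blocked
        curl -s "http://web.archive.org/cdx/search/cdx?url=example.co.uk&matchType=prefix&limit=50"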

    Read the article

  • How to ensure apache2 reads htaccess for custom expiry?

    - by tzot
    I have a site with Apache 2.2.22. I have enabled the mod-expires and mod-headers modules, seemingly correctly:

        $ apachectl -t -D DUMP_MODULES
        …
        expires_module (shared)
        headers_module (shared)
        …

    Settings include:

        ExpiresActive On
        ExpiresDefault "access plus 10 minutes"
        ExpiresByType application/xml "access plus 1 minute"

    Checking the headers of responses, I see that max-age is set correctly both for the generic case and for xml files (which are auto-generated, but mostly static). I would like to have different expiries for xml files in a directory (e.g. /data), so that http://site/data/sample.xml expires 24 hours later. I put the following in data/.htaccess:

        ExpiresByType application/xml "access plus 24 hours"
        Header set Cache-control "max-age=86400, public"

    but it seems that Apache ignores this. How can I ensure apache2 uses the .htaccess directives? I can provide further information if requested.
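    .htaccess directives are only honoured if the enclosing <Directory> configuration allows them, so the first thing worth ruling out (a hedged check, since the vhost layout isn't shown) is an AllowOverride None inherited from the main configuration, which makes Apache skip .htaccess files entirely:

        # see what AllowOverride the relevant Directory blocks set
        grep -Rni "AllowOverride" /etc/apache2/apache2.conf /etc/apache2/sites-enabled/ /etc/apache2/conf.d/ 2>/dev/null

        # quick functional test: put a deliberate syntax error in data/.htaccess and request a file from /data;
        # a 500 response proves the file is being read, an unchanged 200 means it is being ignored
        curl -sI http://localhost/data/sample.xml | head -n 1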

    Read the article

  • Visitors have old website cached in their browsers

    - by RussianBlue
    My client's new website is example.com; the old website is example.co.uk. I've re-pointed the A records to the new website (so as to leave the emails alone) and put in 301 redirects from old pages to new pages. But my client is upset, as he (and, he thinks, many of his clients) has the old website cached in his browser and won't know how to clear the browser cache. Is there anything I can do to overcome this? If not, roughly how long will it be before browsers finally stop using their cached pages, so I can at least go back to my client and tell him that his clients will eventually start to see the new website?

    Read the article

  • Motherboard HDDPWR1 connector

    - by Eric Leschinski
    I need help identifying the name of a connector. I have a Gateway DX4870-UB318 computer. I opened the case and wanted to attach another hard drive, but to my surprise one existing SATA hard drive was connected to the motherboard by an odd power cable, plugged into a dedicated power header on the motherboard. What is the name of this adapter and where can I get another one?

    Clues: this computer was bought new in October 2013 from Best Buy, box number DX4870-UB318, and the Gateway folks won't divulge the type of motherboard it has nor give specs on it. On the wire itself is an identification code: H.35090NJ01-000. Next to the connector on the motherboard it says HDDPWR1, and the second one says HDDPWR2. This cable has two SATA power connectors and one mystery connector. The power supply has no Molex power cables and no SATA power connectors! This is the most bizarre hard drive power system I've seen. I guess the motherboard folks are trying to remove the burden on desktop power supplies of providing adapters (Molex, SATA, other) for CD drives and hard drives. Can someone put a name on that white, flat, 6-pin HDD power connector?

    My solution: I can buy a "SATA Power Y Splitter Cable" to provide more places to power SATA devices.

    Read the article

  • Visual Studio Setup and Deployment - D Drive

    - by JB_SO
    I'm creating a VS2005 setup and deployment installer, and I need to create some folders on the D: drive since the customer has partitioned their hard drive. I've created some folders in the File System view and hard-coded the 'DefaultLocation' parameter to point to the D: drive. Now my question is: is it possible to put in a 'Condition' parameter that will check whether the D: drive on the system the software is being installed on is (or is not) a CD drive? Thanks

    Read the article

  • Migrate Windows Server 2008 to a new hard disk 2

    - by MainMa
    Hi, a few weeks ago I already asked how to move a Windows Server 2008 installation to a new hard disk. Despite the previous answers, and two weeks lost trying to do it, I am still unable to move the OS to the new drive. What I tried:

    A backup/restore using Windows Backup. This never helped. First, I tried to back up, then copy the backup to a new drive, then restore. This results in a "The parameter is incorrect. (0x80070057)" error caused by a bug in Windows Backup. Recently, I attempted to back up to a network share, but I can't restore from it because of a "The network path was not found. (0x80070035)" error. Trying netsh interface ipv4 set address [...] does not work either (I saw at least three different errors, mostly "The interface is unknown.").

    A previously suggested solution using imagex from the Windows AIK results in a non-bootable disk after writing an image to it. When booting from the Windows 2008 installation disk (from USB), it finds that the HDD is not bootable and proposes to fix this, but then crashes, resulting in an unbootable USB flash disk (and the HDD stays unbootable).

    As I said in my previous question, cloning the hard disk drive gives an (of course) bootable disk, but Windows complains about hardware changes and cannot start.

    Now, can somebody suggest another way to move Windows Server 2008 to a new hard disk? Is it at least possible to do, or does any hard disk failure or change necessarily mean reinstalling the whole OS?

    Read the article

  • Acronis Disk Director AFTER Clone Disk error: PXE-E61: Media test failure, check cable

    - by Kairan
    I used Acronis Disk Director on my desktop, plugged in the laptop drive, a 240GB SSD (USB), and the new hard drive, a 500GB SSD (USB), and the copy seemed to be fine. I didn't see any error messages, but I didn't stare at it for 3 hours either. The disk clone of course included the Toshiba hidden restore partition, the primary partition (the C: drive) and the active (boot?) partition, and yes, I did check the box for "copy NT signature".

    The computer boots up fine most of the time, but it seems that when the computer goes to sleep (I believe it's sleep; it's hard to do much testing during school), hibernates or reboots, it will sometimes display this message:

        Intel(R) Boot Agent GE v1.3.52
        Copyright (C) 1997-2010, Intel Corporation
        PXE-E61: Media test failure, check cable
        PXE-M0F: Exiting Intel Boot Agent
        Insert system disk in drive. Press any key when ready...

    Of course any key does nothing but repeat a similar message. However, if I press the power button on the laptop (Toshiba Portege R705, Win 7 Pro 64-bit) it puts the computer into hibernation. After hibernating I press the power button again and it comes out of hibernation without any odd messages or the problems described above, so apparently that is my TEMPORARY fix.

    Another recent issue I noticed is that on occasion, when creating a new folder or modifying something in the system variables or other random areas, I will get a "The stub received bad data" message; I simply retry the task and it works. Perhaps these two issues are linked.

    Read the article

  • Copying large files from USB devices to the internal hard drive fails on Mac OS

    - by John M. P. Knox
    I have a second-generation 13" MacBook Air running Mac OS X 10.6.6 with a 2.13 GHz processor, 4 GB of RAM, and a 256 GB SSD hard disk. I often get failures when I attempt to copy a large file or large collection of files from an external USB drive (typically a "FireWire" generation Drobo) to the internal drive. The failure behaves almost exactly as if I had pulled the USB cable from the computer in mid-transfer: I get a warning that I have removed the hard disk improperly. After this event, the drive no longer appears mounted in the Finder, and I have to unplug and reinsert the USB cable to mount the drive again.

    I have also seen a similar problem when using Aperture 3 to import a large number of photos and videos from a USB CompactFlash card reader. The import will fail and I will have to unplug the card reader and import the missing items.

    Oddly, reversing the direction of the copy seems pretty reliable. I've never had a problem copying a large file to a USB device, meaning that I have quite a few large files which are stranded on my Drobo.

        Model Identifier: MacBookAir3,2
        Boot ROM Version: MBA31.0061.B01

    I have seen a similar issue reported on Apple's website: http://discussions.apple.com/thread.jspa?threadID=2648590&tstart=0 The only suggested resolutions there seem to be switching to another form of connectivity (e.g. FireWire, which does not exist on the MacBook Air), downgrading to Mac OS 10.6.4, or reverting the USB kernel extensions to the 10.6.4 versions: http://discussions.apple.com/message.jspa?messageID=12566073#12582956 I'm not too keen on the idea of downgrading kernel extensions. Does anyone know of a hardware revision without this issue that I can trade up to? Are there any other potential solutions out there?
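    Not a fix for the underlying USB dropouts, but a hedged workaround sketch: rsync can resume a partially copied folder after the drive remounts, so repeated runs eventually finish the transfer instead of starting over. The paths are placeholders, and the -E flag (the Apple-shipped rsync's option for preserving extended attributes and resource forks) is an assumption worth verifying on your build:

        # repeat after each disconnect; files that already copied completely are skipped
        rsync -avE --progress /Volumes/Drobo/BigFolder/ ~/Pictures/BigFolder/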

    Read the article

  • How to configure SCSI hard drives and RAID for Poweredge 2850 web server

    - by Saul
    I'm trying to set up a PowerEdge 2850 as a web server, but as a server novice it's causing me some confusion. It's a virgin install, so there's no data to be lost as yet, and I'd like to get the best arrangement for setting up Windows Server 2008. The box will run IIS and a mail and FTP server.

    The current physical arrangement of the hot-swap drives is:

        1  73GB     3  146GB    5  blank
        0  73GB     2  146GB    4  146GB (but flashes green, amber off)

    When I enter the PERC config screens on boot-up I've got:

        Raid Ch- 0 ID
        0  ONLIN  A00-00
        1  ONLIN  A00-01
        2  ONLIN  A01-00
        3  ONLIN  A01-01
        4  HOTSP

    I think that drives 0 and 1 are set to RAID 1 and drives 2 and 3 are also set to RAID 1; certainly I can see 2 logical drives, both RAID 1, of 69880MB and 139900MB. Now what I think I am getting here is that the two 73GB drives mirror each other and the two 146GB drives mirror each other? So by my noob thinking, if a drive fails I can pull it, insert a new one and it will re-duplicate from its matching pair? I think the flashing amber probably indicates a failing drive in slot 4; should that just be binned?

    What confuses me, coming from a home-user XP background, is that when I load up the Windows Server 2008 OS, under My Computer I only see a C: drive of about 70GB capacity, i.e. where's the 146GB drive? Any advice appreciated.

    Read the article

  • Macbook Pro won't boot from DVD with SSD

    - by Adam Carr
    Here's the timeline of events. I had a running MBP 17 Early 2011 (Thunderbolt) with an OWC Mercury Extreme Pro SSD 115GB drive. I installed Windows 7 via Boot Camp. I have done this multiple times before, and every time I need to format the Boot Camp partition before installing. I think this time I actually deleted the partition and then selected the free space to install. This worked fine for the most part, but I wasn't able to boot the Boot Camp partition using VMware Fusion. I gave up and used the Boot Camp Assistant to revert back to one Mac partition.

    I was getting some odd behavior, so I rebooted the machine. It then came up with a message saying there was no bootable partition. This makes me think (and still does) that installing Windows into the free space rather than the Boot Camp partition caused the Windows MBR boot loader to be installed incorrectly and mucked up the OS X installation.

    OK, fine, I can just reinstall. But I can't seem to boot from the original MBP installation DVD. I hold down C on boot but I never get past the all-grey screen. I hear the DVD drive spin up, but it eventually stops. When I put the original HD back in, everything works fine, but when I put the SSD in, I can't boot from the DVD drive.

    I have already set up an RMA with OWC to send back the drive, but considering the order of events I feel as though it isn't a hardware issue; I just can't figure out how to fix it. I can always send it back, but I figured I would check and see if anyone could offer some guidance/assistance before doing so.

    Read the article

  • Windows 7: Setting up backup to an external hard drive on another computer on the network

    - by seansand
    I have an external hard drive connected to a Windows 7 (Home Edition) computer. I have another computer (with Windows 7 Ultimate), and I want the Windows 7 Ultimate machine to back up to the same external hard drive, without having to disconnect the external hard drive and move it from the Home Edition PC.

    When I get to the "Set up backup" dialog within Windows 7, it asks me where to save the backup. I select "Save on a network". However, when I enter "\\computername\harddrivename" under Browse, the "OK" button remains grayed out. The button remains grayed out unless I also enter a username and password under "Network credentials", but the account I have on the other computer doesn't have a password. To un-gray the button I must enter a fake password, which lets me click "OK", but then obviously I get a "bad password" error.

    Does anyone know how to get around this problem? (It seems kind of ridiculous.) I made sure that the security settings for the external hard drive on the other computer grant full access to Everyone, so permissions are not the problem. I also thought about using HomeGroup instead of the regular security settings, but there is no obvious way to go about it that way either.

    Read the article

  • scan partition for bad blocks

    - by user22559
    Hello everyone. I have a hard disk with bad sectors on it. I want to partition the drive so that the partitions sit in the good part of the hard disk and the parts that have bad sectors are not used. The first ~20GB of the hard disk are good. Then comes a ~13GB part that is riddled with bad sectors. After that, the hard disk is good again, but at the very end there is a ~2GB part with bad sectors.

    I used an app called "HDTune" to get this information, and I created a 19GB C: partition at the beginning of the drive, then, skipping the 13GB of bad sectors, created the D: partition spanning the rest of the disk minus the last 2GB. The C: partition works well (I have been using it for a month and have had no errors whatsoever), but the D: partition has been giving me problems. Somehow, it seems that I have some bad sectors in the D: partition.

    I am looking for an app that scans the HDD, finds the bad blocks, and shows them in a map so I can see whether they are in the D: partition. Or an app that scans only a specified partition for bad sectors and then shows in a map where the bad sectors are within the partition. I want to know this so I can resize the D: partition so that it lies outside the bad area of the disk.
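    If booting a Linux live CD/USB on that machine is an option, badblocks can scan either the whole disk or a single partition and print the block numbers it finds, which maps directly onto the question of whether the D: partition overlaps the damaged region. A hedged sketch; the device names are examples, and the default scan is read-only, so it is non-destructive:

        # list the partitions so you know which device node corresponds to the D: partition
        sudo fdisk -l /dev/sda

        # read-only scan of just that partition; -s shows progress, -v reports each bad block found
        sudo badblocks -sv /dev/sda2 > bad-blocks-on-D.txt

        # or scan the whole disk and compare the reported blocks against the partition boundaries from fdisk
        sudo badblocks -sv /dev/sda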

    Read the article
