Search Results

Search found 10679 results on 428 pages for 'scan disk'.

Page 48/428 | < Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >

  • Script to copy files that are on CD but not on the hard disk to a new directory

    - by John22
    I need to copy files from a set of CDs that have a lot of duplicate content, both between discs and with what's already on my hard disk. The file names of identical files are not the same, and they sit in sub-directories with different names. I want to copy the non-duplicate files from the CDs into a new directory on the hard disk. I don't care about the sub-directories - I will sort that out later - I just want the unique files. I can't find software to do that - see my post at SuperUser http://superuser.com/questions/129944/software-to-copy-non-duplicate-files-from-cd-dvd Someone at SuperUser suggested I write a script using GNU's "find" and the Win32 version of some checksum tools. I glanced at that, but I have not done anything like it before, so I'm hoping something already exists that I can modify. I found a good program to delete duplicates, Duplicate Cleaner (it compares checksums), but it won't help me here: I'd have to copy all the CDs to disk first, each is probably about 80% duplicates, and I don't have room for that - I'd have to cycle through a few at a time, copying everything and then turning around and deleting 80% of it, working the hard drive a lot. Thanks for any help.
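
    A minimal sketch of the checksum approach suggested on SuperUser, written in Python instead of GNU find plus checksum tools (all paths below are placeholders, not from the question): hash every file already on the hard disk, then walk the mounted CD and copy only files whose content hash hasn't been seen yet.

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in chunks so large files don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_unique(cd_root, existing_roots, dest):
    """Copy files from cd_root into dest unless identical content already exists."""
    seen = set()
    for root in existing_roots:                  # hash everything already on the hard disk
        for p in Path(root).rglob("*"):
            if p.is_file():
                seen.add(file_hash(p))
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    for p in Path(cd_root).rglob("*"):           # then walk the CD
        if not p.is_file():
            continue
        digest = file_hash(p)
        if digest in seen:
            continue                             # duplicate content - skip
        seen.add(digest)                         # also catches duplicates within the CD set
        target = dest / p.name
        if target.exists():                      # same name, different content
            target = dest / f"{digest[:8]}_{p.name}"
        shutil.copy2(p, target)

# Hypothetical usage: one CD mounted on D:, compared against the Documents folder.
# copy_unique("D:/", ["C:/Users/John/Documents"], "C:/unique_from_cds")
```

    Run once per CD; because the set of seen hashes persists only for one run, hashing the destination directory as one of the existing_roots on later runs keeps previously copied files from being duplicated.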

    Read the article

  • How to load/save C++ class instance (using STL containers) to disk

    - by supert
    I have a C++ class representing a hierarchically organised data tree which is very large (~GB, basically as large as I can get away with in memory). It uses an STL list to store information at each node, plus iterators to other nodes. Each node has only one parent, but 0-10 children. Abstracted, it looks something like:

        struct node {
        public:
            node_list_iterator parent;            // iterator to a single parent node
            double node_data_array[X];
            map<int,node_list_iterator> children; // iterators to child nodes
        };

        class strategy {
        private:
            list<node> tree;   // hierarchically linked list of nodes
            struct some_other_data;
        public:
            void build();      // build the tree
            void save();       // save the tree to disk
            void load();       // load the tree from disk
            void use();        // use the tree
        };

    I would like to implement load() and save() to disk, and it should be fairly fast. However, the obvious problems are: I don't know the size in advance; the data contains iterators, which are volatile; and my ignorance of C++ is prodigious. Could anyone suggest a pure C++ solution please?
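
    A common way to deal with the iterator problem, sketched here in Python only to show the shape of the idea (the asker wants C++, and none of the names below come from the question): before writing, replace every iterator with the node's integer position in the list; after reading, rebuild the references from those integers. In C++ the same mapping can be done with std::distance at save time and a vector of iterators built in one pass at load time.

```python
import pickle

class Node:
    """Stand-in for the C++ node: payload plus references to parent and children."""
    def __init__(self, data):
        self.data = data        # plays the role of node_data_array
        self.parent = None      # reference to another Node, or None for the root
        self.children = {}      # key -> child Node

def save(nodes, path):
    """nodes: list of Node; a node's identity on disk is its position in this list."""
    index_of = {id(n): i for i, n in enumerate(nodes)}
    flat = [{"data": n.data,
             "parent": None if n.parent is None else index_of[id(n.parent)],
             "children": {k: index_of[id(c)] for k, c in n.children.items()}}
            for n in nodes]
    with open(path, "wb") as f:
        pickle.dump(flat, f)

def load(path):
    with open(path, "rb") as f:
        flat = pickle.load(f)
    nodes = [Node(rec["data"]) for rec in flat]
    for n, rec in zip(nodes, flat):             # second pass: re-link by index
        n.parent = None if rec["parent"] is None else nodes[rec["parent"]]
        n.children = {k: nodes[i] for k, i in rec["children"].items()}
    return nodes
```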

    Read the article

  • XFS disk becomes unavailable after a while

    - by Guard
    Ubuntu 12.04 (the same happened on 11.10 before upgrading). WD MyBook, 2TB, no RAID (or RAID0, not completely sure; in any case no mirroring, both 1TB disks are in use, mounted as a single device). Formatted as XFS, normally used for big movie files. Connected via FireWire 800. At some point the LED started going up and down as it does when constantly reading/writing, and the device gives an access error. When unplugged (cable, then holding the power button for a while, then unplugging the power) and re-connected, it becomes available again. xfs_check reports nothing. xfs_repair did something, but doesn't look like it fixed any error. Then, after a massive read (checking a 1.5GB torrent file for integrity), it becomes unavailable again. Any ideas what's wrong? Drives? Cables? Motherboard? OS?

    UPD: not sure how relevant this is, but here is the dmesg output:

        [14380.632816] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        [14380.633356] SGI XFS Quota Management subsystem
        [14421.812220] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
        [14441.890596] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
        [14441.896858] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
        [14453.895347] firewire_core: created device fw1: GUID 0090a99500a35518, S400, 9 config ROM retries
        [14453.904818] scsi6 : SBP-2 IEEE-1394
        [14453.905014] scsi7 : SBP-2 IEEE-1394
        [14454.139993] firewire_sbp2: fw1.0: logged in to LUN 0000 (0 retries)
        [14454.158769] scsi 6:0:0:0: Direct-Access WD My Book 1015 PQ: 0 ANSI: 4
        [14454.159251] sd 6:0:0:0: Attached scsi generic sg3 type 0
        [14454.162391] firewire_sbp2: fw1.1: logged in to LUN 0001 (0 retries)
        [14454.167453] sd 6:0:0:0: [sdc] 3907017568 512-byte logical blocks: (2.00 TB/1.81 TiB)
        [14454.178822] sd 6:0:0:0: [sdc] Write Protect is off
        [14454.178826] sd 6:0:0:0: [sdc] Mode Sense: 10 00 00 00
        [14454.186830] scsi 7:0:0:1: Enclosure WD My Book Device 1015 PQ: 0 ANSI: 4
        [14454.186995] scsi 7:0:0:1: Attached scsi generic sg4 type 13
        [14454.190078] sd 6:0:0:0: [sdc] Cache data unavailable
        [14454.190087] sd 6:0:0:0: [sdc] Assuming drive cache: write through
        [14454.202176] sd 6:0:0:0: [sdc] Cache data unavailable
        [14454.202185] sd 6:0:0:0: [sdc] Assuming drive cache: write through
        [14454.239940] sdc: [mac] sdc1 sdc2 sdc3 sdc4
        [14454.271262] sd 6:0:0:0: [sdc] Cache data unavailable
        [14454.271270] sd 6:0:0:0: [sdc] Assuming drive cache: write through
        [14454.271354] sd 6:0:0:0: [sdc] Attached SCSI disk
        [14454.272149] ses 7:0:0:1: Attached Enclosure device
        [14606.090024] XFS (sdc3): Mounting Filesystem
        [14612.048343] XFS (sdc3): Starting recovery (logdev: internal)
        [14620.697636] XFS (sdc3): Ending recovery (logdev: internal)
        [14748.120957] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
        [14748.120963] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO
        [14752.568382] uhci_hcd 0000:00:1a.0: PCI INT A disabled
        [14752.568579] uhci_hcd 0000:00:1a.1: PCI INT B disabled
        [14752.568738] ehci_hcd 0000:00:1a.7: PCI INT C disabled
        [14752.568779] ehci_hcd 0000:00:1a.7: PME# enabled
        [14752.584526] uhci_hcd 0000:00:1d.1: PCI INT B disabled
        [14752.584689] uhci_hcd 0000:00:1d.2: PCI INT C disabled
        [14752.680079] ehci_hcd 0000:00:1a.7: BAR 0: set to [mem 0xe4641000-0xe46413ff] (PCI address [0xe4641000-0xe46413ff])
        [14752.680104] ehci_hcd 0000:00:1a.7: restoring config space at offset 0xf (was 0x300, writing 0x30b)
        [14752.680136] ehci_hcd 0000:00:1a.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002)
        [14752.680170] ehci_hcd 0000:00:1a.7: PME# disabled
        [14752.680182] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18
        [14752.680190] ehci_hcd 0000:00:1a.7: setting latency timer to 64
        [14752.710334] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [14752.710342] uhci_hcd 0000:00:1a.0: setting latency timer to 64
        [14752.749186] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
        [14752.749194] uhci_hcd 0000:00:1a.1: setting latency timer to 64
        [14752.790231] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 22 (level, low) -> IRQ 22
        [14752.790239] uhci_hcd 0000:00:1d.1: setting latency timer to 64
        [14752.829170] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
        [14752.829178] uhci_hcd 0000:00:1d.2: setting latency timer to 64

    Read the article

  • Ignore errors when scanning for files in C:\

    - by Shane
    I am trying to search the C:\ drive for all files with a certain extension. I am using the following code, which works fine; however, when it encounters an error the whole process stops rather than continuing with the scan. (It runs in a BackgroundWorker, hence the Invoke.)

        Private Sub ScanFiles(ByVal rootFolder As String, ByVal fileExtension As String)
            'Determine if the current folder contains any sub folders
            Dim subFolders() As String = System.IO.Directory.GetDirectories(rootFolder)
            For Each subFolder As String In subFolders
                ScanFiles(subFolder, fileExtension)
            Next
            For Each file As String In System.IO.Directory.GetFiles(rootFolder, fileExtension)
                lb.BeginInvoke(New AddValue(AddressOf AddItems), file)
            Next
        End Sub

    How can I make this code continue once an error is encountered? Thanks for your help.
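
    The usual fix is to wrap the directory enumeration in an exception handler so an unreadable folder is skipped instead of killing the whole recursion - in VB.NET that means a Try/Catch around GetDirectories/GetFiles. A rough sketch of the same idea in Python (names and paths are placeholders, not from the question):

```python
import os

def scan_files(root_folder, file_extension, on_file):
    """Recursively report files ending in file_extension, skipping unreadable folders."""
    def skip(exc):                      # called by os.walk when a directory can't be listed
        print(f"skipped: {exc}")        # e.g. access denied on system folders
    for dirpath, _dirnames, filenames in os.walk(root_folder, onerror=skip):
        for name in filenames:
            if name.lower().endswith(file_extension.lower()):
                on_file(os.path.join(dirpath, name))

# Hypothetical usage: collect every .log file under C:\ without stopping on errors.
# found = []
# scan_files("C:\\", ".log", found.append)
```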

    Read the article

  • Event on SQL Server 2008 Disk IO and the new Complex Event Processing (StreamInsight) feature in R2

    - by tonyrogerson
    Allan Mitchell and I are doing a double act. Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this exciting new technology; on top of that I'll be talking about SQL Server Disk IO - well, "Disk" might not be relevant anymore because I'll also be talking about SSD and Fusion IO - basically I'll be covering the underpinnings: making sure you understand them and get them right, how to monitor, etc. If you have any specific problems or questions just ping me an email: [email protected]. To register for the event see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx

    18:15 SQL Server and Disk IO - Tony Rogerson, SQL Server MVP (Tony's Blog; Tony on Twitter)
    In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSD has, and other options for throughput enhancement like Fusion IO. He will look at the effect fragmentation has and how to minimise its impact, look at the file structure of a database, and talk about the benefits multiple files and file groups bring. We will also touch on Database Mirroring and the effect that has on throughput, and how to get a feel for the throughput you should expect.

    19:15 Break

    19:45 Complex Event Processing (CEP) - Allan Mitchell, SQL Server MVP (http://sqlis.com/sqlis)
    StreamInsight is Microsoft’s first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology. I will show how and why it is useful. I will get us used to some new terminology, but best of all I will show just how easy it is to start building your first CEP/ESP application.

    Read the article

  • No Rush, Defragging that Drive can Wait [Humorous Image]

    - by Asian Angel
    That drive is only fragmented a little bit…nothing to worry about there. "You should defragment this volume." Ya think?! [via Fail Desk]

    Read the article

  • How can I prevent [flush-8:16] and [jbd2/sdb2-8] from causing GUI unresponsiveness?

    - by ændrük
    Approximately twice a week, the entire graphical interface will lock up for about 10-20 seconds without warning while I am doing simple tasks such as browsing the web or writing a paper. When this happens, GUI elements do not respond to mouse or keyboard input, and the System Monitor applet displays 100% IOWait processor usage. Today, I finally happened to have GNOME Terminal already open when the problem started. Despite other applications such as Google Chrome, Firefox, GNOME Do, and GNOME Panel being unresponsive, the terminal was usable. I ran iotop and observed that commands named [flush-8:16] and [jbd2/sdb2-8] were alternately using 99.99% IO. What are these, and how can I prevent them from causing GUI unresponsiveness?

    Details:

        $ mount | grep ^/dev
        /dev/sda1 on / type ext4 (rw,noatime,discard,errors=remount-ro,commit=0)
        /dev/sdb2 on /home type ext4 (rw,commit=0)

    /dev/sda is an OCZ-VERTEX2 and /dev/sdb is a WD10EARS. Here is dumpe2fs /dev/sdb2, if it's relevant.

    Read the article

  • How do I free up more space in /boot?

    - by user6722
    My /boot partition is nearly full and I get a warning every time I reboot my system. I already deleted old kernel packages (linux-headers...), actually I did that to install a newer kernel version that came with the automatic updates. After installing that new version, the partition is nearly full again. So what else can I delete? Are there some other files associated to the old kernel images? Here is a list of files that are on my /boot partition: :~$ ls /boot/ abi-2.6.31-21-generic lost+found abi-2.6.32-25-generic memtest86+.bin abi-2.6.38-10-generic memtest86+_multiboot.bin abi-2.6.38-11-generic System.map-2.6.31-21-generic abi-2.6.38-12-generic System.map-2.6.32-25-generic abi-2.6.38-8-generic System.map-2.6.38-10-generic abi-3.0.0-12-generic System.map-2.6.38-11-generic abi-3.0.0-13-generic System.map-2.6.38-12-generic abi-3.0.0-14-generic System.map-2.6.38-8-generic boot System.map-3.0.0-12-generic config-2.6.31-21-generic System.map-3.0.0-13-generic config-2.6.32-25-generic System.map-3.0.0-14-generic config-2.6.38-10-generic vmcoreinfo-2.6.31-21-generic config-2.6.38-11-generic vmcoreinfo-2.6.32-25-generic config-2.6.38-12-generic vmcoreinfo-2.6.38-10-generic config-2.6.38-8-generic vmcoreinfo-2.6.38-11-generic config-3.0.0-12-generic vmcoreinfo-2.6.38-12-generic config-3.0.0-13-generic vmcoreinfo-2.6.38-8-generic config-3.0.0-14-generic vmcoreinfo-3.0.0-12-generic extlinux vmcoreinfo-3.0.0-13-generic grub vmcoreinfo-3.0.0-14-generic initrd.img-2.6.31-21-generic vmlinuz-2.6.31-21-generic initrd.img-2.6.32-25-generic vmlinuz-2.6.32-25-generic initrd.img-2.6.38-10-generic vmlinuz-2.6.38-10-generic initrd.img-2.6.38-11-generic vmlinuz-2.6.38-11-generic initrd.img-2.6.38-12-generic vmlinuz-2.6.38-12-generic initrd.img-2.6.38-8-generic vmlinuz-2.6.38-8-generic initrd.img-3.0.0-12-generic vmlinuz-3.0.0-12-generic initrd.img-3.0.0-13-generic vmlinuz-3.0.0-13-generic initrd.img-3.0.0-14-generic vmlinuz-3.0.0-14-generic Currently, I'm using the 3.0.0-14-generic kernel.
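
    Every kernel version in that listing owns a matching set of abi-, config-, initrd.img-, System.map-, vmcoreinfo- and vmlinuz- files, so old versions can be removed as a group (ideally through the package manager rather than by deleting files by hand). A rough, purely illustrative Python 3 sketch that groups /boot by version and flags everything other than the running kernel:

```python
import glob
import os
import re
from collections import defaultdict

# Group /boot files by the kernel version embedded in their names
# (abi-, config-, initrd.img-, System.map-, vmcoreinfo-, vmlinuz- all share it).
running = os.uname().release                      # e.g. "3.0.0-14-generic"
by_version = defaultdict(list)
for path in glob.glob("/boot/*"):
    m = re.search(r"(\d+\.\d+\.\d+-\d+-[\w-]+)$", os.path.basename(path))
    if m:
        by_version[m.group(1)].append(path)

for version, files in sorted(by_version.items()):
    size_mb = sum(os.path.getsize(f) for f in files) / 1e6
    note = "running kernel - keep" if version == running else "old kernel"
    print(f"{version}: {len(files)} files, {size_mb:.1f} MB ({note})")
```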

    Read the article

  • Cannot delete .Trash-503 directory - it returns "$RECYCLE.BIN.trashinfo: Input/output error"

    - by Parto
    I cannot delete the .Trash-503 folder via GUI or terminal; it returns a $RECYCLE.BIN.trashinfo: Input/output error. Not even sudo rm -r or even a simple ls works in that trash directory. Check the terminal output below:

        subroot@subroot:~$ cd /media/xxxxx/
        subroot@subroot:/media/xxxxx$ rm .Trash-503/
        rm: cannot remove `.Trash-503/': Is a directory
        subroot@subroot:/media/xxxxx$ rm -r .Trash-503/
        rm: cannot remove `.Trash-503/info/$RECYCLE.BIN.trashinfo': Input/output error
        rm: cannot remove `.Trash-503/info/found.000.trashinfo': Input/output error
        rm: cannot remove `.Trash-503/info': Directory not empty
        subroot@subroot:/media/xxxxx$ sudo rm -r .Trash-503/
        [sudo] password for subroot:
        rm: cannot remove `.Trash-503/info/$RECYCLE.BIN.trashinfo': Input/output error
        rm: cannot remove `.Trash-503/info/found.000.trashinfo': Input/output error
        subroot@subroot:/media/xxxxx$ cd .Trash-503/
        subroot@subroot:/media/xxxxx/.Trash-503$ ls
        info
        subroot@subroot:/media/xxxxx/.Trash-503$ cd info/
        subroot@subroot:/media/xxxxx/.Trash-503/info$ ls
        ls: cannot access $RECYCLE.BIN.trashinfo: Input/output error
        ls: cannot access found.000.trashinfo: Input/output error
        found.000.trashinfo  $RECYCLE.BIN.trashinfo
        subroot@subroot:/media/xxxxx/.Trash-503/info$

    What's going on here and how can I delete this folder?

    EDIT: I tried checking and repairing the partition using gparted, only to get this error message:

        ERROR: Filesystem check failed!
        ERROR: 264 clusters are referenced multiple times.
        NTFS is inconsistent. Run chkdsk /f on Windows then reboot it TWICE!
        The usage of the /f parameter is very IMPORTANT!
        No modification was and will be made to NTFS by this software until it gets repaired.

    I don't have Windows installed, so how can I run chkdsk /f from Ubuntu?

    Read the article

  • Where did my free space go?

    - by Ari B. Friedman
    I have a storage drive (2TB) and an OS drive (90GB SSD). I've run out of space on the OS drive:

        /$ df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sdb1        72G   72G     0 100% /
        udev            5.9G   12K  5.9G   1% /dev
        tmpfs           2.4G  1.2M  2.4G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            5.9G  428K  5.9G   1% /run/shm
        /dev/sda1       1.9T  639G  1.2T  37% /media/StorageDrive

    So be it. But when I attempt to figure out where the space has gone, I cannot find anything remotely approaching the capacity of the drive:

        /$ sudo du -h -d 1
        du: cannot access `./media/StorageDrive/home/ari/.gvfs': Permission denied
        675G    ./media
        2.3G    ./var
        0       ./proc
        7.0M    ./tmp
        27M     ./boot
        4.0K    ./lib64
        12K     ./dev
        44M     ./home
        16K     ./lost+found
        8.0M    ./sbin
        223M    ./lib
        4.0K    ./selinux
        1.4M    ./run
        140K    ./root
        8.8M    ./bin
        4.0K    ./mnt
        38M     ./etc
        8.0K    ./srv
        4.8G    ./usr
        65M     ./opt
        0       ./sys
        682G    .

    Note that the difference between the total (682G) and the mounted drives in /media (675G) is only about 9G. How are 72G being used? Where is this dark matter hiding?
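
    For comparison, here is a rough sketch (hypothetical, not from the question) of the measurement du -x would give: total the file sizes under / while refusing to cross onto other devices. Whatever gap remains between this number and what df reports is typically space hidden underneath mount points or held by deleted-but-still-open files; running it with sudo gives the closest figure.

```python
import os

def same_device(path, dev):
    try:
        return os.lstat(path).st_dev == dev
    except OSError:
        return False

def usage_on_root_filesystem(root="/"):
    """Sum apparent file sizes under root without descending into other mounts."""
    root_dev = os.lstat(root).st_dev
    total = 0
    for dirpath, dirnames, filenames in os.walk(root, onerror=lambda e: None):
        # Prune directories that live on a different device (i.e. separate filesystems).
        dirnames[:] = [d for d in dirnames
                       if same_device(os.path.join(dirpath, d), root_dev)]
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass                   # unreadable or vanished file - ignore
    return total

# Apparent sizes, so it will not exactly match du's block counts, but it is close enough
# to show how much of the space df reports is actually reachable from /.
print(f"{usage_on_root_filesystem('/') / 1e9:.1f} GB of files reachable on /")
```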

    Read the article

  • NFS server generating "invalid extent" on EXT4 system disk?

    - by Stephen Winnall
    I have a server running Xen 4.1 with Oneiric in the dom0 and each of the 4 domUs. The system disks of the domUs are LVM2 volumes built on top of an mdadm RAID1. All the domU system disks are EXT4 and are created using snapshots of the same original template. 3 of them run perfectly, but one (called s-ub-02) keeps on being remounted read-only. A subsequent e2fsck results in a single "invalid extent" diagnosis: e2fsck 1.41.14 (22-Dec-2010) /dev/domu/s-ub-02-root contains a file system with errors, check forced. Pass 1: Checking inodes, blocks, and sizes Inode 525418 has an invalid extent (logical block 8959, invalid physical block 0, len 0) Clear<y>? yes Pass 2: Checking directory structure Pass 3: Checking directory connectivity Pass 4: Checking reference counts Pass 5: Checking group summary information /dev/domu/s-ub-02-root: 77757/655360 files (0.3% non-contiguous), 360592/2621440 blocks The console shows typically the following errors for the system disk (xvda2): [101980.903416] EXT4-fs error (device xvda2): ext4_ext_find_extent:732: inode #525418: comm apt-get: bad header/extent: invalid extent entries - magic f30a, entries 12, max 340(340), depth 0(0) [101980.903473] EXT4-fs (xvda2): Remounting filesystem read-only I have created new versions of the system disk. The same thing always happens. This, and the fact that the disk is ultimately on a RAID1, leads me to preclude a hardware disk error. The only obvious distinguishing feature of this domU is the presence of nfs-kernel-server, so I suspect that. Its exports file looks like this: /exports/users 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check) /exports/media/music 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check) /exports/media/pictures 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check) /exports/opt 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check) /exports/users and /exports/opt are LVM2 volumes from the same volume group as the system disk. /exports/media is an EXT2 volume. (There is an issue where clients see /exports/media/pictures as being a read-only volume, which I mention for completeness.) With the exception of the read-only problem, the NFS server appears to work correctly under light load for several hours before the "invalid extent" problem occurs. There are no helpful entries in /var/log. All of a sudden, no more files are written, so you can see when the disk was remounted read-only, but there is no indication of what the cause might be. Can anyone help me with this problem? Steve

    Read the article

  • /tmp shows 690 MB full, actual size 72 KB - why?

    - by Ankit
    Why is /tmp diretory on my system showing 690 Mb full, whereas du -sh /tmp shows only 72K full. drwxrwxrwt 2 lightdm lightdm 4096 Aug 29 21:49 at-spi2 drwx------ 2 ankit ankit 4096 Aug 29 21:50 keyring-0JTfoY drwx------ 2 ankit ankit 4096 Aug 29 21:44 keyring-rChLLL drwx------ 2 root root 16384 Jul 22 02:10 lost+found drwx------ 2 ankit ankit 4096 Jan 1 1970 orbit-ankit drwx------ 2 lightdm lightdm 4096 Aug 29 21:50 pulse-2L9K88eMlGn7 drwx------ 2 root root 4096 Aug 29 21:44 pulse-PKdhtXMmr18n drwx------ 2 ankit ankit 4096 Aug 29 21:50 pulse-zR1TZUAZfmQW drwx------ 2 ankit ankit 4096 Aug 29 21:44 ssh-dlslOXOq2203 drwx------ 2 ankit ankit 4096 Aug 29 21:50 ssh-MrQQVRyy3316 -rw------- 1 ankit ankit 0 Aug 29 21:45 tmp0qnNG4 -rw------- 1 ankit ankit 0 Aug 29 21:50 tmpVvSMt6 -rw------- 1 ankit ankit 0 Aug 29 21:49 tmpy9Gadz -rw-rw-r-- 1 lightdm lightdm 0 Aug 29 21:44 unity_support_test.0 ankit@duster:/tmp$ df -h df: `/home/ankit/.gvfs': Transport endpoint is not connected Filesystem Size Used Avail Use% Mounted on /dev/sda1 79G 11G 65G 14% / udev 2.9G 4.0K 2.9G 1% /dev tmpfs 1.2G 868K 1.2G 1% /run none 5.0M 0 5.0M 0% /run/lock none 2.9G 220K 2.9G 1% /run/shm /dev/sda7 38G 690M 35G 2% /tmp /dev/sda5 93G 26G 63G 30% /home /dev/sda6 93G 1.6G 87G 2% /boot /dev/sda3 154G 69G 78G 48% /home/mount_150 ankit@duster:/tmp$ ankit@duster:/tmp$ ankit@duster:/tmp$ sudo du -sh /tmp/ 72K

    Read the article

  • How do you force Ubuntu to unmount a disk when you press the eject button on an optical drive?

    - by Michael Curran
    When upgrading my hardware, I also upgraded to Ubuntu 10.10. On my previous system (10.04 and earlier), when I ejected a disk from the optical drive, the subfolder in the /media directory was automatically removed. On my new 10.10 system, if I don't eject the disk using the "eject" command within the system, the disk remains mounted, even after a new disk is inserted. The new drive is a Blu-ray drive, but I haven't noticed any other problems with it. Normally this isn't a problem, but it makes installing applications that are spread over multiple CDs (e.g. Wine) more difficult in many cases. Any advice?

    Read the article

  • How can I make an unmounted / unmountable NTFS disk not show up in the Nautilus devices area?

    - by Dennis
    I have a feeling that my /etc/fstab is a real mish-mash, and I don't remember how it got that way. First of all, it looks like this:

        UUID=9EB80807B807DD21 /media/Storage ntfs-3g users 0 0
        UUID=a60397fd-964a-45b1-ad35-53c8a4bee010 / ext4 defaults 0 1
        UUID=1764825d-b8ba-4620-b3b0-e979b6f4f5c4 swap swap sw 0 0
        UUID=255DA1E406E29DBC /media/sda2 ntfs-3g defaults 0 0
        UUID=2CCCF161CCF1262C /mnt/sda1 ntfs-3g umask=000 0 0
        /dev/fd0 /media/floppy0 vfat noauto 0 0

    I started with an old XP install on disk /dev/sda that I don't use anymore but didn't want to delete, so I shrunk the XP partition, added an NTFS partition that would be common to both systems (labeled "Common" in XP), then installed Lucid on an extended ext4 partition. On this disk, of course, the ext4 system partition comes up as /; the go-between partition auto-mounts on /media/sda1 but shows up in Nautilus as COMMOM, while the XP system disk does not show up in Nautilus at all, but I can get to it by navigating to /mnt/sda1. A second hard drive (/dev/sdb) that I stuck in was already formatted NTFS with a bunch of stuff and labeled "Storage". It auto-mounts to /media/Storage, but another, un-mounted disk also called Storage shows up in the Nautilus device area and can't be mounted (here and in "Places" are the only times it appears). I would primarily like this non-existent (or already-mounted, depending on how you look at it) disk not to show up, but I wouldn't mind an explanation of why one labeled partition auto-mounts to a /media mount point but shows up by label, one does not show up as mounted at all but mounts to a /mnt mount point and is there for navigation, and one is mounted to a directory with the same name as the label. I would love some consistency / direction on what is proper in this circumstance. No doubt I caused this with the fstab, but I really don't remember what my rationale was if I edited it manually.

    Read the article

  • "A hard disk may be failing" , but no additional info via gdu-notification-deamon (suggestions?)

    - by blunders
    Got this message after logging in: "A hard disk may be failing. One or more hard disks report health problems. Click the icon to get more information." On WUBI and 10.04, there was no icon to click, and clicking on the message just made it go away. After rebooting, the message did not appear again. I've got everything on all my hard drives in duplicate, so I'm not super worried about a disk failure, though I am wondering why the message had no info on which disk it thought had problems. Suggestions?

    Read the article

  • Wubi install: How do I increase swap size

    - by Diogenes Lantern
    I am trying to increase the swap file size on my Wubi install. I followed the answer here:

        sudo su
        swapoff -a
        cd /host/ubuntu/disks/
        mv swap.disk swap.disk.bak
        dd if=/dev/zero of=swap.disk bs=1024 count=2097152
        mkswap swap.disk
        swapon -a
        free -m

    until I reached mv swap.disk swap.disk.bak, at which point I got the following:

        root@ubuntu:/host/ubuntu/disks# mv swap.disk swap.disk.bak
        mv: cannot move `swap.disk' to `swap.disk.bak': Operation not permitted

    My 256 MB of swap space is all used up. I would like a total of twice that. Is there a method of setting it that would not involve guesswork on my part?

    Read the article

  • New HDD formatting on ext4: root permission

    - by Carlos Salmeron
    OK people, good evening. I have a new 80GB HDD that I want to use as backup storage for my current system (14.04), not a server. I formatted it with GParted but I just can't write to it: when I check the permissions it tells me that only root can write/create on it. I logged in as root and tried to change the permissions, and I can't do that either. I have searched everywhere for an answer without finding one: is there a way to format the drive and use it with my own user's permissions? I don't want it on NTFS. I have searched these forums, but the only answer I found is to format it as NTFS. Thank you in advance.

    Read the article

  • Internal HDs that don't contain the OS aren't accessible until I manually browse them

    - by Hrafn
    So I have 4 internal hard drives, one of which contains the OS (Ubuntu 12.04); all are ext4. After starting the computer, and before I have tried to access the drives (file manager, terminal, etc.), it seems the drives haven't been mounted - the "Disks" utility confirms they aren't mounted. Programs that try to access the drives during startup throw an error: for example, my music player can't find its library and my note-taking software can't find its database. But after opening a drive in the file manager, everything works. I've checked SMART on all the disks and everything is OK. Any and all ideas would be appreciated.

    Read the article

  • Meaning of the free space indication in Deluge

    - by Tjae Beamon
    Recently I installed Ubuntu 12.04 using Wubi alongside my current Windows Vista. I have already installed all 265 updates from the Ubuntu Software Center and downloaded Deluge from there. My hard drive is 80GB according to the Disk Usage Analyzer, which also says 31.2GB used and 47.8GB free. The confusion comes when I run Deluge: at the bottom it says 2.0GB free space. Is that 2.0GB just a limit set by the torrent client that can be changed, or am I limited to just that 2.0GB?

    Read the article

  • Nautilus statusbar visibility - Quickly check free space

    - by Jeremy
    In prior versions, I would open Nautilus and check the statusbar, which would tell me how much free space there is. Now, the statusbar isn't shown by default. I know you can enable it from the View menu, but 99% of users won't do that (and I'd rather not do that, if possible). So, is there some new recommended way to keep tabs on hard drive usage? Or is there maybe some other method that I should have been using in the past but never noticed?
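
    Not a Nautilus setting, just a hedged aside: if a quick check from a script is acceptable, Python's standard shutil.disk_usage reports the same totals the old statusbar showed (the path below is only an example).

```python
import shutil

# Total/used/free space for the filesystem that contains the given path.
usage = shutil.disk_usage("/home")
gib = 1024 ** 3
print(f"total {usage.total / gib:.1f} GiB, "
      f"used {usage.used / gib:.1f} GiB, "
      f"free {usage.free / gib:.1f} GiB")
```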

    Read the article

  • Benchmarking a file server

    - by Joel Coel
    I'm working on building a new file server... a simple Windows Server box with a few terabytes of disk space to share on the LAN. Pain of current hard drive prices aside :( -- I would like to get some benchmarks for this device under load compared to our old server. The old server was installed in 2005 and had five 136GB 10K disks in RAID 5. The new server has eight 1TB disks in two RAID 10 volumes (plus a hot spare for each volume), but they're only 7.2K RPM, and of course with a much larger cache size. I'd like to get an idea of the performance expectations of the new server relative to the old. Where do I get started? I'd like to know both the raw potential under different kinds of load for each server, as well as an idea of what our real-world load looks like and how it will translate. Will disk load even matter, or will performance be driven more by the network connection? I could probably fumble through some disk I/O and wait counters in Performance Monitor, but I don't really know what to look for, which counters to watch, or for how long and when. FWIW, I'm expecting a nice improvement because of the benefits of having two different volumes and the better RAID 10 performance vs RAID 5, in spite of using slower disks... but I'd like to get an idea of how much.
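
    As a rough first number (a hedged sketch only, not a replacement for a proper benchmark tool or the Performance Monitor counters mentioned in the question): time a large sequential write and read on each server. The file path and sizes below are placeholders, and because of OS caching the read figure only means something if the test file is much larger than RAM.

```python
import os
import time

def sequential_throughput(path, size_mb=4096, block_kb=1024):
    """Write then read a test file sequentially; returns (write_MBps, read_MBps)."""
    block = os.urandom(block_kb * 1024)
    blocks = size_mb * 1024 // block_kb

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())            # make sure the data actually reached the disk
    write_mbps = size_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mbps = size_mb / (time.perf_counter() - start)

    os.remove(path)
    return write_mbps, read_mbps

if __name__ == "__main__":
    # Placeholder path: point it at the volume (or UNC share) under test.
    w, r = sequential_throughput(r"D:\benchmark.tmp")
    print(f"sequential write: {w:.0f} MB/s, sequential read: {r:.0f} MB/s")
```

    A file server's real load is mostly smaller, more random I/O, so treat this only as a coarse sanity check when comparing the two machines.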

    Read the article

  • What causes "A disk read error occurred, Press Ctrl + Alt + Del to restart"?

    - by Mehrdad
    I have a virtual machine containing Windows XP SP3. When I resized the VHD file (and the embedded partition), and tried booting, I got: A disk read error occurred Press Ctrl + Alt + Del to restart Some notes: FixBoot and FixMBR don't help. ChkDsk doesn't help. The partition is indeed active. The partition starts at sector 63 (it also did so before the problem) of cylinder 1, head 1, and is marked as type 0x07 (NTFS) My host OS reads the VHD and the partition completely fine I'm interested in knowing the cause rather than the fix. So "re-format the disk", "reinstall Windows", etc. aren't valid solutions. It's a virtual machine after all... I have nothing to lose, so I don't care about fixing it. I just want to know what's causing this problem, in case I run into it again on a physical machine (which I have done before). More info: The layout of the original, dynamic VHD (which works correctly): +-----------------------------------------------------------------------------+ ¦ Disk: 3 MBR/GPT: MBR ¦ ¦ Size: 127.00GB CHS: 16578 255 63 ¦ ¦ Sectors: 266338304 Disk Signature: 0xEE3EEE3E ¦ ¦ Partitions: 1 Partition Order: 1 ¦ ¦ Media Type: Fixed Interface: SCSI ¦ ¦ Description: Msft Virtual Disk ¦ +-----------------------------------------------------------------------------¦ ¦Pos Idx Type/Name Size Boot Hide Start Sector Total Sectors DL Vol Label ¦ +--- --- --------- ---- ---- ---- -------------- -------------- -- -----------¦ ¦ 1 1 07-NTFS 1.5G Yes No 63 3,148,677 F: <None> ¦ +-----------------------------------------------------------------------------+ The layout of the resized, fixed-size VHD (which doesn't work): +-----------------------------------------------------------------------------+ ¦ Disk: 3 MBR/GPT: MBR ¦ ¦ Size: 1.50GB CHS: 196 255 63 ¦ ¦ Sectors: 3149824 Disk Signature: 0xEE3EEE3E ¦ ¦ Partitions: 1 Partition Order: 1 ¦ ¦ Media Type: Fixed Interface: SCSI ¦ ¦ Description: Msft Virtual Disk ¦ +-----------------------------------------------------------------------------¦ ¦Pos Idx Type/Name Size Boot Hide Start Sector Total Sectors DL Vol Label ¦ +--- --- --------- ---- ---- ---- -------------- -------------- -- -----------¦ ¦ 1 1 07-NTFS 1.5G Yes No 63 3,148,677 F: <None> ¦ +-----------------------------------------------------------------------------+

    Read the article

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution, but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete - however, it's acting pretty strange with a few files (note the company name has been changed in paths to protect the innocent):

        @ECHO OFF
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        SET prefix="E:\backup_log-"
        SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
        SET dest_dir="E:\dest"
        SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
        SET what_to_copy=/COPY:DAT /MIR
        SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL
        ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
        :END

    Normally the source would just be M:\Company name data\, but I altered the script a bit to test the problem. The following files in the source are not copied to the dest:

        Someclient\SONICP~1.DOC
        Someclient\SONICP~2.DOC
        Someclient\SONICP~3.DOC

    However, files in the same directory named:

        TIMESH~1.XLS
        TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied without any trouble, and they certainly weren't open when I ran Robocopy, so it's not a locking issue. Robocopy is running as administrator, so it's not a permissions issue either. There's no trace that these files were even attempted: no errors are output in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.

    Read the article

  • How can I mount a dd image of a partition?

    - by Puneet Arora
    I created a dd image of a partition (containing an HFS+ filesystem) of one of my disks (not the entire disk) a few days ago, using the following command:

        dd conv=sync,noerror bs=8k if=/dev/sdc2 of=/path/to/img

    How can I mount it? I tried the following, but it doesn't work:

        mount -o loop,ro -t hfsplus /path/to/img /path/to/mntDir

    It gives me:

        mount: wrong fs type, bad option, bad superblock on /dev/loop1,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try
        dmesg | tail or so

    and dmesg | tail gives me:

        [5248455.568479] hfs: invalid secondary volume header
        [5248455.568494] hfs: unable to find HFS+ superblock
        [5248462.674836] hfs: invalid secondary volume header
        [5248462.674843] hfs: unable to find HFS+ superblock
        [5248550.672105] hfs: invalid secondary volume header
        [5248550.672115] hfs: unable to find HFS+ superblock
        [5248993.612026] hfs: unable to find HFS+ superblock
        [5248998.103385] hfs: unable to find HFS+ superblock
        [5249031.441359] hfs: unable to find HFS+ superblock
        [5249036.274864] hfs: unable to find HFS+ superblock

    Is there something wrong with what I am doing? I tried searching for how to do this, but all the results only talk about mounting a partition from within a full-disk image using the offset option with mount - none cover the case where the image itself is of a partition. Thanks.

    PS: I'm running 64-bit Arch Linux, and the partition on the original disk (/dev/sdc2) mounts fine.

    Read the article

< Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >