Search Results

Search found 1833 results on 74 pages for 'floppy disks'.


  • How to restructure RAID 10?

    - by user276851
    We would like to alter the partitioning without losing data. Here is a sketch of the plan. I am wondering whether it is doable using mdadm; if so, please point me to some references on how to carry out the following steps. The RAID 10 uses four disks, paired as (1 2)(3 4). The idea is to work on disks 1 and 3 while keeping 2 and 4 as a backup.
    1. Break the RAID 10 into two RAID 0 arrays => (1 3) (2 4) (how?)
    2. Re-partition and format (1 3).
    3. Copy the data from (2 4) to (1 3).
    4. Re-partition and format (2 4) exactly like (1 3).
    5. Join (2 4) with (1 3) to form the RAID 10 again (how?)
    Does that sound doable? Thanks a lot! Added: it looks like this user (drumfile) is doing something similar, but without enough detail.
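
    mdadm has no operation that literally splits one RAID 10 array into two RAID 0 arrays, so a more realistic variant of step 1 is to degrade the array by pulling one disk out of each mirror pair. A rough sketch, assuming the array is /dev/md0 built from /dev/sd[abcd]1 (hypothetical device names) and that everything is backed up first:

        # drop one member of each mirror pair; md0 stays usable but degraded
        sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
        sudo mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
        # build the new layout on the freed disks, with the other half marked "missing" for now
        sudo mdadm --zero-superblock /dev/sdb1 /dev/sdd1
        sudo mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdb1 missing /dev/sdd1 missing
        # after formatting md1 and copying the data over, retire md0 and donate its disks
        sudo mdadm --stop /dev/md0
        sudo mdadm --zero-superblock /dev/sda1 /dev/sdc1
        sudo mdadm /dev/md1 --add /dev/sda1 /dev/sdc1

    The degraded halves mirror the (1 3)/(2 4) idea in the question, but every command here should be treated as a sketch to adapt, not a tested recipe.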

    Read the article

  • I can't access my HDD

    - by user286283
    First of all, I'm a total Linux noob. I was running the latest version of Ubuntu, but eventually I wasn't able to boot any more: the Ubuntu logo appeared and then only a white bar blinked indefinitely, without loading. Now, my problem is the following. I want to do a fresh install, and I am trying to recover my latest files from the HDD using the Ubuntu live CD. I click on the HDD and obtain the following. I honestly don't really know what's going on. Looking at Disks I have this... Can anyone help me out? Cheers.
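
    As a rough sketch of recovering files from the live session (the device name /dev/sda1 is an assumption; check with lsblk first), the old partition can be mounted read-only and copied from:

        lsblk -f                                  # identify the old root partition and its filesystem
        sudo mkdir -p /mnt/oldroot
        sudo mount -o ro /dev/sda1 /mnt/oldroot   # read-only, so nothing on the ailing disk is changed
        cp -a /mnt/oldroot/home/youruser /media/ubuntu/backupdisk/   # both paths are placeholders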

    Read the article

  • Fast Track Data Warehouse 3.0 Reference Guide

    - by jchang
    Microsoft just released the Fast Track Data Warehouse 3.0 Reference Guide. The main changes are an increased memory recommendation and a change in the disks per RAID group from 2-disk RAID 1 to 4-disk RAID 10. Memory: the earlier FTDW reference architecture cited 4GB of memory per core. There was no rationale behind this, but it was felt that some rule was better than no rule. The new FTDW RG correctly cites the rationale that more memory helps keep hash-join intermediate results and sort operations in memory. 4-Disk...(read more)

    Read the article

  • Ubuntu doesn't find HDDs at higher clock rates

    - by user136243
    I dual-boot Windows 7 64-bit and Ubuntu 13.10 64-bit on separate disks, and use some overclocking from the BIOS. Windows works fine, but Ubuntu can't seem to find any hard drives except at stock CPU speeds. While attempting to boot it says "Gave up waiting for root device..." and "ALERT! /dev/sdb7 does not exist. Dropping to shell!" A bootable USB stick still works, but GParted doesn't detect any other drives. I have tried:
    - Boot-Repair
    - changing the SATA mode in the BIOS
    - newer kernels
    - older Ubuntu versions
    Not sure it's relevant, but the motherboard is a Gigabyte GA-A75M-UD2H with the newest BIOS version, and the CPU is an AMD Llano. This is hardly a fatal error, but it's inconvenient to change BIOS settings whenever I want to switch OS, and I'm also quite curious why it doesn't work. I'd appreciate any insight into what the actual problem is. How can I resolve this issue?
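
    As a diagnostic sketch (not a guaranteed fix, since the overclock itself may be destabilising the SATA controller), it can help to check from the BusyBox (initramfs) prompt whether the kernel sees the drives at all, and to give slow devices more time with the rootdelay kernel parameter:

        # at the (initramfs) prompt that follows the "Gave up waiting" message:
        cat /proc/partitions      # are sda, sdb, ... listed at all?
        ls /dev/sd*
        # from the GRUB menu: press 'e', append "rootdelay=90" to the line starting with "linux",
        # then boot with Ctrl+X or F10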

    Read the article

  • assistance recovering/reinstalling/installing ubuntu and win7

    - by razzrat
    New computer with Windows 7 installed: I defragmented, shrank the partition, rebooted from an Ubuntu live USB, and went to GParted to look at the partitions before installing Ubuntu... for some reason Win7 was still taking up 400 GB of my HD! I resized the partition down with GParted and exited, and of course now I can't boot into Windows. When I go to install Ubuntu in the new large unallocated space, I get a blank screen at the point where you are asked what kind of installation you want. I have the Ubuntu 12.04 live USB, the Windows 7 re-installation disk and the driver disks as well. The HDD currently has 3 allocated partitions: 'diag' (FAT16), 'recovery' (NTFS) and 'OS' (NTFS), which has a red '!' next to it.
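
    Before reinstalling anything, it may be worth clearing the NTFS "dirty" flag that a resize can leave behind, so Windows gets a chance to run its own check on the next boot. A sketch from the live USB, with /dev/sda3 standing in for whatever fdisk reports as the 'OS' partition:

        sudo fdisk -l            # confirm which partition is the resized 'OS' partition
        sudo ntfsfix /dev/sda3   # resets the NTFS journal and schedules a chkdsk on the next Windows boot

    If Windows still refuses to start, the Startup Repair option on the Windows 7 re-installation disk is the usual next step.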

    Read the article

  • I lose some directories when I upgrade from Ubuntu 11.10 to 12.04

    - by maythux
    Yesterday I upgraded my Ubuntu 11.10 desktop to Ubuntu 12.04. I was running about 7 KVM virtual machines, managed with the virt-manager software. Anyway, when I finished upgrading I found that virt-manager was not working, so I had to reconfigure it and install some other missing packages that had been deleted! Anyway, I solved that issue. Then I started to restore my virtual machines. I restored 2 machines without any problems; the third and fourth ones (Windows) ran a check disk that took more than 6 hours, but finally they work. For the other machines I can't find their attached hard disks. I don't know what happened, but I can't find those files. 1. Does upgrading delete files? 2. Is there any way to restore those files? Thanks in advance.

    Read the article

  • Installing linux on OCZ RevoDrive3 x2

    - by user2101712
    First of all, here is the configuration of my computer:
    - Motherboard: Asus H87Plus
    - RAM: Corsair Vengeance 32 GB
    - Processor: Intel i7 4770
    - Drive: OCZ RevoDrive 3 x2 (240 GB) (the RevoDrive 3 is a PCIe module)
    I am trying to install the latest version of Ubuntu Desktop (13.10). The problem is that in the UEFI (BIOS) the drive shows up as a single 240 GB drive, but in the Ubuntu installer it shows up as two 120 GB drives. If I install Ubuntu on either of these two drives, it never boots: the screen flickers a few times and comes back to the UEFI menu. I have read that the drive has a "fakeraid" and that the solution is to use dmraid. However, when I give the following commands in the terminal (from the live CD):
        # modprobe dm_mod
        # dmraid -ay
    it says "no raid disks", and the following command:
        # ls -la /dev/mapper/
    just shows /dev/mapper/control. How can I install Ubuntu on my computer? What is the correct method?
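
    dmraid only understands certain fakeraid metadata formats; newer kernels expose most firmware RAID through mdadm instead, so it may be worth checking from the live session (a sketch, not a guaranteed fix for the RevoDrive's own controller) whether mdadm sees anything on the two 120 GB halves:

        sudo apt-get install mdadm     # if it isn't already on the live image
        sudo mdadm --examine --scan    # look for firmware/fakeraid metadata
        sudo mdadm --assemble --scan   # try to assemble whatever was found
        ls -la /dev/md*                # an assembled set would appear here

    If nothing turns up, another option is to treat the two 120 GB devices as plain disks and stripe them yourself with Linux software RAID, accepting that the drive's Windows-style RAID view is lost.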

    Read the article

  • boot loading problem after wubi installation of Ubuntu 12.04 on Win7

    - by user63085
    I am new to Ubuntu and was just trying to use the Wubi Windows installer to get Ubuntu for some hands-on experience. I followed the instructions exactly, but after rebooting Win7 there is no Ubuntu selection in the Windows boot manager, with only Windows 7 showing there evilly -.- What I've found is that the grub folder inside the Ubuntu folder (on my C:\ drive) is empty, both inside ubuntu\disks\grub and ubuntu\install\grub. I thought this might be the reason why I could not load Ubuntu during startup, because I've also looked into the EasyBCD settings, and the Ubuntu entry with Bootloader Path: \ubuntu\winboot\wubildr.mbr was lying there peacefully, looking perfectly fine. However, it was not actually in the boot loader. Is there a way to restore the grub folder with GRUB 2, or any other way to fix this problem so that I can find the "Ubuntu" selection at Windows startup? I'd very much appreciate your help :) Henry

    Read the article

  • Windows 7 can't boot with Ubuntu on different hard drive

    - by dellphi
    I dual-boot with two hard disks and two OSes: Ubuntu 10.04 and Windows 7. Windows 7 is installed on the first disk, first partition. GRUB is installed in the second hard disk's MBR, and Ubuntu is installed on an extended partition on the second hard drive. When I select Windows 7 in the GRUB menu, the HDD lamp lights up briefly and then the monitor goes black, while the keyboard is still functioning. Until now (with the default boot from the first HDD), I have had to press F12 to get into GRUB to run Linux on the second HDD. (Output of fdisk -l, grub.cfg.) I want GRUB to remain on the second HDD and to be able to choose Windows 7 from the menu provided by GRUB, but I can't figure out how. I hope someone can help.
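
    One approach that often helps in this layout (a sketch, since the real device numbering depends on which disk the BIOS boots) is a custom GRUB entry that remaps the drives so Windows believes it is on the first disk. It could go into /etc/grub.d/40_custom on the Ubuntu side, followed by sudo update-grub; (hd1,msdos1) is an assumption for where the Windows partition shows up when booting from the second disk:

        menuentry "Windows 7 (first disk, remapped)" {
            insmod part_msdos
            insmod ntfs
            insmod chain
            insmod drivemap
            set root=(hd1,msdos1)
            drivemap -s (hd0) ${root}
            chainloader +1
        }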

    Read the article

  • "Failed to mount Windows share" error in Samba

    - by Ranjith R
    This is the situation. There are 3 machines in the office. The operating systems on them are, respectively:
    1. Linux Mint
    2. Ubuntu 12.04
    3. Windows Vista
    The Ubuntu machine (#2) is supposed to be the common file server between machines #1 and #3. Machine #2 has two hard disks: one is an empty 500 GB NTFS drive and the other is a 160 GB ext4 drive. My plan is to make the 500 GB disk the file-sharing disk. When I share a folder like ~/Documents using the Nautilus context menu on machine #2, I can access the files easily on both #1 and #3, but when I try to share a folder on the 500 GB disk, I get an error on machine #1 that says "Failed to mount Windows share". I do not mind formatting the drive to ext4 if needed, but I am sure that something simple is wrong. EDIT: I took @Marty's comment as a hint and used ntfs-config to configure automounting of that partition. It is working now. Thanks.
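
    For reference, the same end result can be reached without ntfs-config by giving the 500 GB NTFS partition a permanent /etc/fstab entry on machine #2; the UUID below is a placeholder, to be replaced with the real one from sudo blkid:

        # /etc/fstab on machine #2 (UUID and mount point are examples)
        UUID=0123456789ABCDEF  /srv/share  ntfs-3g  defaults,uid=1000,gid=1000,umask=002  0  0

    Mounting it with a fixed uid/gid gives Samba and the local user consistent ownership of whatever gets shared from that disk.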

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of Storagemojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations; other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum.
    A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on them and the device does block-level dedup internally, it may deduplicate your redundant metadata down to a single block on the non-volatile storage. When this block is corrupted, you essentially have three corrupted copies. Three hits with one bullet.
    This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important or just a data block. This is the reason why I like deduplication the way it's done in ZFS. It's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface doesn't know anything about the importance of a block. To its inner mechanism a metadata block is no different from a normal data block, because there is no way to tell it that this one is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism.
    Robin talks about this in regard to the Sandforce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader. It is relevant whenever you are using a device with block-level deduplication. The point is just that for most implementations you have to activate it explicitly by command, whereas certain devices do it by default or by design and you don't know about it. I'm not perfectly sure about that, but given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody else, in order to speak less often with the storage sales rep.
    The problem is even more interesting with ZFS. You may use ditto blocks to protect important data by storing multiple copies of it in the pool to increase redundancy, even when your pool consists of just one disk or just a striped set of disks. However, when your device does dedup internally, it may remove your redundancy before the data hits the non-volatile storage. You've won nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. You can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks on different disks when there is more than one disk. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.
    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs this isn't the use case. Most implementations of SSDs in conjunction with ZFS are hybrid storage pools: rotating-rust disks are used for the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter. When you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, which in HSP implementations is the already mentioned rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and where you use LUNs for your pool. However, as mentioned before, on those devices it's a user-made decision to do so, and so it's less probable that you are deduplicating your redundancies. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device.
    At the end, however, Robin is correct: it's yet another reason why protecting your data by creating redundancies and dispersing them across several disks (by mirror or parity RAIDs) is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
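
    For readers who want to try the mechanisms the article refers to, this is roughly how ditto blocks are requested and how the uberblock copies can be inspected (a sketch; pool and dataset names are invented):

        # keep two copies of every block in this dataset (ditto blocks)
        zfs set copies=2 tank/important
        # dump the uberblocks / rootblock pointers mentioned above
        zdb -uuu tank

    As the article argues, a block-level deduplicating device underneath a single-LUN pool could collapse those extra copies back into one physical block, which is exactly the risk being described.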

    Read the article

  • Oracle Exadata X3 Launch Webcast

    - by Cinzia Mascanzoni
    Available on-demand, this webcast covers everything your partners need to know about Oracle’s next-generation database machine. They will learn how to improve performance by storing multiple databases in memory, lower power and cooling costs by 30%, and easily deploy a cloud-based database service. Exadata X3 combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Partners won’t want to miss this webcast. Invite them to watch today! View and share the replay.

    Read the article

  • Xubuntu: how do I automatically mount external NTFS drive with writes allowed?

    - by user74372
    I would have thought mine was such a common question that there would be a simple solution already built into Xubuntu, but there isn't. I have 2 separate external hard disks and connect them to a USB port at different times. I would like them to be automatically mounted as read/write, but apparently the designers of gnome-volume-manager decided that shouldn't be possible, and they are mounted as read-only. In fact, I can write new files to them, but cannot then delete the new files I just wrote! Is there a workaround? I read somewhere that /etc/fstab doesn't apply to removable media, which are mounted by gnome-volume-manager and therefore cannot be unmounted by a user.
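
    As a manual workaround sketch (the device name is an assumption; check with lsblk), remounting the disk with explicit ownership options gives full read/write and delete rights:

        sudo umount /media/yourdisklabel        # unmount the automatically mounted copy first
        sudo mkdir -p /media/usbdisk
        sudo mount -t ntfs-3g -o uid=$(id -u),gid=$(id -g),umask=022 /dev/sdb1 /media/usbdisk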

    Read the article

  • Is it possible to lose some directories when upgrading from 11.10 to 12.04?

    - by maythux
    Yesterday I upgraded my Ubuntu 11.10 desktop to Ubuntu 12.04. I was running about 7 KVM virtual machines, managed with the virt-manager software. Anyway, when I finished upgrading I found that virt-manager was not working, so I had to reconfigure it again and install some other missing packages that had been deleted! Eventually, I managed to solve this issue. Then I started to restore my virtual machines. I restored 2 machines without any problems. The third and fourth ones (Windows) ran a check disk that lasted more than 6 hours, but finally it worked. For the other machines, I can't find their attached hard disks. I don't know what happened, but I can't find those files. Does upgrading delete files? Is there any way to restore those files?
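
    One way to see where the surviving guests expect their disk images to be (a sketch using standard libvirt tools; the guest name is an example) is to ask libvirt directly and then check the default image directory:

        virsh list --all                     # names of every defined guest
        virsh domblklist some-guest          # disk image paths that guest is configured to use
        sudo ls -lh /var/lib/libvirt/images  # default location for qcow2/raw images

    If the paths listed by virsh point at files that are genuinely gone, they would have to come back from a backup; the upgrade itself should not normally touch /var/lib/libvirt/images.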

    Read the article

  • GRUB's menu.lst deleted after a kernel update

    - by the_drow
    I installed Ubuntu through Wubi and all was well until I updated to the next kernel version. I am now trying to boot into Ubuntu and it shows me the GRUB rescue command line. I am able to boot Windows, and the problem seems to be related to the fact that I have no menu.lst in ubuntu\disks\boot\grub; it might also be related to the fact that Wubi wasn't installed to the drive where Windows is installed, but I am not sure. How do I recover menu.lst? Does the problem lie somewhere else? Is there a way to read the data with a Windows tool, just to recover my data?
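
    If the immediate goal is the data, the Wubi virtual disk can be loop-mounted from an Ubuntu live CD/USB; a sketch assuming the standard Wubi layout, with /dev/sda2 standing in for the Windows partition that holds the ubuntu folder:

        sudo mkdir -p /mnt/win /mnt/wubi
        sudo mount /dev/sda2 /mnt/win
        sudo mount -o loop /mnt/win/ubuntu/disks/root.disk /mnt/wubi
        ls /mnt/wubi/home                    # personal files live under here

    From Windows itself, a tool that can read ext filesystems inside the root.disk file would be needed; the loop-mount route above avoids that.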

    Read the article

  • UbuntuStudio 12.04 does not boot after install - no "initrd" image

    - by user72705
    After installing Ubuntu Studio 12.04 from DVD onto the fourth hard disk, it fails to boot, even when explicitly choosing the fourth hard disk as the boot device. I have SUSE 11.2 on the first 2 SCSI disks (which form a RAID) and Studio64 on the 1st IDE disk (that is, the third disk). Looking at the /boot directory on the Ubuntu partition, I see there is no initrd image. Editing the GRUB configuration file to include (hd3,1)/vmlinuz and, of course, (hd3,1)/initrd should fix the problem, but GRUB still gives a "file not found" error. It appears to me that no mkinitrd runs during the boot process (checked with the live CD) like it does in openSUSE. How do I create the initrd to make Ubuntu bootable?
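
    GRUB does not build initrds at boot time, so the usual fix is to chroot into the installed system from the live DVD and generate the missing image there. A sketch, with /dev/sdd1 as a stand-in for the Ubuntu Studio root partition:

        sudo mount /dev/sdd1 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt update-initramfs -c -k all   # creates /boot/initrd.img-* for each installed kernel
        sudo chroot /mnt update-grub
        sudo umount /mnt/dev/pts /mnt/dev /mnt/proc /mnt/sys /mnt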

    Read the article

  • Installing 12.10 on Dell r710

    - by user115490
    I have been trying to install Ubuntu 12.10 on a Dell R710. I have 6 disks of 2 TB each. I set up a RAID 5 in the hardware RAID controller with all drives and proceeded with the OS install. When the partitioner runs, it fails to create / for some reason. I then tried just setting up a simple RAID 1 with 2 drives in the hardware controller and doing the OS install again. The installer runs fine this time, but when trying to boot I get a kernel panic of 'Target filesystem doesn't have requested /sbin/init' as well as a bunch of 'deleted inode referenced' messages. Can someone tell me what I am doing wrong here?
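
    One plausible culprit (hedged, since the installer logs aren't shown) is that the 6 x 2 TB RAID 5 virtual disk is roughly 10 TB, which requires a GPT partition table plus a small BIOS boot partition for GRUB when the R710 boots in legacy BIOS mode. A sketch of preparing the disk by hand before re-running the installer, with /dev/sda standing in for the RAID virtual disk:

        sudo parted /dev/sda mklabel gpt
        sudo parted /dev/sda mkpart biosgrub 1MiB 3MiB
        sudo parted /dev/sda set 1 bios_grub on
        # then let the installer's manual partitioner create / and swap in the remaining space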

    Read the article

  • can't chmod on external hard disk?

    - by G. He
    I have a USB 3.0 external hard disk, partitioned into 3 NTFS partitions. When I plug the hard disk in, the 3 partitions are automatically mounted under /media. So far so good: I can read and write files, mkdir, etc. on these partitions. But I can't do chmod/chown on any of the files/directories on these partitions. The owner:group is always myself, and the mode is always 700 for directories and 600 for files. I have another partition on an internal hard disk also mounted, and that partition works fine. I looked at the output of the mount command; the only difference between the mount options is that there is one extra 'default_permissions' on the external hard disks. Is there any way I can set the owner:group and mode on these files and directories?

    Read the article

  • Ubuntu One lost most of my files!

    - by Max
    I keep the source code of my projects in my Ubuntu One as a backup. In the past it worked and I never had a problem. It happened that I had to change the hard disk of my laptop, and I installed a fresh Ubuntu 12.10 on the new one. The first time I connected with Ubuntu One it downloaded my files, but when I went to look at my projects, almost all my C++ source code files were missing!!!... I tried to check whether I still have them on Ubuntu One by accessing it from the web, but nothing... my work is lost forever. I don't know who to ask for help. Is there a way to get back my precious files? Honestly, I can't trust this service any more; I'm very disappointed. (Edit: thank God I found a backup on one of my external hard disks. I won't trust Ubuntu One any more; it's buggy and quite slow.)

    Read the article

  • Make recovery disk for customers

    - by alexander7567
    I hear about other computer shops that sell customers recovery disks they have created. I'm assuming all they do is make an image and use an automation script that allows this to be done. I have seen that Clonezilla does this, but the target has to be the same HDD size or they might have problems down the road. Is there any other freeware I could do this with that can be used on any size of disk? Ghost is really good for this because it automatically fills up the empty space with the partition and never needs any user input or an "Expert Mode" like Clonezilla, but it is not freeware.

    Read the article

  • PowerShell script to find files that are consuming the most disk space

    As you know, SQL Server databases and backup files can take up a lot of disk space. When disk space is running low and you need to troubleshoot disk space issues, the first thing to do is to find the large files that are consuming it. In this article I will show you a PowerShell script that you can use to find large files on your disks.

    Read the article

  • Database Insider - November 2012 issue

    - by Javier Puerta
    The November issue of the Database Insider newsletter is now available. (Full newsletter here)
    - Mark Hurd: Oracle Database Wrap-up from Oracle OpenWorld 2012 - Oracle executives kicked off Oracle OpenWorld 2012, discussing the needs of customers, the brand-new Oracle Exadata Database Machine X3, and the latest Oracle Database innovations. (Read More)
    - Webcast: Introduction to Oracle Exadata Database Machine X3 - Oracle’s next-generation database machine, Oracle Exadata X3, combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Available in an eight-rack configuration, it allows you to start small and grow.
    - Webcast: SAP Applications Run Better on Oracle Exadata - Find out why a growing number of SAP application customers are turning to Oracle Exadata Database Machine for better performance, better productivity, and big savings.

    Read the article

  • Having problems logging in after upgrading from 11.10 to 12.04

    - by LinuxIsMyFriend
    I just updated to 12.04 from 11.10 and I get to the login screen without problems. When I enter my password, the screen turns black and returns to the login screen half a second later. There is a related question out there that was solved by freeing up space on the disk, but my disks (or rather partitions) are all below 30% usage. I can log in as guest. I can also log in at the command prompt (switching to a tty with Ctrl+Alt+F1) with my normal user credentials. When logged in as guest I can also install programs using my normal account password. There is the normal authentication error when I mistype my password, so I'm also sure the password works. Any suggestions?
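
    A very common cause of this exact black-screen-and-back loop after an upgrade is a root-owned .Xauthority or .ICEauthority in the home directory. Checking that from the tty is a cheap first step (a sketch, not a guaranteed fix):

        ls -l ~/.Xauthority ~/.ICEauthority          # both should be owned by your user, not root
        sudo chown $USER:$USER ~/.Xauthority ~/.ICEauthority

    After fixing the ownership, log out of the tty and try the graphical login again.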

    Read the article

  • AHCI, Power Mode S3 and HPET 64bit - please help with these settings!

    - by thetattoo
    My problem is that for a Hackintosh (an unofficial PC running OS X) I needed to set AHCI, Power Mode S3 and 64-bit HPET. Now I want to install Ubuntu (when 12.04 comes out) and was wondering whether these settings will be right for Ubuntu. I read that AHCI is how the hard disks are accessed and that Power Mode S3 is how suspend-to-RAM works, but I couldn't figure out whether using 64-bit HPET makes any difference compared to 32-bit HPET. What I would like is an explanation of what these BIOS settings do and what is best for Ubuntu. Thank you very much.

    Read the article

  • Problems increasing root size

    - by user212866
    I'm running out of space, so I tried to increase the root size using this link: Increase size of root partition after installing Ubuntu in Windows. Here is the output:
        Filesystem     Type      Size  Used Avail Use% Mounted on
        /dev/sda7      ext4      6,2G  5,6G  308M  95% /
        udev           devtmpfs  965M  4,0K  965M   1% /dev
        tmpfs          tmpfs     389M  892K  388M   1% /run
        none           tmpfs     5,0M  4,0K  5,0M   1% /run/lock
        none           tmpfs     972M  440K  971M   1% /run/shm
        /dev/sda5      fuseblk    12G  6,1G  5,8G  52% /media/Ubuntu
        /dev/sda2      fuseblk   278G  260G   19G  94% /media/AC4CC70D4CC6D16E
    I tried to allocate 16 GB on the host (/dev/sda2, which is the Windows 7 partition). When I get to the \ubuntu\disks folder, I only find the "new.disk" file, which takes up the 16 GB allocated, and not the "root.disk" file too. Also, the size of /dev/sda7 doesn't increase. Could you please help me? Many thanks.
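
    If this really is a Wubi install (the linked guide only applies to Wubi's loop-file layout, where / lives inside root.disk on the Windows partition), creating new.disk is only the first half of the procedure: the old system still has to be copied into it and the file renamed. A rough sketch from a live CD, with /dev/sda2 as the Windows partition and the standard Wubi paths assumed:

        sudo mount /dev/sda2 /mnt
        sudo mkfs.ext4 -F /mnt/ubuntu/disks/new.disk     # only if the guide's mkfs step wasn't done yet
        sudo mkdir -p /tmp/old /tmp/new
        sudo mount -o loop /mnt/ubuntu/disks/root.disk /tmp/old
        sudo mount -o loop /mnt/ubuntu/disks/new.disk  /tmp/new
        sudo cp -a /tmp/old/. /tmp/new/                  # copy the whole system across
        sudo umount /tmp/old /tmp/new
        cd /mnt/ubuntu/disks && sudo mv root.disk root.disk.bak && sudo mv new.disk root.disk

    Given that df shows / on /dev/sda7 rather than on a loop device, it is worth double-checking that this is actually a Wubi setup before going further.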

    Read the article
