Search Results

Search found 2630 results on 106 pages for 'mount'.

Page 94/106 | < Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >

  • Oracle Database Recovery Problem

    - by Palani
    I am very new to Oracle and am trying to restore an Oracle 8i database on a Windows 2000 server. I have a week-old backup of the database (taken with the exp command) and want to restore it now. I am unable to log in through SQL*Plus (I get a "shutdown in progress" error). I have the backup and want to restore it, but Oracle is not starting at all and the imp command is failing. I started "sqlplus / as sysdba" and the following is a log of what I am trying. Can someone guide me further?

        SQL> shutdown immediate;
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
        SQL> startup;
        ORACLE instance started.
        Total System Global Area  143423516 bytes
        Fixed Size                    75804 bytes
        Variable Size              58105856 bytes
        Database Buffers           85164032 bytes
        Redo Buffers                  77824 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
        SQL> shutdown immediate;
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
        SQL> startup mount;
        ORACLE instance started.
        Total System Global Area  143423516 bytes
        Fixed Size                    75804 bytes
        Variable Size              58105856 bytes
        Database Buffers           85164032 bytes
        Redo Buffers                  77824 bytes
        Database mounted.
        SQL> alter database open;
        alter database open
        *
        ERROR at line 1:
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
        SQL> alter database open resetlogs;
        alter database open resetlogs
        *
        ERROR at line 1:
        ORA-01245: offline file 1 will be lost if RESETLOGS is done
        ORA-01110: data file 1: 'C:\ORACLE\ORADATA\ABCD\SYSTEM01.DBF'
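
    ORA-01245 says datafile 1 (the SYSTEM tablespace) is marked offline, so RESETLOGS refuses to proceed because that file would be lost. A minimal sketch of the usual sequence, assuming the datafile on disk is intact (the path is taken from the error above); if the file really is lost, the fallback is to recreate the database and load the week-old exp dump with imp:

        SQL> startup mount;
        SQL> alter database datafile 'C:\ORACLE\ORADATA\ABCD\SYSTEM01.DBF' online;
        SQL> recover database until cancel;   -- apply whatever redo is available, then type CANCEL
        SQL> alter database open resetlogs;

        REM fallback, run against a freshly created database (credentials and dump
        REM file name below are assumptions, not taken from the question):
        C:\> imp system/manager FULL=Y FILE=backup.dmp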


  • CD-ROM does not appear on desktop, Mac OS X 10.5.7

    - by Cheeso
    When I pop a CD-ROM into the drive of my MacBook Pro, it spins up, I hear it, but no icon appears on the desktop. (I think it's 10.5.7; I'm actually not sure how to verify this on a Mac, but I think I saw 10.5.7 flash by somewhere.) In Finder preferences, "Show these items on the Desktop" is set to show hard disks, external disks, and CDs, DVDs, and iPods; all three are checked. I do see the internal HD on the desktop. In Disk Utility I can see the CD/DVD hardware; it says "MATSHITA DVD-R UJ-857E...". From Disk Utility I can eject the drive, but in Finder there is never a CD/DVD listed under "Devices". When I insert a disc, nothing happens and I cannot see it. I also cannot boot from bootable CD-ROMs by holding C down. Suggestions? I am not very experienced with Mac; I have used Windows for years.

    EDIT - Two updates: I saw this article on support.apple.com and modified hostconfig appropriately. It did not have the AUTODISKMOUNT entry, so I added one and rebooted. Same behavior: it does not see the CD-ROM in Finder and does not mount it on the desktop. I put an old manufactured CD-ROM into the drive, and voila! it showed up on the desktop. The CD that does not appear is a GNOME Partition Editor live CD, which I guess is based on Debian. That CD boots in other (non-Mac) PCs. I want to use this to adjust the Boot Camp partition. Suggestions?
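
    A few Terminal checks might narrow down whether the drive or the disc is at fault; a sketch (the disk device number below is a placeholder, read the real one from diskutil list):

        $ sw_vers -productVersion          # confirms the exact OS X version
        $ drutil status                    # does the drive itself report the inserted disc?
        $ diskutil list                    # does a new /dev/diskN appear for the disc?
        $ diskutil mountDisk /dev/disk1    # hypothetical device node; adjust to match

    If drutil sees the disc but no /dev/diskN ever appears, the burned disc itself (bad burn, unreadable session) is the likelier suspect, which would fit the manufactured CD mounting fine.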


  • Java web app deployment and ControlTier adoption

    - by Ran
    I've been searching for a configuration and deployment manager tool for my Java/Linux-based web service and have been looking mainly at ControlTier (http://controltier.org). We operate at a medium scale (100s of hosts, multi-DC, dozens of services). There seem to be plenty of lower-level sysadmin tools such as Chef, Puppet, Cfengine, Bcfg2 and more; my understanding, and the reason I call them "low level", is that they are great for system-level administration tasks such as setting up a mount, file permissions, users etc., but aren't designed, for example, for Java deployments, which usually come with a build process and special Java semantics. In many cases any tool can be used to do anything, but if it was not designed for the task it can get uncomfortable. On the other hand, ControlTier seems to have been designed for exactly that - Java application deployments - at least that's what all the tutorials on their site demonstrate. But here's the problem: the wiki at http://controltier.org/wiki/ is pretty good and stuffed with examples, and the company behind the open source CT product is very responsive (pushy...), however, I have yet to see any material from third-party users on the net. No success stories, no detailed blog posts, no best practices, no cheat sheets, not even hate letters - nothing. This plays badly for DTO Solutions, CT's sponsor, for two reasons: first, it makes me suspicious - what's the reason for the poor adoption? And second, what do I do if I get stuck, there's no help page on CT's wiki, and the mailing list is too slow to answer? I'd be stuck with a "free" product that a consultancy company is pushing. So my question here: I'd be interested in hearing whether anyone has real-world experience with CT for Java-based web app deployments, and whether you'd thumb up the product. Any other comments that may enlighten me are welcome, of course...


  • NTBackup Error: C: is not a valid drive

    - by Chris
    I'm trying to use NTBackup to back up the C: drive on a Microsoft Windows Small Business Server 2003 machine and get the following error in the log file:

        Backup Status
        Operation: Backup
        Active backup destination: 4mm DDS
        Media name: "Media created 04/02/2011 at 21:56"

        Error: The device reported an error on a request to read data from media.
        Error reported: Invalid command.
        There may be a hardware or media problem.
        Please check the system event log for relevant failures.

        Error: C: is not a valid drive, or you do not have access.
        The operation did not successfully complete.

    I'm using a brand new SATA Quantum DAT-72 drive with a brand new tape (I have tried a couple of tapes). I carry out the following steps:

    1. Open NTBackup
    2. Select the Backup tab
    3. Tick the box next to C:
    4. Ensure Destination is 4mm DDS
    5. Set Media to New
    6. Press Start Backup
    7. Choose "Replace the data on the media" and press Start Backup

    NTBackup tries to mount the media, then the error message shows: "The device reported an error on a request to read data from media. Error reported: Invalid command. There may be a hardware or media problem. Please check the system event log for relevant failures." On checking the log I find the following:

        Event Type: Information
        Event Source: NTBackup
        Event Category: None
        Event ID: 8018
        Date: 04/02/2011
        Time: 22:02:02
        User: N/A
        Computer: SERVER
        Description: Begin Operation
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    and then:

        Event Type: Information
        Event Source: NTBackup
        Event Category: None
        Event ID: 8019
        Date: 04/02/2011
        Time: 22:02:59
        User: N/A
        Computer: SERVER
        Description: End Operation: The operation was successfully completed.
        Consult the backup report for more details.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
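
    To separate a device/driver fault from a GUI-side problem, the same job can be run from the command line; a sketch using ntbackup's CLI (the job name is an assumption, and the exact media switches are worth checking against ntbackup /?):

        ntbackup backup C:\ /j "Full C backup" /p "4mm DDS" /v:yes /l:f

    If the same "Invalid command" error appears regardless, the tape drive's firmware, the SATA controller's tape-device support, or the driver is the more likely culprit than NTBackup itself.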


  • Authenticate VNC session with ConsoleKit?

    - by lori
    I have a Linux machine running Fedora 16 in a cupboard. It has no screen or keyboard; I connect to it using a combination of VNC and SSH. Recently, after an update, I have had issues with authentication on the machine. If I VNC to it, the KDE desktop pops up an error dialog every few minutes saying "Authorization failed. Failed to obtain authentication." If I plug in a USB drive it fails to mount, and Dolphin reports an authentication issue again. I have had limited success finding the solution. As far as I can tell, it is an issue with ConsoleKit deeming me to be a non-local user, so it prevents authentication. This is the output from ck-list-sessions:

        $ ck-list-sessions
        Session5:
            unix-user = '1000'
            realname = 'steve'
            seat = 'Seat6'
            session-type = ''
            active = FALSE
            x11-display = ':1'
            x11-display-device = ''
            display-device = ''
            remote-host-name = ''
            is-local = FALSE
            on-since = '2012-09-16T08:07:03.137011Z'
            login-session-id = '1'

    I have tried to update my .vnc/xstartup script to include ck-launch-session as follows:

        $ cat ~/.vnc/xstartup
        #!/bin/sh
        exec ck-launch-session vncconfig -iconic &
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        export XKL_XMODMAP_DISABLE=1
        OS=`uname -s`
        if [ $OS = 'Linux' ]; then
          case "$WINDOWMANAGER" in
            *gnome*)
              if [ -e /etc/SuSE-release ]; then
                PATH=$PATH:/opt/gnome/bin
                export PATH
              fi
              ;;
          esac
        fi
        if [ -x /etc/X11/xinit/xinitrc ]; then
          exec ck-launch-session /etc/X11/xinit/xinitrc
        fi
        if [ -f /etc/X11/xinit/xinitrc ]; then
          exec ck-launch-session sh /etc/X11/xinit/xinitrc
        fi
        [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
        exec ck-launch-session xsetroot -solid grey
        exec ck-launch-session xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
        exec ck-launch-session twm &

    This has not helped. How can I either authenticate myself to ConsoleKit, or trick it into believing I am a local user?
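
    One problem visible in the script above: every ck-launch-session call creates its own separate ConsoleKit session, and exec replaces the script at the first branch taken, so the desktop never ends up inside one coherent session. A minimal xstartup sketch that wraps the whole desktop in a single session (startkde is an assumption based on the KDE desktop mentioned above):

        #!/bin/sh
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        vncconfig -iconic &
        # one ConsoleKit session + one D-Bus session bus around the entire desktop
        exec ck-launch-session dbus-launch --exit-with-session startkde

    This still yields is-local = FALSE (the X display is remote), but it at least gives the desktop a single, consistent session to authenticate against rather than a dozen fragments.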


  • How to forcibly unmount a stuck network share in Mac OS X?

    - by Kyle Lowry
    Not long ago my Xserve failed (power loss) while an iMac was working with files on a particular network share (called "Work Share"). This volume, "Work Share", is now stuck. It can't be seen in the GUI; you can only detect it using the Terminal. Even after power cycling over the course of several days, ls -a still shows that it's there, but I can't unmount it using any command - not even as root in single user mode. Every time I attempt to unmount that volume, I get the message that the resource is busy (which it can't possibly be, since nothing is using it), and error code 4915. The issue is that when I mount the real "Work Share", it is internally renamed to "Work Share-1", which breaks all my links and several files in the share. If I can't unmount the false "Work Share", then that computer would be unusable without a reformat, I would imagine - and I don't want it to come to that. I've tried everything I can think of - it looks like sudo can't save me now. Any ideas on how to unmount this stuck volume?
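
    For reference, a sketch of the usual escalation path (the quoting matters because of the space in the name); lsof may reveal whatever still claims the ghost mount:

        $ mount | grep "Work Share"                   # which device/URL the ghost entry points at
        $ sudo lsof +D "/Volumes/Work Share"          # anything actually holding it open?
        $ sudo umount -f "/Volumes/Work Share"
        $ sudo diskutil unmount force "/Volumes/Work Share"

    If all of these still return "resource busy", killing the stale mount's owner - often a mount_afp or mount_smbfs process left over from the crashed connection - before retrying is worth a try.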


  • Why does Apache throw a 403 on the index file after install?

    - by den-javamaniac
    Hi. I've just installed Apache and PHP from sources using the following commands:

        ./configure --prefix="/mnt/workspace/servers/web/apache-2.2.17" \
          --enable-info --enable-rewrite --enable-usertrack --enable-mime-magic

    for Apache, and

        ./configure --with-apxs2=/mnt/workspace/servers/web/apache-2.2.17/bin/apxs \
          --prefix=/mnt/workspace/servers/web/apache-2.2.17/php \
          --with-config-file-path=/mnt/workspace/servers/web/apache-2.2.17/php \
          --with-mysql=mysqlnd

    for PHP. After adjusting the configuration (httpd.conf) and starting Apache, it gives a 403 response to a request for http://localhost:8060/index.html (presuming that 8060 is the port in use). There are the following directory settings in httpd.conf:

        <Directory "/mnt/workspace/servers/web/apache-2.2.17/htdocs">
            ...
            Order allow,deny
            Allow from all
            ...
        </Directory>

        <IfModule dir_module>
            DirectoryIndex index.html index.php
        </IfModule>

    It should be noted that I've got Apache on a mounted partition (the default automount configured while installing Ubuntu).

    Access log:

        ::1 - - [12/Feb/2011:17:48:30 +0200] "GET / HTTP/1.1" 403 202
        ::1 - - [12/Feb/2011:17:48:31 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /favicon.ico HTTP/1.1" 403 213

    Error log:

        [Sat Feb 12 18:59:13 2011] [notice] Apache/2.2.17 (Unix) PHP/5.3.5 configured -- resuming normal operations
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to / denied
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to /favicon.ico denied
        [Sat Feb 12 18:59:36 2011] [error] [client ::1] (13)Permission denied: access to /index.html denied
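
    "(13)Permission denied" at the filesystem level usually means the httpd worker user (often "daemon" or "nobody" in a source build) lacks execute (search) permission on some directory along the path to htdocs - a common state for mount points. A hedged check-and-fix sketch:

        $ namei -m /mnt/workspace/servers/web/apache-2.2.17/htdocs   # print the mode of every path component
        # if a component lacks o+x, grant search permission (paths taken from the configure line):
        $ sudo chmod o+x /mnt/workspace /mnt/workspace/servers /mnt/workspace/servers/web

    The files inside htdocs also need to be world-readable (o+r), and the User/Group directives in httpd.conf say which user the workers actually run as.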


  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application which is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk enables the application to scale automatically by starting additional web server instances based on predefined rules. The shared database runs on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files). We're using git as our SCM, and with it we're able to deploy new versions by executing "git aws.push". In the background, Elastic Beanstalk automatically deletes (rm -rf) the current codebase from all servers running in the environment and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment and NFS in the production environment. We've managed to set up the environment to the extent that the shared files folder is mounted properly after a reboot. But... the problem is that, in this setup, deployment of new versions to running instances fails because rm -rf can't remove the mounted directory; as a result, the entire environment goes down and we need to restart it, which isn't really an elegant solution. Question #1: what would be the proper way to manage shared files in this kind of deployment? Are you running such an environment? How did you solve the problem? Looking at the Elastic Beanstalk Hostmanager code (Ruby), there seems to be a way to hook our functionality (unmount if mounted in pre-deploy, and mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in the file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent. Question #2: did you manage to hook your functionality into Hostmanager in a portable way (i.e. without changing the core Hostmanager files)?
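
    For reference, a sketch of what such hook scripts could look like, assuming Hostmanager really does invoke the /tmp/php_*_deploy_app.sh paths named above (the Drupal docroot path is a placeholder, not taken from the question):

        #!/bin/sh
        # /tmp/php_pre_deploy_app.sh - detach the share so rm -rf of the old codebase succeeds
        FILES_DIR=/var/www/html/sites/default/files
        mountpoint -q "$FILES_DIR" && umount "$FILES_DIR"

        #!/bin/sh
        # /tmp/php_post_deploy_app.sh - reattach the shared files after the new codebase lands
        FILES_DIR=/var/www/html/sites/default/files
        mkdir -p "$FILES_DIR"
        mount "$FILES_DIR"    # assumes an /etc/fstab entry for the s3fs or NFS share

    These are two separate files (shown together here); both would need to be executable by the user Hostmanager runs the hooks as.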


  • How to install Red Hat Enterprise Linux on an Apple MacBook Pro (MacBookPro4,1)

    - by Todd V. Rovito
    I have a one-year-old MacBook Pro that I am trying to get RHEL 5.4 installed on via Boot Camp. No matter what I do, I can't get the installer to boot. I have tried multiple DVDs and even verified that the install works on a new MacBook Pro. Most of the time the installer simply locks up. I usually use "linux text all-generic-ide" on the boot line; I also removed the ide parameter and used just "linux text". The result is that a bunch of kernel messages appear, then the background turns blue and a thin text box pops up saying it's loading ata..... something (it disappears too fast for me to read). Then the machine freezes. I pressed the Alt-function keys to see if I could look at the system log; here is what it says:

        Alt-F3: trying to mount CD device hda
        Alt-F4: status error: hda: lastFailedSense
                hda: Failed opcode was: unknown
                hda: Lost interrupt
                hda: Drive not ready for command
                ide-cd: command 0x3 timed out

    Above this junk it looks like it found the partition, because it knew it was 20 GB and listed it as /dev/sda3. I think it has something to do with the CD drive - is that possible? Thanks again for the support. PS: I posted in the Apple support forums (Apple.com Support Discussions > Boot Camp > Installation and Storage) and didn't get an answer.


  • VMware VMDK disk problem

    - by dmtr
    Hello, I have a VMware ESXi 4 server and two storage servers (mounted via NFS). Between the storage servers (Fedora 14) there is a DRBD cluster (dual primary) with an OCFS2 filesystem; each server also has a local partition with an ext4 filesystem, and both are mounted as NFS on the ESXi server. When I tried to copy a virtual machine's files (powered off, naturally) from the ext4 partition to the OCFS2 partition, the VMDK total file size differs, but the md5sum is the same.

    On the ext4 partition:

        # ls -la
        total 28492228
        -rw------- 1 root root 42949672960 Jan 14 14:46 disk-flat.vmdk
        # md5sum disk-flat.vmdk
        0eaebe3138beb32f54ea5de6dfe5a987

    On the OCFS2 partition:

        # ls -la
        total 13974660
        -rw------- 1 root root 42949672960 Jan 14 16:16 disk-flat.vmdk
        # md5sum disk-flat.vmdk
        0eaebe3138beb32f54ea5de6dfe5a987

    When I power on the virtual machine from the OCFS2 partition, it doesn't work. The virtual machine runs Windows, and it freezes after the Windows logo. From the ext4 partition the virtual machine works. A test with Linux (created and installed on the ext4 partition, then copied) shows the same problem. When I create a virtual machine directly on the OCFS2 partition, there are no problems. I also tried copying via the vSphere client, with the same problem. Any suggestions?
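
    One observation on the size difference: both listings show the same byte length (42949672960); only the "total" (allocated blocks) differs, which is what you'd expect for a sparse flat VMDK copied between filesystems that allocate holes differently. Matching md5sums means the data is byte-identical, so the sizes are probably a red herring. A hedged way to confirm:

        # allocated vs logical size on each filesystem (GNU coreutils)
        du -sh disk-flat.vmdk
        du -sh --apparent-size disk-flat.vmdk
        # to preserve sparseness explicitly when copying, if allocation matters:
        cp --sparse=always disk-flat.vmdk /path/to/ocfs2/

    Since the bytes match, the freeze is more likely an OCFS2-over-NFS locking or caching issue on the ESXi side than file corruption.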


  • Finding a shared HDD attached to the network from my F-13 machine

    - by Ramy
    Sorry for the slew of n00bie questions, but here is one more. I recently partitioned my 1.5 TB hard drive according to this question. I then bought this (a NAS adapter) to attach the hard drive to my network. The problem is: how do I navigate to the hard drive to move files over the network to the HDD? Should this be moved to Server Fault?

    Update: the disk isn't even showing up when I run "fdisk -l" (as root). How can I mount it if I can't even find it?

        [root@Moonface ~]# /sbin/fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00018598

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          64      512000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              64       19458   155777024   8e  Linux LVM

        Disk /dev/dm-0: 53.7 GB, 53687091200 bytes
        255 heads, 63 sectors/track, 6527 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/dm-0 doesn't contain a valid partition table

        Disk /dev/dm-1: 4764 MB, 4764729344 bytes
        255 heads, 63 sectors/track, 579 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/dm-1 doesn't contain a valid partition table

        Disk /dev/dm-2: 101.0 GB, 101032394752 bytes
        255 heads, 63 sectors/track, 12283 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/dm-2 doesn't contain a valid partition table
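
    That is actually expected: once the drive hangs off a NAS adapter, it is no longer a local block device, so fdisk on the F-13 machine will never see it. The adapter's own firmware mounts the disk and exports it over a network protocol (usually SMB/CIFS, sometimes FTP or NFS - the adapter's manual says which). A sketch assuming an SMB export (the IP address and share name are placeholders):

        $ sudo mkdir -p /mnt/nas
        $ sudo mount -t cifs //192.168.1.50/share /mnt/nas -o guest   # hypothetical address/share
        $ cp /path/to/files /mnt/nas/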


  • Lenovo S10 IdeaPad will not boot while original hard drive is installed, neither from the hard drive nor from USB

    - by aki
    Hello, first time posting here, so I'll try to be very clear. I have a Lenovo S10 IdeaPad netbook which fails to boot to an OS. It shows the Lenovo splash screen and can get into the BIOS, but it doesn't get to GRUB (it was dual booting Ubuntu 9.10 and Windows 7 and had been working fine for months, i.e. this isn't a new dual boot gone bad). After the splash screen it displays a flashing cursor in the upper left corner. Power cycling is to no avail. Here is what I have done to narrow the problem down: the machine will boot to Ubuntu using an install/live USB drive, but only if ANOTHER hard drive is installed or NO hard drive is installed. The boot order always lists USB first. Also, there is a 2 GB RAM upgrade, but I think that's fine; the Ubuntu USB drive boots fine with it, and "free" sees the whole 2 GB of memory. So it seems like the hard drive is bad. I was able to put the bad drive in a different laptop and mount it to recover files. I'm ready to replace the bad hard drive, but I would like to know if this situation makes any sense. If the hard drive is bad, shouldn't I still be able to boot from the Ubuntu USB drive while the bad drive is installed? I would have expected the machine to boot into Ubuntu anyway, even with a bad drive, since the boot order lists USB first. But it seems that when the bad drive is installed, the machine ignores the USB drive and hangs with the flashing cursor. Thanks for any ideas! Sorry for the long post; I just want to put all the info I have up front. Basically I'm going to buy a new drive, but I am mostly curious whether this is a typical, or at least not unusual, situation.


  • How to set up a GRUB2 chainloader to another GRUB (Fedora, Debian) on GPT

    - by basic6
    I'm trying to set up a dedicated GRUB2 which (chain-)loads another GRUB on a disk with a GPT partition table. Relevant partitions:

        /dev/sda1  BIOS_BOOT
        /dev/sda2  BOOT (ext2)
        /dev/sda3  FEDORA (ext4)
        /dev/sda6  DEBIAN (ext4)

    I installed Fedora first, using /dev/sda2 as the boot partition. Then I installed Debian. The Debian installer recognized the Fedora installation and added it as a boot entry, then installed its GRUB into the MBR. While this works for the moment, it's pretty messy, because every Debian update may change the boot config, removing the Fedora entry (tried it), and the other way around. That's why I want both systems to have their own boot loader, plus one main boot loader (which could reside on /dev/sda2) that loads one of them. This is what I've tried:

    - Moved everything from /dev/sda2 to /dev/sda3/boot
    - Removed the /boot mount point in Fedora (so /dev/sda2 isn't used anymore)
    - From a live Linux, installed GRUB2 to the MBR (grub-install --boot-directory=sda2 /dev/sda)
    - Wrote a menu.lst:

          title Fedora
          root (hd0,2)
          chainloader +1

      (again, likewise for Debian)

    - Converted that to a grub.cfg script (grub-menu2cfg or something like that)
    - When booting, I actually got a GRUB2 menu with "Fedora" (and "Debian")
    - When selecting either one of them: error: invalid signature
    - Issued "grub-install /dev/sda6" (and ...sda3) from all kinds of live Linux systems, all of which failed with some error message (in the case of the Debian installer, without explanation at all)
    - Added --force to the chainloader line; now it says "loading", then reboots
    - Found dozens of howtos, none of which seem to work for me

    Since I get the self-made GRUB2 menu on bootup, I've at least successfully installed the first stage of GRUB, right? When trying to chainload, some signature is checked and seems to be wrong - how do I fix it? The boot menus (Fedora with its different kernel versions, and Debian with Debian and Fedora as well) are now on the system partitions (/dev/sda3, /dev/sda6); is there anything else to do on these partitions so they can be chainloaded? Any help is greatly appreciated.
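
    "invalid signature" from chainloader means there is no valid boot-sector image in the target partition at all; on GPT, distro GRUBs typically don't install one there, so there is nothing to chainload. A sketch of the alternative that avoids boot sectors entirely - have the main GRUB2 read the other installation's config directly (GRUB2 syntax, not legacy menu.lst; the config paths are assumptions, Fedora may use /boot/grub2/grub.cfg instead):

        menuentry "Fedora" {
            set root=(hd0,gpt3)
            configfile /boot/grub/grub.cfg
        }
        menuentry "Debian" {
            set root=(hd0,gpt6)
            configfile /boot/grub/grub.cfg
        }

    With configfile, each distro keeps regenerating its own menu on kernel updates, and the main menu never goes stale.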


  • Rebuild Fedora 19 ISO adding Kickstart for USB install

    - by dooffas
    I am attempting to edit a Fedora 19 DVD ISO to add a kickstart file. I then need this ISO burnt to a USB stick for installation. The error I get when booting is:

        Warning: Could not boot.
        Warning: /dev/root does not exist

    To determine which part of the process is failing, I have broken it down into separate stages.

    Step 1: Burn the original ISO "Fedora-19-x86_64-DVD.iso" (available here) to a pendrive and see if that will install:

        dd if=/path/to/iso of=/dev/sdc

    Burning this image was successful and it installed without issue.

    Step 2: Extract the ISO, repackage it, burn it to a pendrive and see if that will install. PLEASE NOTE: the final command in this section has been broken into multiple lines for ease of reading; it was in fact run as a single command on one line.

        mkdir -p /mnt/linux
        mount -o loop /tmp/linux-install.iso /mnt/linux
        cd /mnt/
        tar -cvf - linux | (cd /var/tmp/ && tar -xf - )
        cd /var/tmp/linux
        xorriso -as mkisofs -R -J -V "NewFedoraImage" -o ouput/file.iso \
          -b isolinux/isolinux.bin -c isolinux/boot.cat \
          -no-emul-boot -boot-load-size 4 -boot-info-table \
          -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .

    This ISO was then burnt to a pendrive as before:

        dd if=/path/to/iso of=/dev/sdc

    This ISO burnt to the pendrive with no problem and will boot; I then see the Fedora options screen. After choosing either "Install Fedora 19" or "Test this media & install Fedora 19", I receive the errors highlighted above. This means the kickstart file is not to blame, but rather the repackaging of the ISO. Is there something I am missing in the repackaging process? Any input would be great! NOTE: if it is of any help, I attempted Step 2 with an Ubuntu Server ISO and the process was successful.
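
    A likely suspect is the changed volume label: on Fedora install media, the kernel command line in isolinux/isolinux.cfg locates the installer's second stage by label (inst.stage2=hd:LABEL=...), so rebuilding with -V "NewFedoraImage" leaves the kernel unable to find its root - hence "/dev/root does not exist". A hedged fix is to reuse the original label exactly (read it from isolinux.cfg; the label below is what the Fedora 19 DVD likely uses, so verify it first):

        grep stage2 isolinux/isolinux.cfg     # shows the LABEL the boot entries expect
        xorriso -as mkisofs -R -J -V "Fedora 19 x86_64" -o ouput/file.iso \
          -b isolinux/isolinux.bin -c isolinux/boot.cat \
          -no-emul-boot -boot-load-size 4 -boot-info-table \
          -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .

    Note that spaces in the label appear escaped in the cfg (e.g. Fedora\x2019\x20x86_64); the -V value must match the unescaped form. Ubuntu Server succeeding fits this theory, since its boot config doesn't locate the media by label in the same way.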


  • Adding a third disk as a single disk in a server with an existing RAID 1

    - by slowhandsolo
    I've got a ProLiant DL360 G5 server (Fedora 13) with two SAS disks in a hardware RAID 1, working fine. Now I have hot-plugged another SAS disk. I'd like to configure this new disk outside my RAID, as a single non-RAID disk (e.g. /dev/sdb). Even after rebooting the server, I can't see the new disk with "fdisk -l"; it displays only my hardware RAID, not the new disk.

        [root@myserver]# fdisk -l

        Disco /dev/cciss/c0d0: 300.0 GB, 299966445568 bytes
        Disposit. Inicio  Comienzo  Fin    Bloques    Id  Sistema
        /dev/cciss/c0d0p1 *      1    126     512000  83  Linux
        /dev/cciss/c0d0p2      126  71798  292422656  8e  Linux LVM

        Disco /dev/dm-0: 234.9 GB, 234881024000 bytes
        Disco /dev/dm-1: 10.5 GB, 10536091648 bytes
        Disco /dev/dm-2: 21.0 GB, 20971520000 bytes
        Disco /dev/dm-3: 31.5 GB, 31474057216 bytes
        Disco /dev/dm-4: 1577 MB, 1577058304 bytes

    However, I can see the new disk using the HP Array Configuration Utility CLI for Linux, "hpacucli":

        [root@myserver]# hpacucli
        => controller slot=0 physicaldrive all show status

        physicaldrive 1I:1:1 (port 1I:box 1:bay 1, 300 GB): OK
        physicaldrive 1I:1:2 (port 1I:box 1:bay 2, 300 GB): OK
        physicaldrive 1I:1:3 (port 1I:box 1:bay 3, 300 GB): OK

        => controller slot=0 pd all show detail

        Smart Array P400i in Slot 0 (Embedded)
          array A
            physicaldrive 1I:1:1
              Port: 1I
              Box: 1
              Bay: 1
            physicaldrive 1I:1:2
              Port: 1I
              Box: 1
              Bay: 2
          unassigned
            physicaldrive 1I:1:3
              Port: 1I
              Box: 1
              Bay: 3
              Status: OK
              Drive Type: Unassigned Drive

    As you can see, I've got two SAS disks in a RAID 1 and the new disk as "unassigned". Is there any way to work with the new disk as another non-RAID single disk? If relevant, I want to create a new partition on the new disk, format it with mkfs and mount it, but as I can't see it with fdisk, I don't know how to do it. Thanks!
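
    The Smart Array P400i doesn't expose raw disks to the OS; a drive only becomes visible once it belongs to a logical drive. The usual workaround is a single-disk RAID 0 volume, which should then appear as /dev/cciss/c0d1 and can be partitioned normally. A sketch (hpacucli syntax; worth double-checking against your firmware's help):

        => ctrl slot=0 create type=ld drives=1I:1:3 raid=0
        => ctrl slot=0 ld all show

        # back in the shell, the new logical drive should now be visible:
        [root@myserver]# fdisk -l /dev/cciss/c0d1

    Bear in mind a single-disk RAID 0 has no redundancy; losing that disk loses the whole volume.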


  • OpenSolaris live CD, NForce NIC driver, and NTFS USB mounting. Oh My!

    - by Jake Wharton
    I'm attempting to install OpenSolaris 2009.06 on my server. Before I do, I would like to test that everything works, and I am running into problems. It has an Abit AN-M2 motherboard with an NForce chipset. The driver config utility says that I need a third-party driver and links me to http://homepage2.nifty.com/mrym3/taiyodo/eng/. Scrolling to the bottom, I have downloaded both tgzs, just in case. Now the fun part: the only way to get these onto the computer is via a USB drive, since I can't access the network (and the install CD is in the drive, otherwise I'd just burn them to DVD). Since my USB key is NTFS-formatted, I cannot mount it: the install CD seems to lack NTFS drivers, which would require more downloaded packages. What should I do? The server will simply be a dumb NAS, and I know that other OpenSolaris-based flavors exist, such as Nexenta, but from what I have read, the stock install is likely the best. If this is not the case, and pursuing a different flavor is required or better, I will also accept that as an answer (but please don't jump straight to it).
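
    The path of least resistance is probably to reformat the key (or a spare one) as FAT32, which the live CD can mount natively as pcfs. A sketch of the Solaris side (the device path is a placeholder; rmformat lists the real one, and ':c' denotes the first FAT partition):

        # rmformat                                       # list removable devices, note the /dev/dsk path
        # mkdir -p /mnt/usb
        # mount -F pcfs /dev/dsk/c2t0d0p0:c /mnt/usb     # hypothetical device; adjust to match
        # cp /mnt/usb/*.tgz /tmp/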


  • Permission denied problem with FreeNAS + Transmission

    - by Torbjörn Karlsson
    Running FreeNAS 0.7.2 (5543) and Transmission 2.11. The problem is that I cannot save a torrent wherever I want. For example, I can save in /nmt/1-500gb/Tv/dexter, but I cannot save in /nmt/4-1000gb/tv/Lost. When I try to save in the Lost folder, I get a "permission denied" error in the web interface; when I try to save the same torrent file in the dexter folder, everything works fine. This is probably an easy thing to fix, but I'm new to FreeNAS. The user name for Transmission is TorrentUser, if that helps. Now I find that I cannot browse the disk in QuiXplorer either: I can browse /nmt/4-1000gb/ but not /nmt/1-500gb. When I try to browse /nmt/1-500gb I get "Unable to read directory".

        $ mount
        /dev/md0 on / (ufs, local)
        devfs on /dev (devfs, local)
        procfs on /proc (procfs, local)
        /dev/fuse1 on /mnt/5 - 500gb (fusefs, local, synchronous)
        /dev/fuse2 on /mnt/2 - 1000gb (fusefs, local, synchronous)
        /dev/fuse3 on /mnt/3 - 1000gb (fusefs, local, synchronous)
        /dev/fuse4 on /mnt/4 - 1000gb (fusefs, local, synchronous)
        /dev/fuse5 on /mnt/320GB - USB (fusefs, local, synchronous)
        /dev/md1 on /var (ufs, local)
        /dev/da0a on /cf (ufs, local, read-only)
        /dev/fuse0 on /mnt/1 - 500gb (fusefs, local, synchronous)

    Doesn't work: 1 - 500gb, 2 - 1000gb, 3 - 1000gb. Works: 320GB - USB, 4 - 1000gb, 5 - 500gb. And these three disks are the same disks that I can save my torrents to. P.S. Every disk works perfectly when I use FTP...
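
    Since FTP works everywhere, the disks themselves are fine, and this smells like plain Unix permissions: the daemons run as different users (the FTP server typically as the share owner, Transmission as TorrentUser, QuiXplorer as the web server user). A hedged check-and-fix from the FreeNAS shell (paths as shown by mount; adjust to the actual download folders):

        $ ls -ld "/mnt/4 - 1000gb/tv" "/mnt/4 - 1000gb/tv/Lost"   # who owns them, what modes?
        # if TorrentUser lacks write access, hand the download tree over:
        $ chown -R TorrentUser "/mnt/4 - 1000gb/tv"
        $ chmod -R u+rwX "/mnt/4 - 1000gb/tv"

    The same ls/chown check on "/mnt/1 - 500gb" against the web server's user should explain the QuiXplorer "Unable to read directory" error too.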


  • Can I format a Veritas cluster shared volume from Windows?

    - by spaghettidba
    We have a Microsoft failover cluster with dynamic disks managed by Veritas Storage Foundation. Today the sysadmins added a new disk for SQL Server, but the cluster size on the volume was wrong, so I issued a quick format to change it. The disk volume failed, the SQL Server group failed as well, and the cluster became unresponsive. After some minutes I managed to fail over to a passive node. The SAN admins say it's my fault because I shouldn't have formatted the disk from the Windows format applet; I should have used Veritas Enterprise Administrator instead. Can a format operation bring a whole cluster group offline this way? Relevant error messages - from the event log:

        The cluster resource host subsystem (RHS) stopped unexpectedly. An attempt will be made to
        restart it. This is usually due to a problem in a resource DLL. Please determine which
        resource DLL is causing the issue and report the problem to the resource vendor.

    From the cluster.log:

        ERR [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)'
        because of 'Control(STORAGE_GET_DISK_INFO_EX) to resource 'NameOfTheDiskGroup' timed out.'

    Excerpt from the Veritas/Symantec documentation:

        Note: Before manually creating the resource, you must format the cluster-shared volume
        with NTFS using the VEA GUI and mount it on the node where you are trying to create
        the resource.

    Does this mean the disk cannot be formatted from Windows? I don't read it that way. For the record, I have formatted many disks using the Windows applet in the past and nothing bad happened.


  • How to move an mdadm RAID drive (EBS-based) to a different AWS instance

    - by Stanley
    We have a media-rich web application hosted on AWS, with several web servers and an NFS server. On the NFS server (a Linux server) several EBS volumes are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on it, that would render our site unusable for the hours while Amazon is working on it. We want to try to prevent this downtime. I was thinking we could avoid it by setting up a new server temporarily, attaching the EBS drives (the RAID volume) to that server, and having our web servers point there during maintenance. This is a very high-risk operation, since it involves several terabytes of our production data. What would be a safe way to move our logical RAID drive (md0) over to a new Amazon instance? I was hoping I could start by building the new server, mounting the EBS volumes and assembling the RAID partition using "mdadm --assemble --scan" before unmounting from the existing instance, so that I could first test that everything works - thus having it mounted on two instances at the same time - but I don't believe that is possible with the way filesystems work. "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted in the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so it is not a practical solution. Any ideas?
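
    For what it's worth, an EBS volume can only be attached to one instance at a time anyway, so the test-while-still-attached idea is out regardless of filesystem semantics. A sketch of the move itself (mount point and device names are placeholders; the md metadata on the members is what lets --assemble find them):

        # on the old instance
        umount /srv/nfs                  # hypothetical mount point
        mdadm --stop /dev/md0

        # detach each EBS volume and attach it to the new instance
        # (via the AWS console or the EC2 API tools)

        # on the new instance
        mdadm --assemble --scan          # or list members explicitly:
                                         # mdadm --assemble /dev/md0 /dev/xvdf /dev/xvdg ...
        mount /dev/md0 /srv/nfs

    Rehearsing the whole sequence first on a throwaway array of small scratch volumes is cheap insurance before touching the production set.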


  • MD RAID 1 with external bitmap doesn't fully resync

    - by user64744
    I have an interesting configuration: a dual-boot system with a RAID 1 that needs to be visible in both Windows and Linux. The Windows install is Windows 7 Enterprise, and the Linux install is Kubuntu 10.04. To get the RAID to work, I set it up using Windows's "Dynamic Disks" RAID 1, and brought it up in Linux using MD with no persistent superblock and a write-intent bitmap on another partition. (Without this bitmap, MD had no way of knowing that the array was in sync, and would do a complete resync every time the array started.) The array is assembled like so:

        mdadm --build /dev/md1 -l 1 -n 2 -b /var/local/md1.bitmap /dev/sdb2 /dev/sdc2

    I expected that the first time I ran this command, it would resync the array, write out a bitmap with no dirty chunks, and all would be good. This wasn't the case: after completing the resync, the bitmap was mostly clean, but about 5% dirty blocks remained, as revealed by:

        mdadm -X /var/local/md1.bitmap

    I didn't mount the filesystem on /dev/md1 or touch it in any other way. I then found that stopping and restarting the array:

        mdadm --stop /dev/md1
        mdadm --build /dev/md1 -l 1 -n 2 -b /var/local/md1.bitmap /dev/sdb2 /dev/sdc2

    did indeed read in the bitmap, with an ensuing resync that went quickly because most of the blocks were marked clean. The confusing part is that this resync further reduced the number of dirty blocks, but still did not remove all of them. By repeatedly stopping and restarting I could slowly bring the dirty block count down to around 0.6%, where it seemed to level out. Any ideas what could be causing this? It smells to me of a race condition somewhere that leads to blocks either being skipped over during synchronization or not properly cleared from the bitmap, but I really have no evidence to prove it. It doesn't look like a hardware issue, since both drives are new and have zero read errors and reallocated sectors reported by smartctl -a.
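
    One way to see whether the members actually differ (as opposed to the bitmap just never being flushed clean) is the md sync_action interface, which compares the mirrors directly and ignores the bitmap's bookkeeping; a sketch:

        # count real block differences between the two halves
        echo check > /sys/block/md1/md/sync_action
        cat /proc/mdstat                           # wait for the pass to finish
        cat /sys/block/md1/md/mismatch_cnt         # 0 here means the data really is in sync

        # if mismatches do exist, rewrite them from the first member
        echo repair > /sys/block/md1/md/sync_action

    If mismatch_cnt comes back 0 while mdadm -X still shows dirty chunks, the leftover dirty bits are cosmetic - plausibly bits that only get cleared lazily, e.g. on a clean --stop of the array.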


  • Install Windows 7 from ISO image

    - by Albert
    Hi, I have an ISO file of the Windows 7 DVD and I want to install it on my PC, which currently only runs Linux. I don't have a DVD drive. I have some unpartitioned space on one disk where I want to install it. When I am doing this for Linux, I usually just create the partitions from the running system, format them, mount them, copy the files over, chroot into it, set things up, and I can boot into it (or I use one of the countless available scripts which do exactly that automatically). However, I have no idea how to do the same thing with Windows. So far I have tried it with VMware: I gave the VM direct full access to the disk where I want to install, installed Windows there, then tried to boot into it natively. The Windows logo showed up, but after maybe 3 seconds or so it crashes; safe mode also crashes. I had already expected that it would probably behave exactly this way, because I have heard that Windows is quite sensitive to hardware changes (i.e. between the VMware hardware and the real hardware). However, how can I fix it now so that it works? Or I could just delete it again and start over - but how, exactly? I also searched for ways to boot directly into an ISO file. There seem to be ways to do that via GRUB (and maybe some additional boot loader), although quite complicated. I already tried one method (GRUB: map ...iso (hdX)), but that didn't work. Also, even if it does work, I will get into trouble when I boot into the newly installed Windows and it requests the DVD (because it does that on the first boot into the new system). It all seems quite complicated. Isn't there some easy way, like the one I would use for Linux? Or what would be the easiest way to get what I want?
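
    For the "map the ISO" route: the map command belongs to GRUB4DOS (a GRUB-legacy fork), not to GRUB 2, which may be why it failed. A hedged menu.lst sketch for GRUB4DOS (/win7.iso is a placeholder path on a partition GRUB4DOS can read; --mem loads the image into RAM, which needs enough memory but sidesteps the file-contiguity requirement):

        title Windows 7 installer (ISO)
        map --mem /win7.iso (hd32)
        map --hook
        chainloader (hd32)
        boot

    Even then, Windows setup famously loses track of a mapped ISO once its own loader takes over, so the more reliable DVD-less path is usually a USB stick prepared as an installer: partition marked bootable, formatted NTFS or FAT32, ISO contents copied over, and a Windows boot sector written (e.g. with the ms-sys tool from Linux).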


  • LVM and cloning HDs

    - by jcea
    Using Linux, I have several backup levels. One of them is a periodic sector-by-sector copy (using dd) of my laptop hard disk to an external USB disk. Yes, I have other backups too, like a remote rsync. This approach (the disk dd) is OK when cloning a HDD with no LVM volumes, since I can plug in the external disk at any time and mount the partitions simply by mounting /dev/sdb* instead of /dev/sda*. Trivial and handy. Today I moved ALL of my hard disk (including /boot) to LVM. Everything works fine. I will stress it for a couple of days, and then I will do a sector-by-sector copy to my external hard disk. Now I have a problem, I guess. If in the future I plug in the external USB HDD to recover a file, the OS will detect a duplicate LVM configuration, with the same name and the same UUID. Even after a vgrename (which LVM would be renamed, the internal HDD's or the external HDD's?), the cloned UUID will not change. Is there any command to change the name and UUID? Ideally I would clone the HDD and then change the LVM group name and its UUID, but I don't know how to do it. Another related issue: in the past I have booted my laptop from the external disk, using the BIOS boot menu and changing GRUB entries manually to boot from /dev/sdb instead of /dev/sda. But now my current GRUB configuration boots directly from an LVM logical volume - something like set root='(LVM-root)' in my grub.cfg. So... what is going to happen with duplicated volumes? Any suggestion? I guess I could repartition my external hard disk and change my backup strategy from dd to rsync, but this disk has Windows installed too, and I really would like to have a physical, "real" copy.
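
    There is a tool aimed at exactly this situation: vgimportclone (shipped with LVM2), which renames the cloned VG and regenerates the VG and PV UUIDs so both copies can coexist. A sketch, with the external disk's PV path as a placeholder:

        # with the USB clone plugged in (its PV here assumed to be /dev/sdb2):
        vgimportclone --basevgname backupvg /dev/sdb2
        vgscan && vgchange -ay backupvg     # the clone is now addressable under the new name

    Roughly the same effect can be had by hand with vgrename plus pvchange --uuid and vgchange --uuid while only one copy is connected, but vgimportclone automates the dance.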


  • Managing disk in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32 GB of ECC RAM. My issue is the following: I would like to have a FreeBSD file server with ZFS managing the disks, but I also need other VMs for e.g. a shell/git server, a mail server, etc. I'm wondering how to deal with the following issues:

    1. I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system via PCI passthrough?
    2. I want to maximize the reliability of the setup. On what disks should I install the hypervisor and keep the VM system disks?

    For (2) I have the option of having a RAID setup on the SAS controller and using that as the system disk to store the hypervisor as well as the VM images. However, that makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs. BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.


  • Hugepages not utilized by MySQL 5.0, CentOS 5

    - by TechZilla
    I've set up hugepages, but I'm not seeing any of them reserved. Have I missed a step, or is MySQL for some particular reason unable to utilize the hugepages? I have not created a hugetlbfs mount, although from what I've read, MySQL would not request pages in that manner. If I'm wrong, please let me know, as that would be a trivial solution. Almost all my MySQL tables use InnoDB. NOTE: I did create a hugetlbfs; no change, as expected. Is it possible that rebooting would rectify the situation? I would rather not go through that procedure, as this is high availability, but I would do so if necessary. These are the configurations which I believe are relevant.

    /etc/sysctl.conf:

        ## Huge Pages
        vm.nr_hugepages = 4096
        vm.hugetlb_shm_group = 27
        ## SHM
        kernel.shmmax = 34359738368
        kernel.shmall = 8589934592

    /etc/security/limits.conf:

        mysql soft nofile 12888
        mysql hard nofile 51552
        @mysql soft memlock unlimited
        @mysql hard memlock unlimited

    /etc/my.cnf:

        [mysqld]
        large-pages

    Diagnostics:

        # grep Huge /proc/meminfo
        HugePages_Total:  4096
        HugePages_Free:   4096
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

        # id mysql
        uid=27(mysql) gid=27(mysql) groups=27(mysql) context=root:system_r:unconfined_t:SystemLow-SystemHigh

        # tail -6 /var/log/mysqld.log
        InnoDB: HugeTLB: Warning: Failed to allocate 1342193664 bytes. errno 12
        InnoDB HugeTLB: Warning: Using conventional memory pool
        120808 15:49:25  InnoDB: Started; log sequence number 0 1729804158
        120808 15:49:25 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution

    I would really appreciate any help; I'm completely out of ideas. If I missed any relevant configs or diagnostics, please comment and I'll add them to the question.
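
    errno 12 is ENOMEM, and with 8 GB of hugepages free, the likely cause is the locked-memory ulimit: limits.conf is applied by pam_limits at login, but a daemon started from an init script never passes through PAM, so mysqld can still be running with the default small memlock limit. A hedged way to verify and fix (the init-script edit is an assumption for a sysvinit CentOS 5 setup, and /proc/<pid>/limits may not exist on older kernels):

        # what limit is the running daemon actually under?
        grep "locked memory" /proc/$(pidof -s mysqld)/limits

        # if it is not 'unlimited': raise it where the daemon starts,
        # e.g. add near the top of /etc/init.d/mysqld:
        #   ulimit -l unlimited
        # then: service mysqld restart

    No reboot should be needed; restarting mysqld under the corrected limit is enough for InnoDB to retry the HugeTLB allocation.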


  • How to set up a .ssh directory inside an encrypted volume on Mac OS X and still have public key logins?

    - by Vitaly Kushner
    I have my .ssh directory inside an encrypted sparse image, i.e. ~/.ssh is a symlink to /Volumes/VolumeName/.ssh. The problem is that when I try to ssh into that machine using a public key, I see the following error message in /var/log/secure.log:

        Authentication refused: bad ownership or modes for directory /Volumes

    Is there any way to solve this in a clean way?

    Update: the permissions on ~/.ssh and authorized_keys are right:

        > ls -ld ~
        drwxr-xr-x+ 77 vitaly staff 2618 Mar 16 08:22 /Users/vitaly/
        > ls -l ~/.ssh
        lrwxr-xr-x 1 vitaly staff 22 Mar 15 23:48 /Users/vitaly/.ssh@ -> /Volumes/Astrails/.ssh
        > ls -ld /Volumes/Astrails/.ssh
        drwx------ 3 vitaly staff 646 Mar 15 23:46 /Volumes/Astrails/.ssh/
        > ls -ld /Volumes/Astrails/
        drwx--x--x@ 18 vitaly staff 1360 Jan 12 22:05 /Volumes/Astrails//
        > ls -ld /Volumes/
        drwxrwxrwt@ 5 root admin 170 Mar 15 20:38 /Volumes//

    The error message says the problem is with /Volumes, but I don't see the problem. Yes, it is o+w, but it is also +t (sticky), which should be OK - but apparently isn't. The problem is that I can't (or rather shouldn't) change the permissions on /Volumes, but I do want public key login to work. First I thought of mounting the image somewhere other than /Volumes, but it is automounted at login by the standard OS X mounting. I asked about it here: "How to change disk image's default mount directory on osx". The only answer I got was "you can't" ;) I could hack my way around by writing a shell script that manually mounts the volume at a non-standard location, but that would be a gross hack. I'm still looking for a cleaner way to do what I need.
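
    For context: sshd with StrictModes (the default) walks every directory above authorized_keys and rejects any group- or world-writable component, which /Volumes always is; the sticky bit doesn't exempt it. Two hedged sshd_config alternatives that avoid touching /Volumes (the keys-directory path is a placeholder):

        # option 1: keep the public keys outside the encrypted image entirely
        AuthorizedKeysFile /etc/ssh/authorized_keys/%u

        # option 2 (weakens a safety check; use only if that's acceptable):
        StrictModes no

    Option 1 has the side benefit that login works even before the encrypted image is mounted, and the private keys can stay inside the image either way; only authorized_keys needs a home sshd trusts.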

