Search Results

Search found 2630 results on 106 pages for 'mount'.


  • Execute encrypted files but don't let anybody read them.

    - by Stebi
    I want to provide a virtual machine image with an installed web application. The user should be able to boot the VM (not log in, just boot) and a webserver should start automatically. The point is that I want to hide the (Ruby) source code of the web application from everyone, as there is no obfuscator for Ruby. I thought I could use file system encryption to encrypt the directory with the source code (or even a whole partition). But the webserver user must be able to read it automatically after booting. Nobody is allowed to log in as the webserver user (or any other user), so nobody else can read the contents. My questions are: Is this possible? Because I give away the whole VM, everybody could mount its virtual discs and read them (except the encrypted one). Is it then possible to find the key the webserver user needs to decrypt the files, and decrypt them manually? Or is it safe to give such a VM away? The problem is that everything needed to decrypt must be included somewhere in the VM, otherwise the webserver cannot start automatically. Maybe I'm completely wrong and you have another tip for securing the source code.
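    To make the constraint concrete, here is a minimal sketch of the kind of boot-time setup being described, assuming LUKS/dm-crypt with a key file (paths and names are hypothetical). It also illustrates the core problem: the key file has to live somewhere inside the image, so anyone holding the image can read it too.

        # run at boot, before the webserver starts
        cryptsetup luksOpen --key-file /root/.app.key /dev/vdb1 appsrc   # key file lives inside the VM image
        mount /dev/mapper/appsrc /opt/app                                # decrypted source appears here
        # the webserver init script then serves /opt/app as usual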

    Read the article

  • Sparc Solaris 2.6 will not boot

    - by joshxdr
    I have a very old Sparc Solaris network that was working fine last week, but after a power outage none of the workstations will boot. The network looks like this:
        host A: Solaris 2.6, shares /export/home to the network by NFS
        host B: Solaris 8, runs the NIS server. Mounts /export/home/ by NFS.
        host C: RHEL5, shares /share to the network by NFS. Mounts /export/home/ by NFS.
    I figured that the main problem was host A, since you need the home directories available for the other workstations to boot(?). Host A does not mount anything by NFS as far as I know. However, this workstation will NOT boot. The OBP bootup sequence looks like this:
        Boot device <blah>
        configuring network interface le0
        Hostname <hostname>
        check file system <everything ok>
        check ufs filesystem <everything ok>
        NIS domainname is <name>
        starting router discovery
        starting rpc services: rpcbind keyserv ypbind done
        setting default interface for multicast: add net 224.0.0.0: gateway <hostname>
        <HANGS at this point>
    Is there some kind of debug mode so that I can get more detail as to why the workstation won't boot? Is my network structure inherently susceptible to power outages? Is there a way I can boot up to a command line so I can at least turn off the NFS mounting?
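    For reference, a sketch of the standard ways to get more detail or a shell before the hang, assuming console access at the OpenBoot ok prompt (the exact rc script name for the NFS client may differ on Solaris 2.6):

        ok boot -v     # verbose boot: prints each step as it runs
        ok boot -s     # single-user mode: gives a root shell before the rc2/rc3 scripts run
        # from the single-user shell the NFS client startup could be disabled temporarily, e.g.
        # mv /etc/rc2.d/S73nfs.client /etc/rc2.d/s73nfs.client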

    Read the article

  • What is the difference between a PDU and a power strip (both 120V, 15A)?

    - by rob
    I just chatted with an APC rep about upgrading the UPSes at our office. She recommended a single higher-capacity 6-outlet Smart-UPS to replace the four Back-UPS units we currently have. When I asked how she recommended plugging in all the current devices, she recommended using APC's AP9567 PDU, but said not to use a power strip. At first she said I had to use an APC brand PDU, but after I inquired about using a Tripp-Lite PDU, she said any brand PDU would be fine. The APC PDU previously referenced looks like a standard 120V power strip with overload protection but no surge protection. Other than overload protection (which seems redundant if plugging into the UPS), is there something else I'm missing, or should any power strip (without surge protection) be fine? Edit: I didn't mention it earlier, but we don't have a proper rack--though I did still plan to mount the PDU or power strip to something. I guess I'm wondering if there's any special reason I should pay as much as $180 for the low-end APC PDU (which just looks like a power strip to me) vs. $20-$30 for a workbench power strip.

    Read the article

  • STOP 0x7b booting from iSCSI

    - by Michael
    Hi, I have a Windows 2008 SBS running. It boots off iSCSI. That setup worked for months until yesterday. I intended to reboot and got a: STOP 0x0000007b INACCESSIBLE_BOOT_DEVICE, and I have no idea why. My setup hasn't changed. No new controller, no new or changed iSCSI targets, no new network card or IP address changes. I had all Windows Updates on it. Last known good: same STOP. Allow unsigned drivers: same STOP. Safe mode (all variants): same STOP. Mounting the target from a client: works. Filesystem check: fine. I booted off the SBS DVD, but in the computer repair options my target doesn't appear. When I choose setup, the target appears. So, how can I diagnose what's going wrong? Any helpful tools? Any hints? Thanks in advance, Michael

    Read the article

  • Using Samba to share a folder from a Linux guest with a Windows host in VirtualBox

    - by AmV
    I would like to share a folder from a Linux guest with a Windows host (with read and write access if possible) in VirtualBox. I read in these two links: here and here that it's possible to do this using Samba, but I am a little bit lost and I need more information on how to proceed. So far, I managed to set up two network adapters (one NAT and one host-only) and install Samba on the Linux guest, but now I have the following questions:
        What do I need to type in samba.conf to share a folder from the Linux guest? (The tutorial provided in one of the links above only explains how to share home directories.)
        Are there any Samba commands that need to be executed on the guest to enable sharing?
        How do I make sure that these folders are only available to the host OS and not on the Internet?
        Once the Linux guest is set up, how do I access each of the individual shared folders from the Windows host? I read that I need to mount a drive on Windows to do this, but do I use Samba logins or Linux logins? Also, do I use localhost, or do I need to set up an IP for this?
    Thanks!
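    For illustration only (share name, path, user, and addresses are hypothetical), a minimal share stanza in the guest's Samba configuration, restricted to the host-only network, could look like this; note that the file is normally /etc/samba/smb.conf rather than samba.conf:

        [global]
           interfaces = lo 192.168.56.0/24
           bind interfaces only = yes

        [projects]
           path = /home/user/projects
           valid users = user
           read only = no
           browseable = yes

    After adding a Samba password for the Linux user (sudo smbpasswd -a user) and restarting the Samba service, the share could be mapped from the Windows host with something like net use Z: \\192.168.56.101\projects, where 192.168.56.101 is the guest's host-only address. The login used there is the Samba login (same username, the smbpasswd password), not a Windows one, and binding Samba to the host-only interface keeps the share off any public network.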

    Read the article

  • Delay init from starting a service for a period of time?

    - by Matthew
    I am trying to get a rudimentary NFS server up and running. Right now the server is configured as an NFS server due to a workaround for a vendor issue not supporting direct attached clustered storage, which we are trying to get them to resolve. The vendor software is Splunk. The Splunk feature we are using requires files to be located on shared storage (which for us is /mnt/nfs until they support a real clustered filesystem). Currently the server has a GFS2 filesystem mounted at bootup (it is the only server with the filesystem actively mounted, so there should be no problems with locking). We went with GFS2 so switching over to a clustered filesystem is easy should the vendor begin supporting it. NFS is configured to mount that filesystem at /mnt/nfs, which the Splunk installation then sees. Splunk is configured to find its configuration files in /mnt/nfs. However, I am running into a problem where the Splunk daemon starts before NFS is finished loading, and because it sees nothing at /mnt/nfs it starts creating files there, and then when the files disappear (NFS finishes mounting the share), Splunk craps out. Splunk is set to run at runlevel 3, S90. NFS is set at runlevels 2-5, S60. Is there any way to delay the startup of the Splunk process further?
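    One common workaround, sketched here on the assumption that the stock /etc/init.d/splunk script can be edited (the paths come from the question; the loop itself is hypothetical), is to make the init script wait until the share is really mounted before launching the daemon:

        # near the top of the start) case in /etc/init.d/splunk
        for i in $(seq 1 30); do
            mountpoint -q /mnt/nfs && break   # stop waiting once the NFS share is mounted
            sleep 2
        done

    Bumping Splunk's symlink from S90 to S99 only helps if the NFS init script blocks until the mount completes; since Splunk already starts after NFS here (S90 vs S60), the likelier cause is that the mount finishes asynchronously, which the wait loop addresses.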

    Read the article

  • Move files from ftp server to s3

    - by lev
    I would like to set up an FTP server where users will upload files, and for each file, put it on S3 storage and delete it from the FTP server. (The server runs on EC2 Ubuntu.) Here is what I have already tried, with no success:
        Mount the S3 bucket using s3fs. I followed those instructions, but there is a bug in the latest version of s3fs that prevents it from working. The bug was fixed on the develop branch, but I don't want to use an unstable version in production.
        Use vsftpd and s3cmd sync via cron to sync the files periodically. The problem with that approach is that s3cmd can start running in the middle of a file upload and start syncing the incomplete file. Also, s3cmd doesn't give any feedback if the sync fails, so I have no way of knowing if I can delete the files after the sync command finished running.
        Use pure-ftpd's upload script feature (which allows running a script after a file has finished uploading), but I noticed that if the file upload failed in the middle, the script runs anyway, and I have no way of knowing if the upload was successful or not.
    I've been at it for a few days now, and I'm at a loss here. Any suggestions will be welcomed.
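    For the pure-ftpd route, a sketch of what the upload script could look like (bucket name and paths are hypothetical): pure-ftpd notifies the pure-uploadscript helper, which runs the script with the path of the completed upload as its first argument, so the script can gate the local delete on s3cmd succeeding:

        #!/bin/sh
        # /usr/local/bin/push-to-s3.sh -- invoked by pure-uploadscript with the uploaded file as $1
        f="$1"
        if s3cmd put "$f" s3://my-bucket/incoming/; then
            rm -f -- "$f"                                    # only remove locally once S3 accepted it
        else
            logger -t push-to-s3 "S3 upload of $f failed, keeping local copy"
        fi

    The helper is started with something like pure-uploadscript -B -r /usr/local/bin/push-to-s3.sh, and pure-ftpd itself has to run with its upload-script option enabled (the -o / --uploadscript flag, depending on how it is packaged).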

    Read the article

  • Poor write performance on Debian server running NFS with 22TB exported JFS filesystem

    - by user143546
    I am currently running a Debian server that is exporting a large JFS filesystem (22TB) over NFS (nfs-kernel-server). When attempting to write to the NFS share, the performance is very poor. The 22TB disk is sitting on a NAS mounted using iSCSI. It will burst for a moment near the expected line speed, and then sit idle for several seconds with very little traffic measured, in the low kB/sec. The I/O wait peaks on writes. When reading from the NFS mount, the system operates at the expected speed (11MB/sec). The issue does not occur when using SFTP, rsync, or local copying (non-NFS). The issue persists between stable and testing releases. On the same machine I have a 14TB ext4 filesystem using the exact same export configuration that does not share the issue; that share is not in regular use and thus not consuming resources.
    NFS server:
        cat /etc/exports
        /data2 10.1.20.86(rw,no_subtree_check,async,all_squash)
        cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq
        cat /etc/default/nfs-kernel-server
        RPCNFSDCOUNT=8
        RPCNFSDPRIORITY=0
        RPCMOUNTDOPTS=--manage-gids
        NEED_SVCGSSD=
        RPCSVCGSSDOPTS=
    NFS client:
        cat /etc/fstab
        10.1.20.100:/data2 /root/incoming nfs rw,noatime,soft,intr,noacl 0 2
        cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq
        cat /proc/mounts
        10.1.20.100:/data2/ /root/incoming nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.20.86,minorversion=0,addr=10.1.20.100 0 0
    This problem has me pretty stumped. Any help would be greatly welcomed. Thanks.
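    A minimal way to see which layer the stalls belong to, purely as a sketch (test file names are hypothetical), is to compare a sequential write over NFS with the same write done directly on the server, and to watch the client-side NFS counters while doing it:

        # on the client: sequential write over the NFS mount
        dd if=/dev/zero of=/root/incoming/testfile bs=1M count=1024 conv=fdatasync
        nfsstat -c          # client op and retransmission counters, run before and after
        # on the server: same write straight onto the JFS/iSCSI mount, bypassing NFS
        dd if=/dev/zero of=/data2/testfile bs=1M count=1024 conv=fdatasync

    If the direct write on the server shows the same burst-then-stall pattern, the NFS layer is off the hook and the iSCSI/JFS side is the place to look.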

    Read the article

  • KVM and libvirt: How to configure a new disc device to an existing VM?

    - by initall
    I've got an Ubuntu 9.04 server running two VMs. In /etc/libvirt/qemu/machine1.xml two disk devices are defined like this:
        <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk0.qcow2'/>
            <target dev='hda' bus='ide'/>
          </disk>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk1.qcow2'/>
            <target dev='hdb' bus='ide'/>
          </disk>
    I need more storage space in at least one of the devices and thought about adding a third hdc device by simply adding one in the same style as above and re-organising my mount structure. (The virtual sizes of the current qcow2 files are unfortunately limited.) My problem is that reloading libvirtd and restarting the VM do not result in a new visible device (checked with fdisk). I'm aware of extending an existing qcow2 file (converting to raw format, cat-ing/adding the new one, using something like gparted), but only as a last resort. Hopefully it's something very simple I'm missing?
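    As a sketch (file name and size are hypothetical), the usual sequence is to create the new image first and then make libvirt aware of it; editing the XML under /etc/libvirt/qemu by hand is not picked up until the domain is redefined, which would explain the device not appearing:

        qemu-img create -f qcow2 /vserver/machine1/disk2.qcow2 100G

    with a third disk stanza added alongside the existing two (if a CD-ROM already occupies hdc, hdd is the next free IDE target):

        <disk type='file' device='disk'>
          <source file='/vserver/machine1/disk2.qcow2'/>
          <target dev='hdc' bus='ide'/>
        </disk>

    followed by virsh define /etc/libvirt/qemu/machine1.xml and a full shutdown and start of the guest (a reboot issued inside the guest keeps the old device list). On newer libvirt the same could be done in one step with virsh attach-disk ... --persistent, but the define-and-restart route also works on the older libvirt shipped with Ubuntu 9.04.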

    Read the article

  • virtual machines, dual booting and data disks on SSD

    - by stevemarvell
    This is in planning, so if I've got the strategy wrong, please let me know. There are multiple questions here, but I think they all degenerate to the same answers. The hardware is a laptop with a single SSD. I'm trying not to lose the performance of the SSD. I plan a natively dual-booting Windows (plus Cygwin) and Linux machine which is my BYOD and represents the development environment. I keep the codebase on a shared partition (though sometimes this is an external Thunderbolt SSD) which can be natively "mounted" by whichever OS is in operation. I boot into one or the other environment depending on the task in hand. Sometimes I have to develop with Windows tools, but generally, Linux is my preferred development environment. It would be ideal if I could VM the other OS and run either in either. I'm going to assume, because I've not found a sensible VM-based solution, that I have to get Samba involved to share the code partition between VMs. Is this going to blow my SSD performance in the VM? The client also supplies me with a VM for the target environment, usually Linux. This is not often suited to development and is used for testing only. I normally keep two copies of this, one as a sandbox and one which I deploy to using the client's preferred method. I keep these VM snapshots on the shared partition. The latter is interacted with over the network and so has no disk sharing requirements. However, it would be useful for the sandbox to be able to "mount" the code base from the natively running OS. Is this Samba or NFS again, depending on the native OS? Am I missing a trick which allows this to all work smoothly with all four environments running at once without losing the SSD performance?
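    If NFS ends up being the answer for the Linux-native case, a sketch of what the sandbox VM's view of the codebase might look like (paths, network range, and addresses are hypothetical, assuming a host-only/NAT virtual network):

        # on the natively running Linux host, /etc/exports:
        /ssd/codebase  192.168.122.0/24(rw,sync,no_subtree_check)
        # then: exportfs -ra
        # inside the sandbox VM:
        mount -t nfs 192.168.122.1:/ssd/codebase /mnt/codebase

    When Windows is the natively booted OS, the equivalent would be sharing the code partition over SMB and mounting it in the sandbox VM with mount -t cifs; either way the VM only touches the SSD through whichever OS currently owns the partition.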

    Read the article

  • Unix apt-get doesn't download from NFS location

    - by pravesh
    I switched to Unix 3 months ago and am trying to understand the install process, and in particular apt-get. I am able to successfully install and download packages when I configure my repository on an http location in the /etc/apt/sources.list file, e.g.
        deb http://web.myspqce.com/u/eng/rose/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free
    This will download (to /var/cache/apt/archives) and install the package when I use apt-get install. When I change the source location to file instead of http (an NFS mount point), the package gets installed but NOT downloaded into /var/cache/apt/archives:
        deb file:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free
    Please let me know if there is any configuration or setting I have to make to get apt-get to both download and install the package when I use (NFS) file:/ instead of http:/ in sources.list. To achieve this, I can use apt-get --download-only and then apt-get install in two separate calls, but I want to know why the package is not getting downloaded with apt-get install when used with file:/ in sources.list.

    Read the article

  • On linux, what does it mean when a directory has size 0 instead of 4096?

    - by kdt
    Here's a strange thing I haven't seen before -- a directory whose size is reported by ls as 0 instead of 4096, and I can't create any files within it.
        # ls -ld lib home
        drwxr-xr-x.  2 root root    0 Feb  7 03:10 home    <-- it has zero size
        dr-xr-xr-x. 11 root root 4096 Feb  4 09:28 lib
        # touch home/foo
        touch: cannot touch `home/foo': No such file or directory    <-- and I can't create files in it
        # rm home
        rm: cannot remove `home': Is a directory    <-- look, it really is a dir
    So what does it mean for a directory to have size 0 instead of 4096? Filesystem is ext4 on fedora core 14. The output of mount is:
        /dev/mapper/vg_dev-lv_root on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
        /dev/vda1 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
        sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
    Output of du -s /home:
        0 /home
    Output of stat /home:
        File: `/home'
        Size: 0          Blocks: 0          IO Block: 1024   directory
        Device: 15h/21d   Inode: 34913      Links: 2
        Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
        Access: 2011-02-07 03:45:46.188995765 -0800
        Modify: 2011-02-07 03:11:59.980995019 -0800
        Change: 2011-02-06 07:58:45.874995002 -0800

    Read the article

  • a brand new FS based on a database without using fuse

    - by Devrim
    Hi all, to serve millions of files out of a single directory, to be able to connect to a drive from hundreds of endpoints, and for some other reasons (to avoid gluster/NFS/all FS-based networking solutions), I want to evaluate the possibility of making a filesystem that's based on MongoDB (or any other database). Basically, it works like fusefs: every single file is kept in Mongo GridFS. In theory, I do:
        mount mongodbfs /mountPoint mongodb://localhost
    then when I say touch /mountPoint/test.txt this file is inserted into MongoDB. This FS will also store uid/gid and perms with the file; we can throw hundreds of servers at it, and no useradd will be necessary. I'm not thinking of including all the features of an FS, just the ones we need. My question is, how do I start my quest in finding resources, books, links, people, and developers who'd help me implement this, at least as a proof of concept? Is it feasible? What should I expect as a timeline for such an undertaking? Please only think about a gazillion small files and folders.

    Read the article

  • Dropped WD External Harddisk, now it's shown as "Not initialized"

    - by Phelios
    So, the WD My Passport external hard disk was dropped, and after that the computer is unable to read it anymore. I was hoping I could just find another enclosure to check whether the hard disk is still readable, but it looks like the drive itself is not a normal SATA or PATA drive; I think it's modified, so I can't find another enclosure to try it in. In the computer, I can still see the drive in "Disk Management", but it's shown as uninitialized, with no size and no drive letter. I've also tried a couple of recovery tools. Some can't detect it at all; there is one (Find and Mount) that can detect it but shows 0 size. None of them can recover the data. WD is willing to replace it with a new one, but I still need to recover the data. Is there any way I can recover the data? UPDATE: I tried initializing it from the Windows Disk Management, but it gives the error "The request could not be performed because of an I/O device error."

    Read the article

  • EC2 AMI won't boot after edit

    - by Eric Lars0n
    I did something stupid: I got a new laptop and copied everything over to the new one, then wiped the old one clean. Then I realized that I forgot to copy the private key out of .ssh that I use to connect to my AWS EBS-backed instance, so I can't log in to my custom AMI. So I created a new volume from the snapshot of the AMI, started up a public instance and attached the volume to it, and edited sshd_config to allow password login. I unmounted the volume, detached it, made a snapshot of it, then made a new AMI from the snapshot. The new AMI launches, but never passes the status checks and is not reachable. What am I doing wrong? Or alternatively, how can I fix my problem? Edit: Adding some of the console output:
        Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007
        BIOS-provided physical RAM map:
        Xen: 0000000000000000 - 000000006a400000 (usable)
        980MB HIGHMEM available.
        727MB LOWMEM available.
        NX (Execute Disable) protection: active
        IRQ lockup detection disabled
        RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
        NET: Registered protocol family 2
        Registering block device major 8
        XENBUS: Timeout connecting to devices!
        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,0)

    Read the article

  • EC2 kernel decision and issues with creating a new machine with my AMI

    - by roacha
    I could really use some advice. I started a new instance on EC2 using Amazon's AMI, and during the deployment process I selected a kernel ID of "Use Default". I then configured my server the way that I wanted and took a snapshot of it. I then created my own AMI to create new servers with. When I try to create a new server with this AMI, the server fails to start and I get the error: EXT3-fs: sda1: couldn't mount because of unsupported optional features (240). This appears to happen because I am selecting a kernel ID of "Use default" again when building my second server. I have read that in order for this to work I need to choose the same kernel ID that was used for my original server. I have deleted my original server and don't know what it was using. What is the best process to follow in order to not have these issues? Should I choose "Use Default" for my original server? How do you know which kernel it selected? Then should I just document this and always specify it during the deployment of my next servers using my custom AMI? Or should I choose a custom kernel ID during the initial build and always use that one moving ahead, hoping Amazon never retires it? Thanks for any advice!
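    For reference, a sketch of how the kernel (AKI) in use can be looked up with the EC2 API tools (the IDs here are hypothetical); registering the custom AMI with the same AKI as the AMI it was snapshotted from is the usual way to avoid the EXT3 feature mismatch:

        # kernel registered with an AMI
        ec2-describe-images ami-xxxxxxxx                          # the aki-... field is the kernel
        # kernel a (still existing) instance was launched with
        ec2-describe-instance-attribute i-xxxxxxxx --kernel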

    Read the article

  • Linux NFS create mask and force user equivalent

    - by Mike
    I have two Linux servers:
        fileserver: Debian 5.0.3 (2.6.26-2-686), Samba version 3.4.2
        apache: Ubuntu 10.04 LTS (2.6.32-23-generic), Apache 2.2.14
    I have a number of Samba shares on fileserver so that I can access files from Windows PCs. I am also exporting /data/www-data to the apache server, where I have it mounted as /var/www. The setup is okay, except for when I come to create files on the NFS mount. I end up with files that cannot be read by Apache, or which cannot be modified by other users on my system. With Samba, I can specify force user, force group, create mask and directory mask, and this ensures that all files are created with suitable permissions for my Apache web server. I can't find a way to do this with NFS. Is there a way to force permissions and ownership with NFS - am I missing something obvious? Although I've spent quite a bit of time with Linux, and am weaning myself off Windows, I still haven't quite got to grips with Linux permissions... If this is not the right way to do things, I am open to alternative suggestions.
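    NFS has no direct equivalent of force user or create mask, but the closest standard knob is squashing on the export; as a sketch, assuming www-data's UID/GID on the apache box is 33 (the hostname and IDs here are assumptions, check with id www-data):

        # /etc/exports on fileserver
        /data/www-data  apache-host(rw,sync,no_subtree_check,all_squash,anonuid=33,anongid=33)

    all_squash maps every client user to anonuid/anongid, so files written over the mount end up owned by www-data regardless of who created them; the mode bits still come from the creating process's umask, so a group-writable umask or a setgid directory is needed to get the create-mask part of the Samba behaviour.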

    Read the article

  • Software RAID 1 Configuration

    - by Corve
    I created a software RAID 1 quite a while ago and it always seemed to work for me. However, I am not completely sure that I have configured everything right, and I don't have the experience to check, so I would be very grateful for some advice or just verification that all seems right so far. I am using Linux Fedora 20 (32-bit, with plans to upgrade to 64-bit). The RAID 1 should consist of two 1TB SATA hard drives. This is the output of mdadm --detail /dev/md0:
        /dev/md0:
                Version : 1.2
          Creation Time : Sun Jan 29 11:25:18 2012
             Raid Level : raid1
             Array Size : 976761424 (931.51 GiB 1000.20 GB)
          Used Dev Size : 976761424 (931.51 GiB 1000.20 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent
            Update Time : Sat Jun 7 10:38:09 2014
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0
                   Name : argo:0 (local to host argo)
                   UUID : 1596d0a1:5806e590:c56d0b27:765e3220
                 Events : 996387
            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8        0        1      active sync   /dev/sda
    The RAID is mounted successfully:
        friedrich@argo:~ ? sudo mount -l | grep md0
        /dev/md0 on /mnt/raid type ext4 (rw,relatime,data=ordered)
    Basically my questions are: Why do I only have one active device? What does the state "removed" at the bottom mean? Also, I noticed some strange error messages on the console at system start and shutdown, repeating in the background when I switch with Ctrl + Alt + F2:
        ...
        ata2: irq_stat 0x00000040 connection status changed
        ata2: SError: { CommWake DevExch }
        ata2: COMRESET failed (errno=-32)
        ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
        ata2: irq_stat 0x00000040 connection status changed
        ata2: SError: { CommWake DevExch }
        ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
        ...
    Are these errors related to the RAID? Something seems wrong with the SATA devices. Altogether the system works (I can read and write to the mounted RAID), but I always get these strange errors on startup/shutdown (and probably always in the background). Thanks for your help.
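    A few read-only checks, as a sketch, that would show whether the second member is even visible before any re-add is attempted (device names are assumptions; smartctl needs the smartmontools package):

        cat /proc/mdstat                      # quick view of the array: a healthy raid1 shows [UU]
        lsblk -o NAME,SIZE,TYPE,MOUNTPOINT    # is a second 1TB disk present at all?
        smartctl -a /dev/sda                  # SMART health of the remaining member (repeat for the other disk if it appears)
        # only once the missing disk is visible and healthy:
        # mdadm --manage /dev/md0 --add /dev/sdX

    The ata2 COMRESET messages suggest the kernel keeps losing the link to whatever sits on the second SATA port, which would also explain why mdadm lists that slot as removed.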

    Read the article

  • All of the NTFS hard links disappear, where are hardlinks stored on disk and how to recover them?

    - by Osiris
    This is Windows 7 x64 SP1 on an NTFS file system. All hard links within the C:\Windows\System32 folder disappeared, and Windows can't boot, because even the OS loader, C:\Windows\System32\boot\Winload.exe, also disappeared. Nevertheless, the original files are still located in the corresponding C:\Windows\winsxs folders. After booting into the Recovery Environment and copying a Winload.exe (x64) from another folder, Windows gave an error pointing out that "ntoskrnl.exe is corrupted or missing...its file digital signature cannot be verified". When trying to boot in Safe Mode, the message above was shown after a screen prompting "Loaded \Windows\system32\config\system". Because at this early boot stage smss.exe is still not loaded, there are no dumps or logs. Based on my study, ntoskrnl.exe depends on the following files:
        C:\windows\system32\PSHED.DLL
        C:\Windows\System32\hal.dll
        C:\Windows\System32\kdcom.dll
        C:\Windows\System32\clfs.sys
        C:\Windows\System32\ci.dll
    All those files were copied from their corresponding folders and their MD5s verified against a well-operating Windows 7 x64 SP1. But the boot error is still the same: "ntoskrnl.exe is corrupted or missing..."
    Background: Before the reboot, there was a Windows update going on. Then something unknown happened, and almost all processes failed to run, including the Windows Task Manager, taskmgr.exe. After mounting the hard disk on another computer, it seems that all hard links within the C:\Windows\System32 folder are gone. I tried several data recovery programs, but they are not able to find those disappeared NTFS hard links. So the questions are: Where is the information about those hard links stored? And how can they be recovered? Do they depend on some Windows service, or are they stored in the registry?

    Read the article

  • Logfile deleted on Oracle database, how to re-create it?

    - by Daniel
    For my database assignment we were looking into "database corruption", and I was asked to delete the second redo log file, which I did with the command rm log02a.rdo in the $HOME/ORADATA/u03 directory. Now I started up my database using startup pfile=$PFILE nomount, then I mounted it using the command alter database mount;. Now when I try to open it with alter database open; it gives me the error:
        ORA-03113: end-of-file on communication channel
        Process ID: 22125
        Session ID: 25 Serial number: 1
    I am assuming this is because the second redo log file is missing. There is still log01a.rdo, but not the one I deleted. How can I go about recovering this now so that I can open my database again? I have looked into the database creation scripts, and they specify the log02a.rdo file to be size 10M and part of group 2. If I do select group#, member from v$logfile; I get:
        1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo
        2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo
        3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo
        4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo
    So it is part of group 2. If I try to add the log02a.rdo file again, I get "already part of the database". I tried dropping group 2 and then adding it again with this command:
        ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M;
    Nothing. It supposedly alters the database, but it still won't start up. Any ideas what I can do to re-create this and be able to open my database again?
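    Since the database is already mounted, one standard approach for a lost redo log group, sketched here on the assumption that group 2 was not the current group, is to clear it rather than drop and re-add it:

        ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
        ALTER DATABASE OPEN;

    If Oracle refuses because the group is current or still needed for crash recovery, the fallback is incomplete media recovery (RECOVER DATABASE UNTIL CANCEL followed by ALTER DATABASE OPEN RESETLOGS), which loses whatever was only in that log.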

    Read the article

  • Rackspace Ubuntu 12.04 server stuck in initramfs after kernel upgrade

    - by Znarkus
    I can't boot after I did an aptitude full-upgrade and let it update menu.lst (I did a diff first and it looked good). This is what I've done so far in the BusyBox shell:
        mkdir /tmp/xvda1
        mount /dev/xvda1 /tmp/xvda1
        chroot /dev/xvda1
        nano /boot/grub/menu.lst
    This file looks like this:
        title Ubuntu 12.04.1 LTS, kernel 3.2.0-31-virtual
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-31-virtual root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-31-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-31-virtual (recovery mode)
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-31-virtual root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-31-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-virtual
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-virtual root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-24-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-virtual (recovery mode)
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-virtual root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-24-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-generic
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-generic root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-24-generic

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-generic (recovery mode)
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-generic root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-24-generic

        title Chainload into GRUB 2
        root (hd0,0)
        kernel /boot/grub/core.img

        title Ubuntu 12.04.1 LTS, memtest86+
        root (hd0,0)
        kernel /boot/memtest86+.bin
    From what I remember, the upgrade added the UUID= string. Should I remove it? Or rather, how do I get my system back online again? Thanks.
    Update: It seems I can't even edit the file: [ Error writing /boot/grub/menu.lst: Read-only file system ]
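    Two things stand out, sketched here with the device names from the question: the mounted filesystem has to be made writable before menu.lst can be saved, and root=UUID=/dev/xvda1 mixes the two addressing styles (UUID= should be followed by an actual UUID, not a device path):

        mount -o remount,rw /dev/xvda1 /tmp/xvda1    # make the filesystem writable so the edit can be saved
        blkid /dev/xvda1                             # prints the real UUID if the UUID= form is wanted
        # each kernel line should then read either
        #   root=/dev/xvda1
        # or
        #   root=UUID=<uuid printed by blkid>

    Note that the chroot should point at the mount point (chroot /tmp/xvda1), not at the device node, for the edit to land on the mounted filesystem.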

    Read the article

  • P2V Wouldn't Boot, Rebuilt initrd, Need to Clean Up

    - by Mike Soule
    We have a CentOS 5.4 server (build 2.6.18-164.el5xen). We went to P2V this server so we can have redundancy; the physical box only has one PSU. The P2V only completed 99% of the way. We have a VMware ticket opened, but they marked the ticket as low priority. I was able to boot into a rescue disc of Red Hat 5.4 and rebuild the initrd with the help of this blog post. Now the only issue is that the original server had a modified initrd, which was also from a different OS build and made by an outside provider. We do not have a document outlining the modifications. My question is: is it at all possible to copy the initrd off of the physical server, replace it on the virtual one, and somehow have the virtual machine boot? Thanks for any input.
    Edit: I copied the initrd image from the physical server and it recreated the original issue. Here is a screen capture of the error: http://i.imgur.com/MqC73.jpg
    Edit 2:
        echo Scanning logical volumes
        lvm vgscan --ignorelockingfailure
        echo Activating logical volumes
        lvm vgchange -ay --ignorelockingfailure VolGroup00
        resume /dev/VolGroup00/LogVol01
        echo Creating root device.
        mkrootdev -t ext3 -o defaults,ro /dev/VolGroup00/LogVol00
        echo Mounting root filesystem.
        mount /sysroot
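    For reference, a sketch of how a fresh initrd for that kernel could be rebuilt from the rescue environment with the Xen paravirtual drivers forced in (the kernel version is from the question; the module list and file names are assumptions and may need adjusting to match what the provider added):

        # inside the chrooted root filesystem of the virtual machine
        mkinitrd -f -v --with=xenblk --with=xennet \
            /boot/initrd-2.6.18-164.el5xen.img 2.6.18-164.el5xen
        # the vendor-modified initrd (saved under a hypothetical name) can be unpacked for comparison:
        zcat /root/initrd-vendor.img | cpio -itv | less

    Listing the physical server's initrd contents this way is also the quickest route to discovering what the outside provider actually changed.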

    Read the article

  • Boot records messed on dual boot (win7 and ubuntu) machine with SSD and HDD

    - by Michael
    I have a Lenovo IdeaPad Y570 with two hard drives, an SSD and a normal HDD, both managed by RapidDrive, with Windows 7 pre-installed. First, I shrank my 500 GB HDD a little bit to make some space for a Linux installation. Then I installed Linux Mint 12 on it and also installed GRUB onto that drive (/dev/sdb); the installation program did not allow me to install GRUB on sda. Then I replaced Linux Mint with Ubuntu 12.04 but installed GRUB onto the SSD (which is /dev/sda and was the default option). After that I could not boot into my Windows; only Ubuntu worked. So I did some research and tried rewriting the Windows MBR onto sda1, reinstalling GRUB, and replacing GRUB 2 with GRUB legacy, and now I think my partition tables are totally messed up. Here is the fdisk -l output:
        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 64.0 GB, 64023257088 bytes
        255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      411647      204800    7  HPFS/NTFS/exFAT
        /dev/sda2          411648  1009430959   504509656    7  HPFS/NTFS/exFAT

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x5e5d1cc8

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        1979   884389887   442193954+  12  Compaq diagnostics
        /dev/sdb2       884391934   976771071    46189569    5  Extended
        /dev/sdb5       884391936   937705471    26656768   83  Linux
        /dev/sdb6       937707520   967006207    14649344   83  Linux
        /dev/sdb7       967008256   976771071     4881408   82  Linux swap / Solaris
    I also can't mount any Windows partitions to recover data. And when I open GParted, the whole sda disk appears unallocated and it states "can not have a partition outside the disk!"; also, the end-sector address of /dev/sda2 confuses me. If I boot from the SSD, it throws some MBR error and won't boot; if I boot from the HDD, I only get the GRUB shell. How do I restore the partition tables? I can only boot the machine from a live CD. Thanks for any help.

    Read the article

  • Why is windows not able to create a system partition?

    - by hughes
    I'm reinstalling Windows 7 64 bit, and I encountered an issue I've never seen before. I have a legit copy of Win 64 Professional, and I've installed it probably a half dozen times on this machine in the past without a problem. Googling the error only brings me to issues with people who are upgrading to win7. The drive itself seems to not have a problem. I can mount it on other systems and I can create an NTFS partition on it on other machines. I can install Ubuntu on it without any issues. Additionally, if I try using my alternate backup hard drive, the installer gives the same error. I have run diskpart from the setup page and clean seems to report that all is well. However, I cannot get past the screen below, which says Setup was unable to create a new system partition or locate an existing system partition. This happens regardless of whether or not the disk space is already allocated. What is causing this? How do I solve or get past this?

    Read the article
