Search Results

Search found 2282 results on 92 pages for 'filesystem'.

Page 31/92

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction
    This document shows how to use the Oracle Solaris 11 Automated Install (AI) server to automate the installation of Solaris 11 Zones. I will demonstrate how to set up the Automated Install server to provide a hands-off installation process for a Global Zone and two Non-Global Zones located on the same system.
    Architecture layout:
    Figure 1. Architecture layout
    Prerequisite
    Set up the Automated Install (AI) server using the instructions in "How to Set Up Automated Installation Services for Oracle Solaris 11". The first step in this setup is creating two Solaris 11 Zones configuration files.
    Step 1: Create the Solaris 11 Zones configuration files
    The Solaris Zones configuration files should be in the format produced by the zonecfg export command.
      # zonecfg -z zone1 export > /var/tmp/zone1
      # cat /var/tmp/zone1
      create -b
      set brand=solaris
      set zonepath=/rpool/zones/zone1
      set autoboot=true
      set ip-type=exclusive
      add anet
      set linkname=net0
      set lower-link=auto
      set configure-allowed-address=true
      set link-protection=mac-nospoof
      set mac-address=random
      end
    Create a backup copy of this file under a different name, for example, zone2.
      # cp /var/tmp/zone1 /var/tmp/zone2
    Modify the second configuration file with the zone2 configuration information. You should change the zonepath, for example:
      set zonepath=/rpool/zones/zone2
    Step 2: Copy and share the Zones configuration files
    Create the NFS directory for the Zones configuration files:
      # mkdir /export/zone_config
    Share the directory for the Zones configuration files:
      # share -o ro /export/zone_config
    Copy the Zones configuration files into the NFS shared directory:
      # cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config
    Verify that the NFS share has been created using the following command:
      # share
      export_zone_config      /export/zone_config     nfs     sec=sys,ro
    Step 3: Add the Global Zone as a client to the install service
    Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service.
      # installadm create-client -e "0:14:4f:2:a:19" -n s11x86service
    You can verify the client creation using the following command:
      # installadm list -c
      Service Name  Client Address     Arch   Image Path
      ------------  --------------     ----   ----------
      s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service
    We can see the client's install service name (s11x86service), MAC address (00:14:4F:02:0A:19), and architecture (i386).
    Step 4: Global Zone manifest setup
    First, get a list of the installation services and the manifests associated with them:
      # installadm list -m
      Service Name   Manifest        Status
      ------------   --------        ------
      default-i386   orig_default   Default
      s11x86service  orig_default   Default
    Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows:
      # installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml
    Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy.
      # cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml
    Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone. The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones, zone1 and zone2. You should replace server_ip with the IP address of the NFS server.
      <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
      <auto_install>
        <ai_instance>
          <target>
            <logical>
              <zpool name="rpool" is_root="true">
                <filesystem name="export" mountpoint="/export"/>
                <filesystem name="export/home"/>
                <be name="solaris"/>
              </zpool>
            </logical>
          </target>
          <software type="IPS">
            <source>
              <publisher name="solaris">
                <origin name="http://pkg.oracle.com/solaris/release"/>
              </publisher>
            </source>
            <software_data action="install">
              <name>pkg:/entire@latest</name>
              <name>pkg:/group/system/solaris-large-server</name>
            </software_data>
          </software>
          <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>
          <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>
        </ai_instance>
      </auto_install>
    The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service:
      # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest
    You can verify the manifest creation using the following command:
      # installadm list -n s11x86service -m
      Service/Manifest Name  Status   Criteria
      ---------------------  ------   --------
      s11x86service
         orig_default        Default  None
         gzmanifest          Inactive None
    We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service.
    Step 5: Non-Global Zone manifest setup
    The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used. The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we use the default AI manifest for zones, shown below.
      # cat /usr/share/auto_install/manifest/zone_default.xml
      <?xml version="1.0" encoding="UTF-8"?>
      <!--
        Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
      -->
      <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
      <auto_install>
        <ai_instance name="zone_default">
          <target>
            <logical>
              <zpool name="rpool">
                <!--
                  Subsequent <filesystem> entries instruct an installer
                  to create the following ZFS datasets:
                      <root_pool>/export         (mounted on /export)
                      <root_pool>/export/home    (mounted on /export/home)
                  Those datasets are part of the standard environment
                  and should always be created.
                  In rare cases, if there is a need to deploy a zone
                  without these datasets, either comment out or remove
                  the <filesystem> entries. In such a scenario, it must also
                  be ensured that, in the case of non-interactive post-install
                  configuration, creation of the initial user account is
                  disabled in the related system configuration profile.
                  Otherwise the installed zone would fail to boot.
                -->
                <filesystem name="export" mountpoint="/export"/>
                <filesystem name="export/home"/>
                <be name="solaris">
                  <options>
                    <option name="compression" value="on"/>
                  </options>
                </be>
              </zpool>
            </logical>
          </target>
          <software type="IPS">
            <destination>
              <image>
                <!-- Specify locales to install -->
                <facet set="false">facet.locale.*</facet>
                <facet set="true">facet.locale.de</facet>
                <facet set="true">facet.locale.de_DE</facet>
                <facet set="true">facet.locale.en</facet>
                <facet set="true">facet.locale.en_US</facet>
                <facet set="true">facet.locale.es</facet>
                <facet set="true">facet.locale.es_ES</facet>
                <facet set="true">facet.locale.fr</facet>
                <facet set="true">facet.locale.fr_FR</facet>
                <facet set="true">facet.locale.it</facet>
                <facet set="true">facet.locale.it_IT</facet>
                <facet set="true">facet.locale.ja</facet>
                <facet set="true">facet.locale.ja_*</facet>
                <facet set="true">facet.locale.ko</facet>
                <facet set="true">facet.locale.ko_*</facet>
                <facet set="true">facet.locale.pt</facet>
                <facet set="true">facet.locale.pt_BR</facet>
                <facet set="true">facet.locale.zh</facet>
                <facet set="true">facet.locale.zh_CN</facet>
                <facet set="true">facet.locale.zh_TW</facet>
              </image>
            </destination>
            <software_data action="install">
              <name>pkg:/group/system/solaris-small-server</name>
            </software_data>
          </software>
        </ai_instance>
      </auto_install>
    (Optional) We can customize the default AI manifest for Zones. Create a backup copy of this file under a different name, for example, zone_default2.xml, and edit the copy:
      # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml
    Edit the copy (/var/tmp/zone_default2.xml).
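    For instance, one minimal customization of the copy (an illustrative assumption, not a step from the original walkthrough) would be swapping the zone package set from the small server group to the large one by editing the software_data element:
      <software_data action="install">
        <name>pkg:/group/system/solaris-large-server</name>
      </software_data>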
    The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest:
      # installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2"
    Note: Do not use the following elements or attributes in a non-global zone AI manifest:
        The auto_reboot attribute of the ai_instance element
        The http_proxy attribute of the ai_instance element
        The disk child element of the target element
        The noswap attribute of the logical element
        The nodump attribute of the logical element
        The configuration element
    Step 6: Global Zone profile setup
    We are going to create a global zone configuration profile which includes the host information, for example: host name, IP address, name services, etc.
      # sysconfig create-profile -o /var/tmp/gz_profile.xml
    You need to provide the host information, for example:
        Default router
        Root password
        DNS information
    The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration.
    Figure 2. Profile creation menu
    You can validate the profile using the following command:
      # installadm validate -n s11x86service -P /var/tmp/gz_profile.xml
      Validating static profile gz_profile.xml...  Passed
    Next, associate the profile with the install service. In our case, use the following syntax:
      # installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile
    You can verify the profile creation using the following command:
      # installadm list -n s11x86service -p
      Service/Profile Name  Criteria
      --------------------  --------
      s11x86service
         gz_profile         None
    We can see that gz_profile has been created and associated with the s11x86service install service.
    Step 7: Set up the Solaris Zones configuration profiles
    This step is similar to the Global Zone profile creation in step 6.
      # sysconfig create-profile -o /var/tmp/zone1_profile.xml
      # sysconfig create-profile -o /var/tmp/zone2_profile.xml
    You can validate the profiles using the following commands:
      # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml
      Validating static profile zone1_profile.xml...  Passed
      # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml
      Validating static profile zone2_profile.xml...  Passed
    Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile:
      # installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1
    The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile:
      # installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2
    You can verify the profile creation using the following command:
      # installadm list -n s11x86service -p
      Service/Profile Name  Criteria
      --------------------  --------
      s11x86service
         zone1_profile      zonename = zone1
         zone2_profile      zonename = zone2
         gz_profile         None
    We can see that we have three profiles in the s11x86service install service:
        Global Zone   gz_profile
        zone1         zone1_profile
        zone2         zone2_profile
    Step 8: Global Zone setup
    Associate the global zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (global zone), where:
        gzmanifest is the name of the manifest.
        gz_profile is the name of the configuration profile.
        mac="0:14:4f:2:a:19" is the client (global zone) MAC address.
        s11x86service is the install service name.
      # installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service
    You can verify the manifest and profile association using the following command:
      # installadm list -n s11x86service -p -m
      Service/Manifest Name  Status   Criteria
      ---------------------  ------   --------
      s11x86service
         gzmanifest                   mac  = 00:14:4F:02:0A:19
         orig_default        Default  None
      Service/Profile Name  Criteria
      --------------------  --------
      s11x86service
         gz_profile         mac      = 00:14:4F:02:0A:19
         zone2_profile      zonename = zone2
         zone1_profile      zonename = zone1
    Step 9: Provision the host with the Non-Global Zones
    The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system):
    Figure 3. Network Boot
    Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. The reason we need to do this is that we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally.
    Figure 4. GRUB Menu
    What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed.
    You can list the status of all the Solaris Zones using the following command:
      # zoneadm list -civ
    Once the Zones are in the running state, you can log in to a Zone using the following command:
      # zlogin zone1
    Troubleshooting Automated Installations
    If an installation to a client system failed, you can find the client log at /system/volatile/install_log.
    NOTE: Zones are not installed if any of the following errors occurs:
        A zone config file is not syntactically correct.
        A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
        Required datasets are not configured in the global zone.
    For more troubleshooting information see "Installing Oracle Solaris 11 Systems".
    Conclusion
    This paper demonstrated the benefits of using the Automated Install server to simplify Non-Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.

    Read the article

  • Error number 13 - Remote access svn with dav_svn failing

    - by C. Ross
    I'm getting the following error on my svn repository:
      <D:error>
      <C:error/>
      <m:human-readable errcode="13">
      Could not open the requested SVN filesystem
      </m:human-readable>
      </D:error>
    I've followed the instructions from How-To Geek and the Ubuntu Community page, but without success. I've even given the repository 777 permissions.
      <Location /svn/myProject>
        # Uncomment this to enable the repository
        DAV svn
        # Set this to the path to your repository
        SVNPath /svn/myProject
        # Comments
        # Comments
        # Comments
        AuthType Basic
        AuthName "My Subversion Repository"
        AuthUserFile /etc/apache2/dav_svn.passwd
        # More Comments
      </Location>
    The permissions follow:
      drwxrwsrwx 6 www-data webdev 4096 2010-02-11 22:02 /svn/myProject
    And svnadmin validates the directory:
      $ svnadmin verify /svn/myProject/
      * Verified revision 0.
    and I'm accessing the repository at http://ipAddress/svn/myProject
    Edit: The apache error log says:
      [Fri Feb 12 13:55:59 2010] [error] [client <ip>] (20014)Internal error: Can't open file '/svn/myProject/format': Permission denied
      [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not fetch resource information.  [500, #0]
      [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not open the requested SVN filesystem  [500, #13]
      [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not open the requested SVN filesystem  [500, #13]
    Even though I confirmed that this file is ugo readable and writable. What am I doing wrong?
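    A first diagnostic step here (assuming Apache runs as www-data, as on stock Ubuntu) is to test the read as that user and walk the whole path, since any directory above /svn/myProject that lacks execute (search) permission for www-data will also produce "Permission denied":
      $ sudo -u www-data head -c 16 /svn/myProject/format
      $ namei -l /svn/myProject/format    # shows permissions of every path component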

    Read the article

  • Can the Subversion client (svn) derefence symbolic links as if they were files?

    - by Ryan B. Lynch
    I have a directory on a Linux system that mostly contains symlinks to files on a different filesystem. I'd like to add the directory to a Subversion repository, dereferencing the symlinks in the process (treating them as the files they point to, rather than links). Generally, I'd like to be able to handle any working-copy operations with this behavior, but the 'svn add' command is where it starts, I think. The SVN client utility doesn't appear to have any options related to symlink dereferencing in the working copy. I didn't find any references to this in the manual (http://svnbook.red-bean.com/en/1.5/index.html), either. I found a poster on the SVN users mailing list who asked the same question but never received an answer, here: http://markmail.org/message/ngchfnzlmm43yj7h (That poster ended up using hard links instead of symlinks. That technique is not an option, in my case, because the real underlying files reside on a separate filesystem.) I'm using Subversion v1.6.1 on Fedora 11. For what it's worth, I know that there are alternative tools/techniques that could help approximate this behavior, but which I have to discard for various reasons. I've already considered (and dust-binned) these possibilities:
    - a "union" mount, merging all of the directories containing the real files, with the SVN working-copy directory as the "top" layer in the union;
    - copying/moving the real files to the same filesystem as the SVN working-copy, and using hardlinks instead of symlinks;
    - non-SVN version control systems.
    These were all neat ideas, and I'm sure they are good solutions to other problems, but they won't work given the constraints of this environment and situation.
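    Since svn itself versions symlinks as links, one workaround sketch (paths are hypothetical) is to materialize a dereferenced copy of the tree before adding it; rsync's --copy-links option transforms each symlink into the file it points to:
      $ rsync -a --copy-links /path/with/symlinks/ /path/to/working-copy/
      $ svn add /path/to/working-copy/*
    The obvious cost is that the working copy then holds real copies of the data, and the rsync has to be re-run to pick up changes to the underlying files.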

    Read the article

  • Ubuntu Server - Power failure leads to boot failure

    - by Ali Nadalizadeh
    I have installed Ubuntu Server 10.04.1 LTS on an ext4 partition. Whenever my system loses power suddenly, it doesn't boot into the normal procedure to fix the problems automatically, but drops into the BusyBox shell (where it says "Kernel panic: No init found"). So I guess the kernel is refusing to mount the filesystem when it is not clean, since when I boot up using a Live CD and fsck it, it boots up correctly. How can I force the kernel to mount the filesystem even if it is not clean, so that the automated fsck on system startup fixes the problems? (Or is it a GRUB problem?) K-V: 2.6.32-26-generic-pae #48-Ubuntu SMP
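    One thing worth checking (an assumption to rule out, not a confirmed cause) is the sixth field of the root entry in /etc/fstab, fs_passno, which controls whether fsck runs on the filesystem at boot at all: 0 disables the boot-time check, while 1 is the usual value for the root filesystem:
      # /etc/fstab entry for / (UUID shortened for illustration):
      UUID=xxxx-xxxx  /  ext4  errors=remount-ro  0  1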

    Read the article

  • The executable is absent yet I can execute the command on the command line

    - by sharptooth
    There's a utility for Windows developers called regtlib. I have three computers - one with WinXP and another two with Win2k3. If I run the built-in Windows search for files matching the wildcard regtlib* over the whole filesystem, the search finds nothing on all three computers. If I try to execute regtlib on the WinXP command line, it says it can't find such a file or built-in command. The same happens on one of the two Win2k3 computers. But when I do that on the other Win2k3 computer, I see the typical regtlib output. What happens? What is the magic that invokes regtlib without the file being present on the filesystem?
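    For a command to run from cmd.exe it has to be found somewhere on PATH, so one way to see where the shell is actually picking it up (a diagnostic sketch typed at the prompt, using cmd's %~$PATH: expansion) is:
      C:\> for %d in (regtlib.exe) do @echo %~$PATH:d
    If that prints a path only on the working machine, something installed the file into a PATH directory there; note also that the built-in search can skip hidden/system locations depending on its options, which may explain the empty results.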

    Read the article

  • kernel panic after LVM setup

    - by Manuel Sopena Ballesteros
    I broke my webserver... My setup is:
    - VMware ESXi environment
    - cPanel installed
    - CentOS release 6.5 (Final)
    - 4 CPUs
    - 2G RAM
    - 2x VM disks, 100G each
    - LVM system
    This was my previous storage setup (the server was working fine at this time):
      # df -h
      Filesystem                     Size  Used Avail Use% Mounted on
      /dev/mapper/vg_test01-lv_root   95G  1.4G   88G   2% /
      tmpfs                          939M     0  939M   0% /dev/shm
      /dev/sdb1                       99G  188M   94G   1% /tmp
      /dev/sda1                      485M   54M  407M  12% /boot
    My web developer asked me to merge the /tmp and / disks, so this is what I did:
    1. Delete the /dev/sdb1 partition using fdisk
    2. Create a new partition as LVM on /dev/sdb1 using fdisk
    3. Create a new physical volume -- pvcreate /dev/sdb1
    4. Extend the volume group -- vgextend vg_test01 /dev/sdb1
    5. Extend the logical volume -- lvextend -l +100%FREE /dev/vg_test01/lv_root
    6. Resize the filesystem -- resize2fs /dev/vg_test01/lv_root
    This is the new configuration:
      # df -h
      Filesystem                     Size  Used Avail Use% Mounted on
      /dev/mapper/vg_test01-lv_root  213G  105G   97G  52% /
      tmpfs                          939M     0  939M   0% /dev/shm
      /dev/sda1                      485M   54M  407M  12% /boot
      /usr/tmpDSK                    4.0G  145M  3.6G   4% /tmp
    Since I switched to the new settings, my web server has been throwing kernel panics quite often (around every 2 days). The message says:
      INFO: task <taskName>:<pid> blocked for more than 120 seconds.
    The affected processes that I can see from the console are: mysqld, queueprocd, httpd, suphp, vmtoolsd, loop0, auditd. The only way I can fix this is resetting (cold rebooting) the VM. I don't think it is a hardware issue, as sar is not showing any bottleneck:
      Linux 2.6.32-431.3.1.el6.x86_64 (test01)  08/22/2014  _x86_64_  (4 CPU)
      12:00:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
      12:10:01 AM  all  26.86   0.01     0.98     0.57    0.00  71.57
      12:20:01 AM  all   1.78   0.02     1.03     0.08    0.00  97.09
      12:30:01 AM  all  26.34   0.02     0.85     0.05    0.00  72.74
      12:40:01 AM  all  27.12   0.01     1.11     1.22    0.00  70.54
      12:50:01 AM  all   1.59   0.02     0.94     0.13    0.00  97.32
      01:00:01 AM  all  26.10   0.01     0.77     0.04    0.00  73.07
      01:10:01 AM  all  27.51   0.01     1.16     0.14    0.00  71.18
      01:20:01 AM  all   1.80   0.07     1.06     0.08    0.00  96.99
      01:30:01 AM  all  26.19   0.01     0.78     0.05    0.00  72.96
      01:40:01 AM  all  26.62   0.02     0.87     0.05    0.00  72.45
      01:50:02 AM  all   1.35   0.01     0.87     0.02    0.00  97.75
      02:00:01 AM  all  26.11   0.02     0.69     0.02    0.00  73.17
      02:10:01 AM  all  26.73   0.02     0.89     0.14    0.00  72.21
      02:20:01 AM  all   1.45   0.01     0.92     0.04    0.00  97.58
      02:30:01 AM  all  26.59   0.01     1.06     0.03    0.00  72.31
      02:40:01 AM  all  26.27   0.01     0.72     0.05    0.00  72.95
      02:50:01 AM  all   0.86   0.01     0.50     0.09    0.00  98.53
      03:00:01 AM  all  25.61   0.02     0.39     0.03    0.00  73.96
      03:10:01 AM  all  26.30   0.08     0.66     0.14    0.00  72.82
      03:20:01 AM  all   0.81   0.01     0.51     0.04    0.00  98.63
      03:30:02 AM  all  26.15   0.02     0.53     0.07    0.00  73.24
      03:40:01 AM  all  26.06   0.01     0.47     0.04    0.00  73.42
      03:50:01 AM  all   0.96   0.02     0.51     0.03    0.00  98.48
      Average:     all  17.69   0.02     0.79     0.14    0.00  81.36
      06:58:14 AM  LINUX RESTART
      07:00:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
      07:10:01 AM  all   1.04   0.02     0.57     0.95    0.00  97.42
      07:20:02 AM  all   0.66   0.01     0.39     0.06    0.00  98.87
      07:30:01 AM  all  25.71   0.01     0.45     0.16    0.00  73.67
      07:40:01 AM  all  25.88   0.01     0.35     0.08    0.00  73.68
      07:50:01 AM  all   1.13   0.02     0.55     0.11    0.00  98.19
    As you can see, the server became unresponsive at 03:50 AM and I had to reset the VM at 06:58 AM to bring the website up again. I would appreciate any help/assistance to fix this issue. Thank you very much.
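    One way to gather more data the next time it hangs (a diagnostic sketch, not a fix): "blocked for more than 120 seconds" means those tasks are stuck in uninterruptible sleep, usually waiting on I/O, and the kernel can dump their stack traces on demand via magic SysRq:
      # echo w > /proc/sysrq-trigger    # dump all blocked (D-state) tasks
      # dmesg | tail -100               # read the stack traces
    The traces show which device or layer the tasks are actually waiting on, which helps separate an LVM/disk problem from, say, a VMware storage latency issue.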

    Read the article

  • How can I resize a partition managed by LVM?

    - by Mike C
    I have a fresh CentOS install on my machine and I would like to make space on the drive available in order to install Arch Linux. Unfortunately, LVM is new to me and doesn't appear to work well with GParted (on my Ubuntu 9.0 LiveCD, anyway); it always seems to treat the LVM as some unknown filesystem. I tried to use the 'lvm' utility on the LiveCD to resize the partition down, but I ended up somehow corrupting my filesystem (hence the fresh CentOS install). I haven't been able to find any documentation on LVM that makes much sense to me as a *nix n00b. Is there anywhere I can find some helpful documentation on LVM, as well as a clear step-by-step guide on how to successfully resize a partition? Thanks, Mike
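    For reference, the usual shrink sequence on a default CentOS layout looks roughly like this - a sketch only, run from a LiveCD since the root filesystem cannot be unmounted while in use; the VG/LV names and sizes are assumptions, and a backup is strongly advised because shrinking in the wrong order destroys data:
      umount /dev/VolGroup00/LogVol00
      e2fsck -f /dev/VolGroup00/LogVol00
      resize2fs /dev/VolGroup00/LogVol00 38G    # first shrink the FS below the target LV size
      lvreduce -L 40G /dev/VolGroup00/LogVol00  # then shrink the LV
      resize2fs /dev/VolGroup00/LogVol00        # finally grow the FS back to fill the LV exactly
    The key design point is that the filesystem must always be smaller than or equal to the logical volume at every step.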

    Read the article

  • Tool to copy IMAP folders from one server to another

    - by Barry Brown
    I need a Unix-based tool, such as a shell script or command-line program, to copy IMAP folders from one server to another. Ideally, the tool should copy all the folders for a single account (Inbox, Sent, Trash, and user-created folders) at once, rather than one folder at a time. It should preserve message dates. As an option, I'd like to be able to copy just a single IMAP folder. Alternatively, is there a tool to copy an mbox file to an IMAP server? I have direct access to the mbox files in the filesystem, but not to the filesystem of the remote IMAP server. Edit: Is there a way for a user to migrate their own questions to Server Fault?
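    imapsync is one widely used command-line tool for exactly this account-at-a-time copy; it walks all folders by default and preserves message dates (hosts and credentials below are placeholders):
      imapsync --host1 imap.old.example.com --user1 alice --password1 'secret1' \
               --host2 imap.new.example.com --user2 alice --password2 'secret2'
    A single folder can be copied with its --folder option. For the mbox case, one common approach (an assumption, not a specific recommendation) is a small script that opens the mbox locally and uploads each message to the remote server with the IMAP APPEND command.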

    Read the article

  • osx 10.6.3 how does apache config work?

    - by w-
    Hi, I just got a MacBook Pro 15", so I'm unfamiliar with how the filesystem is laid out. I noticed when looking around my filesystem that I've got a few paths specifying httpd.conf:
      /etc/apache2/httpd.conf
      /opt/local/apache2/conf/httpd.conf
      /private/etc/apache2/httpd.conf
    The confs differ in lots of ways -- e.g. user, group, server_root, modules that are loaded, etc. The apache2 folders themselves also differ greatly. It seems that the one being used is either /etc/apache2/httpd.conf or /private/etc/apache2/httpd.conf. I'm wondering if I might have messed up my system after installing some packages (php5, django, etc.) via MacPorts and maybe ended up with 2 apache2 instances. My questions are hence:
    - which httpd.conf is the one being used?
    - what are the other files for?
    Thanks
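    Two details narrow this down: on OS X, /etc is a symlink to /private/etc, so those two httpd.conf paths are the same file, and /opt/local is MacPorts territory, so that copy belongs to a second Apache installed as a dependency. The running binary can also report which config it was compiled to read:
      $ apachectl -V | grep -e HTTPD_ROOT -e SERVER_CONFIG_FILE
      $ which apachectl    # shows whether the system or the MacPorts binary comes first in PATH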

    Read the article

  • Why is (free_space + used_space) != total_size in df? [migrated]

    - by Timothy Jones
    I have a ~2TB ext4 USB external disk which is about half full:
      $ df
      Filesystem     1K-blocks      Used  Available Use% Mounted on
      /dev/sdc      1922860848 927384456  897800668  51% /media/big
    I'm wondering why the total size (1922860848) isn't the same as Used + Available (1825185124)? From this answer I see that 5% of the disk might be reserved for root, but that would still only take the total used to 1921328166, which is still off. Is it related to some other filesystem overhead? In case it's relevant, lsof -n | grep deleted shows no deleted files on this disk, and there are no other filesystems mounted inside this one.
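    The reserved-block count doesn't have to be exactly 5% - mkfs may have been run with a different ratio - so it is worth reading the actual value from the superblock (tune2fs ships with e2fsprogs):
      $ sudo tune2fs -l /dev/sdc | grep -i 'reserved block count'
    The count is in filesystem blocks (typically 4K on a disk this size, so multiply by 4 to compare against df's 1K units). Adding it to Used + Available should land much closer to the total; any small remaining gap is metadata such as inode tables and the journal, which df counts in the size but in neither column.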

    Read the article

  • No space left on device with encrypted disk that takes all space

    - by Yosef
    I use Ubuntu 11.04. There's no space left on the device. I have encrypted the disk that takes up the space (maybe it would be good to disable the encryption, but I don't know how). In the shell, I get this message:
      No space left on device
    I run df -i:
      Filesystem             Inodes  IUsed   IFree IUse% Mounted on
      /dev/sda1             3055616 602499 2453117   20% /
      none                   210161    890  209271    1% /dev
      none                   214789      8  214781    1% /dev/shm
      none                   214789     53  214736    1% /var/run
      none                   214789      3  214786    1% /var/lock
      /home/myuser/.Private 3055616 602499 2453117   20% /home/myuser
    Edit: When I run plain df:
      Filesystem            1K-blocks     Used Available Use% Mounted on
      /dev/sda1              48060296 45618928         0 100% /
      none                    1538340      684   1537656   1% /dev
      none                    1547596      808   1546788   1% /dev/shm
      none                    1547596      104   1547492   1% /var/run
      none                    1547596        0   1547596   0% /var/lock
      /home/myuser/.Private  48060296 45618928         0 100% /home/myuser
    Edit: I'm thinking about a few solutions, but I don't know which is better or how exactly to do them:
    - enlarge the partition size (I can't install GParted - no more disk space)
    - remove the encryption of the partition - I don't really need it
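    The df output shows the whole root filesystem is full in blocks (inodes are fine at 20%), so the first step would be finding what is consuming it; a du sweep that stays on one filesystem usually pinpoints the culprit quickly:
      sudo du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20
    Note that /home/myuser/.Private is eCryptfs-encrypted home storage stacked on the same /dev/sda1, not a separate disk - freeing space anywhere on / frees it for both mounts.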

    Read the article

  • For kvm host images. GFS2 or OCFS2 with drbd8?

    - by yvess
    I want a shared filesystem on top of drbd8 on two nodes. The servers run Ubuntu 9.10. I googled a lot, but couldn't find a clear trend on what the web community prefers. It seems that OCFS2 is more used at the moment. Which filesystem is more reliable and faster: GFS2 or OCFS2? Is the Linux community moving more towards GFS2 or OCFS2? Which of the two is better supported by Ubuntu 9.10? Are there better (or more common) alternatives?
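    Whichever cluster filesystem is chosen, the DRBD resource underneath has to allow both nodes to be primary at once; a minimal sketch of the relevant drbd.conf fragment for drbd8 (resource name and the rest of the configuration are assumed):
      resource r0 {
        net {
          allow-two-primaries;    # required for any dual-mount cluster FS
        }
        startup {
          become-primary-on both;
        }
      }
    Without dual-primary mode, GFS2 and OCFS2 alike can only be mounted on one node at a time.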

    Read the article

  • Missing over 100GB of Space on sda1 RHEL

    - by WifiGhost
    I have a server set up with a RAID 5 using (3) 500GB drives, 1 as a spare so unused in the RAID. So in my mind I start out with 990GB with the RAID 5 in place. When looking at df or the built-in disk space utility I only see a total of about 882GB. How can I find where the 100+GB went? How can I get it back? I've checked the RAID 5 BIOS and I see all the space. I've tried looking manually and through terminal commands with no luck.
      Filesystem                  1K-blocks      Used  Available Use% Mounted on
      /dev/mapper/vg_web-lv_root  838084192  48368700  747153060   7% /
      tmpfs                        12104644       592   12104052   1% /dev/shm
      /dev/sda1                      495844    121546     348698  26% /boot
      /dev/mapper/vg_web-lv_home   82569904    259136   78116468   1% /home
      Filesystem                  Size  Used Avail Use% Mounted on
      /dev/mapper/vg_web-lv_root  800G   47G  713G   7% /
      tmpfs                        12G  592K   12G   1% /dev/shm
      /dev/sda1                   485M  119M  341M  26% /boot
      /dev/mapper/vg_web-lv_home   79G  254M   75G   1% /home
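    Two usual suspects worth ruling out (assumptions, not a diagnosis): part of the gap is decimal-vs-binary units - 990 "marketing" GB is only about 922 GiB - and the rest may be extents sitting unallocated in the volume group, which df never shows. The LVM summary commands make that visible:
      pvs
      vgs                               # the VFree column shows unallocated space in vg_web
      lvs
      vgdisplay vg_web | grep -i free
    Also check swap with swapon -s; a swap LV consumes VG space without ever appearing in df.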

    Read the article

  • Out of disk space on 4GB partiton yet it's only using 2GB

    - by Camsoft
    I'm running Ubuntu and have had a problem where the root partition has run out of disk space. When I perform df -h I get the following:
      Filesystem  Size  Used Avail Use% Mounted on
      /dev/sda6   4.6G  4.5G     0 100% /
    Yet there are only 2GB of files actually using up this partition. I then ran df -i and I get the following:
      Filesystem  Inodes  IUsed  IFree IUse% Mounted on
      /dev/sda6   305824 118885 186939   39% /
    I have no idea what the -i flag does, but it clearly shows that only 39% is used. Can anyone explain where my disk space has gone?
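    (-i reports inode usage rather than block usage, so inodes are not the problem here.) A classic cause of "df says full, du says half" is files that were deleted while a process still holds them open - the blocks are only released when the last file descriptor closes. lsof can list such files directly:
      sudo lsof +L1    # open files with a link count of 0, i.e. deleted but still held open
    Restarting the offending process (often a daemon writing to a rotated log) releases the space.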

    Read the article

  • Why use multiple partitions on a rhel server?

    - by Jakobud
    I'm about to reformat and reinstall CentOS onto an old server. The server runs on a modest 30-node small business network and has a variety of responsibilities including MySQL, a Samba share, DHCPd, and SVN/Trac. The old sysadmin had this server set up with almost a dozen different partitions for various things. I'm trying to understand what the advantages of multiple partitions are, as opposed to just one filesystem at /. Speed? Flexibility? Security? It seems like if you misjudge the necessary size for any given partition and it ends up filling up too fast, it requires a sysadmin to go in and expand the partition, etc. It seems like it would be easier if everything was just one flat / filesystem. But I'm sure there are some advantages I'm not aware of. The server is currently running a handful of HDDs raided to ~2TB (RAID 0).

    Read the article

  • How to clean up an unprocessed orphan inode list?

    - by bmk
    I tried to remount a formerly read-only mounted filesystem read-write:
      mount -o remount,rw /mountpoint
    Unfortunately it did not work:
      mount: /mountpoint not mounted already, or bad option
    dmesg reports:
      [2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list. Please umount/remount instead
    A umount does not work either:
      umount /mountpoint
      umount: /mountpoint: device is busy.
              (In some cases useful info about processes that use
               the device is found by lsof(8) or fuser(1))
    Unfortunately, neither lsof nor fuser shows any process accessing anything located under the mount point. So - how can I clean up this unprocessed orphan inode list so that I can mount the filesystem again without rebooting the computer?
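    When fuser and lsof come up empty against the mount point, querying by the underlying device sometimes catches users they miss (dm-0 per the dmesg line above; these are diagnostics and a workaround, not a guaranteed fix):
      lsof +f -- /dev/dm-0
      fuser -vm /dev/dm-0
      umount -l /mountpoint    # lazy unmount: detach now, clean up once the last user exits
    After the device is truly released, e2fsck can process the orphan list before the filesystem is mounted read-write again.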

    Read the article

  • Retrieving a specific value from “df -h” using shell

    - by diegodias
    When I use df -h, I get the following output:
      Filesystem                       Size  Used Avail Use% Mounted on
      /dev/mapper/VolGroup00-LogVol00   59G  2.2G   54G   4% /
      /dev/sda1                        122M   38M   78M  33% /boot
      tmpfs                            1.1G     0  1.1G   0% /dev/shm
      10.10.0.105:/somepath             11T  8.4T  2.1T  81% /storage4
      10.11.0.101:/somepath             15T  8.9T  5.9T  61% /storage1
      /dev/mapper/patha                5.0T  255G  4.8T   5% /storage5_vol0
      /dev/mapper/pathb                5.0T  195G  4.9T   4% /storage5_vol1
      /dev/mapper/pathc                5.0T  608G  4.5T  12% /storage5_vol2
    I want to write a script that gets the value of the Avail column for a specific storage. I used to use:
      df -k /storage_name | tail -1 | awk '{print $3}'
    But a long Filesystem name makes df wrap the line, so the last line may or may not start with the filesystem field, which changes the variable of my script from $3 to $4. How can I get Avail with a single command line even when the previous columns shift?
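    POSIX output mode addresses exactly this: df -P guarantees one line per filesystem (no wrapping on long device names), so the columns stop shifting:
      df -Pk /storage_name | awk 'NR==2 {print $4}'
    With -P the fields are always filesystem, total blocks, used, available, capacity, and mount point, so Avail is reliably $4 on the second line.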

    Read the article

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based off of the largest UID in the system. The larger the maximum UID, the larger this file is. Thankfully it's a sparse file so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk). On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huuuuuge, though--once an AD user logs in lastlog shows up as 280GB. Its real size is still small, thankfully. This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256GB, so the sessreg program errors out trying to write to lastlog. Where is this 256GB limit coming from, and how can I increase it? As a simple test for creating large sparse files I've been doing: dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256GB I can find is that ext3 filesystems with 2KB blocks are limited to 256GB files. But our hard drives are formatted with 4K blocks so that doesn't seem to be it--not to mention this is happening in a tmpfs mounted ON TOP of the hard drive so the ext3 partition shouldn't be a factor. This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300GB+ files in tmpfs filesystems no problem. On the RHEL5.4 systems it is no go.
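    For what it's worth, the same probe can be made without dd (a quick sketch; the path is hypothetical, and Python is present on stock RHEL5), which rules dd itself out as the source of the failure:
      python -c "f = open('/path/on/tmpfs/sparse', 'w'); f.seek(300 * 2**30); f.write('x'); f.close()"
    If the seek-and-write fails at the same boundary, the limit is per-filesystem rather than a tool quirk; 300 * 2**30 bytes is 300 GiB, just past the suspected 256GB ceiling.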

    Read the article

  • Linux - Help, I'm running out of inodes!

    - by Rory McCann
    I have a filesystem that has lots of small files. Currently about 80% of inodes are used (I checked with df -i), however only 60% of disk space is used. How can I 'increase' the number of inodes? If it was just disk space, I know that I could just increase the size of the disk (this disk is on LVM). If I increase the size of the disk, will that make me have more inodes? I'm willing to grow the filesystem this disk is on, if that'd help.
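    For ext3/ext4 the inode count is fixed per block group at mkfs time, so it cannot be raised in place - but growing the filesystem adds block groups, and each new group brings its own inode table. A sketch of that path on LVM (device and mount point names are hypothetical):
      lvextend -L +20G /dev/vg0/lv_data
      resize2fs /dev/vg0/lv_data    # growing the FS also raises the inode total
      df -i /mountpoint             # verify the new inode count
    The alternative - recreating the filesystem with a denser inode ratio via mke2fs -i (bytes per inode) or -N (absolute count) - requires a reformat and restore.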

    Read the article

  • Skipping hardlinks when using TSM Backup

    - by Lars Haugseth
    We need to back up a filesystem with lots of hardlinks. Since there are several hardlinks for each "true" file, we would like to skip all the hardlinks when backing up the filesystem, to avoid n exact copies of each file. The backup is done using Tivoli Storage Manager Backup, and we've been unable to get it to treat hardlinks as anything other than separate files to be backed up alongside each other. In case it's relevant for possible solutions, I'd like to note that it's possible to tell a hardlink from a proper file by the filename:
      foobarbaz-123.ext    # file
      foobarbaz-123-1.ext  # hardlink
      foobarbaz-123-2.ext  # hardlink
      barbazfoo-456.ext    # file
      barbazfoo-456-1.ext  # hardlink
      barbazfoo-456-2.ext  # hardlink
      barbazfoo-456-3.ext  # hardlink
    That is, all hardlinks have two hyphens in the filename, whereas proper files have just one. The server is running Ubuntu Linux, and the files are situated on a gfs volume on our SAN.
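    Given that naming convention, one avenue (a sketch, not verified against this setup) is an exclude rule in the TSM client include-exclude list; TSM patterns support * and ?, and a name needs two literal hyphens to match this one:
      * in the client option/inclexcl file (path and extension are placeholders):
      exclude /san/data/.../*-*-*.ext
    Since * can also match the empty string, single-hyphen names like foobarbaz-123.ext cannot supply the two literal hyphens, so the "true" files are still backed up.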

    Read the article

  • Format as NTFS without Journal

    - by palswim
    I have a flash drive that I'd like to format for use in Windows. I would like support for symbolic links, so I can't use FAT/FAT32/exFAT. I would prefer to use the ext4 filesystem with journaling disabled, via the Ext2Fsd filesystem driver, but have (so far) found that: I can't make soft links across filesystems that Windows will read; Ext2Fsd has an annoying bug where it always mounts partitions as read-only, plus problems resuming from sleep; and some programs have problems writing to the partition even after manually configuring Ext2Fsd to allow writes. So, I would like to use NTFS for the flash drive, but disable the journaling feature (which causes extra writes), if possible. How can I do this?
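    As far as I know, NTFS metadata journaling ($LogFile) cannot be switched off, but the USN change journal - a separate, optional journal that does generate background writes - can be deleted per volume (drive letter is a placeholder):
      :: run from an elevated command prompt
      fsutil usn deletejournal /d X: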

    Read the article

  • Recover data from an ''unpartitioned'' hard drive

    - by Rafael S. Calsaverini
    I'm trying to recover data from a hdd for a friend from work. He was using it in an old Win98 PC (so I guess it was a FAT16 filesystem). When he installed the drive in a new PC, his Windows XP couldn't recognize the filesystem and gave an error message saying that the drive is unformatted. I tried to mount the hdd under Linux, but no partitions appear to be associated with the drive (I have only /dev/sdb associated with that drive, and no /dev/sdb1 or sdb2, etc.). I've found many articles on the web on how to recover partitions (with tools like dd and ddrescue), but how do I do that when I have no partitions and the system says my drive is unpartitioned? Is it possible to create a new partition without losing the data?
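    A common route here (with the usual caveat: image the disk before touching it, and work on the copy) is to let testdisk hunt for the lost partition table; paths below are placeholders:
      ddrescue /dev/sdb /path/to/disk.img /path/to/rescue.log   # raw copy first
      testdisk /path/to/disk.img                                # writes nothing until told to
    testdisk's Quick Search / Deeper Search can usually locate an old FAT16 boot sector and rewrite the partition table entry, which is exactly the "create a partition without losing the data" case.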

    Read the article

  • How to mirror filesystems with millions of hardlinks?

    - by Thomas Berger
    We have one big problem at the moment: we need to mirror a filesystem for one of our customers. That's usually not really a problem, but here it is: on this filesystem there is one folder with millions of hardlinks (yes! MILLIONS!). rsync requires more than 4 days just to build the file list. We use the following rsync options:
      rsync -Havz --progress serverA:/data/cms /data/
    Does anyone have an idea how to speed up this rsync, or alternatives we could use? We could not use dd, as the target disk is smaller than the source.
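    If this is a one-shot copy rather than a repeated sync, a tar pipe streams the tree instead of building a complete file list up front (a sketch; tar preserves hardlinks between members of the same archive by default):
      ssh serverA 'tar -C /data -cf - cms' | tar -C /data -xf -
    For repeated runs, rsync 3.x with incremental recursion on both ends also cuts the multi-day file-list phase dramatically, though -H still carries memory overhead for tracking every linked inode.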

    Read the article

  • Unable to access intel fake RAID 1 array in Fedora 14 after reboot

    - by Sim
    Hello everyone. First, I am relatively new to Linux (but not to *nix). I have 4 disks assembled in the following Intel AHCI BIOS fake RAID arrays:
      2x320GB RAID1 - used for operating systems, md126
      2x1TB RAID1  - used for data, md125
    I used the 320GB RAID to install my operating system, and I didn't even select the second RAID during the installation of Fedora 14. After successful partitioning and installation of Fedora, I tried to make the second array available. It was possible to make it visible in Linux with mdadm --assemble --scan; after that I created one maximum-size partition and one maximum-size ext4 filesystem in it. I mounted it and used it. After a restart: a few I/O errors during boot regarding md125, plus an inability to mount the filesystem on it, and I was dropped into the repair shell. I commented out the filesystem in fstab and it booted. To my surprise, the array was marked as "auto read only":
      [root@localhost ~]# cat /proc/mdstat
      Personalities : [raid1]
      md125 : active (auto-read-only) raid1 sdc[1] sdd[0]
            976759808 blocks super external:/md127/0 [2/2] [UU]
      md127 : inactive sdc[1](S) sdd[0](S)
            4514 blocks super external:imsm
      md126 : active raid1 sda[1] sdb[0]
            312566784 blocks super external:/md1/0 [2/2] [UU]
      md1 : inactive sdb[1](S) sda[0](S)
            4514 blocks super external:imsm
      unused devices: <none>
    and the partition in it was not available as a device special file in /dev:
      [root@localhost ~]# ls -l /dev/md125*
      brw-rw---- 1 root disk 9, 125 Jan  6 15:50 /dev/md125
    But the partition is there according to fdisk:
      [root@localhost ~]# fdisk -l /dev/md125
      Disk /dev/md125: 1000.2 GB, 1000202043392 bytes
      19 heads, 10 sectors/track, 10281682 cylinders, total 1953519616 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x1b238ea9
          Device Boot      Start         End      Blocks   Id  System
      /dev/md125p1          2048  1953519615   976758784   83  Linux
    I tried to "activate" the array in different ways (I'm not experienced with mdadm, and the man page is gigantic, so I was only browsing it looking for my answer), but it was impossible - the array would stay in "auto read only" and the device special file for the partition would not appear in /dev. It was only after I recreated the partition via fdisk that it reappeared in /dev... until the next reboot. So, my question is: how do I make the array automatically available after reboot?
    Here is some additional information. I am able to see the UUID of the array in blkid:
      [root@localhost ~]# blkid
      /dev/sdc: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
      /dev/sdd: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
      /dev/md126p1: UUID="60C8D9A7C8D97C2A" TYPE="ntfs"
      /dev/md126p2: UUID="3d1b38a3-b469-4b7c-b016-8abfb26a5d7d" TYPE="ext4"
      /dev/md126p3: UUID="1Msqqr-AAF8-k0wi-VYnq-uWJU-y0OD-uIFBHL" TYPE="LVM2_member"
      /dev/mapper/vg00-rootlv: LABEL="_Fedora-14-x86_6" UUID="34cc1cf5-6845-4489-8303-7a90c7663f0a" TYPE="ext4"
      /dev/mapper/vg00-swaplv: UUID="4644d857-e13b-456c-ac03-6f26299c1046" TYPE="swap"
      /dev/mapper/vg00-homelv: UUID="82bd58b2-edab-4b4b-aec4-b79595ecd0e3" TYPE="ext4"
      /dev/mapper/vg00-varlv: UUID="1b001444-5fdd-41b6-a59a-9712ec6def33" TYPE="ext4"
      /dev/mapper/vg00-tmplv: UUID="bf7d2459-2b35-4a1c-9b81-d4c4f24a9842" TYPE="ext4"
      /dev/md125: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
      /dev/sda: TYPE="isw_raid_member"
      /dev/md125p1: UUID="420adfdd-6c4e-4552-93f0-2608938a4059" TYPE="ext4"
    Here is how /etc/mdadm.conf looks:
      [root@localhost ~]# cat /etc/mdadm.conf
      # mdadm.conf written out by anaconda
      MAILADDR root
      AUTO +imsm +1.x -all
      ARRAY /dev/md1 UUID=89f60dee:e46a251f:7475814b:d4cc19a9
      ARRAY /dev/md126 UUID=a8775c90:cee66376:5310fc13:63bcba5b
      ARRAY /dev/md125 UUID=b9a1149f:ae114fc8:a6000d77:354dc42a
    Here is how /proc/mdstat looks after I recreate the partition in the array so that it becomes available:
      [root@localhost ~]# cat /proc/mdstat
      Personalities : [raid1]
      md125 : active raid1 sdc[1] sdd[0]
            976759808 blocks super external:/md127/0 [2/2] [UU]
      md127 : inactive sdc[1](S) sdd[0](S)
            4514 blocks super external:imsm
      md126 : active raid1 sda[1] sdb[0]
            312566784 blocks super external:/md1/0 [2/2] [UU]
      md1 : inactive sdb[1](S) sda[0](S)
            4514 blocks super external:imsm
      unused devices: <none>
    Detailed output regarding the array in question:
      [root@localhost ~]# mdadm --detail /dev/md125
      /dev/md125:
            Container : /dev/md127, member 0
           Raid Level : raid1
           Array Size : 976759808 (931.51 GiB 1000.20 GB)
        Used Dev Size : 976759940 (931.51 GiB 1000.20 GB)
         Raid Devices : 2
        Total Devices : 2
          Update Time : Fri Jan  7 00:38:00 2011
                State : clean
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 0
                 UUID : 30ebc3c2:b6a64751:4758d05c:fa8ff782
          Number   Major   Minor   RaidDevice State
             1       8       32        0      active sync   /dev/sdc
             0       8       48        1      active sync   /dev/sdd
    and /etc/fstab, with /data commented out (the filesystem that is on this array):
      # /etc/fstab
      # Created by anaconda on Thu Jan  6 03:32:40 2011
      #
      # Accessible filesystems, by reference, are maintained under '/dev/disk'
      # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
      #
      /dev/mapper/vg00-rootlv                     /      ext4  defaults  1 1
      UUID=3d1b38a3-b469-4b7c-b016-8abfb26a5d7d   /boot  ext4  defaults  1 2
      #UUID=420adfdd-6c4e-4552-93f0-2608938a4059  /data  ext4  defaults  0 1
      /dev/mapper/vg00-homelv                     /home  ext4  defaults  1 2
      /dev/mapper/vg00-tmplv                      /tmp   ext4  defaults  1 2
      /dev/mapper/vg00-varlv                      /var   ext4  defaults  1 2
      /dev/mapper/vg00-swaplv                     swap   swap  defaults  0 0
      tmpfs   /dev/shm  tmpfs   defaults        0 0
      devpts  /dev/pts  devpts  gid=5,mode=620  0 0
      sysfs   /sys      sysfs   defaults        0 0
      proc    /proc     proc    defaults        0 0
    Thanks in advance to everyone who even read this whole issue :-)
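    One detail that stands out in the output above (an observation to rule out, not a confirmed diagnosis): the ARRAY line for /dev/md125 in mdadm.conf carries b9a1149f:..., which matches the ext3 filesystem UUID that blkid reports for the member disks, while mdadm --detail shows the array UUID as 30ebc3c2:b6a64751:4758d05c:fa8ff782. Regenerating the line from the live, assembled array would rule that mismatch out:
      [root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf
      # then edit /etc/mdadm.conf and remove the stale ARRAY lines the new output duplicates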

    Read the article
