Search Results

Search found 952 results on 39 pages for 'fs'.

Page 31 of 39

  • How to verify PostgreSQL 9 has installed correctly on a CentOS server?

    - by A4J
    I'm trying to install the pg (postgres) gem on a CentOS server, but it keeps saying Postgres is too old, even though I have upgraded it to 9.1.3 (as per the instructions here: http://www.davidghedini.com/pg/entry/install_postgresql_9_on_centos). I am using CentOS 5.8 and Ruby 1.9.3. Here is the error message:

        Building native extensions. This could take a while...
        ERROR: Error installing pg:
        ERROR: Failed to build gem native extension.
        /usr/local/bin/ruby extconf.rb
        checking for pg_config... yes
        Using config values from /usr/bin/pg_config
        checking for libpq-fe.h... yes
        checking for libpq/libpq-fs.h... yes
        checking for pg_config_manual.h... yes
        checking for PQconnectdb() in -lpq... yes
        checking for PQconnectionUsedPassword()... no
        Your PostgreSQL is too old. Either install an older version of this gem or upgrade your database.
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.

    psql --version confirms my version:

        psql (PostgreSQL) 9.1.3

    I can confirm the packages installed:

        Setting up Install Process
        Package postgresql91-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-devel-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-server-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-libs-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-contrib-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Nothing to do

    Any ideas on how to troubleshoot this? Thanks in advance.
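
    A hedged note: the build log above shows extconf.rb taking its settings from /usr/bin/pg_config, which on CentOS 5 usually belongs to the stock 8.x client libraries rather than the PGDG 9.1 packages (those install under /usr/pgsql-9.1). Pointing the gem at the newer pg_config is worth a try:

        # sketch, assuming the PGDG layout puts 9.1 under /usr/pgsql-9.1
        gem install pg -- --with-pg-config=/usr/pgsql-9.1/bin/pg_config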

    Read the article

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    I'm trying my first attempt at getting a node server set up on an Amazon EC2 Linux instance, and I think I made it quite far. The first problem I ran into was that while trying to make Node the connection timed out after a while, so I needed three attempts until I got this:

        LINK(target) /home/ec2-user/node/out/Release/node: Finished
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp
        make[1]: Leaving directory `/home/ec2-user/node/out'
        ln -fs out/Release/node node

    which tells me "Node is done", although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even if I do this:

        cd ~
        git clone git://github.com/isaacs/npm.git
        cd npm
        sudo -s PATH=/usr/local/bin:$PATH make install

    I'm still getting this:

        # after cloning #
        make[1]: Entering directory `/root/npm'
        node cli.js install
        bash: node: command not found
        make[1]: *** [node_modules/.bin/ronn] Error 127
        make[1]: Leaving directory `/root/npm'
        make: *** [man/man3/start.3] Error 2

    Question: since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone", or is this not written to disk until I call make install and I don't need to worry? Thanks for helping out!
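
    A hedged reading: "node: command not found" under sudo suggests node was built but never installed anywhere on root's PATH. A sketch, with paths assumed from the build log above (and yes, git clone writes to disk immediately; rm -rf on a stray clone directory removes it):

        cd ~/node
        sudo make install                                  # installs node into /usr/local/bin
        cd ~/npm
        sudo env "PATH=/usr/local/bin:$PATH" make install  # keep that directory on root's PATH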

    Read the article

  • How does one change the UUID of a Volume on Mac OS X 10.6?

    - by Emmel
    Does anyone know how to change the UUID of a volume? The background for this question is that I have a duplicate UUID issue: /Volumes/OldMacHD has a UUID of XYZ, and /Volumes/Mirror1 also has a UUID of XYZ (the same UUID! I bet that's because OldMacHD USED to be part of this mirror). I got these UUIDs via:

        diskutil info /dev/thatdisknumber | grep UUID

    I'd like to change the UUID of 'Mirror1'. I discovered by chance the 'hfs.util' utility, since these are HFS volumes after all. The man page for hfs.util says that if you issue the -s flag, this changes the UUID. However, if you type hfs.util all by itself, it doesn't show you the -s option at all, just every option besides that! Grr. I tried it anyway:

        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4

    (that's the RAID volume). Nothing happens: no error message, no success message, and the UUID is exactly the same. I tried it while the volume was unmounted. Any ideas?
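
    One hedged thing to try: the filesystem-bundle utilities are normally invoked by the system with a bare device name (disk4) rather than a /dev path, so the argument form may be what's silently failing. A sketch, assuming the mirror really is disk4:

        diskutil unmount /Volumes/Mirror1
        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s disk4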

    Read the article

  • What benchmark tool to use to benchmark hardware for VM server?

    - by Mark0978
    We are setting up a new piece of hardware to virtualize several of our servers on. The choices are RAID 5, RAID 6, and RAID 0+1. We want to benchmark all three before we go live with the machine, but I'm not sure how to test the speed. Since we will be using it to host VMs, what will the actual disk traffic look like? What can I use to see if RAID 6 is too slow?

    Short of setting up the system with all the VMs on it and running it that way, then redoing all the work, I'm not sure how to test it. It then becomes more of a subjective test than an objective one. I'm worried that RAID 6 will have too much overhead, that RAID 5 will be too fragile with 3TB drives, and I've never worked with 0+1 at all. So in short, I'd like to set up the base machine (which will be running Linux) and then test the underlying software RAID for speed. What kind of tool exists to simulate this kind of load? Barring a specific tool, how about a generic FS testing tool that will simulate different loads?
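
    fio is the usual generic tool for this kind of question; a hedged sketch approximating a mixed, mostly-random VM workload (the target directory, sizes, and mix are assumptions to tune for your setup):

        # writes test files under /mnt/array; needs fio installed
        fio --name=vmtest --directory=/mnt/array --size=10G \
            --rw=randrw --rwmixread=70 --bs=4k \
            --ioengine=libaio --iodepth=32 --direct=1 \
            --numjobs=4 --runtime=300 --group_reporting

    Running the same job file against each RAID layout gives directly comparable IOPS and latency numbers.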

    Read the article

  • Nginx + uWSGI + Django performance stuck on 100rq/s

    - by dancio
    I have configured Nginx with uWSGI and Django on CentOS 6 x64 (3.06GHz i3 540, 4GB), which should easily handle 2500 rq/s, but when I run an ab test (ab -n 1000 -c 100) performance stops at 92-100 rq/s.

    Nginx:

        user nginx;
        worker_processes 2;
        events {
            worker_connections 2048;
            use epoll;
        }

    uWSGI (the Emperor is started as):

        /usr/sbin/uwsgi --master --no-orphans --pythonpath /var/python --emperor /var/python/*/uwsgi.ini

    with this uwsgi.ini:

        [uwsgi]
        socket = 127.0.0.2:3031
        master = true
        processes = 5
        env = DJANGO_SETTINGS_MODULE=x.settings
        env = HTTPS=on
        module = django.core.handlers.wsgi:WSGIHandler()
        disable-logging = true
        catch-exceptions = false
        post-buffering = 8192
        harakiri = 30
        harakiri-verbose = true
        vacuum = true
        listen = 500
        optimize = 2

    sysctl changes:

        # Increase TCP max buffer size settable using setsockopt()
        net.ipv4.tcp_rmem = 4096 87380 8388608
        net.ipv4.tcp_wmem = 4096 87380 8388608
        net.core.rmem_max = 8388608
        net.core.wmem_max = 8388608
        net.core.netdev_max_backlog = 5000
        net.ipv4.tcp_max_syn_backlog = 5000
        net.ipv4.tcp_window_scaling = 1
        net.core.somaxconn = 2048
        # Avoid a smurf attack
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        # Optimization for port use for LBs
        # Increase system file descriptor limit
        fs.file-max = 65535

    I did sysctl -p to enable the changes.

    Idle server info:

        top - 13:34:58 up 102 days, 18:35, 1 user, load average: 0.00, 0.00, 0.00
        Tasks: 118 total, 1 running, 117 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 3983068k total, 2125088k used, 1857980k free, 262528k buffers
        Swap: 2104504k total, 0k used, 2104504k free, 606996k cached

        free -m
                     total   used   free  shared  buffers  cached
        Mem:          3889   2075   1814       0      256     592
        -/+ buffers/cache:    1226   2663
        Swap:         2055      0   2055

    During the test:

        top - 13:45:21 up 102 days, 18:46, 1 user, load average: 3.73, 1.51, 0.58
        Tasks: 122 total, 8 running, 114 sleeping, 0 stopped, 0 zombie
        Cpu(s): 93.5%us, 5.2%sy, 0.0%ni, 0.2%id, 0.0%wa, 0.1%hi, 1.1%si, 0.0%st
        Mem: 3983068k total, 2127564k used, 1855504k free, 262580k buffers
        Swap: 2104504k total, 0k used, 2104504k free, 608760k cached

        free -m
                     total   used   free  shared  buffers  cached
        Mem:          3889   2125   1763       0      256     595
        -/+ buffers/cache:    1274   2615
        Swap:         2055      0   2055

        iotop
        30141 be/4 nginx 0.00 B/s 7.78 K/s 0.00 % 0.00 % nginx: wo~er process

    Where is the bottleneck? Or what am I doing wrong?
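
    One hedged observation: the "during the test" snapshot shows roughly 93% user CPU with almost no idle or I/O wait, so the box is saturating its CPU in the application tier, not in the network stack the sysctl changes target. Benchmarking something that bypasses Django isolates the stack; for example, a static file served straight by Nginx (path is an assumption):

        ab -n 1000 -c 100 http://127.0.0.1/static/test.txt

    If the static case hits thousands of rq/s, the 100 rq/s ceiling lives in the Django request path.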

    Read the article

  • Which linux distributions offer seamless support for UEFI and an LVM root out of the box?

    - by Jannik Jochem
    My new ultrabook (an Asus UX32VD) requires UEFI in order to boot from the internal hard disk. I use an LVM partition which contains my root fs, and I dual-boot Windows 8. I somehow managed to get this working on Sabayon Linux, but the overall process was pretty painful, and system upgrades keep breaking my configuration because everything depends on a hand-configured kernel and a hand-crafted GRUB2 configuration. This causes a lot of hassle and distraction for me, so I am considering switching to a different distribution. However, I cannot find any concrete resources that precisely document the state of UEFI support in the popular distributions. As an example, the length of the Ubuntu wiki page on UEFI suggests that installing on UEFI systems is a non-trivial process, and this AskUbuntu thread on encrypted LVM on UEFI systems suggests that LVM might also be a problem.

    I know that this question seems somewhat open-ended, so I'll formulate concrete questions:

    Are there any Linux distributions with an installer that supports installing to an LVM root in a UEFI boot setting where Windows 8 is dual-booted?

    Which distributions support UEFI without having to jump through hoops to bootstrap into a UEFI-booted system, and without requiring manual configuration of the boot manager?

    Read the article

  • [Ubuntu 10.04] mdadm - Can't get RAID5 Array To Start

    - by Matthew Hodgkins
    Hello, after a power failure my RAID array refuses to start. When I boot I have to run

        sudo mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

    to get mdadm to notice the array. Here are the details (after I force assemble).

        sudo mdadm --misc --detail /dev/md0:
        /dev/md0:
            Version : 00.90
            Creation Time : Sun Apr 25 01:39:25 2010
            Raid Level : raid5
            Used Dev Size : 1465135872 (1397.26 GiB 1500.30 GB)
            Raid Devices : 6
            Total Devices : 6
            Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Thu Jun 17 23:02:38 2010
            State : active, Not Started
            Active Devices : 6
            Working Devices : 6
            Failed Devices : 0
            Spare Devices : 0
            Layout : left-symmetric
            Chunk Size : 128K
            UUID : 44a8f730:b9bea6ea:3a28392c:12b22235 (local to host hodge-fs)
            Events : 0.1249691

            Number  Major  Minor  RaidDevice  State
            0       8      65     0           active sync  /dev/sde1
            1       8      81     1           active sync  /dev/sdf1
            2       8      97     2           active sync  /dev/sdg1
            3       8      49     3           active sync  /dev/sdd1
            4       8      33     4           active sync  /dev/sdc1
            5       8      17     5           active sync  /dev/sdb1

    mdadm.conf:

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions /dev/sdb1 /dev/sdb1
        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes
        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>
        # definitions of existing MD arrays
        ARRAY /dev/md0 level=raid5 num-devices=6 UUID=44a8f730:b9bea6ea:3a28392c:12b22235

    Any help would be appreciated.
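
    A hedged observation: the DEVICE line in the mdadm.conf above lists /dev/sdb1 twice and names none of the other five members, so the boot-time scan may simply never examine the remaining disks. Letting mdadm scan all partitions again is a cheap test:

        # make the boot-time scan consider every partition, not just sdb1
        sudo sed -i 's/^DEVICE .*/DEVICE partitions/' /etc/mdadm/mdadm.conf
        sudo update-initramfs -u    # Ubuntu assembles from the initramfs copy at boot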

    Read the article

  • lxc bandwidth control using tc

    - by kumar
    I am trying to restrict bandwidth inside my containers. I have tried using the following commands, but I think it is not taking effect:

        cd /sys/fs/cgroup/net_cls/
        echo 0x1001 > A/net_cls.classid    # 10:1
        echo 0x1002 > B/net_cls.classid    # 10:2

        tc qdisc add dev eth0 root handle 10: htb
        tc class add dev eth0 parent 10: classid 10:1 htb rate 40mbit
        tc class add dev eth0 parent 10: classid 10:2 htb rate 30mbit
        tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup

    Here A and B are containers created with these commands:

        lxc-execute -n A -f configfile /bin/bash
        lxc-execute -n B -f configfile /bin/bash

    whereas configfile contains only this entry:

        lxc.utsname = test_lxc

    After starting the containers, I started vsftpd inside container A and tried to access files using an ftp client from another machine. Then I killed vsftpd in container A, started vsftpd in container B, and tried to access files using an ftp client from another machine. I cannot observe any difference in performance; for that matter, it is nowhere near 40mbit/30mbit. Please correct me if anything is wrong here.
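
    A hedged way to check whether the cgroup filter is classifying traffic at all: watch the per-class byte counters while an ftp transfer runs, and confirm the vsftpd PID actually sits in the intended net_cls group:

        tc -s class show dev eth0            # the 10:1/10:2 byte counters should grow during the transfer
        cat /sys/fs/cgroup/net_cls/A/tasks   # should list the vsftpd PID started in container A

    If the counters stay at zero, the traffic is falling through to the default class rather than being matched.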

    Read the article

  • Hard drives (SATA/ATA) corrupting

    - by JC Denton
    Hello all, two years ago I bought relatives a new computer for Christmas and installed Ubuntu on it. Ever since then it has been experiencing problems with the hard drives. The hard drive supplied with the machine was a SATA drive. When it appeared to be having problems (files and folders with invalid encoding started appearing), I replaced the SATA drive with the drive from their previous computer. I replaced that replacement later on, the drive being rather old and thus more prone to failure. The replacement drive is an IDE drive, but the same problems started to appear (files and folders showing up in Nautilus with invalid encoding). I fear the files and folders that are showing up are existing FS entries starting to corrupt. As it's happening to both the IDE and SATA drives, it's unlikely to be the drives themselves or the IDE/SATA controller, I believe. Any ideas as to what could be causing the (assumed) corruption?

    EDIT: You're right about the paragraphs. They were there in edit mode, but I'm still getting to grips with the whitespace format codes. The system is a "Primo Pro" AMD Phenom II X4 Quad Core 920 2.80GHz SILENT DDR2, ordered from overclockers.co.uk, and nothing has been added to it except for the replacement of the SATA drive. It would seem unlikely for a barebones system to be underpowered.
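
    Since two different drives on two different interfaces show the same symptom, the shared components (RAM, power supply, cabling) are the usual suspects. A hedged first pass at diagnostics:

        # drive health via smartmontools, read-only
        sudo smartctl -a /dev/sda
        # then reboot into the memtest86+ entry in the GRUB menu to rule out bad RAM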

    Read the article

  • Have an Input/output error when connecting to a server via ssh

    - by Shehzad009
    Hello, I seem to be having a problem when connecting to an Ubuntu server via ssh. When I log in, I get this error:

        Could not chdir to home directory /home/username: Input/output error

    It seems like my home folder is corrupt or something. I cannot ls in the home folder directory, and I can't cd into my username directory. As root I cannot ls in the home directory either, or in any directory under /home. I notice as well that when I save or quit in vim, I get this error at the bottom of the page:

        E138: Cannot write viminfo file /home/root/.viminfo!

    Any ideas?

    EDIT: this is what happens if I type in these commands:

        mount
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        fusectl on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        /dev/mapper/RAID1-lvvar on /var type xfs (rw)
        /dev/mapper/RAID5-lvsrv on /srv type xfs (rw)
        /dev/mapper/RAID5-lvhome on /home type xfs (rw)
        /dev/mapper/RAID1-lvtmp on /tmp type reiserfs (rw)

        dmesg | tail
        [1213273.364040] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213274.084081] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213309.364038] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213310.084041] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213345.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213346.084042] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213381.365036] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213382.084047] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213417.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213418.084063] Filesystem "dm-4": xfs_log_force: error 5 returned.

        fdisk -l /dev/sda
        Cannot open /dev/sda
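
    A hedged reading of the logs: xfs_log_force error 5 is EIO, and together with the unopenable /dev/sda it points at a failing disk or degraded array underneath XFS rather than at the filesystem alone. Once the underlying device is healthy (or the array rebuilt), the usual repair sequence would be:

        sudo umount /home
        sudo xfs_repair /dev/mapper/RAID5-lvhome
        sudo mount /home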

    Read the article

  • copying an lvm partition to a smaller disk, and renaming volume groups.

    - by dlamblin
    I was trying to shrink a vmdk (VMware disk image) file to be as small as possible, and found two recommendations. The first is to cat /dev/zero into the fs, delete the file, and run VMware Tools' shrink; this works okay. The second is to copy everything into a new vmdk. I went the second route. I did not use dd because I actually want to use as few blocks as possible instead of having a block-by-block copy: any unlinked files would still have blocks that aren't zeroed out. Secondly, the CentOS image was mostly LVM, except for the boot partition, and my target was going to be 4GB instead of 8GB. I did use dd for the first 40MB to get the boot blocks and partition table copied. I then used parted to create an identical primary boot partition and a smaller primary LVM partition. Then I used pvcreate on that device (sdb2), and vgcreate and lvcreate to create a root and swap. I used mkfs.ext3 on the root partition and then rsync -av / /2root, excluding /proc /sys /2root /dev. So far everything went fine. My problems are:

    1. The result is 2.7 GB while the source was 2.1 GB. This is weird to me.
    2. The second volume group is called VolGroup01, while the original was called VolGroup00. How can I rename VolGroup01 to VolGroup00 and swap it out after all this?
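
    On the rename: vgrename does exactly this. A hedged sketch (run it while the old VolGroup00 disk is detached, so the two names never collide):

        vgchange -an VolGroup01
        vgrename VolGroup01 VolGroup00
        vgchange -ay VolGroup00
        # then update /etc/fstab and the initrd/bootloader references to the new name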

    Read the article

  • Online resizing of kvm guest root filesystem?

    - by Bittrance
    I have a Linux guest that uses an LVM volume directly as its root file system (that is, there is no partition table). The libvirt config looks like this:

        <os>
          <type arch='x86_64' machine='rhel6.4.0'>hvm</type>
          <kernel>/boot/vmlinuz-X.Y.Z.el6.x86_64</kernel>
          <initrd>/boot/initramfs-X.Y.Z.el6.x86_64.img</initrd>
          <cmdline>console=ttyS0 root=/dev/vda</cmdline>
          <boot dev='hd'/>
        </os>
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none' io='native'/>
          <source dev='/dev/vg/guest'/>
          <target dev='vda' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </disk>

    From inside the guest:

        $ mount
        /dev/vda on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

    Is it possible to resize the guest's root partition without rebooting the guest? Just doing lvextend on the host and resize2fs from the guest does not seem to be enough.
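
    The step usually missing here is telling qemu that the backing device grew; libvirt exposes that as virsh blockresize, which lets the guest see the new size without a reboot. A hedged sketch (domain name and sizes are assumptions):

        lvextend -L +10G /dev/vg/guest                  # on the host
        virsh blockresize guestname /dev/vg/guest 30G   # notify qemu of the new size
        resize2fs /dev/vda                              # inside the guest; ext4 grows online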

    Read the article

  • Deciphering an IIS6 Httperr log file

    - by smackaysmith
    We have a Windows 2003 R2 SP2 server with IIS6 that is creating a 1024KB httperr file every minute. I can't figure out what I'm looking at. Here's a snippet:

        2010-03-24 13:15:05 10.53.2.35 1667 10.53.2.12 80 HTTP/1.1 PUT /hserver.dll?&V01|&IMAC=0080646077AB|CID=32|CN=LWT0080646077AB|ED=1|IP=10.53.2.35|SM=255.255.255.0|GW=10.53.2.1|SN=10.53.2.255|DM=logs.com|1D=10.53.2.12|2D=10.101.2.12|0D=1|AL=/usr/sbin/netxserv|AV=4.1.0.0|CP=VIAüEstherüprocessorüü800MHz|CPS=800|RM=190512|B1=1.18|PD2=1024x768x16ü@ü60Hz|IM=6.6.2-02|CI=3600|SN#=6KHDG301300|OS=23|VI=1|P1=24|TZO=-301|TZ=CDT|FS=128|MD=2003-04|CO=|LO=|AP0=BaseüSystem|NA|6.6.2-02|AP1=RapportüAgent|NA|4.1.0-3.26|AP2=TrueType|NA|6.8.0-3.4|AP3=WebFonts|NA|2.0.4-3.6|AP4=TrueTypeüFonts|NA|6.8.0-3.5|AP5=Network_login|NA|1.0.0-1.0.3|AP6=ScreenüSaver|NA|3.13|AP7=DMonitor|NA|1.0.0-0.4.0|AP8=MozillaüFirefox_15|NA|1.5.0.8-3.6|AP9=RemoteüShadow|NA|3.17|AP10=RemoteüDesktop|NA|1.6.0-1.0|AP11=SNMP|NA|5.1.3.1-3.13|AP12=LinuxüPrinting|NA|3.8.27-3.33|AP13=SSH|NA|3.8.1-3.25|AP14=ThinPrint|NA|6.2.87-0.2|AP15=XDMCP|NA|6.8.0-3.29|AP16=Ericom|NA|8.2.0-3.29|AP17=Daylightüsavingütimeüupdate|NA|1.1.0-1.0.0| 411 - LengthRequired -

    What on earth am I looking at? Nothing in the system or app logs. Finally, in IIS Manager, the Default Web Site label has boxes instead of spaces. Very odd.

    Read the article

  • File corrupted by some tools (probably virus or antivirus)- does the pattern indicate any known corruptions?

    - by StackTrace
    As part of our software we install Postgres (Windows). At one customer site, a set of files got corrupted. All the files were part of the timezone information (postgres/share/timezone); they are some sort of binary files. After the corruption, they all start with the following pattern (od -tac output):

        $ od -tac GMT
        0000000 can esc etx sub nak dle  em   |  nl  em  so   |   o   r   l   _
                030 033 003 032 025 020 031   |  \n 031 016   |   o   r   l   _
        0000020   \   \   \   \   \   \   \ del   3  fs   ] del del del del del
                  \   \   \   \   \   \   \ 377   3 034   ] 377 377 377 377 377
        0000040   > ack   r   v   s ack   p soh   q   h   r   s   q   w   h   q
                276 206 362 366 363 206 360 201 361 350 362 363 361 367 350 361
        0000060   t   r ack   h eot   s   }   v   h   | etx   p eot ack nul   }
                364 362 206 350 204 363 375 366 350 374 203 360 204 206 200 375
        0000100   |   q   t   s   t   8   E   E   E   E   E   E   E   E   E   E
                374 361 364 363 364 270 305 305 305 305 305 305 305 305 305 305
        0000120   E   E   E   E   E   E   E   E   E   E   E   E   E   E   E   E
                305 305 305 305 305 305 305 305 305 305 305 305 305 305 305 305
        *
        0000240   m   ;   z dc3   7 sub   c can  em   a   u   5 can   d   2   B
                355   ;   z 023 267 232 343 230 031   a   u   5 230   d 262 302
        0000260   X nul   y   J   o   S   -   9   ] stx soh   L can   1   !   j
                330  \0   y 312   o   S 255   9 335 202 001 314 030 261 241   j
        0000300 dle   g   o etb   n  ff  em   ]   9   F   ' dc4   }   ,  em   $
                020   g 357 227   n  \f 231   ] 271   F 247 024 375 254 231 244
        0000320   Q  si  ff   L  bs   2   # stx   i   5   r   %   |   |   c del
                  Q 017 214 314 210   2   # 002 351   5 362 245 374 374 343 177
        0000340   m   C esc   H  em enq   ~   X   o   V   p   /   l dc3   N  sp
                  m   C 033   H 031 205 376   X   o 326 360 257   l 023   N
        0000360   }   ) enq   ( syn   !   3   s   $   E   z dc3   A dc3  ff   P

    Does the pattern indicate any known corruption?

    Read the article

  • Problems mounting HPUX LVM+VXFS filesystem on Linux

    - by golimar
    I have a physical disk from a HPUX system that I need to access from a Debian Linux for ia64 system. From the hpux-lvm-tools project I have the tools to access the HPUX LVMs (Linux LVM has a different format) and I also have the freevxfs driver. I know beforehand that the disk has three partitions, and that the biggest one contains LVM volumes, some of which are VxFS filesystems. I can see the partitions:

        # cat /proc/partitions
        major minor  #blocks    name
        8     32     143374744  sdc
        8     33     512000     sdc1
        8     34     142452736  sdc2
        8     35     409600     sdc3

    It finds a VG in one of the disk partitions:

        # ./vgscan_hpux
        On /dev/sdc2 - vg1328874723

        # ./pvdisplay_hpux /dev/sdc2
        PV General Information
        ----------------------
        VG Creation Time        Fri Feb 10 12:52:03 2012
        Physical Volume ID      1766760336 1328874723
        Volume Group ID         1766760336 1328874723
        Physical Volumes in VG  1766760336 1328874723
        VG Actication Mode      0 - LOCAL
        PE Size                 64 MBs
        Lvol sizes
        ----------
        lvol1 - 8 Extents - 512 MBs
        lvol2 - 192 Extents - 12288 MBs
        lvol3 - 16 Extents - 1024 MBs
        ...
        lvol21 - 13 Extents - 832 MBs
        lvol22 - 224 Extents - 14336 MBs
        lvol23 - 16 Extents - 1024 MBs

    Then I activate that VG and some new devices appear in my system:

        # ./pvactivate_hpux /dev/sdc2
        VG vg1328874723 Activated succesfully with 23 lvols.

        # ll /dev/mapper/
        total 0
        crw------- 1 root root 10, 59 Nov 26 16:08 control
        lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol1 -> ../dm-0
        lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol10 -> ../dm-9
        ...
        lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol8 -> ../dm-7
        lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol9 -> ../dm-8

    But:

        # mount /dev/mapper/vg1328874723-lvol18 /mnt/tmp
        mount: you must specify the filesystem type
        # mount -t vxfs /dev/mapper/vg1328874723-lvol18 /mnt/tmp
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg1328874723-lvol18,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try dmesg | tail or so
        # lsmod | grep vxfs
        freevxfs 23905 0

    I also tried to identify the raw data with the file command and it just says 'data':

        # file -s /dev/mapper/vg1328874723-lvol18
        /dev/mapper/vg1328874723-lvol18: symbolic link to `../dm-17'
        # file -s /dev/dm-17
        /dev/dm-17: data

    Any clues?
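
    A hedged note before blaming the tools: freevxfs only understands the very old VxFS disk layouts (roughly layout version 2), while HP-UX 11.x volumes are normally layout 4 or later, which Linux cannot mount; that would explain both the mount failure and file reporting plain 'data'. You can at least confirm a VxFS superblock is present, since it traditionally sits 8 KB into the volume with magic number 0xa501fcf5:

        dd if=/dev/dm-17 bs=1k skip=8 count=1 2>/dev/null | od -A x -t x1 | head -2
        # look for the bytes a5 01 fc f5 (or f5 fc 01 a5, depending on endianness)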

    Read the article

  • Disable XP disk check using FAT32

    - by mike xie
    Right now I'm using Windows XP and Macintosh on my MacBook Pro via Boot Camp. Sometimes my XP would crash, and when I restarted it it would have to go through a disk check; although it says I can skip the check by pushing a key, that never worked for me. I did a bit of research online on how to disable the disk check and found:

        chkntfs /x c:

    but when I tried this in my cmd it said the disk is in FAT32 format. I tried to convert my C: drive from FAT32 to NTFS by using:

        convert c: /FS:NTFS

    but when I tried this it told me to locate my C: drive. I tried typing C: and Bootcamp, but couldn't really get past it. I later saw someone suggest using this (save it as .reg and execute it):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager]
        "AutoChkTimeOut"=dword:0000000

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager]
        "BootExecute"=hex(7):61,00,75,00,74,00,6f,00,63,00,68,00,65,00,63,00,6b,00,20,\
        00,61,00,75,00,74,00,6f,00,63,00,68,00,6b,00,20,00,2a,00,00,00,00,00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "SFCScan"=dword:00000000

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\cleanuppath]
        @=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,\
        00,5c,00,73,00,79,00,73,00,74,00,65,00,6d,00,33,00,32,00,5c,00,63,00,6c,00,\
        65,00,61,00,6e,00,6d,00,67,00,72,00,2e,00,65,00,78,00,65,00,20,00,2f,00,44,\
        00,20,00,25,00,63,00,00,00

    I have just tried running it but am not really sure if it did anything (my laptop hasn't crashed yet :) ). Firstly, I am wondering if someone can tell me how to check whether that script worked? Secondly, if the script didn't work, does anyone have a solution for these problems? Is there another way to disable the disk check, or another way for me to change my FAT32 to NTFS?
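
    A hedged note on the convert step: convert is asking for the drive's current volume label, not its location, so checking the exact label first usually gets past that prompt:

        vol c:
        convert c: /FS:NTFS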

    Read the article

  • Millions of files in php's tmp error - how to delete?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I could ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; command, but that didn't work. ls 'sess_0*' | xargs rm didn't work either. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size.

    REFERENCE INFO: I suddenly encountered an error where sessions could no longer be written to disk:

        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

        $ df -h
        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/md0    457G  126G  308G   29%   /
        tmpfs       1.8G  0     1.8G   0%    /lib/init/rw
        udev        10M   664K  9.4M   7%    /dev
        tmpfs       1.8G  0     1.8G   0%    /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

        kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This led me to thinking of a full folder, obviously, but since my web folder only has 60k files (having counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that messed things up. Some commands I ran:

        $ sudo ls sess_a* | xargs rm -f
        bash: /usr/bin/sudo: Argument list too long

        find . -exec rm {} \;
        rm: cannot remove directory '.'
        find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, PHP5, ISPConfig, suEXEC and FastCGI.
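
    Two hedged approaches that avoid both failure modes above, since neither expands a huge glob on the command line nor forks one rm per file:

        # GNU find can unlink matches itself, no -exec needed
        find /var/www/clients/client1/web1/tmp -type f -name 'sess_*' -delete

        # or the rsync trick: sync an empty directory over the full one
        mkdir /tmp/empty
        rsync -a --delete /tmp/empty/ /var/www/clients/client1/web1/tmp/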

    Read the article

  • Setfacl configuration issue in Linux

    - by Balualways
    I am configuring a Linux server with ACLs (Access Control Lists). It is not allowing me to perform a setfacl operation on one of the directories, /xfiles. I am able to perform setfacl on other directories such as /tmp and /opt/applocal/. I am getting this error:

        root@asifdl01devv # setfacl -m user:eqtrd:rw-,user:feedmgr:r--,user::---,group::r--,mask:rw-,other:--- /xfiles/change1/testfile
        setfacl: /xfiles/change1/testfile: Operation not supported

    I have defined my /etc/fstab as:

        /dev/ROOTVG/rootlv   /            ext3    defaults        1 1
        /dev/ROOTVG/varlv    /var         ext3    defaults        1 2
        /dev/ROOTVG/optlv    /opt         ext3    defaults        1 2
        /dev/ROOTVG/crashlv  /var/crash   ext3    defaults        1 2
        /dev/ROOTVG/tmplv    /tmp         ext3    defaults        1 2
        LABEL=/boot          /boot        ext3    defaults        1 2
        tmpfs                /dev/shm     tmpfs   defaults        0 0
        devpts               /dev/pts     devpts  gid=5,mode=620  0 0
        sysfs                /sys         sysfs   defaults        0 0
        proc                 /proc        proc    defaults        0 0
        /dev/ROOTVG/swaplv   swap         swap    defaults        0 0
        /dev/APPVG/home      /home        ext3    defaults        1 2
        /dev/APPVG/archives  /archives    ext3    defaults        1 2
        /dev/APPVG/test      /test        ext3    defaults        1 2
        /dev/APPVG/oracle    /opt/oracle  ext3    defaults        1 2
        /dev/APPVG/ifeeds    /xfiles      ext3    defaults        1 2

    I have a Solaris server where the vfstab is defined as:

        # cat vfstab
        #device                    device                      mount        FS     fsck  mount    mount
        #to mount                  to fsck                     point        type   pass  at boot  options
        fd                         -                           /dev/fd      fd     -     no       -
        /proc                      -                           /proc        proc   -     no       -
        /dev/vx/dsk/bootdg/swapvol -                           -            swap   -     no       -
        swap                       -                           /tmp         tmpfs  -     yes      size=1024m
        /dev/vx/dsk/bootdg/rootvol /dev/vx/rdsk/bootdg/rootvol /            ufs    1     no       logging
        /dev/vx/dsk/bootdg/var     /dev/vx/rdsk/bootdg/var     /var         ufs    1     no       logging
        /dev/vx/dsk/bootdg/home    /dev/vx/rdsk/bootdg/home    /home        ufs    2     yes      logging
        /dev/vx/dsk/APP/test       /dev/vx/rdsk/APP/test       /test        vxfs   3     yes      -
        /dev/vx/dsk/APP/archives   /dev/vx/rdsk/APP/archives   /archives    vxfs   3     yes      -
        /dev/vx/dsk/APP/oracle     /dev/vx/rdsk/APP/oracle     /opt/oracle  vxfs   3     yes      -
        /dev/vx/dsk/APP/xfiles     /dev/vx/rdsk/APP/xfiles     /xfiles      vxfs   3     yes      -

    I am not able to find out the issue. Any help would be appreciated.
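
    A hedged observation: "Operation not supported" from setfacl on ext3 usually means the filesystem was mounted without the acl option, and the /xfiles line in the fstab above mounts with plain defaults (whether defaults implies acl depends on the filesystem's stored default mount options). A sketch:

        tune2fs -l /dev/APPVG/ifeeds | grep 'Default mount options'
        mount -o remount,acl /xfiles
        # and persist it in /etc/fstab:
        /dev/APPVG/ifeeds  /xfiles  ext3  defaults,acl  1 2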

    Read the article

  • How do I boot [embedded] linux from sd card?

    - by Brandon Yates
    I am hacking together a quick embedded Linux system on a DM816x EVM board. Previously I had been using TFTP and NFS to load my kernel and root filesystem onto the board. I am now trying to switch over to loading everything from an SD card. I have the card partitioned such that U-Boot and my kernel image are in one partition and my root FS is in another. At power-on, U-Boot starts correctly and successfully launches the kernel. However, the kernel is unable to mount the root file system. It appears that it doesn't recognize any SD (MMC) cards, and gives this error message:

        VFS: Cannot open root device "mmcblk0p2" or unknown-block(2,0)
        Please append a correct "root=" boot option; here are the available partitions:
        1f00    256     mtdblock0 (driver?)
        1f01    8       mtdblock1 (driver?)
        1f02    2560    mtdblock2 (driver?)
        1f03    1272    mtdblock3 (driver?)
        1f04    2432    mtdblock4 (driver?)
        1f05    128     mtdblock5 (driver?)
        1f06    4352    mtdblock6 (driver?)
        1f07    204928  mtdblock7 (driver?)
        1f08    50304   mtdblock8 (driver?)
        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)

    I feel like I'm missing something fundamental here. Why does it not recognize the root device I am trying to load from? Here is the U-Boot boot script that is running:

        setenv bootargs console=ttyO2,115200n8 root=/dev/mmcblk0p2 rw mem=124M earlyprink vram=50M ti816xfb.vram=0:16M,1:16M,2:6M ip=off noinitrd
        mmc init
        fatload mmc 1 0x80009000 uImage
        bootm 0x80009000
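
    A hedged reading of the panic: the "available partitions" list contains only mtdblock devices, so the kernel had no MMC block device at the moment it tried to mount root. That usually means either the SD/MMC host driver isn't built into the kernel (it can't come from a module here, since there's no initrd) or the controller probes after the root mount; the latter case is exactly what the rootwait kernel parameter exists for. A sketch (note also the earlyprink typo in the script above, presumably meant to be earlyprintk):

        setenv bootargs console=ttyO2,115200n8 root=/dev/mmcblk0p2 rw rootwait mem=124M earlyprintk vram=50M ti816xfb.vram=0:16M,1:16M,2:6M ip=off noinitrd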

    Read the article

  • Is there a way to do something like LVM over NFS?

    - by warren
    I realize that since NFS is not block-level, LVM can't be used directly. However: is there a way to combine multiple NFS exports (from, say, 3 servers) into one mount point on a different server? Specifically, I'd like to be able to do this on RHEL 4 (or 5, and re-export the combined mount to my RHEL 4 server).

    Expansion: the reason I pegged LVM is that I want a bunch of exported mounts (servera:/mnt/export, serverb:/mnt/export, serverc:/mnt/export, etc.) to all mount at /mnt/space, so that /mnt/space on this server (serverx) appears as one large filesystem. Yes, I know that re-exporting is generally a Bad Thing™, but I thought it might work if there was a way to accomplish this on a newer release as opposed to an older one.

    From reading the unionfs docs, it appears that I can't use it over a remote connection; have I misread that? More accurately, since UnionFS merges the contents of multiple branches but makes them appear as one, it doesn't seem to go in reverse: I'm trying to mount a bunch of NFS points in a merged fashion and then write to them, not caring where the data goes, à la LVM.
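
    For the merge-and-write goal, a FUSE pooling filesystem is the usual fit, since it works on top of already-mounted NFS paths rather than block devices. A hedged sketch with mhddfs, which writes to whichever branch has free space (whether it's packaged for RHEL 4 is an open question):

        mount servera:/mnt/export /mnt/a
        mount serverb:/mnt/export /mnt/b
        mount serverc:/mnt/export /mnt/c
        mhddfs /mnt/a,/mnt/b,/mnt/c /mnt/space -o allow_other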

    Read the article

  • Why is my filesystem being mounted read-only in linux?

    - by Tim
    I am trying to set up a small Linux system based on Gentoo on a VirtualBox machine, as a step towards deploying the same system onto a low-spec single-board computer. For some reason, my filesystem is being mounted read-only. In my /etc/fstab I have:

        /dev/sda1  /         ext3   defaults  0 0
        none       /proc     proc   defaults  0 0
        none       /sys      sysfs  defaults  0 0
        none       /dev/shm  tmpfs  defaults  0 0

    However, once booted, /proc/mounts shows:

        rootfs / rootfs rw 0 0
        /dev/root / ext3 ro,relatime,errors=continue,barrier=0,data=writeback 0 0
        proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,nosuid,relatime,size=10240k,mode=755 0 0
        devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
        none /dev/shm tmpfs rw,relatime 0 0
        usbfs /proc/bus/usb usbfs rw,nosuid,noexec,relatime,devgid=85,devmode=664 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0

    (The above may contain errors: there's no practical way to copy and paste.) The partition at /dev/sda1 is clearly being mounted OK, since I can read all the data, but it's not being mounted as described in fstab. How might I go about diagnosing/resolving this?

    EDIT: I can remount with mount -o remount,rw / and it works as expected, except that /proc/mounts reports /dev/root mounted at / rather than /dev/sda1 as I'd expect. If I try to remount with mount -a I get:

        mount: none already mounted or /sys busy
        mount: according to mtab, sysfs is already mounted on /sys

    EDIT 2: I resolved the problem with mount -a (the same error was occurring during startup, it turned out) by changing the sysfs and proc lines to:

        proc   /proc  proc   [...]
        sysfs  /sys   sysfs  [...]

    Now mount -a doesn't complain, but it still doesn't result in a read-write root partition. mount -o remount / does cause the root partition to be remounted, however.
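
    A hedged note on the mechanism: the kernel mounts the root filesystem itself, before /etc/fstab is ever read, and defaults to read-only (which is also why it shows up as /dev/root); the boot scripts are what normally remount root rw according to fstab. Passing rw on the kernel line is a quick way to test whether that remount step is what's failing:

        # in the bootloader config, e.g. grub.conf (paths assumed):
        kernel /boot/vmlinuz root=/dev/sda1 rw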

    Read the article

  • BTrFS crashhhh?

    - by bumbling fool
    I create a new BTrFS raid10 file system using two 250GB drives and the second partition on a third 80GB drive. I create a subvolume and a snapshot, mount the snapshot, and start copying 8GB of data to it. At around 1GB the desktop disappears, and what looks like a non-interactive terminal comes up with dump/crash information. I don't have a camera handy or I'd take a picture and post it; it basically looks like stack-trace info. CTRL-ALT-F7 will eventually bring back the desktop, but the entire BTrFS portion of the OS is hung and non-responsive until I reboot. I've reformatted and reproduced this problem 3 times now and I'm about to give up :( I realize it is possible this problem is not entirely BTrFS' fault, because I'm on natty, which is still alpha.

    More granular details in case I'm an idiot:

    1) Create the FS:
        sudo mkfs.btrfs -m raid10 -d raid10 /dev/sda2 /dev/sdb /dev/sdc
    2) Initial temporary mount:
        mkdir /btrfs && sudo mount -t btrfs /dev/sda2 /btrfs
    3) Create a subvolume:
        btrfs s c /btrfs/vm
    4) Create an initial snapshot (optional):
        btrfs s sn /btrfs/cantremember.snap.something
    5) Unmount /btrfs and mount /btrfs/vm:
        sudo mount -t btrfs -o subvol=vm /dev/sda2 /btrfs/vm
    6) Copy data to the subvolume.
    7) Balance data across the drives (optional):
        btrfs f bal <path>
    (I never get to this step 7...)

    Am I doing something wrong?

    Read the article

  • How can I recover XFS partitions from a formatted HD?

    - by giuprivite
    I deleted the partition table of my HD. I wanted to format another one, but by mistake I formatted the wrong one, and then I also created some new partitions on it. Now I would like, if possible, to recover my old data. The old configuration was this: a primary NTFS partition with Windows, and a secondary partition with four logical partitions: a swap and three XFS partitions (two for Ubuntu and openSUSE, and one with the home for both systems). This is the output I get when I run gpart in a terminal:

        ubuntu@ubuntu:~$ sudo gpart /dev/sdb
        Begin scan...
        Possible partition(Windows NT/W2K FS), size(39997mb), offset(0mb)
        Possible extended partition at offset(39997mb)
        Possible partition(Linux swap), size(8189mb), offset(39997mb)
        Possible partition(SGI XFS filesystem), size(40942mb), offset(48187mb)
        Possible partition(SGI XFS filesystem), size(40942mb), offset(89149mb)
        Possible partition(SGI XFS filesystem), size(175044mb), offset(130112mb)
        End scan.
        Checking partitions...
        Partition(OS/2 HPFS, NTFS, QNX or Advanced UNIX): primary
        Partition(Linux swap or Solaris/x86): logical
        Partition(Linux ext2 filesystem): logical
        Partition(Linux ext2 filesystem): orphaned logical
        Partition(Linux ext2 filesystem): orphaned logical
        Ok.
        Guessed primary partition table:
        Primary partition(1)
        type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX)
        size: 39997mb #s(81915360) s(63-81915422)
        chs: (0/1/1)-(1023/254/63)d (0/1/1)-(5098/254/51)r
        Primary partition(2)
        type: 015(0x0F)(Extended DOS, LBA)
        size: 265245mb #s(543221849) s(81915435-625137283)
        chs: (1023/254/63)-(1023/254/63)d (5099/0/1)-(38912/254/2)r
        Primary partition(3)
        type: 000(0x00)(unused)
        size: 0mb #s(0) s(0-0)
        chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r
        Primary partition(4)
        type: 000(0x00)(unused)
        size: 0mb #s(0) s(0-0)
        chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r

    Looking at the first eight lines, it seems the data are still there... but I don't know how to recover them. I have a free second HD of about 500 GB (the formatted one is 320 GB) that I can use for the recovery process.
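
    A hedged recovery path, given that gpart can still see the old layout: image the damaged disk onto the free 500 GB drive first, then let a recovery tool rewrite the lost table (or re-create the extended/logical entries by hand at the offsets above), and check the XFS volumes read-only before mounting anything:

        sudo dd if=/dev/sdb of=/path/on/500gb/backup.img bs=1M conv=noerror,sync  # work from a copy
        sudo testdisk /dev/sdb         # deep search, then write the recovered table
        sudo xfs_repair -n /dev/sdb6   # -n: check only; sdb6 assumed as the first XFS logical partition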

    Read the article

  • Make case-sensitive SMB share case-insensitive

    - by fungs
    I am running a legacy XP app that I would like to move onto a network share. It is very simple and works in theory, but the server providing the share is Linux-based (which I cannot configure), and the software does not work correctly because, it seems, it was programmed case-insensitively. After some research: network shares behave like the filesystem they use underneath. This is normal. Unfortunately I cannot fix the software myself. Is there any way to turn the case-sensitivity into case-insensitivity for a Windows network drive on the client side?

    I found two approaches. First, something like icasefile (http://wnd.katei.fi/icasefile/) that wraps around the program and intercepts the file I/O; this is for UNIX only. Second, a proxy virtual filesystem (e.g. something using Dokan). Unfortunately I couldn't find any suitable fs; the only possibility would be to put a case-insensitive filesystem on an image file and put that on the share, using for example ImDisk (http://www.ltr-data.se/opencode.html/#ImDisk). Do you have any better ideas?

    Read the article
