Search Results

Search found 13584 results on 544 pages for 'digital root'.


  • Where should CentOS users get /usr/share/virtio-win/drivers for virt-v2v?

    - by Philip Durbin
    I need to migrate a number of virtual machines from VMware ESX to CentOS 6 KVM hypervisors. Ultimately, I wrote an RPM spec file that solved my problem at https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec but I'm not sure if there's another RPM in base CentOS or EPEL (something standard) I should be using instead. Originally, I was getting this "No root device found in this operating system image" error when attempting to migrate a Windows 2008 VM. . . [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \ -os transferimages --network default my-vm virt-v2v: No root device found in this operating system image. . . . but I solved this with a simple yum install libguestfs-winsupport since the docs say: If you attempt to convert a virtual machine using NTFS without the libguestfs-winsupport package installed, the conversion will fail. Next I got an error about missing drivers for Windows 2008. . . [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \ -os transferimages --network default my-vm my-vm_my-vm: 100% [====================================]D virt-v2v: Installation failed because the following files referenced in the configuration file are required, but missing: /usr/share/virtio-win/drivers/amd64/Win2008 . . . and I resolved this by grabbing an iso from Fedora at http://alt.fedoraproject.org/pub/alt/virtio-win/latest/ as recommended by http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers and building an RPM from it with this spec file: https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec Now, virt-v2v exits without error: [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \ -os transferimages --network default my-vm my-vm_my-vm: 100% [====================================]D virt-v2v: my-vm configured with virtio drivers. [root@kvm01b ~]# Now, my question is, rather than the virtio-win RPM from the spec file I wrote, is there some other more standard RPM in base CentOS or EPEL that will resolve the error above? Here's a bit more detail about my setup: [root@kvm01b ~]# cat /etc/redhat-release CentOS release 6.2 (Final) [root@kvm01b ~]# rpm -q virt-v2v virt-v2v-0.8.3-5.el6.x86_64 See also Bug 605334 – VirtIO driver for windows does not show specific OS: Windows 7, Windows 2003
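
    For anyone scripting this conversion across many VMs, a rough pre-flight sketch (Python, with the hypervisor URI and VM name as placeholders) that simply refuses to start virt-v2v until the driver directory from the error above is present:

        #!/usr/bin/env python
        # Pre-flight check before running virt-v2v: verify the virtio driver
        # directory referenced in the error above exists, then run the conversion.
        # ESX_URI, VM_NAME and the driver path are assumptions, not fixed values.
        import os
        import subprocess
        import sys

        DRIVER_DIR = "/usr/share/virtio-win/drivers/amd64/Win2008"
        ESX_URI = "esx://my-vmware-hypervisor.example.com/"
        VM_NAME = "my-vm"

        def main():
            if not os.path.isdir(DRIVER_DIR):
                sys.exit("Missing %s - install a virtio-win package first" % DRIVER_DIR)
            cmd = ["virt-v2v", "-ic", ESX_URI,
                   "-os", "transferimages", "--network", "default", VM_NAME]
            # Propagate virt-v2v's exit status so a wrapper or cron job sees failures.
            sys.exit(subprocess.call(cmd))

        if __name__ == "__main__":
            main()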


  • ghettoVCB issue

    - by romgo75
    I have setup a ghettoVCB script in order to backup three VM. I put it in a crontab but I have an issue. In my backup folder I have 3 different folders, one for each VM. In each folder I have the following files: -rw-r--r-- 1 root root 1263 Mar 17 01:51 vm1-2010-03-16--2.gz -rw-r--r-- 1 root root 1263 Mar 17 00:41 vm1-2010-03-16--3.gz -rw-r--r-- 1 root root 1261 Mar 18 01:22 vm1-2010-03-17--1.gz drwxr-xr-x 1 root root 980 Mar 19 23:39 vm1-2010-03-19 The problem is the last folder. It seems that a backup didn't finish the process. When I read the logs concerning this folder I get: 2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/datastore1/backup/ 2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3 2010-03-19 23:00:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick 2010-03-19 23:00:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic 2010-03-19 23:00:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0 2010-03-19 23:00:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0 2010-03-19 23:00:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3 2010-03-19 23:00:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5 2010-03-19 23:00:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15 2010-03-19 23:00:01 -- info: CONFIG - LOG_LEVEL = info 2010-03-19 23:00:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout 2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0 2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0 2010-03-19 23:00:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all http://... 2010-03-19 23:39:35 -- info: Initiate backup for vm1 2010-03-19 23:39:35 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-03-19" for vm1 Destination disk format: VMFS zeroedthick Cloning disk '/vmfs/volumes/datastore1/vm1/vm1_1.vmdk'... ^MClone: 0% done.^MClone: 1% done.^MClone: 2% done.^MClone: 3% done.^MClone: 4% done.^MClone: 5% done.^MClone: 6% done.^MClone: 7% done.^MClone: 8% done.^MClone: 9% done.^MClone Failed to clone disk : The file already exists (39). Destination disk format: VMFS zeroedthick Cloning disk '/vmfs/volumes/datastore1/vm1/vm1.vmdk'... 2010-03-20 00:46:20 -- info: Removing snapshot from vm1 ... one: 7% done.^MClone: 8% done.^MClone: 9% done.^MClone: 10% done.^MClone: 11% done.^MClone: 12% done.^MClone: 13% done.^MClone: 14% done.^MClone: 15% done.^MClone: 16% done.^MCl 2010-03-19 23:51:19 -- info: Removing snapshot from vm1 ... I can't run ghettoVCB anymore because the VM has a snapshot which has not been deleted. I know how to delete the snapshot, but I don't know why the VCB script is not able to handle rotation of the VM backups? Any ideas? Thanks!
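
    Not ghettoVCB's own code, but a rough Python sketch of what the VM_BACKUP_ROTATION_COUNT = 3 setting is supposed to achieve (the path and naming pattern are assumptions based on the listing above): keep the newest three backups per VM and drop the rest. A half-finished directory like vm1-2010-03-19, plus the snapshot left behind by the failed clone, has to be cleaned up before the next run can rotate normally.

        # Illustrative only: rotate VM backup directories, keeping the newest N.
        # The path and the vm1-YYYY-MM-DD--n naming are assumptions based on the
        # listing above, not ghettoVCB's actual implementation.
        import os
        import shutil

        BACKUP_ROOT = "/vmfs/volumes/datastore1/backup"
        KEEP = 3  # mirrors VM_BACKUP_ROTATION_COUNT

        def rotate(vm_name):
            vm_dir = os.path.join(BACKUP_ROOT, vm_name)
            entries = sorted(
                (e for e in os.listdir(vm_dir) if e.startswith(vm_name + "-")),
                reverse=True,  # newest first, thanks to the date in the name
            )
            for stale in entries[KEEP:]:
                path = os.path.join(vm_dir, stale)
                # Remove either a compressed backup file or a leftover directory.
                if os.path.isdir(path):
                    shutil.rmtree(path)
                else:
                    os.remove(path)

        if __name__ == "__main__":
            rotate("vm1")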


  • SFTP chroot results in broken pipe

    - by Patrick Pruneau
    I have a website where I want to add some restricted access to a sub-folder. For this, I've decided to use chroot with SFTP (I mostly followed this link: http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/). For now, I've created a user (sio2104) and a group (magento). After following the guide, my folder listing looks like this: -rw-r--r-- 1 root root 27 2012-02-01 14:23 index.html -rw-r--r-- 1 root root 21 2012-02-01 14:24 info.php drwx------ 15 root root 4096 2012-02-25 00:31 magento As you can see, I've chowned the magento folder (the one I wanted to jail the user in) to root:root, and everything else as well. Also, inside the magento folder I chowned everything to sio2104:magento so they can access what they want. Finally, I've added this to the sshd_config file: #Subsystem sftp /usr/lib/openssh/sftp-server Subsystem sftp internal-sftp Match Group magento ChrootDirectory /usr/share/nginx/www/magento ForceCommand internal-sftp AllowTCPForwarding no X11Forwarding no PasswordAuthentication yes #UsePAM yes And the result is... well, I can enter my login and password, and it all finishes with a "broken pipe" error. $ sftp [email protected] [....some debug....] [email protected]'s password: debug1: Authentication succeeded (password). Authenticated to 10.20.0.50 ([10.20.0.50]:22). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. Write failed: Broken pipe Connection closed Verbose mode gives nothing to help. Anyone have an idea of what I've done wrong? If I try to log in with ssh or sftp as my personal user, everything works fine.
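
    One common cause of exactly this symptom: sshd requires the ChrootDirectory and every directory above it to be owned by root and not writable by group or others; if any component fails that test, sshd kills the session and the client only sees "Broken pipe". A small Python sketch (path taken from the Match block above) that walks the path and flags offending components:

        # Quick check of OpenSSH's ChrootDirectory rules: every directory from /
        # down to the chroot target must be owned by root and must not be writable
        # by group or others, otherwise sshd aborts the session (seen client-side
        # as "Write failed: Broken pipe"). Path below is the one from the post.
        import os
        import stat

        CHROOT = "/usr/share/nginx/www/magento"

        def check(path):
            current = "/"
            for part in path.rstrip("/").split("/"):
                current = os.path.join(current, part)
                st = os.stat(current)
                owned_by_root = (st.st_uid == 0)
                group_other_writable = bool(st.st_mode & (stat.S_IWGRP | stat.S_IWOTH))
                status = "ok"
                if not owned_by_root or group_other_writable:
                    status = "BAD (must be root-owned, not group/other writable)"
                print("%-40s uid=%d mode=%o %s" % (current, st.st_uid, st.st_mode & 0o777, status))

        if __name__ == "__main__":
            check(CHROOT)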


  • Debootstrap Ubuntu over NFS leads to mknod I/O error

    - by Aaron B. Russell
    Hi everyone, I'm trying to prepare an Ubuntu environment for a diskless machine that will PXE boot and mount an NFS share as its root. I've currently got another Ubuntu machine mounting the NFS share and I'm trying to debootstrap into it, but it has trouble creating devices over NFS: root@kimiko:~# mount | grep Seiuchi 192.168.0.203:/mnt/user/Seiuchi on /mnt type nfs (rw,addr=192.168.0.203) root@kimiko:~# debootstrap --arch i386 maverick /mnt http://gb.archive.ubuntu.com/ubuntu/ mknod: `/mnt/test-dev-null': Input/output error E: Cannot install into target '/mnt' mounted with noexec or nodev My NFS rule on the unRAID server is 192.168.0.201/32(rw,no_root_squash,sync). I don't have the noexec or nodev options set. I've not got much experience with NFS, so I'm probably missing something basic in the way I'm sharing this, but my attempts at Googling for an answer aren't really turning up anything useful. Does anyone have suggestions on what I might have missed or maybe relevant docs? Edit: Creating normal files (and directories) works just fine, I just can't create devices... root@kimiko:/mnt# mkdir foo root@kimiko:/mnt# cd foo root@kimiko:/mnt/foo# touch bar root@kimiko:/mnt/foo# mknod quux c 4 64 mknod: `quux': Input/output error root@kimiko:/mnt/foo# ls bar
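
    If it helps to reproduce the failure outside debootstrap, here is a tiny Python sketch (run as root on the client; the target path is a placeholder) that attempts to create the same character device the failing mknod call used:

        # Minimal reproduction of the debootstrap failure: try to create a
        # character device node (major 4, minor 64, the same numbers as the
        # failing "mknod quux c 4 64") on the NFS mount. On many NFS setups
        # this fails unless the export and client mount allow device creation.
        import os
        import stat
        import sys

        TARGET = "/mnt/test-node"  # adjust to wherever the share is mounted

        try:
            os.mknod(TARGET, stat.S_IFCHR | 0o600, os.makedev(4, 64))
        except OSError as err:
            sys.exit("mknod failed: %s" % err)
        else:
            print("device node created, cleaning up")
            os.remove(TARGET)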


  • Enable re-attached mouse/keyboard via ssh?!

    - by aidan
    I had Ubuntu 9.10 x64 Desktop installed on a nettop I have (that I normally run headless), and yesterday I decided to take the plunge and update to 10.04. So, I plugged in a screen and usb mouse/keyboard, booted up and set to work. It was 1am, and it was telling me it had 3hrs left to install all the new packages, so I unplugged the screen and usb mouse/keyboard, left the box running, and went to bed. This evening, I plugged it all back in again to check progress. It's asking if I want to remove obsolete packages. I do, but neither the mouse nor keyboard work! I can access the box via SSH like I normally do; is there any way I can re-enable the keyboard from there? I'm reluctant to restart the box (via ssh) mid-way through such a complicated upgrade. Thanks for any help! lsusb (with wireless mouse/keyboard receiver unplugged): Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub lsusb (with wireless mouse/keyboard receiver attached): Bus 004 Device 005: ID 045e:005f Microsoft Corp. Wireless MultiMedia Keyboard Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub


  • Apache22 on FreeBSD - Starts, does not respond to requests

    - by NuclearDog
    Hey folks! I'm running Apache 2.2.17 with the peruser MPM on FreeBSD 8.2-RC1 on Amazon's EC2 (so it's XEN). It was installed from ports. My problem is that, although Apache is running, listening for, and accepting connections, it doesn't actually respond to any or show them in the log at all. If I telnet to the port it's listening on and type out an HTTP request: GET / HTTP/1.1 Host: asdfasdf And hit enter a couple of times, it just sits there... Nothing. No response requesting with a browser either. There doesn't appear to be anything helpful in the error log: [Sun Jan 09 16:56:24 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] [Sun Jan 09 16:56:25 2011] [notice] Digest: generating secret for digest authentication ... [Sun Jan 09 16:56:25 2011] [notice] Digest: done [Sun Jan 09 16:56:25 2011] [notice] Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17 The access log stays empty: root:/var/log# wc httpd-access.log 0 0 0 httpd-access.log root:/var/log# I've tried with accf_http and accf_data both enabled and disabled, and with both the stock configuration and my customized config. I also tried uninstalling apache22-peruser-mpm and just installing straight apache22... Still no luck. I tried removing all of the LoadModule lines from httpd.conf and just re-enabled the ones that were necessary to parse the config. Ended up with only the following loaded: root:/usr/local/etc/apache22# /usr/local/sbin/apachectl -M Loaded Modules: core_module (static) mpm_peruser_module (static) http_module (static) so_module (static) authz_host_module (shared) log_config_module (shared) alias_module (shared) Syntax OK root:/usr/local/etc/apache22# Same results. Apache is definitely what's listening on port 80: root:/usr/local/etc/apache22# sockstat -4 | grep httpd root httpd 43789 3 tcp4 6 *:80 *:* root httpd 43789 4 tcp4 *:* *:* root:/usr/local/etc/apache22# And I know it's not a firewall issue as there is nothing running locally, and connecting from the local box to 127.0.0.1:80 results in the same issue. Does anyone have any idea what's going on? Why it would be doing this? I've exhausted all of my debugging expertise. :/ Thanks for any suggestions!
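
    To take telnet's interactivity out of the picture, a bare-bones Python probe (hypothetical, but equivalent to the manual test above) that sends a minimal request and reports whether any bytes ever come back before a timeout:

        # Send a minimal HTTP request to the local Apache and report whether it
        # answers at all within the timeout. Mirrors the manual telnet test above.
        import socket

        HOST, PORT, TIMEOUT = "127.0.0.1", 80, 10.0

        sock = socket.create_connection((HOST, PORT), timeout=TIMEOUT)
        sock.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
        try:
            data = sock.recv(4096)
        except socket.timeout:
            print("connected, but no response within %.0f seconds" % TIMEOUT)
        else:
            if data:
                print("got %d bytes back, first line: %r" % (len(data), data.splitlines()[0]))
            else:
                print("server closed the connection without sending anything")
        finally:
            sock.close()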


  • Sending email to google apps mailbox via exim4

    - by Andrey
    I have a hosting server with several users. One of the customers decided to move his email account to google apps and added the corresponding MX records so he can receive email now. But when it comes to sending email from my server to those email addresses, they don't make it. I guess it's because exim still thinks these domains are local. That's what i see in logs (example.com is my domain, example.net is the customer's domain): 2010-06-02 14:55:37 1OJmXp-0006yh-UG <= [email protected] U=root P=local S=342 T="lsdjf" from <[email protected]> for [email protected] 2010-06-02 14:55:38 1OJmXp-0006yh-UG ** [email protected] F=<[email protected]> R=virtual_aliases: 2010-06-02 14:55:38 1OJmXq-0006yl-2A <= <> R=1OJmXp-0006yh-UG U=mail P=local S=1113 T="Mail delivery failed: returning message to sender" from <> for [email protected] 2010-06-02 14:55:38 1OJmXp-0006yh-UG Completed 2010-06-02 14:55:38 1OJmXq-0006yl-2A User 0 set for local_delivery transport is on the never_users list 2010-06-02 14:55:38 1OJmXq-0006yl-2A == [email protected] R=localuser T=local_delivery defer (-29): User 0 set for local_delivery transport is on the never_users list 2010-06-02 14:55:38 1OJmXq-0006yl-2A ** [email protected]: retry timeout exceeded 2010-06-02 14:55:38 1OJmXq-0006yl-2A [email protected]: error ignored 2010-06-02 14:55:38 1OJmXq-0006yl-2A Completed What should i do to fix that?


  • Permissions nightmare - tried all I know

    - by Ben
    Working on a new client's dev site, which is a wordpress install on a Plesk box. I have SSH root access, and FTP access through a separate account. What I've done so far Initially I couldn't make any changes to any files at all. The permissions on all the template files looked a little screwy (644), so I figured change them to allow group, and add myself to the group: CHMOD Recursive on the theme folder to set everything to 664 Quickly realised I'd broken it, set the folders to 755, kept files as 664 Ownership on all files is a mixture of root:root and 500:500 (there is no user nor group with the ID of 500 on the server). Added myself to the group 'root' so I could modify the files too The Problem This worked OK, in terms of being able to edit the existing files, so I began working. However, I can't upload to the directory, even having run CHOWN -R root:root templatefolder/ and being in the root group. I feel like I must be missing something obvious, and it's doing my head in. Questions: Files in the install owned by 500 with group 500 - I've looked in /etc/group and /etc/passwd and there is no user nor group with this ID. Is that left over from another developer's setup or the previous server (they moved recently)? Is being in the 'root' group enough, or do I need to own the theme folder as 'myftpuser' in order to upload and create new files? Like I say, I have edit access, so I got myself this far. I'm now questioning what to do next!


  • htaccess: multiple redirections depending on domain name

    - by Marcin Kmiec
    I have a server and a few domains and two webpages. Can't figure out how to do the following: A.com -> root\ www.A.com -> root\ B.com -> root\ www.B.com -> root\ C.com -> root\folder1\ www.C.com -> root\folder1\ By the way. What is the 'and' logical operator used in htaccess? I found that 'or' is [OR] but [AND] doesn't seem to work. And what is the language htaccess is written in:)? UPDATE I made a mistake in the question though. Here's what I'd really want to do. DNS is set for the domain A.com to point to the root folder of the server. Now I would like to set the following redirections: Any domain other than C.com and other than D.com redirects (301) to www.A.com. A.com points to the root folder of the server anyway and that is set in DNS. Domain www.C.com points to the folder 'folder1' on the server. Can it be set in htaccess? Now domains C.com, www.D.com and D.com redirects to www.C.com.


  • Error when installing wubi on windows 7

    - by P'sao
    Im installing ubuntu on windows 7(wubi 11.10): when its nearly done it gives me this error in the log file: Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size] Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\win32\backend.py", line 461, in expand_diskimage File "\lib\wubi\backends\common\utils.py", line 66, in run_command Exception: Error executing command >>command=C:\Users\P'sao\AppData\Local\Temp\pyl10D2.tmp\bin\resize2fs.exe -f C:\ubuntu\disks\root.disk 17744M >>retval=1 >>stderr= >>stdout=resize2fs 1.40.6 (09-Feb-2008) Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size] 10-25 20:31 DEBUG TaskList: # Cancelling tasklist 10-25 20:31 DEBUG TaskList: # Finished tasklist 10-25 20:31 ERROR root: Error executing command >>command=C:\Users\P'sao\AppData\Local\Temp\pyl10D2.tmp\bin\resize2fs.exe -f C:\ubuntu\disks\root.disk 17744M >>retval=1 >>stderr= >>stdout=resize2fs 1.40.6 (09-Feb-2008) Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size] Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 132, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\win32\backend.py", line 461, in expand_diskimage File "\lib\wubi\backends\common\utils.py", line 66, in run_command Exception: Error executing command >>command=C:\Users\P'sao\AppData\Local\Temp\pyl10D2.tmp\bin\resize2fs.exe -f C:\ubuntu\disks\root.disk 17744M >>retval=1 >>stderr= >>stdout=resize2fs 1.40.6 (09-Feb-2008) Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size] can some one help me?


  • How is NumLock used in Ubuntu?

    - by ???
    I found that, when the NumLock is on, then many key combination won't work. For example, generally Ctrl-A is used to select all, but it won't work when NumLock is on. There are two keyboard: The laptop one (Thinkpad T61), and an external USB keyboard. The logs shown in xev: (no log when pressed Fn+NumLock on the laptop keyboard) Logs when pressed the NumLock on the USB keyboard: (Switch On) KeyPress event, serial 40, synthetic NO, window 0xb600001, root 0xac, subw 0x0, time 22187595, (102,107), root:(1198,133), state 0x10, keycode 77 (keysym 0xff7f, Num_Lock), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False PropertyNotify event, serial 40, synthetic NO, window 0xb600001, atom 0x1b8 (XKLAVIER_STATE), time 22187601, state PropertyNewValue KeyRelease event, serial 40, synthetic NO, window 0xb600001, root 0xac, subw 0x0, time 22187723, (102,107), root:(1198,133), state 0x10, keycode 77 (keysym 0xff7f, Num_Lock), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False (Switch Off) KeyPress event, serial 40, synthetic NO, window 0xb600001, root 0xac, subw 0x0, time 22187899, (102,107), root:(1198,133), state 0x0, keycode 77 (keysym 0xff7f, Num_Lock), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False PropertyNotify event, serial 40, synthetic NO, window 0xb600001, atom 0x1b8 (XKLAVIER_STATE), time 22187904, state PropertyNewValue KeyRelease event, serial 40, synthetic NO, window 0xb600001, root 0xac, subw 0x0, time 22188003, (102,107), root:(1198,133), state 0x10, keycode 77 (keysym 0xff7f, Num_Lock), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False


  • ActiveMQ Configuration with KahaDB

    - by xeraa
    We are using ActiveMQ 5.6.0 with KahaDB. It has produced quite some log files, which is to be expected with our infrastructure, looking like this: $ ll -h /opt/activemq/data/kahadb/ total 969M drwxr-xr-x 2 root root 4.0K Nov 3 12:47 ./ drwxr-xr-x 3 activemq activemq 4.0K Sep 24 12:12 ../ -rw-r--r-- 1 root root 39M Oct 16 07:57 db-202.log -rw-r--r-- 1 root root 38M Oct 16 07:57 db-203.log -rw-r--r-- 1 root root 33M Oct 17 08:12 db-238.log ... No more messages were processed, when we ran into the 1GB temp usage limit. Or that's what we are assuming, is that correct? The configuration looks like this: <systemUsage> <systemUsage> <memoryUsage> <memoryUsage limit="512mb"/> </memoryUsage> <storeUsage> <storeUsage limit="3 gb"/> </storeUsage> <tempUsage> <tempUsage limit="1 gb"/> </tempUsage> </systemUsage> </systemUsage> After cleaning up the log files and being way below the limits, still no messages were consumed by AMQ. Only when we manually purged a route, messages were starting to be delivered again. So we need to ensure, that the KahaDB log size always stays below the temp usage, right? And that delivery was not picked up after fixing that is a bug or are there any other steps to be taken?
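
    For what it's worth, a small Python sketch (paths and limits copied from the question, not an ActiveMQ API) that sums the KahaDB journal files and warns as they approach the configured store limit, which is roughly the condition that seemed to coincide with delivery stopping:

        # Sum the KahaDB journal files and compare against the limits configured
        # in activemq.xml (3 GB store, 1 GB temp in the question). This only
        # watches disk usage; it does not talk to the broker itself.
        import glob
        import os

        KAHADB_DIR = "/opt/activemq/data/kahadb"
        STORE_LIMIT = 3 * 1024 ** 3
        WARN_RATIO = 0.8

        def journal_bytes(directory):
            return sum(os.path.getsize(f) for f in glob.glob(os.path.join(directory, "db-*.log")))

        if __name__ == "__main__":
            used = journal_bytes(KAHADB_DIR)
            print("journal size: %.2f GB of %.2f GB limit" % (used / 1024.0 ** 3, STORE_LIMIT / 1024.0 ** 3))
            if used > WARN_RATIO * STORE_LIMIT:
                print("WARNING: journal is above %d%% of the store limit" % int(WARN_RATIO * 100))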


  • Samba/Winbind issues joining an Active Directory domain

    - by Frap
    I'm currently in the process of setting up winbind/samba and getting a few issues. I can test connectivity with wbinfo fine: [root@buildmirror ~]# wbinfo -u hostname username administrator guest krbtgt username [root@buildmirror ~]# wbinfo -a username%password plaintext password authentication succeeded challenge/response password authentication succeeded however when I do a getent I don't get any AD accounts returned [root@buildmirror ~]# getent passwd root:x:0:0:root:/root:/bin/bash bin:x:1:1:bin:/bin:/sbin/nologin daemon:x:2:2:daemon:/sbin:/sbin/nologin adm:x:3:4:adm:/var/adm:/sbin/nologin lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin sync:x:5:0:sync:/sbin:/bin/sync shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown halt:x:7:0:halt:/sbin:/sbin/halt mail:x:8:12:mail:/var/spool/mail:/sbin/nologin uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin operator:x:11:0:operator:/root:/sbin/nologin puppet:x:52:52:Puppet:/var/lib/puppet:/sbin/nologin my nsswitch looks like this: passwd: files winbind shadow: files winbind group: files winbind #hosts: db files nisplus nis dns hosts: files dns and I'm definitely joined to the domain: [root@buildmirror ~]# net ads info LDAP server: 192.168.4.4 LDAP server name: pdc.domain.local Realm: domain.local Bind Path: dc=DOMAIN,dc=LOCAL LDAP port: 389 Server time: Sun, 05 Aug 2012 17:11:27 BST KDC server: 192.168.4.4 Server time offset: -1 So what am I missing?
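
    wbinfo talks to winbindd directly, while getent goes through glibc's NSS, so it can help to test that second path on its own. A short Python sketch (user names are placeholders) that performs the same lookup getent passwd would:

        # getent passwd goes through glibc NSS, which is the path this sketch
        # exercises via pwd.getpwnam(). If wbinfo -u lists domain users but the
        # lookups below raise KeyError, the problem is in /etc/nsswitch.conf,
        # libnss_winbind, or the idmap configuration rather than winbindd itself.
        import pwd

        CANDIDATES = ["administrator", "DOMAIN\\administrator"]  # placeholders

        for name in CANDIDATES:
            try:
                entry = pwd.getpwnam(name)
            except KeyError:
                print("NSS lookup failed for %r" % name)
            else:
                print("NSS resolved %r -> uid=%d gid=%d" % (name, entry.pw_uid, entry.pw_gid))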


  • Very high CPU and low RAM usage - is it possible to shift some of the CPU usage to the RAM (with CloudLinux LVE Manager installed)?

    - by Chriswede
    I had to install CloudLinux so that I could somewhat controle the CPU ussage and more importantly the Concurrent-Connections the Websites use. But as you can see the Server load is way to high and thats why some sites take up to 10 sec. to load! Server load 22.46 (8 CPUs) (!) Memory Used 36.32% (2,959,188 of 8,146,632) (ok) Swap Used 0.01% (132 of 2,104,504) (ok) Server: 8 x Intel(R) Xeon(R) CPU E31230 @ 3.20GHz Memory: 8143680k/9437184k available (2621k kernel code, 234872k reserved, 1403k data, 244k init) Linux Yesterday: Total of 214,514 Page-views (Awstat) Now my question: Can I shift some of the CPU usage to the RAM? Or what else could I do to make the sites run faster (websites are dynamic - so SQL heavy) Thanks top - 06:10:14 up 29 days, 20:37, 1 user, load average: 11.16, 13.19, 12.81 Tasks: 526 total, 1 running, 524 sleeping, 0 stopped, 1 zombie Cpu(s): 42.9%us, 21.4%sy, 0.0%ni, 33.7%id, 1.9%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 8146632k total, 7427632k used, 719000k free, 131020k buffers Swap: 2104504k total, 132k used, 2104372k free, 4506644k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 318421 mysql 15 0 1315m 754m 4964 S 474.9 9.5 95300:17 mysqld 6928 root 10 -5 0 0 0 S 2.0 0.0 90:42.85 kondemand/3 476047 headus 17 0 172m 19m 10m S 1.7 0.2 0:00.05 php 476055 headus 18 0 172m 18m 9.9m S 1.7 0.2 0:00.05 php 476056 headus 15 0 172m 19m 10m S 1.7 0.2 0:00.05 php 476061 headus 18 0 172m 19m 10m S 1.7 0.2 0:00.05 php 6930 root 10 -5 0 0 0 S 1.3 0.0 161:48.12 kondemand/5 6931 root 10 -5 0 0 0 S 1.3 0.0 193:11.74 kondemand/6 476049 headus 17 0 172m 19m 10m S 1.3 0.2 0:00.04 php 476050 headus 15 0 172m 18m 9.9m S 1.3 0.2 0:00.04 php 476057 headus 17 0 172m 18m 9.9m S 1.3 0.2 0:00.04 php 6926 root 10 -5 0 0 0 S 1.0 0.0 90:13.88 kondemand/1 6932 root 10 -5 0 0 0 S 1.0 0.0 247:47.50 kondemand/7 476064 worldof 18 0 172m 19m 10m S 1.0 0.2 0:00.03 php 6927 root 10 -5 0 0 0 S 0.7 0.0 93:52.80 kondemand/2 6929 root 10 -5 0 0 0 S 0.3 0.0 161:54.38 kondemand/4 8459 root 15 0 103m 5576 1268 S 0.3 0.1 54:45.39 lvest


  • Why am I getting an "instance has no attribute '__getitem__'" error?

    - by Kevin Yusko
    Here's the code: class BinaryTree: def __init__(self,rootObj): self.key = rootObj self.left = None self.right = None root = [self.key, self.left, self.right] def getRootVal(root): return root[0] def setRootVal(newVal): root[0] = newVal def getLeftChild(root): return root[1] def getRightChild(root): return root[2] def insertLeft(self,newNode): if self.left == None: self.left = BinaryTree(newNode) else: t = BinaryTree(newNode) t.left = self.left self.left = t def insertRight(self,newNode): if self.right == None: self.right = BinaryTree(newNode) else: t = BinaryTree(newNode) t.right = self.right self.right = t def buildParseTree(fpexp): fplist = fpexp.split() pStack = Stack() eTree = BinaryTree('') pStack.push(eTree) currentTree = eTree for i in fplist: if i == '(': currentTree.insertLeft('') pStack.push(currentTree) currentTree = currentTree.getLeftChild() elif i not in '+-*/)': currentTree.setRootVal(eval(i)) parent = pStack.pop() currentTree = parent elif i in '+-*/': currentTree.setRootVal(i) currentTree.insertRight('') pStack.push(currentTree) currentTree = currentTree.getRightChild() elif i == ')': currentTree = pStack.pop() else: print "error: I don't recognize " + i return eTree def postorder(tree): if tree != None: postorder(tree.getLeftChild()) postorder(tree.getRightChild()) print tree.getRootVal() def preorder(self): print self.key if self.left: self.left.preorder() if self.right: self.right.preorder() def inorder(tree): if tree != None: inorder(tree.getLeftChild()) print tree.getRootVal() inorder(tree.getRightChild()) class Stack: def __init__(self): self.items = [] def isEmpty(self): return self.items == [] def push(self, item): self.items.append(item) def pop(self): return self.items.pop() def peek(self): return self.items[len(self.items)-1] def size(self): return len(self.items) def main(): parseData = raw_input( "Please enter the problem you wished parsed.(NOTE: problem must have parenthesis to seperate each binary grouping and must be spaced out.) " ) tree = buildParseTree(parseData) print( "The post order is: ", + postorder(tree)) print( "The post order is: ", + postorder(tree)) print( "The post order is: ", + preorder(tree)) print( "The post order is: ", + inorder(tree)) main() And here is the error: Please enter the problem you wished parsed.(NOTE: problem must have parenthesis to seperate each binary grouping and must be spaced out.) ( 1 + 2 ) Traceback (most recent call last): File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 108, in main() File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 102, in main tree = buildParseTree(parseData) File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 46, in buildParseTree currentTree = currentTree.getLeftChild() File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 15, in getLeftChild return root[1] AttributeError: BinaryTree instance has no attribute '__getitem__'
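
    The traceback gives the cause away: methods like getLeftChild are defined with a single parameter named root, so when they are called as currentTree.getLeftChild() that parameter is bound to the BinaryTree instance itself, and root[1] then tries to index the object, which has no __getitem__. The root list built in __init__ is just a throwaway local. A minimal corrected sketch of the accessors (the rest of the class can stay as posted):

        # The methods receive the instance itself as their first argument, so
        # indexing "root" really indexes the BinaryTree object. Storing the data
        # on self and returning attributes avoids the need for __getitem__.
        class BinaryTree:
            def __init__(self, rootObj):
                self.key = rootObj
                self.left = None
                self.right = None

            def getRootVal(self):
                return self.key

            def setRootVal(self, newVal):
                self.key = newVal

            def getLeftChild(self):
                return self.left

            def getRightChild(self):
                return self.right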


  • Parsing data with Clojure, interval problem.

    - by Andrea Di Persio
    Hello! I'm writing a little parser in Clojure for learning purposes. Basically it's a TSV file parser whose output needs to be put in a database, but I added a complication. The complication is that the same file contains multiple intervals. The file looks like this: ###andreadipersio 2010-03-19 16:10:00### USER COMM PID PPID %CPU %MEM TIME root launchd 1 0 0.0 0.0 2:46.97 root DirectoryService 11 1 0.0 0.2 0:34.59 root notifyd 12 1 0.0 0.0 0:20.83 root diskarbitrationd 13 1 0.0 0.0 0:02.84 .... ###andreadipersio 2010-03-19 16:20:00### USER COMM PID PPID %CPU %MEM TIME root launchd 1 0 0.0 0.0 2:46.97 root DirectoryService 11 1 0.0 0.2 0:34.59 root notifyd 12 1 0.0 0.0 0:20.83 root diskarbitrationd 13 1 0.0 0.0 0:02.84 I ended up with this code: (defn is-header? "Return true if a line is header" [line] (> (count (re-find #"^\#{3}" line)) 0)) (defn extract-fields "Return regex matches" [line pattern] (rest (re-find pattern line))) (defn process-lines [lines] (map process-line lines)) (defn process-line [line] (if (is-header? line) (extract-fields line header-pattern)) (extract-fields line data-pattern)) My idea is that in 'process-line' the interval needs to be merged with the data, so I have something like this: ('andreadipersio', '2010-03-19', '16:10:00', 'root', 'launchd', 1, 0, 0.0, 0.0, '2:46.97') for every row until the next interval, but I can't figure out how to make this happen. I tried with something like this: (def process-line [line] (if is-header? line) (def header-data (extract-fields line header-pattern))) (cons header-data (extract-fields line data-pattern))) But this doesn't work as expected. Any hints? Thanks!
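
    The missing piece is carrying the current header along while walking the lines; a plain map can't thread that state from one line to the next. The sketch below shows the shape of the loop in Python purely for illustration (in Clojure this would typically be a reduce or loop/recur that threads the current interval); the regex and field handling are simplified placeholders:

        # Walk the TSV lines once, remembering the most recent ### header ###;
        # every data row gets the header's (user, date, time) prepended to it.
        # Header/data parsing here is deliberately simplistic placeholder logic.
        import re

        HEADER_RE = re.compile(r"^###(\S+) (\S+) (\S+)###$")

        def merge_intervals(lines):
            rows, header = [], None
            for line in lines:
                match = HEADER_RE.match(line)
                if match:
                    header = match.groups()   # ('andreadipersio', '2010-03-19', '16:10:00')
                elif line.strip() and not line.startswith("USER") and header:
                    rows.append(header + tuple(line.split()))
            return rows

        sample = [
            "###andreadipersio 2010-03-19 16:10:00###",
            "USER COMM PID PPID %CPU %MEM TIME",
            "root launchd 1 0 0.0 0.0 2:46.97",
        ]
        for row in merge_intervals(sample):
            print(row)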


  • AS3 printing problem: blank SWF after print or cancel

    - by Carlos Barbosa
    Hey all! OK, back with another issue in AS3 printing. Code: //Function to print entire screen function printFunction(event:MouseEvent):void { var myPrintJob:PrintJob = new PrintJob(); var oldScaleX:Number = root.scaleX; var oldScaleY:Number = root.scaleY; //Start the print job myPrintJob.start(); //Figure out the new scale var newScaleX:Number = myPrintJob.paperWidth/root.width; var newScaleY:Number = myPrintJob.paperHeight/root.height; //Shrink in both the X and Y directions by the same amount (keep the same ratio) if(newScaleX < newScaleY) newScaleY = newScaleX; else newScaleX = newScaleY; root.scaleX = newScaleX; root.scaleY = newScaleY; //Print the page myPrintJob.addPage(Sprite(root)); myPrintJob.send(); //Reset the scale to the old values root.scaleX = oldScaleX; root.scaleY = oldScaleY; } I can't seem to find anything that's really helpful with this. When I click cancel on the print dialog box, I get the error below and it blanks out my SWF.


  • How best to organize project folders for unit tests in .NET?

    - by Dan Bailiff
    So I'm trying to introduce unit testing to my group. I've successfully upgraded a VS'05 web site project to a VS'08 web application, and now have a solution with the web app project and a unit test project. The issue now is how to fit this back into the source repository such that we don't break the build system and the unit test projects are persisted as well. Right now we have something like this: c:\root c:\root\projectA c:\root\projectB c:\root\projectC where projectA contains the sln file and all other related files/folders for the project. Now I have this new solution that looks like this: c:\root\projectA (parent folder) c:\root\projectA\projectA (the production code project) c:\root\projectA\projectA_Test (the unit test project) c:\root\projectA\TestResults c:\root\projecta\projectA.sln How do I integrate this new structure back into the code repository? I'd really prefer to keep the production code folder where it was in the source repository for the sake of the build, but is this necessary? If I keep the production code project in its usual place then where do I keep my unit test projects and how do I connect them with a sln file? Is it better to use this new structure and adjust the build process? I'd love to hear how other people are dealing with this issue of upgrading legacy projects to unit testing.


  • A question on getting number of nodes in a Binary Tree

    - by Robert
    Dear all, I have written up two functions (pseudo code) for calculating the number of nodes and the tree height of a binary tree, given the root of the tree. Most importantly, the binary tree is represented in the first child/next sibling format, so struct TreeNode { Object element; TreeNode *firstChild; TreeNode *nextSibling; } Calculate the # of nodes: public int countNode(TreeNode root) { int count=0; while(root!=null) { root= root.firstChild; count++; } return count; } public int countHeight(TreeNode root) { int height=0; while(root!=null) { root= root.nextSibling; height++; } return height; } This is one of the problems I saw in an algorithm book, and my solution above seems to have some problems. Also, I didn't quite get the point of using this first child/right sibling representation of a binary tree. Could you guys give me some ideas and feedback, please? Cheers!
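
    For contrast, a recursive version of the two functions, sketched in Python rather than the pseudocode above (treating firstChild and nextSibling simply as the two child links): counting has to descend into both children, and the height is one plus the larger of the two subtree heights, whereas the loops above only ever follow a single chain of pointers.

        # Recursive node count and height for a binary tree whose two links are
        # named firstChild and nextSibling (treated here simply as the two
        # children). The height of an empty tree is taken to be 0.
        class TreeNode:
            def __init__(self, element, firstChild=None, nextSibling=None):
                self.element = element
                self.firstChild = firstChild
                self.nextSibling = nextSibling

        def count_nodes(node):
            if node is None:
                return 0
            return 1 + count_nodes(node.firstChild) + count_nodes(node.nextSibling)

        def height(node):
            if node is None:
                return 0
            return 1 + max(height(node.firstChild), height(node.nextSibling))

        # Tiny example: a root with two children.
        root = TreeNode("a", TreeNode("b"), TreeNode("c"))
        print(count_nodes(root))  # 3
        print(height(root))       # 2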


  • Eclipse Check for Updates issue

    - by Nicholas Ryan Bowers
    I installed Eclipse from the Software Center so it links up and will be updated with the rest of my software. Because I am developing for Android, however, I have to install the ADT plugin within Eclipse by going to Help > Install New Software (or something to that effect). Now, I do understand that I can update Eclipse through the actual Ubuntu software center/system, but in order to update plugins and extensions within Eclipse, I have to go to Help > Check for Updates (which then scans all plugins for updates). The only issue is that when I installed through the software center, the owner became root, and whenever I run it without root, I'm not able to update - I get the error message "Insufficient access privileges to apply this update." When I run it as root, all of my plugins disappear, because I guess I installed them as myself, not as root. I tried to install the plugins as root, but the Install New Software choice would not work.


  • Yet another (13)Permission denied error on Apache2 server

    - by lollercoaster
    I just can't figure it out. I'm running apache2 on a Ubuntu 10.04 i386 server. Whenever I visit my server (has an IP address, and is connected to internet with static IP xxx.xxx.xxx.xxx) so that's not the problem) in browser, mysub.domain.edu (renamed here), I get the following: Forbidden You don't have permission to access /index.html on this server The apache2 error log confirms this: [Mon Apr 18 02:38:20 2011] [error] [client zzz.zzz.zzz.zzz] (13)Permission denied: access to / denied I'll try to provide all necessary information below: 1) Contents of /etc/apache2/httpd.conf DirectoryIndex index.html index.php 2) Contents of /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /home/myusername/htdocs <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory "/home/myusername/htdocs/"> Options Indexes FollowSymLinks MultiViews AllowOverride None order allow,deny allow from all DirectoryIndex index.html index.php Satisfy any </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> ServerName mysub.domain.edu </VirtualHost> 3) Contents of /etc/apache2/sites-enabled/000-default <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /home/myusername/htdocs <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory "/home/myusername/htdocs/"> Options Indexes FollowSymLinks MultiViews AllowOverride None order allow,deny allow from all DirectoryIndex index.html index.php Satisfy any </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> ServerName mysub.domain.edu </VirtualHost> 4) Result of ls -l (when I'm using sudo -i to be root): root@myserver:/home/myusername# ls -l total 4 drwxr-xr-x 2 www-data root 4096 2011-04-18 03:04 htdocs 5) ps auxwww | grep -i apache root@myserver:/home# ps auxwww | grep -i apache root 15121 0.0 0.4 5408 2544 ? Ss 16:55 0:00 /usr/sbin/apache2 -k start www-data 15122 0.0 0.3 5180 1760 ? S 16:55 0:00 /usr/sbin/apache2 -k start www-data 15123 0.0 0.5 227020 2788 ? Sl 16:55 0:00 /usr/sbin/apache2 -k start www-data 15124 0.0 0.5 227020 2864 ? Sl 16:55 0:00 /usr/sbin/apache2 -k start root 29133 0.0 0.1 3320 680 pts/0 R+ 16:58 0:00 grep --color=auto -i apache 6) ls -al /home/myusername/htdocs/ root@myserver:/# ls -al /home/myusername/htdocs/ total 20 drwxr-xr-x 2 www-data root 4096 2011-04-18 03:04 . drw-r--r-- 4 myusername myusername 4096 2011-04-18 02:13 .. 
-rw-r--r-- 1 root root 69 2011-04-18 02:14 index.html I'm not currently using any .htaccess files in my web root (htdocs) folder in my user folder. I don't know what is wrong, I've been trying to fix his for over 12 hours and I've gotten nowhere. If you have any suggestions, I'm all ears...
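
    One detail in the listing above stands out: the home directory shows drw-r--r--, i.e. no execute bit, and a directory without execute cannot be traversed, which is the classic cause of a (13)Permission denied on /. A small Python sketch (DocumentRoot copied from the vhost) that walks each path component and reports whether the Apache worker could search it:

        # Walk every directory from / down to the DocumentRoot and print its
        # owner and mode; Apache's worker needs the execute (search) bit on each
        # one to reach index.html. Path taken from the vhost config above.
        import os
        import pwd
        import stat

        DOCROOT = "/home/myusername/htdocs"

        path = "/"
        for part in [p for p in DOCROOT.split("/") if p]:
            path = os.path.join(path, part)
            st = os.stat(path)
            owner = pwd.getpwuid(st.st_uid).pw_name
            mode = stat.S_IMODE(st.st_mode)
            world_exec = bool(mode & stat.S_IXOTH)
            print("%-30s owner=%-12s mode=%03o world-exec=%s" % (path, owner, mode, world_exec))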


  • I tried to port wireless_tools to an Android 4.0 system, but I have a problem with Java

    - by Chen Guoli
    My Linux system is Ubuntu Kylin, a new branch of Ubuntu used mainly in China. I have changed some files, such as wireless.22.h, ifrename.c and iwlib.h, in wireless_tools.29/, which is located in the Android 4.0 root directory. Then I followed these steps: $ cd ~/Android4.0 $ su $[key](change to root) root# source build/envsetup.sh root# cd ~/Android4.0/wireless_tools.2.9/ root# mm Then I got a message telling me that: Your version is: java version "1.7.0_21". The correct version is: Java SE 1.6. Then I followed "How can I uninstall my current java and install sun java 1.6". Java was installed successfully, but when I tried mm again: Your version is: java version "1.6.0_27". The correct version is: Java SE 1.6. Then I tried https://source.android.com/source/initializing.html , but it didn't work. How can I install "Java SE 1.6" correctly?


  • How to install Flash for the Opera browser? (Ubuntu 12.04)

    - by santosamaru
    In the opera:plugins settings, the Flash Player plugin has been set to enabled. I also tried to follow the instructions from "I am testing the new Opera 11, but it keeps telling me I need to install flash player", but it still doesn't help me. What I get after following that link's instructions is: root@santos:/home/santos# cp /usr/lib/flashplugin-installer/libflashplayer.so ~/.opera/plugins/libflashplayer.so cp: cannot create regular file `/root/.opera/plugins/libflashplayer.so': No such file or directory root@santos:/home/santos# sudo apt-get gecko-mediaplayer E: Invalid operation gecko-mediaplayer root@santos:/home/santos# cp /usr/lib/flashplugin-installer/libflashplayer.so ~/.opera/plugins/libflashplayer.so cp: cannot create regular file `/root/.opera/plugins/libflashplayer.so': No such file or directory Can anyone help me solve this?


  • NFS Mounts Issues

    - by user554005
    Having some issue with a NFS Setup on the clients it just times out refuses to connect [root@host9 ~]# mount 192.168.0.17:/home/export /mnt/export mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). Here are the settings I'm using: [root@host17 /home/export]# cat /etc/hosts.allow # # hosts.allow This file contains access rules which are used to # allow or deny connections to network services that # either use the tcp_wrappers library or that have been # started through a tcp_wrappers-enabled xinetd. # # See 'man 5 hosts_options' and 'man 5 hosts_access' # for information on rule syntax. # See 'man tcpd' for information on tcp_wrappers # portmap: 192.168.0.0/255.255.255.0 lockd: 192.168.0.0/255.255.255.0 rquotad: 192.168.0.0/255.255.255.0 mountd: 192.168.0.0/255.255.255.0 statd: 192.168.0.0/255.255.255.0 [root@host17 /home/export]# cat /etc/hosts.deny # # hosts.deny This file contains access rules which are used to # deny connections to network services that either use # the tcp_wrappers library or that have been # started through a tcp_wrappers-enabled xinetd. # # The rules in this file can also be set up in # /etc/hosts.allow with a 'deny' option instead. # # See 'man 5 hosts_options' and 'man 5 hosts_access' # for information on rule syntax. # See 'man tcpd' for information on tcp_wrappers # portmap:ALL lockd:ALL mountd:ALL rquotad:ALL statd:ALL [root@host17 /home/export]# cat /etc/exports /home/export 192.168.0.0/255.255.255.0(rw) [root@host17 /home/export]# iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination RH-Firewall-1-INPUT all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination RH-Firewall-1-INPUT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain RH-Firewall-1-INPUT (2 references) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT icmp -- anywhere anywhere icmp any ACCEPT esp -- anywhere anywhere ACCEPT ah -- anywhere anywhere ACCEPT udp -- anywhere 224.0.0.251 udp dpt:mdns ACCEPT udp -- anywhere anywhere udp dpt:ipp ACCEPT tcp -- anywhere anywhere tcp dpt:ipp ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:6379 ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:sunrpc ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:sunrpc ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:nfs ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:32803 ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:filenet-rpc ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:892 ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:892 ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:rquotad ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:rquotad ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:pftp ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:pftp REJECT all -- anywhere anywhere reject-with icmp-host-prohibited on the clients here is some rpcinfos [root@host9 ~]# rpcinfo -p 192.168.0.17 program vers proto port 100000 4 tcp 111 portmapper 100000 3 tcp 111 portmapper 
100000 2 tcp 111 portmapper 100000 4 udp 111 portmapper 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100011 1 udp 875 rquotad 100011 2 udp 875 rquotad 100011 1 tcp 875 rquotad 100011 2 tcp 875 rquotad 100005 1 udp 45857 mountd 100005 1 tcp 55772 mountd 100005 2 udp 34021 mountd 100005 2 tcp 59542 mountd 100005 3 udp 60930 mountd 100005 3 tcp 53086 mountd 100003 2 udp 2049 nfs 100003 3 udp 2049 nfs 100003 4 udp 2049 nfs 100227 2 udp 2049 nfs_acl 100227 3 udp 2049 nfs_acl 100003 2 tcp 2049 nfs 100003 3 tcp 2049 nfs 100003 4 tcp 2049 nfs 100227 2 tcp 2049 nfs_acl 100227 3 tcp 2049 nfs_acl 100021 1 udp 59832 nlockmgr 100021 3 udp 59832 nlockmgr 100021 4 udp 59832 nlockmgr 100021 1 tcp 36140 nlockmgr 100021 3 tcp 36140 nlockmgr 100021 4 tcp 36140 nlockmgr 100024 1 udp 46494 status 100024 1 tcp 49672 status [root@host9 ~]# [root@host9 ~]# rpcinfo -u 192.168.0.17 nfs rpcinfo: RPC: Timed out program 100003 version 0 is not available [root@host9 ~]# rpcinfo -u 192.168.0.17 portmap program 100000 version 2 ready and waiting program 100000 version 3 ready and waiting program 100000 version 4 ready and waiting [root@host9 ~]# rpcinfo -u 192.168.0.17 mount rpcinfo: RPC: Timed out program 100005 version 0 is not available [root@host9 ~]# I'm running CentOS 5.8 on all systems
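
    Since rpcinfo against the portmapper works but the NFS and MOUNT probes time out, it may be worth checking from the client which of the server's allowed ports actually accept connections; if mountd registered on a port the iptables rules don't cover, the mount hangs exactly like this. A rough Python sketch (server address and ports taken from the output above):

        # From the client, try a TCP connect to the ports the server's firewall
        # explicitly allows, plus the mountd port rpcinfo actually reported
        # (55772 above). If mountd registered on a port the iptables rules do
        # not cover, the mount attempt will hang just like in the question.
        import socket

        SERVER = "192.168.0.17"
        PORTS = {
            111: "portmapper",
            2049: "nfs",
            892: "mountd (configured)",
            55772: "mountd (as registered per rpcinfo)",
        }

        for port, label in sorted(PORTS.items()):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(3)
            try:
                s.connect((SERVER, port))
            except (socket.timeout, socket.error) as err:
                print("%5d %-35s blocked/unreachable (%s)" % (port, label, err))
            else:
                print("%5d %-35s reachable" % (port, label))
            finally:
                s.close()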

