Search Results

Search found 1603 results on 65 pages for 'ls'.

  • Moving web files to /home/user/ gives permission denied using apache

    - by Maaz
    I recently created some Linux users on my machine, and their respective home directories were created as /home/my_user, so I decided to treat each user as one of my websites. I moved all my website files over to /home/my_user/public_html/, then edited the virtual host in my httpd.conf and changed the document root, so this is how that looks:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "/home/my_user/public_html"
            ServerName mywebsite.com
            ServerAlias www.mywebsite.com
            ErrorLog "/var/log/httpd/mywebsite/error_log"
            CustomLog "/var/log/httpd/mywebsite/access_log" common
        </VirtualHost>

    This virtual host configuration was working perfectly with my older document root, /var/www/html/mywebsite/public_html, but after changing it to what it is right now I get a permission denied error, even though I followed the instructions here: http://stackoverflow.com/questions/14427808/you-dont-have-permission-error-in-apache-in-centos

    Even after following those instructions, when I run the following command:

        sudo -u apache ls /home/my_user/public_html

    the server responds with:

        ls: cannot open directory /home/my_user/public_html: Permission denied

    Even so, I no longer get a permission denied error when I try to access my site; instead I am now redirected to the default Apache page rather than my website. I am not exactly sure what's wrong any more. If anyone has an idea, it would be great if you could help out!
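
    A likely culprit on CentOS is that the apache user cannot traverse /home/my_user, because of directory permissions, SELinux, or both. The following is a minimal sketch, not a confirmed fix: the paths come from the question, and the assumption that SELinux is enforcing should be checked first with getenforce:

        # let apache traverse the home directory and read the docroot
        chmod o+x /home/my_user
        chmod -R o+rX /home/my_user/public_html

        # if SELinux is enforcing, label the docroot and allow home-dir serving
        chcon -R -t httpd_sys_content_t /home/my_user/public_html
        setsebool -P httpd_enable_homedirs on

    The redirect to the default Apache page usually means the virtual host is no longer matching; running httpd -S to inspect the parsed vhosts is a quick way to confirm that.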

  • Set primary group of file or directory on Samba share from Windows

    - by Hubert Kario
    Short version: I have this situation on a Samba share:

        $ ls -lha
        total 12K
        drwxr-xr-x  3 hka  Domain Users 4.0K Jan 11 17:07 .
        drwxrwxrwt 19 root root         4.0K Jan 11 17:06 ..
        drwxr-xr-x  2 hka  Domain Users 4.0K Jan 11 17:07 dir A
        -rw-r--r--  1 hka  Domain Users    0 Jan 11 17:07 file A

    How can I change it to the following using only a Windows SMB/CIFS client (using 3rd-party applications is OK)?

        $ ls -lha
        total 12K
        drwxr-xr-x  3 hka  Domain Users 4.0K Jan 11 17:07 .
        drwxrwxrwt 19 root root         4.0K Jan 11 17:06 ..
        drwxr-xr-x  2 hka  ntpoweruser  4.0K Jan 11 17:07 dir A
        -rw-r--r--  1 hka  ntpoweruser     0 Jan 11 17:07 file A

    Rationale and background info: I'm using POSIX ACLs on Samba shares. Together with acl group control for Samba, this allows me to delegate management of permissions to different users based on group membership. The thing is, when I create a new file on a Samba share, I'm unable to set its primary group (the one that grants permission to change its permissions); it's set to my primary group (Domain Users) or to the group set using the force group option in the smb.conf share definition.

    Removing all groups in Windows except the one I want to become the new primary group doesn't work. I can change it using chgrp group folder/ as a regular user through a shell, but that's suboptimal (not all users are *nix users). Trying to set the new owner to a group from the Windows file permission window makes Samba return permission denied, with the following log entry:

        [2012/01/05 21:13:03.349734, 3] smbd/nttrans.c:1899(call_nt_transact_set_security_desc)
          call_nt_transact_set_security_desc: file = projects/project A/New folder, sent 0x1
        [2012/01/05 21:13:03.349774, 3] smbd/posix_acls.c:1208(unpack_nt_owners)
          unpack_nt_owners: unable to validate owner sid for S-1-5-21-4526631811-884521863-452487935-11025
        [2012/01/05 21:13:03.349804, 3] smbd/error.c:80(error_packet_set)
          error packet at smbd/nttrans.c(1909) cmd=160 (SMBnttrans) NT_STATUS_INVALID_OWNER

    The SID is correct and belongs to the group I specified in the GUI.
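
    A server-side workaround, if changing the group from Windows stays blocked: make the directory setgid, so files created inside it inherit its group instead of the creator's primary group. This is standard POSIX behaviour rather than anything Samba-specific; a sketch, assuming dir A should hand ntpoweruser down to new entries:

        # on the Samba server
        chgrp ntpoweruser "dir A"
        chmod g+s "dir A"    # setgid bit: new files inside inherit the directory's group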

  • How to safely use grub rescue> in Fedora 16? System does not boot anymore

    - by YumYumYum
    When I boot my PC, I get this in my Fedora 16 distro. I have tried the following, but none of it allows me to boot any more. Any help please? I am completely blocked.

        Grub loading.
        Welcome to GRUB!
        error: file not found.
        Entering rescue mode...
        grub rescue>

        grub rescue> ls
        (hd0) (hd0,gpt3) (hd0,gpt2) (hd0,gpt1)

        grub rescue> ls (hd0,gpt2)/
        ./ ../ lost+found/ memtest86+-4.20 grub2/
        System.map-3.1.0-0.rc3.git0.0.fc16.i686 config 3.1.0.0.rc3.git0.0.fc16.i686 grub/
        vmlinuz-3.1.0.0.rc3.git0.0.fc16.i686 elf-memtest86+-4.20
        initramfs-3.1.0.0.rc3.git0.0.fc16.i686.img initramfs-3.1.0.0.rc4.git0.0.fc16.i686.img
        System.mpa-3.1.0.0.rc3.git0.0.fc16.i686 config-3.1.0.0.rc3.git0.0.fc16.i686
        vmlinuz-3.1.0.0.rc3.git0.0.fc16.i686

        grub rescue> set prefix=(hd0,gpt2)/boot/grub
        grub rescue> set root=(hd0,gpt2)
        grub rescue> insmod normal
        error: unknown filesystem.   (or sometimes: "error: file not found.")
        grub rescue> normal
        unknown command normal
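
    Since the kernels and the grub2/ directory sit at the top of (hd0,gpt2), that partition is almost certainly a separate /boot, so the prefix should not contain /boot. A sketch of the usual recovery sequence under that assumption:

        grub rescue> set root=(hd0,gpt2)
        grub rescue> set prefix=(hd0,gpt2)/grub2
        grub rescue> insmod normal
        grub rescue> normal

    Once booted, reinstall GRUB permanently (as root; /dev/sda is an assumption, adjust to your disk):

        grub2-install /dev/sda
        grub2-mkconfig -o /boot/grub2/grub.cfg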

  • Can't do anything with ext. HDD, ".Trashes" is probably the boogeyman (on a Mac!)

    - by Sander Schaeffer
    I bought an external hard drive today, a Samsung, but I'm not able to do anything with it. A few notes on that:

    - I can't put anything on the hard drive; it gets stuck on 'preparing to copy files'.
    - I can delete anything on the hard drive except the folder ".Trashes", which gives the error 'Unexpected error: -50'.
    - I've tried to empty my own trash can; no change.
    - I've set the file permissions on .Trashes to read/write for everyone; it doesn't change a thing.
    - Trying to format the whole drive with Disk Utility quits at the start, because the drive cannot be deactivated.
    - I've tried a few terminal commands:

          sudo rm -rf /Volumes/Untitled\ 1/.Trashes    (Directory not empty)
          rm -rf /Volumes/Untitled\ 1/.Trashes         (no permissions)

      and also:

          cd /Volumes
          ls -al
          cd name_of_partition
          ls -al
          rm -rf .Trashes

      Again: permission error.
    - I also can't change drive permissions via Disk Utility's 'recover drive permissions' button, because it is 'blank'.

    I really can't figure out how to delete .Trashes, format the drive, or get the damn thing working. Any suggestions?

    P.S. If this is the wrong Stack Exchange site, please redirect me!
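
    If the goal is simply a working drive, erasing it from the command line is often more forceful than the Disk Utility GUI. A sketch, assuming the Samsung shows up as /dev/disk2 (check diskutil list first; this wipes the whole drive):

        diskutil list                                      # identify the disk number first
        sudo diskutil unmountDisk force /dev/disk2
        sudo diskutil eraseDisk JHFS+ Samsung /dev/disk2   # Journaled HFS+, volume named "Samsung"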

  • Executing a git command using remote PowerShell results in a NativeCommandError

    - by user204777
    I am getting an error while executing a remote PowerShell script. From my local machine I run a PowerShell script that uses Invoke-Command to cd into a directory on a remote Amazon Windows Server instance, and a subsequent Invoke-Command to execute a script that lives on that server instance. The script on the server tries to git clone a repository from GitHub. I can successfully do things in the server script like "ls" or even "git --version". However, git clone, git pull, etc. result in the following error:

        Cloning into 'MyRepo'...
            + CategoryInfo          : NotSpecified: (Cloning into 'MyRepo'...:String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError

    This is my first time using PowerShell or a Windows server. Can anyone provide some direction on this problem?

    The client script:

        $s = new-pssession -computername $server -credential $user
        invoke-command -session $s -scriptblock { cd C:\Repos; ls }
        invoke-command -session $s -scriptblock { param ($repo, $branch) & '.\clone.ps1' -repository $repo -branch $branch } -ArgumentList $repository, $branch
        exit-pssession

    The server script:

        param([string]$repository = "repository", [string]$branch = "branch")
        git --version
        start-process -FilePath git -ArgumentList ("clone", "-b $branch https://github.com/MyGithub/$repository.git") -Wait

    I've changed the server script to use start-process, and it no longer throws the exception. It creates the new repository directory and the .git directory, but doesn't write any of the files from the GitHub repository. This smells like a permissions issue. Once again, invoking the script manually (remote desktop into the Amazon box and executing it from PowerShell) works like a charm.
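
    A common cause of this symptom (an assumption, but it matches the behaviour described): git writes its progress messages to stderr, and remote PowerShell surfaces any stderr output from a native command as a NativeCommandError even when the command actually succeeded. Quieting git sidesteps it; a sketch of the clone line for the server script:

        # --quiet suppresses the progress output that lands on stderr;
        # 2>&1 folds any remaining stderr into the normal output stream
        git clone --quiet -b $branch https://github.com/MyGithub/$repository.git 2>&1

    The empty clone from start-process, by contrast, does look like a credentials issue in the remote session, since the same script works interactively over remote desktop.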

  • setting up bind to work with nsupdate (SERVFAIL)

    - by funny_ha_ha
    I'm trying to update my DNS server dynamically using nsupdate.

    Prerequisites: I'm using Debian 6 on my DNS server and Debian 4 on my client. I created a key pair using:

        dnssec-keygen -C -a HMAC-MD5 -b 512 -n USER sub.example.com.

    I then edited my named.conf.local to contain the key and the new zone I wish to update. It now looks like this (note: I also tried allow-update { any; }; without success):

        zone "example.com" {
            type master;
            file "/etc/bind/primary/example.com";
            notify yes;
            allow-update { none; };
            allow-query { any; };
        };

        zone "sub.example.com" {
            type master;
            file "/etc/bind/primary/sub.example.com";
            notify yes;
            allow-update { key "sub.example.com."; };
            allow-query { any; };
        };

        key sub.example.com. {
            algorithm HMAC-MD5;
            secret "xxxx xxxx";
        };

    Next, I copied the private key file (key.private) to another server that I want to update the zone from. I also created a text file (update) on this server containing the update information (note: I tried toying around with this stuff too, no success):

        server example.com
        zone sub.example.com
        update add sub.example.com. 86400 A 10.10.10.1
        show
        send

    Now I'm trying to update the zone using:

        nsupdate -k key.private -v update

    The problem: said command gives me the following output:

        Outgoing update query:
        ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 0
        ;; flags: ; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
        ;; ZONE SECTION:
        ;sub.example.com. IN SOA

        ;; UPDATE SECTION:
        sub.example.com. 86400 IN A 10.10.10.1

        update failed: SERVFAIL

    named debug level 3 gives me the following information when I issue the nsupdate command on the remote server (note: I obfuscated the client IP):

        06-Aug-2012 14:51:33.977 client X.X.X.X#33182: new TCP connection
        06-Aug-2012 14:51:33.977 client X.X.X.X#33182: replace
        06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: createclients
        06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: recycle
        06-Aug-2012 14:51:33.978 client @0x2ada475f1120: accept
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: read
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: TCP request
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: request has valid signature
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: recursion not available
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: update
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: send
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: sendto
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: senddone
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: next
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: endrequest
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: read
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: next
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: request failed: end of file
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: endrequest
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: closetcp

    But it doesn't do anything: the zone isn't updated, nor does my nsupdate change anything. I'm not sure whether the file /etc/bind/primary/sub.example.com should exist prior to the first update or not. I tried it without the file, with an empty file, and with a pre-configured zone file, all without success.

    The sparse information I found on the net pointed me towards file and folder permissions on the bind working directory, so I changed the permissions of both /etc/bind and /var/cache/bind (which is the home dir of my "bind" user). I'm not 100% sure the permissions are correct, but they look good to me:

        ls -lah /var/cache/bind/
        total 224K
        drwxrwxr-x  2 bind bind 4.0K Aug  6 03:13 .
        drwxr-xr-x 12 root root 4.0K Jul 21 11:27 ..
        -rw-r--r--  1 bind bind 211K Aug  6 03:21 named.run

        ls -lah /etc/bind/
        total 72K
        drwxr-sr-x  3 bind bind 4.0K Aug  6 14:41 .
        drwxr-xr-x 87 root root 4.0K Jul 30 01:24 ..
        -rw-------  1 bind bind  125 Aug  6 02:54 key.public
        -rw-------  1 bind bind  156 Aug  6 02:54 key.private
        -rw-r--r--  1 bind bind 2.5K Aug  6 03:07 bind.keys
        -rw-r--r--  1 bind bind  237 Aug  6 03:07 db.0
        -rw-r--r--  1 bind bind  271 Aug  6 03:07 db.127
        -rw-r--r--  1 bind bind  237 Aug  6 03:07 db.255
        -rw-r--r--  1 bind bind  353 Aug  6 03:07 db.empty
        -rw-r--r--  1 bind bind  270 Aug  6 03:07 db.local
        -rw-r--r--  1 bind bind 3.0K Aug  6 03:07 db.root
        -rw-r--r--  1 bind bind  493 Aug  6 03:32 named.conf
        -rw-r--r--  1 bind bind  490 Aug  6 03:07 named.conf.default-zones
        -rw-r--r--  1 bind bind 1.2K Aug  6 14:18 named.conf.local
        -rw-r--r--  1 bind bind  666 Jul 29 22:51 named.conf.options
        drwxr-sr-x  2 bind bind 4.0K Aug  6 03:57 primary/
        -rw-r-----  1 root bind   77 Mar 19 02:57 rndc.key
        -rw-r--r--  1 bind bind 1.3K Aug  6 03:07 zones.rfc1918

        ls -lah /etc/bind/primary/
        total 20K
        drwxr-sr-x 2 bind bind 4.0K Aug  6 03:57 .
        drwxr-sr-x 3 bind bind 4.0K Aug  6 14:41 ..
        -rw-r--r-- 1 bind bind  356 Jul 30 00:45 example.com
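
    One frequent cause of SERVFAIL on Debian, offered here as an educated guess rather than a confirmed diagnosis: the zone file must already exist with a valid SOA record before the first update, and named must be able to write both the zone file and its .jnl journal, which it cannot do under /etc/bind. Debian ships /var/lib/bind for exactly this purpose. A sketch:

        # move the dynamic zone somewhere named may write, and point
        # named.conf.local at it:  file "/var/lib/bind/sub.example.com";
        mv /etc/bind/primary/sub.example.com /var/lib/bind/
        chown bind:bind /var/lib/bind/sub.example.com
        rndc reload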

  • How to make sure that grub does use menu.lst?

    - by Glen S. Dalton
    On my Ubuntu 9.10 ("Karmic") laptop I suspect grub does not use the /boot/grub/menu.lst file. What happens on boot is that I see a blank screen and nothing happens. When I press ESC I see a boot list which is different from what I would expect from the menu.lst file: the menu lines are different, and when I choose the first entry it does not use the kernel options that are in the first entry of menu.lst.

    Where do the entries that grub uses come from? How can I find out what happens; is there a log? I could not find anything in /var/log/syslog or /var/log/dmesg about grub using a menu.lst. How can I get it to work as expected?

    Some files:

        $ sudo ls -la /boot/grub/*lst
        -rw-r--r-- 1 root root 1558 2009-12-12 15:25 /boot/grub/command.lst
        -rw-r--r-- 1 root root  121 2009-12-12 15:25 /boot/grub/fs.lst
        -rw-r--r-- 1 root root  272 2009-12-12 15:25 /boot/grub/handler.lst
        -rw-r--r-- 1 root root 4576 2010-03-19 11:26 /boot/grub/menu.lst
        -rw-r--r-- 1 root root 1657 2009-12-12 15:25 /boot/grub/moddep.lst
        -rw-r--r-- 1 root root   62 2009-12-12 15:25 /boot/grub/partmap.lst
        -rw-r--r-- 1 root root   22 2009-12-12 15:25 /boot/grub/parttool.lst

        $ sudo ls -la /vm*
        lrwxrwxrwx 1 root root 30 2009-12-12 16:15 /vmlinuz -> boot/vmlinuz-2.6.31-16-generic
        lrwxrwxrwx 1 root root 30 2009-12-12 14:07 /vmlinuz.old -> boot/vmlinuz-2.6.31-14-generic

        $ sudo ls -la /init*
        lrwxrwxrwx 1 root root 33 2009-12-12 16:15 /initrd.img -> boot/initrd.img-2.6.31-16-generic
        lrwxrwxrwx 1 root root 33 2009-12-12 14:07 /initrd.img.old -> boot/initrd.img-2.6.31-14-generic

    The only menu.lst that I found:

        $ sudo find / -name "menu.lst"
        /boot/grub/menu.lst
        $ sudo cat /boot/grub/menu.lst
        # menu.lst - See: grub(8), info grub, update-grub(8)
        #            grub-install(8), grub-floppy(8),
        #            grub-md5-crypt, /usr/share/doc/grub
        #            and /usr/share/doc/grub-doc/.

        ## default num
        # Set the default entry to the entry number NUM. Numbering starts from 0, and
        # the entry number 0 is the default if the command is not used.
        #
        # You can specify 'saved' instead of a number. In this case, the default entry
        # is the entry saved with the command 'savedefault'.
        # WARNING: If you are using dmraid do not use 'savedefault' or your
        # array will desync and will not let you boot your system.
        default 0

        ## timeout sec
        # Set a timeout, in SEC seconds, before automatically booting the default entry
        # (normally the first entry defined).
        timeout 3

        ## hiddenmenu
        # Hides the menu by default (press ESC to see the menu)
        #hiddenmenu

        # Pretty colours
        color cyan/blue white/blue

        ## password ['--md5'] passwd
        # If used in the first section of a menu file, disable all interactive editing
        # control (menu entry editor and command-line) and entries protected by the
        # command 'lock'
        # e.g. password topsecret
        #      password --md5 $1$gLhU0/$aW78kHK1QfV3P2b2znUoe/
        # password topsecret

        # examples
        #
        # title Windows 95/98/NT/2000
        # root (hd0,0)
        # makeactive
        # chainloader +1
        #
        # title Linux
        # root (hd0,1)
        # kernel /vmlinuz root=/dev/hda2 ro

        # Put static boot stanzas before and/or after AUTOMAGIC KERNEL LIST

        ### BEGIN AUTOMAGIC KERNELS LIST
        ## lines between the AUTOMAGIC KERNELS LIST markers will be modified
        ## by the debian update-grub script except for the default options below

        ## DO NOT UNCOMMENT THEM, Just edit them to your needs

        ## ## Start Default Options ##
        ## default kernel options
        ## default kernel options for automagic boot options
        ## If you want special options for specific kernels use kopt_x_y_z
        ## where x.y.z is kernel version. Minor versions can be omitted.
        ## e.g. kopt=root=/dev/hda1 ro
        ##      kopt_2_6_8=root=/dev/hdc1 ro
        ##      kopt_2_6_8_2_686=root=/dev/hdc2 ro
        # kopt=root=UUID=9b454298-18e1-43f7-a5bc-f56e7ed5f9c6 ro noresume

        ## default grub root device
        ## e.g. groot=(hd0,0)
        # groot=70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf

        ## should update-grub create alternative automagic boot options
        ## e.g. alternative=true
        ##      alternative=false
        # alternative=true

        ## should update-grub lock alternative automagic boot options
        ## e.g. lockalternative=true
        ##      lockalternative=false
        # lockalternative=false

        ## additional options to use with the default boot option, but not with the
        ## alternatives
        ## e.g. defoptions=vga=791 resume=/dev/hda5
        ## defoptions=quiet splash
        # defoptions=apm=on acpi=off

        ## should update-grub lock old automagic boot options
        ## e.g. lockold=false
        ##      lockold=true
        # lockold=false

        ## Xen hypervisor options to use with the default Xen boot option
        # xenhopt=

        ## Xen Linux kernel options to use with the default Xen boot option
        # xenkopt=console=tty0

        ## altoption boot targets option
        ## multiple altoptions lines are allowed
        ## e.g. altoptions=(extra menu suffix) extra boot options
        ##      altoptions=(recovery) single
        # altoptions=(recovery mode) single

        ## controls how many kernels should be put into the menu.lst
        ## only counts the first occurence of a kernel, not the
        ## alternative kernel options
        ## e.g. howmany=all
        ##      howmany=7
        # howmany=all

        ## specify if running in Xen domU or have grub detect automatically
        ## update-grub will ignore non-xen kernels when running in domU and vice versa
        ## e.g. indomU=detect
        ##      indomU=true
        ##      indomU=false
        # indomU=detect

        ## should update-grub create memtest86 boot option
        ## e.g. memtest86=true
        ##      memtest86=false
        # memtest86=true

        ## should update-grub adjust the value of the default booted system
        ## can be true or false
        # updatedefaultentry=false

        ## should update-grub add savedefault to the default options
        ## can be true or false
        # savedefault=false

        ## ## End Default Options ##

        title     Ubuntu 9.10, kernel 2.6.31-14-generic noresume
        uuid      70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf
        kernel    /vmlinuz-2.6.31-14-generic root=UUID=9b454298-18e1-43f7-a5bc-f56e7ed5f9c6 ro quiet splash apm=on acpi=off noresume
        initrd    /initrd.img-2.6.31-14-generic

        title     Ubuntu 9.10, kernel 2.6.31-14-generic (recovery mode)
        uuid      70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf
        kernel    /vmlinuz-2.6.31-14-generic root=UUID=9b454298-18e1-43f7-a5bc-f56e7ed5f9c6 ro single
        initrd    /initrd.img-2.6.31-14-generic

        title     Ubuntu 9.10, memtest86+
        uuid      70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf
        kernel    /memtest86+.bin

        ### END DEBIAN AUTOMAGIC KERNELS LIST

    These are the choices that grub displays after I press ESC:

        Ubuntu, Linux 2.6.31-16-generic
        Ubuntu, Linux 2.6.31-16-generic (recovery mode)
        Ubuntu, Linux 2.6.31-14-generic
        Ubuntu, Linux 2.6.31-14-generic (recovery mode)
        Memory test (memtest86+)
        Memory test (memtest86+, serial console 115200)
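
    The command.lst, fs.lst and moddep.lst files in that listing are GRUB 2 artifacts, which strongly suggests this machine boots GRUB 2 and simply ignores menu.lst: GRUB 2 reads /boot/grub/grub.cfg, which is generated from /etc/default/grub and the scripts in /etc/grub.d/. A sketch of the usual workflow under that assumption:

        # inspect what GRUB 2 will actually boot
        less /boot/grub/grub.cfg

        # change defaults (timeout, kernel options) here, then regenerate
        sudo editor /etc/default/grub
        sudo update-grub          # wrapper around grub-mkconfig -o /boot/grub/grub.cfg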

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like:

        ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23

    Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output:

        # dmesg | tail
        [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null)
        [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000]
        [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null)
        [4982666.631444] VFS: file-max limit 1231582 reached
        [4982666.764240] VFS: file-max limit 1231582 reached
        [4982767.360574] VFS: file-max limit 1231582 reached
        [4982901.904628] VFS: file-max limit 1231582 reached
        [4982964.930556] VFS: file-max limit 1231582 reached
        [4982966.352170] VFS: file-max limit 1231582 reached
        [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000]

    Obviously, the file-max errors look suspicious, being clustered together and recent.

        # cat /proc/sys/fs/file-max
        1231582
        # cat /proc/sys/fs/file-nr
        1231712 0       1231582

    That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network.

        # lsof | wc
          16046  148253 1882901
        # ps -ef | wc
            574    6104   44260

    I saw some documentation saying:

        file-max & file-nr:
        The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit.
        Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles.
        Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached".

    My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal for Linux systems to stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly idle system is holding over a million files open.

    Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (Edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back.) Thanks in advance for any help.

    And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything: all interactive shells (except for the one ssh session I left open to finish closing down), httpd, even nfs service; basically everything in the process table that wasn't a kernel process. And there was still an appalling number of files apparently left open.

    After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again.

    And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
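
    For anyone hitting the same wall, two stopgaps while hunting the leak; these are standard sysctl/procfs mechanics, not a cure, and the NFS suspicion above is consistent with the growth stopping when the client went away:

        # temporary headroom (lost on reboot; add fs.file-max to /etc/sysctl.conf to persist)
        sudo sysctl -w fs.file-max=2000000

        # track allocated handles over time to measure the leak rate
        watch -n 60 cat /proc/sys/fs/file-nr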

  • ssh login successful, but scp password gives me "Permission denied"

    - by YANewb
    I'm trying to get some blogging software up on an organizational remote server. I tried to set up an SSH key but was having problems, and decided that getting the blog up and running was more important than dealing with the SSH key issue, so I ran ssh-keygen -R remoteserver.com. Now I can successfully log in with ssh -v [email protected] and the correct password.

    Once logged in, I can move around and read any file and directory that I should be able to read. But when I try to edit an existing -rw-r--r-- file with VIM, it shows up as read-only; if I try to edit permissions, I get chmod: file.ext: Operation not permitted; and if I try to scp a new file from my local machine, I'm prompted for the remote user's password and then get scp: /home/path/to/file.ext: Permission denied. Since I didn't have any of these problems before I tried to set up the ssh key, I suspect these anomalies are a side effect of that, but I don't know how to troubleshoot this. So what does a foolish server newb, such as myself, need to do to get edit capability back as a remote user?

    Addendum 1: My user IDs are different between my local machine and the remote server. For ssh I run ssh -v [email protected]; if I whoami, I get remoteuser. For scp I run scp file.ext [email protected]:/path/to/file.ext from the local directory containing file.ext, while logged in as the local user; if I whoami, I get localuser.

    The ls -l for two different files I've tried to scp:

        -rw-r--r--@ 1 localuser localgroup 20 Feb 11 21:03 phpinfo.php
        -rw-r--r--  1 root      localgroup  4 Feb 11 22:32 test.txt

    The ls -l for the file I've tried to VIM:

        -rw-r--r-- 1 remoteuser remotegroup 76 Jul 27 2009 info.txt

    Addendum 2: In the past I've set up ssh keys for git repositories. I don't want to completely destroy them, so in an attempt to follow a deer's train of thinking I renamed my ~/.ssh/ to ~/.ssh-bak/, then tested the different types of access. The abridged version of the terminal commands and results is below; I think everything is working until the 8th line from the end.

        localcomputer:~ localuser$ ssh -v [email protected]
        OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to remoteserver.com [###.###.###.###] port 22.
        debug1: Connection established.
        debug1: identity file /Users/localuser/.ssh/identity type -1
        debug1: identity file /Users/localuser/.ssh/id_rsa type -1
        debug1: identity file /Users/localuser/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p2 FreeBSD-20110503
        debug1: match: OpenSSH_5.8p2 FreeBSD-20110503 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        The authenticity of host 'remoteserver.com (###.###.###.###)' can't be established.
        RSA key fingerprint is ##:##:##:##:##:##:##:##:##:##:##:##:##:##:##:##.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added 'remoteserver.com,###.###.###.###' (RSA) to the list of known hosts.
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/localuser/.ssh/identity
        debug1: Trying private key: /Users/localuser/.ssh/id_rsa
        debug1: Trying private key: /Users/localuser/.ssh/id_dsa
        debug1: Next authentication method: password
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        Last login: Sun Feb 12 18:00:54 2012 from 68.69.164.123
        FreeBSD 6.4-RELEASE-p8 (VKERN) #1 r101746: Mon Aug 30 10:34:40 MDT 2010

        [remoteuser@remoteserver /home]$ ls -l
        total ###
        -rw-r--r-- 1 remoteuser remotegroup 76 Aug 12 2009 info.txt
        [remoteuser@remoteserver /home]$ vim info.txt
        ~ {at the bottom of the VIM screen it tells me it's [read only]}
        [remoteuser@remoteserver /home]$ whoami
        remoteuser
        [remoteuser@remoteserver /home]$ logout
        debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
        debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0
        debug1: channel 0: free: client-session, nchannels 1
        Connection to remoteserver.com closed.
        Transferred: sent 3872, received 12496 bytes, in 107.4 seconds
        Bytes per second: sent 36.1, received 116.4
        debug1: Exit status 0

        localcomputer:localdirectory name$ scp -v phpinfo.php [email protected]:/home/www/remotedirectory/phpinfo.php
        Executing: program /usr/bin/ssh host remoteserver.com, user remoteuser, command scp -v -t /home/www/remotedirectory/phpinfo.php
        OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to remoteserver.com [###.###.###.###] port 22.
        debug1: Connection established.
        debug1: identity file /Users/localuser/.ssh/identity type -1
        debug1: identity file /Users/localuser/.ssh/id_rsa type -1
        debug1: identity file /Users/localuser/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p2 FreeBSD-20110503
        debug1: match: OpenSSH_5.8p2 FreeBSD-20110503 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'remoteserver.com' is known and matches the RSA host key.
        debug1: Found key in /Users/localuser/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/localuser/.ssh/identity
        debug1: Trying private key: /Users/localuser/.ssh/id_rsa
        debug1: Trying private key: /Users/localuser/.ssh/id_dsa
        debug1: Next authentication method: password
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        debug1: Sending command: scp -v -t /home/www/remotedirectory/phpinfo.php
        Sending file modes: C0644 20 phpinfo.php
        Sink: C0644 20 phpinfo.php
        scp: /home/www/remotedirectory/phpinfo.php: Permission denied
        debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
        debug1: channel 0: free: client-session, nchannels 1
        debug1: fd 0 clearing O_NONBLOCK
        debug1: fd 1 clearing O_NONBLOCK
        Transferred: sent 1456, received 2160 bytes, in 0.6 seconds
        Bytes per second: sent 2322.3, received 3445.1
        debug1: Exit status 1
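
    The failed key setup is an unlikely culprit here; server-side state fits the symptoms better, since even chmod on a file the remote user owns fails. A few FreeBSD-side diagnostics worth running in the ssh session (a sketch; all standard commands):

        mount | grep home                  # is the filesystem mounted read-only?
        ls -lo info.txt                    # FreeBSD: -o shows file flags (e.g. schg immutable)
        ls -ld /home/www/remotedirectory   # who may write the scp target directory?
        id                                 # groups the remote user actually has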

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data...), an English setter could be seen as the link to the right data.

    Data, data, data: we are living in a world where data technology, based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on, plays a predominant role in our lives. More and more technologies are used to analyze and track our behavior, to try to detect patterns, and to propose "the best/right user experience", from the Google Ad services to telco companies or large consumer sites (like Amazon :)). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat, analyze and understand the trends and offer new services to users.

    Some of these "data feeds" are raw, unstructured data and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products use the distributed/aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop MapReduce concept very similar.

    Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or an equivalent Hadoop cluster. In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS. The whole setup is fairly simple:

    1. Install Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 on a Linux x64 server (or a VirtualBox appliance).
    2. Get the Apache Hadoop distribution from http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
    3. Get the Oracle Big Data Connectors from http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
    4. Check the Java version of your Linux server:

           java -version
           java version "1.7.0_40"
           Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
           Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

    5. Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1 and modify your .bash_profile (also see my sample .bash_profile):

           export HADOOP_HOME=/u01/hadoop-1.2.1
           export PATH=$PATH:$HADOOP_HOME/bin
           export HIVE_HOME=/u01/hive-0.11.0
           export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin

    6. Set up ssh trust for the Hadoop process. This is a mandatory step; in our case we have to establish a "local trust", as we are using a single-node configuration: copy the new public key to the list of authorized keys, then connect and test the ssh setup to your localhost.

    We will run a "pseudo Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes.
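
    Before touching the configuration files in the next step, it may help to see what a minimal single-node core-site.xml can look like. This is a sketch, not the article's exact file: the property names are standard Hadoop 1.x, the port matches the hdfs://localhost:19000 URIs used later in this article, and the data path follows the layout described below:

        <!-- $HADOOP_HOME/conf/core-site.xml : minimal single-node sketch -->
        <configuration>
          <property>
            <name>fs.default.name</name>
            <value>hdfs://localhost:19000</value>
          </property>
          <property>
            <name>hadoop.tmp.dir</name>
            <value>/u01/hadoop-1.2.1/data</value>
          </property>
        </configuration>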
    We need to "fine tune" some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the files core-site.xml, hdfs-site.xml and mapred-site.xml. Check that the Hadoop binaries are referenced correctly from the command line by executing:

        hadoop -version

    As Hadoop is managing our "clustered HDFS" file system, we have to create "the mount point" and format it; the mount point is declared in core-site.xml. The layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce uses the /mapred/... layout, HDFS uses the /dfs/... layout). Format the HDFS Hadoop file system, then start the Java components for the HDFS system. As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configuration.

    Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data :)), and plug it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step). You could create and use a Hive db, but in our case we will make a simple integration of "raw data" through the creation of an external table on a local Oracle instance (on the same Linux box, we run the one-node Hadoop HDFS cluster and one Oracle DB).

    Download some public "big data". I use the site http://france.meteofrance.com/france/observations, from which I can get *.csv files for my big data simulations :). Here is the data layout of my example file:

    Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system (see picture below). Modify your environment in order to access the connector libraries, and make the following test:

        [oracle@dg1 bin]$ ./hdfs_stream
        Usage: hdfs_stream locationFile
        [oracle@dg1 bin]$

    Load the data into the Hadoop HDFS file system:

        hadoop fs -mkdir bgtest_data
        hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt
        hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt

        [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

        [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

    Check the content of the HDFS with the browser UI. Start the Oracle database, and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own db SID; replace it with yours):

        #!/bin/bash
        export ORAENV_ASK=NO
        export ORACLE_SID=dg1
        . oraenv
        sqlplus /nolog <<EOF
        CONNECT / AS sysdba;
        CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
        CREATE USER BGUSER IDENTIFIED BY oracle;
        GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
        GRANT EXECUTE ON sys.utl_file TO BGUSER;
        GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs';
        GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data';
        GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER;
        EOF

    Put the following in a file named t3.sh and make it executable:

        hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
        oracle.hadoop.exttab.ExternalTable \
        -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
        -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
        -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
        -D oracle.hadoop.exttab.columnCount=7 \
        -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
        -D oracle.hadoop.connection.user=BGUSER \
        -D oracle.hadoop.exttab.printStackTrace=true \
        -createTable --noexecute

    Then test the creation of the external table with it:

        [oracle@dg1 samples]$ ./t3.sh
        ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
        Oracle SQL Connector for HDFS Release 2.2.0 - Production
        Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
        Enter Database Password:
        The create table command was not executed.
        The following table would be created.
        CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
        (
          "C1" VARCHAR2(4000),
          "C2" VARCHAR2(4000),
          "C3" VARCHAR2(4000),
          "C4" VARCHAR2(4000),
          "C5" VARCHAR2(4000),
          "C6" VARCHAR2(4000),
          "C7" VARCHAR2(4000)
        )
        ORGANIZATION EXTERNAL
        (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY "BGT_DATA_DIR"
          ACCESS PARAMETERS
          (
            RECORDS DELIMITED BY 0X'0A'
            CHARACTERSET AL32UTF8
            STRING SIZES ARE IN CHARACTERS
            PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
            FIELDS TERMINATED BY 0X'2C'
            MISSING FIELD VALUES ARE NULL
            (
              "C1" CHAR(4000),
              "C2" CHAR(4000),
              "C3" CHAR(4000),
              "C4" CHAR(4000),
              "C5" CHAR(4000),
              "C6" CHAR(4000),
              "C7" CHAR(4000)
            )
          )
          LOCATION
          (
            'osch-20131022081035-74-1'
          )
        ) PARALLEL REJECT LIMIT UNLIMITED;
        The following location files would be created.
        osch-20131022081035-74-1 contains 1 URI, 54103 bytes
               54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results:

        The create table command succeeded.
        CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
        (
          "C1" VARCHAR2(4000),
          "C2" VARCHAR2(4000),
          "C3" VARCHAR2(4000),
          "C4" VARCHAR2(4000),
          "C5" VARCHAR2(4000),
          "C6" VARCHAR2(4000),
          "C7" VARCHAR2(4000)
        )
        ORGANIZATION EXTERNAL
        (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY "BGT_DATA_DIR"
          ACCESS PARAMETERS
          (
            RECORDS DELIMITED BY 0X'0A'
            CHARACTERSET AL32UTF8
            STRING SIZES ARE IN CHARACTERS
            PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
            FIELDS TERMINATED BY 0X'2C'
            MISSING FIELD VALUES ARE NULL
            (
              "C1" CHAR(4000),
              "C2" CHAR(4000),
              "C3" CHAR(4000),
              "C4" CHAR(4000),
              "C5" CHAR(4000),
              "C6" CHAR(4000),
              "C7" CHAR(4000)
            )
          )
          LOCATION
          (
            'osch-20131022081719-3239-1'
          )
        ) PARALLEL REJECT LIMIT UNLIMITED;
        The following location files were created.
        osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
               54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    This is the view from SQL Developer. And finally, the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:

        SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";

          COUNT(*)
        ----------
              1151

    In a next post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured-data world can be integrated into an Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services.

    Oracle University presents a complete curriculum on all the related Oracle technologies:

    NoSQL:
    - Introduction to Oracle NoSQL Database
    - Using Oracle NoSQL Database

    Big Data:
    - Introduction to Big Data
    - Oracle Big Data Essentials
    - Oracle Big Data Overview

    Oracle Data Integrator:
    - Oracle Data Integrator 12c: New Features
    - Oracle Data Integrator 11g: Integration and Administration
    - Oracle Data Integrator: Administration and Development
    - Oracle Data Integrator 11g: Advanced Integration and Development

    Oracle Coherence 12c:
    - Oracle Coherence 12c: New Features
    - Oracle Coherence 12c: Share and Manage Data in Clusters

    Oracle GoldenGate 11g:
    - Oracle GoldenGate 11g: Fundamentals for Oracle
    - Oracle GoldenGate 11g: Fundamentals for SQL Server
    - Oracle GoldenGate 11g Fundamentals for Oracle
    - Oracle GoldenGate 11g Fundamentals for DB2
    - Oracle GoldenGate 11g Fundamentals for Teradata
    - Oracle GoldenGate 11g Fundamentals for HP NonStop
    - Oracle GoldenGate 11g Management Pack: Overview
    - Oracle GoldenGate 11g Troubleshooting and Tuning
    - Oracle GoldenGate 11g: Advanced Configuration for Oracle

    Other resources:
    - Apache Hadoop: http://hadoop.apache.org/ is the homepage for these technologies.
    - "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active googling will give you further references.

    About the author: Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He worked in the banking sector and with AT&T and other telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic/WebCenter, Content, BPM/SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite throughout the EMEA region.

  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators, ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It's truly the Swiss army knife of systems administration.

    Secure Shell, also known as ssh, was developed in 1995 by Tatu Ylönen after Helsinki University of Technology in Finland suffered a password-sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is that they provide no security and transmit data in plain text, including sensitive login credentials. SSH provides this security by encrypting all traffic transmitted over the wire to protect from password-sniffing attacks.

    One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is:

        scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote

    In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed. If I were only using ftp, this information would be unencrypted as it went across the wire. However, because scp uses SSH, my user credentials and the file and its contents are confidential and remain secure throughout the transfer.

        [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1
        user1@test2's password:
        myfile                                    100%    0     0.0KB/s   00:00

    You can also use ssh to send network traffic and utilize the encryption built into ssh to protect traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuration. To enable TCP forwarding on the remote system, make sure AllowTcpForwarding is set to yes in the /etc/ssh/sshd_config file:

        AllowTcpForwarding yes

    Once you have this configured, you can connect to the server and set up a local port to which you can direct traffic that will go over the secure tunnel. The following command will set up a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules. In the following example the -D specifies local dynamic application-level port forwarding and the -N specifies not to execute a remote command:

        ssh -D 8989 [email protected] -N

    You can also forward specific ports on both the local and remote host. The following example will set up a port forward on port 8080 and forward it to port 80 on the remote machine:

        ssh -L 8080:farwebserver.com:80 [email protected]

    You can even run remote commands via ssh, which is quite useful for scripting or remote system administration tasks. The following example shows how to log in remotely and execute the command ls -la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire.

        [rchase@test1 ~]$ ssh rchase@test2 'ls -la'
        rchase@test2's password:
        total 24
        drwx------  2 rchase rchase 4096 Sep  6 15:17 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    You can execute any command contained in the quotation marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote-controlling systems and performing systems administration tasks using shell scripts.

    To make your shell scripts even more useful and to automate logins, you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key-based authentication. The first step in setting up key-based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating an ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article.

        [rchase@test1 .ssh]$ ssh-keygen -t rsa
        Generating public/private rsa key pair.
        Enter file in which to save the key (/home/rchase/.ssh/id_rsa):
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /home/rchase/.ssh/id_rsa.
        Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.
        The key fingerprint is:
        7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1
        The key's randomart image is:
        +--[ RSA 2048]----+
        |                 |
        |  . .            |
        |   o .           |
        |    . o o        |
        |   o o oS+       |
        |  +   o.= =      |
        |   o ..o.+ =     |
        |    . .+. =      |
        |     ...Eo       |
        +-----------------+

    Now that you have the key generated on the local system, you should copy it to the target server into a temporary location. The user's home directory is fine for this.

        [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchase
        rchase@test2's password:
        id_rsa.pub

    Now that the file has been copied to the server, you need to append it to the authorized_keys file. This should be appended to the end of the file in the event that there are other authorized keys on the system.

        [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys

    Once the process is complete you are ready to log in. Since you are using key-based authentication, you are not prompted for a password when logging into the system.

        [rchase@test1 ~]$ ssh test2
        Last login: Fri Sep  6 17:42:02 2013 from test1

    This makes it much easier to run remote commands. Here's an example of the remote command from earlier. With no password it's almost as if the command ran locally.

        [rchase@test1 ~]$ ssh test2 'ls -la'
        total 32
        drwx------  3 rchase rchase 4096 Sep  6 17:40 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    As a security consideration, it's important to note the permissions of .ssh and the authorized_keys file: .ssh should be 700 and authorized_keys should be set to 600. This prevents unauthorized access to ssh keys from other users on the system.

    An even easier way to move keys back and forth is to use ssh-copy-id. Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you. Here's an example of moving the same key using ssh-copy-id. The -i in the example is so that we can specify the path to the id file, which in this case is /home/rchase/.ssh/id_rsa.pub:

        [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2

    One of the last tips that I will cover is the ssh config file. By using the ssh config file you can set up host aliases to make logins to hosts with odd ports or long hostnames much easier and simpler to remember. Here's an example entry in our .ssh/config file:

        Host dev1
            Hostname somereallylonghostname.somereallylongdomain.com
            Port 28372
            User somereallylongusername12345678

    Let's compare the login process between the two. Which would you want to type and remember?

        ssh somereallylongusername12345678@somereallylonghostname.somereallylongdomain.com -p 28372
        ssh dev1

    I hope you find these tips useful. There are a number of tools used by system administrators to streamline processes and simplify workflows, and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day. Send me your comments and let us know the ways you use SSH with Linux. If you have other tools you would like to see covered in a similar post, send in your suggestions.
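
    To make the security consideration above concrete, these are the exact modes the article recommends, expressed as commands:

        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys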

  • I am trying to deploy my first rails app using Capistrano and am getting an error.

    - by Andrew Bucknell
    My deployment of a Rails app with Capistrano is failing, and I'm hoping someone can provide me with pointers to troubleshoot. The following is the command output:

        andrew@melb-web:~/projects/rails/guestbook2$ cap deploy:setup
          * executing `deploy:setup'
          * executing "mkdir -p /var/www/dev/guestbook2 /var/www/dev/guestbook2/releases /var/www/dev/guestbook2/shared /var/www/dev/guestbook2/shared/system /var/www/dev/guestbook2/shared/log /var/www/dev/guestbook2/shared/pids && chmod g+w /var/www/dev/guestbook2 /var/www/dev/guestbook2/releases /var/www/dev/guestbook2/shared /var/www/dev/guestbook2/shared/system /var/www/dev/guestbook2/shared/log /var/www/dev/guestbook2/shared/pids"
            servers: ["dev.andrewbucknell.com"]
        Enter passphrase for /home/andrew/.ssh/id_dsa:
        Enter passphrase for /home/andrew/.ssh/id_dsa:
            [dev.andrewbucknell.com] executing command
            command finished

        andrew@melb-web:~/projects/rails/guestbook2$ cap deploy:check
          * executing `deploy:check'
          * executing "test -d /var/www/dev/guestbook2/releases"
            servers: ["dev.andrewbucknell.com"]
        Enter passphrase for /home/andrew/.ssh/id_dsa:
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "test -w /var/www/dev/guestbook2"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "test -w /var/www/dev/guestbook2/releases"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "which git"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "test -w /var/www/dev/guestbook2/shared"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
        You appear to have all necessary dependencies installed

        andrew@melb-web:~/projects/rails/guestbook2$ cap deploy:migrations
          * executing `deploy:migrations'
          * executing `deploy:update_code'
            updating the cached checkout on all servers
            executing locally: "git ls-remote [email protected]:/home/andrew/git/guestbook2.git master"
        Enter passphrase for key '/home/andrew/.ssh/id_dsa':
          * executing "if [ -d /var/www/dev/guestbook2/shared/cached-copy ]; then cd /var/www/dev/guestbook2/shared/cached-copy && git fetch origin && git reset --hard 369c5e04aaf83ad77efbfba0141001ac90915029 && git clean -d -x -f; else git clone [email protected]:/home/andrew/git/guestbook2.git /var/www/dev/guestbook2/shared/cached-copy && cd /var/www/dev/guestbook2/shared/cached-copy && git checkout -b deploy 369c5e04aaf83ad77efbfba0141001ac90915029; fi"
            servers: ["dev.andrewbucknell.com"]
        Enter passphrase for /home/andrew/.ssh/id_dsa:
            [dev.andrewbucknell.com] executing command
         ** [dev.andrewbucknell.com :: err] Permission denied, please try again.
         ** Permission denied, please try again.
         ** Permission denied (publickey,password).
         ** [dev.andrewbucknell.com :: err] fatal: The remote end hung up unexpectedly
         ** [dev.andrewbucknell.com :: out] Initialized empty Git repository in /var/www/dev/guestbook2/shared/cached-copy/.git/
            command finished
        failed: "sh -c 'if [ -d /var/www/dev/guestbook2/shared/cached-copy ]; then cd /var/www/dev/guestbook2/shared/cached-copy && git fetch origin && git reset --hard 369c5e04aaf83ad77efbfba0141001ac90915029 && git clean -d -x -f; else git clone [email protected]:/home/andrew/git/guestbook2.git /var/www/dev/guestbook2/shared/cached-copy && cd /var/www/dev/guestbook2/shared/cached-copy && git checkout -b deploy 369c5e04aaf83ad77efbfba0141001ac90915029; fi'" on dev.andrewbucknell.com

    The following fragment is from cap -d deploy:migrations:

        Preparing to execute command: "find /var/www/dev/guestbook2/releases/20100305124415/public/images /var/www/dev/guestbook2/releases/20100305124415/public/stylesheets /var/www/dev/guestbook2/releases/20100305124415/public/javascripts -exec touch -t 201003051244.22 {} ';'; true"
        Execute ([Yes], No, Abort) ? |y| yes
          * executing `deploy:migrate'
          * executing "ls -x /var/www/dev/guestbook2/releases"
        Preparing to execute command: "ls -x /var/www/dev/guestbook2/releases"
        Execute ([Yes], No, Abort) ? |y| yes
        /usr/lib/ruby/gems/1.8/gems/capistrano-2.5.17/lib/capistrano/recipes/deploy.rb:55:in `join': can't convert nil into String (TypeError)
            from /usr/lib/ruby/gems/1.8/gems/capistrano-2.5.17/lib/capistrano/recipes/deploy.rb:55:in `load'
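
    The giveaway is the 'Permission denied (publickey,password)' block: it is the deploy server itself failing to authenticate against [email protected] when it runs git clone, not Capistrano failing against the deploy server. The usual cure is SSH agent forwarding, so the remote clone borrows the key from your local agent. A sketch, assuming Capistrano 2:

        # in config/deploy.rb (Ruby, Capistrano 2 syntax):
        #     ssh_options[:forward_agent] = true

        # then locally, make sure the key is loaded before deploying:
        ssh-add ~/.ssh/id_dsa
        cap deploy:migrations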

    Read the article

  • using 'ar' tool in Alchemy

    - by paleozogt
    I've found that if you specify a path to Alchemy's 'ar' tool, it won't create the 'l.bc' file necessary to link the library. For example, here is the case where I don't specify a path (it works):

      asimmons-mac:test asimmons$ echo 'int main() { return 42; }' > testmain.cpp
      asimmons-mac:test asimmons$ echo 'int test1() { return -1; }' > test1.cpp
      asimmons-mac:test asimmons$ echo 'int test2() { return 1; }' > test2.cpp
      asimmons-mac:test asimmons$ g++ -c testmain.cpp
      asimmons-mac:test asimmons$ g++ -c test1.cpp
      asimmons-mac:test asimmons$ g++ -c test2.cpp
      asimmons-mac:test asimmons$ ar cr libtest.a test1.o test2.o
      asimmons-mac:test asimmons$ g++ testmain.cpp libtest.a
      llvm-ld, "-o=".(98237.achacks.o = "98237.achacks.exe"), -disable-opt -internalize-public-api-list=_start,malloc,free,__adddi3,__anddi3,__ashldi3,__ashrdi3,__cmpdi2,__divdi3,__fixdfdi,__fixsfdi,__fixunsdfdi,__fixunssfdi,__floatdidf,__floatdisf,__floatunsdidf,__iordi3,__lshldi3,__lshrdi3,__moddi3,__muldi3,__negdi2,__one_cmpldi2,__qdivrem,__adddi3,__anddi3,__ashldi3,__ashrdi3,__cmpdi2,__divdi3,__qdivrem,__fixdfdi,__fixsfdi,__fixunsdfdi,__fixunssfdi,__floatdidf,__floatdisf,__floatunsdidf,__iordi3,__lshldi3,__lshrdi3,__moddi3,__muldi3,__negdi2,__one_cmpldi2,__subdi3,__ucmpdi2,__udivdi3,__umoddi3,__xordi3,__subdi3,__ucmpdi2,__udivdi3,__umoddi3,__xordi3,__error /Users/asimmons/Development/alchemy-darwin-v0.5a/avm2-libc/lib/avm2-libc.l.bc /Users/asimmons/Development/alchemy-darwin-v0.5a/avm2-libc/lib/avm2-libstdc++.l.bc, test.l.bc 98237.achacks.o
      98237.achacks.swf, 5593510 bytes written
      asimmons-mac:test asimmons$ ls -l
      total 10992
      -rwxr-xr-x 1 asimmons staff 5593575 Apr 9 17:44 a.exe
      -rw------- 1 asimmons staff    1284 Apr 9 17:43 libtest.a
      -rw-r--r-- 1 asimmons staff     672 Apr 9 17:43 test.l.bc
      -rw-r--r-- 1 asimmons staff      27 Apr 9 17:43 test1.cpp
      -rwxr-xr-x 1 asimmons staff     536 Apr 9 17:43 test1.o
      -rw-r--r-- 1 asimmons staff      26 Apr 9 17:43 test2.cpp
      -rwxr-xr-x 1 asimmons staff     536 Apr 9 17:43 test2.o
      -rw-r--r-- 1 asimmons staff      26 Apr 9 17:43 testmain.cpp
      -rwxr-xr-x 1 asimmons staff     552 Apr 9 17:43 testmain.o
      asimmons-mac:test asimmons$

    And here is an example where I do specify a path (it doesn't work). I try to tell 'ar' to put the library under 'lib' and then link to lib/libtest.a:

      asimmons-mac:test asimmons$ mkdir lib
      asimmons-mac:test asimmons$ echo 'int main() { return 42; }' > testmain.cpp
      asimmons-mac:test asimmons$ echo 'int test1() { return -1; }' > test1.cpp
      asimmons-mac:test asimmons$ echo 'int test2() { return 1; }' > test2.cpp
      asimmons-mac:test asimmons$ g++ -c testmain.cpp
      asimmons-mac:test asimmons$ g++ -c test1.cpp
      asimmons-mac:test asimmons$ g++ -c test2.cpp
      asimmons-mac:test asimmons$ ar cr lib/libtest.a test1.o test2.o
      asimmons-mac:test asimmons$ g++ testmain.cpp lib/libtest.a
      llvm-ld, "-o=".(98638.achacks.o = "98638.achacks.exe"), -disable-opt -internalize-public-api-list=_start,malloc,free,__adddi3,__anddi3,__ashldi3,__ashrdi3,__cmpdi2,__divdi3,__fixdfdi,__fixsfdi,__fixunsdfdi,__fixunssfdi,__floatdidf,__floatdisf,__floatunsdidf,__iordi3,__lshldi3,__lshrdi3,__moddi3,__muldi3,__negdi2,__one_cmpldi2,__qdivrem,__adddi3,__anddi3,__ashldi3,__ashrdi3,__cmpdi2,__divdi3,__qdivrem,__fixdfdi,__fixsfdi,__fixunsdfdi,__fixunssfdi,__floatdidf,__floatdisf,__floatunsdidf,__iordi3,__lshldi3,__lshrdi3,__moddi3,__muldi3,__negdi2,__one_cmpldi2,__subdi3,__ucmpdi2,__udivdi3,__umoddi3,__xordi3,__subdi3,__ucmpdi2,__udivdi3,__umoddi3,__xordi3,__error /Users/asimmons/Development/alchemy-darwin-v0.5a/avm2-libc/lib/avm2-libc.l.bc /Users/asimmons/Development/alchemy-darwin-v0.5a/avm2-libc/lib/avm2-libstdc++.l.bc, lib/test.l.bc 98638.achacks.o
      llvm-ld: error: Cannot find linker input 'lib/test.l.bc'
      asimmons-mac:test asimmons$ ls -l
      total 56
      -rw-r--r-- 1 asimmons staff 552 Apr 9 17:46 98638.achacks.o
      drwxr-xr-x 3 asimmons staff 102 Apr 9 17:46 lib
      -rw-r--r-- 1 asimmons staff  27 Apr 9 17:45 test1.cpp
      -rwxr-xr-x 1 asimmons staff 536 Apr 9 17:46 test1.o
      -rw-r--r-- 1 asimmons staff  26 Apr 9 17:45 test2.cpp
      -rwxr-xr-x 1 asimmons staff 536 Apr 9 17:46 test2.o
      -rw-r--r-- 1 asimmons staff  26 Apr 9 17:45 testmain.cpp
      -rwxr-xr-x 1 asimmons staff 552 Apr 9 17:45 testmain.o
      asimmons-mac:test asimmons$ ls -l lib/
      total 8
      -rw------- 1 asimmons staff 1284 Apr 9 17:46 libtest.a
      asimmons-mac:test asimmons$

    But the linker errors out because it can't find lib/test.l.bc. Notice how in the first example 'test.l.bc' was generated alongside libtest.a, but in the second example test.l.bc was not generated. Where did it go? This is a contrived example, but in the project I'm trying to build with Alchemy, the make scripts generate libraries at full paths and then refer to them that way. It seems that Alchemy's 'ar' tool is broken if you try to generate a library anywhere other than '.'. Has anyone else seen this? Is there a workaround? FYI, I've also posted this question on the Alchemy forums.
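    A hedged workaround sketch (untested, not from the original post): since the wrapper appears to drop the .l.bc companion only when the archive name carries a directory component, creating the archive from inside the target directory may keep the pair together:

      mkdir -p lib
      (cd lib && ar cr libtest.a ../test1.o ../test2.o)
      ls lib/                      # expectation: libtest.a and test.l.bc side by side
      g++ testmain.cpp lib/libtest.a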

    Read the article

  • mythbuntu 12 - lirc device doesn't appear to even exist

    - by FrustratedWithFormsDesigner
    I'm trying to get a new installation of Mythbuntu working. So far, everything is OK except the remote. The sensor for the remote is on my Hauppauge WinTV HVR 1250. First I tried to run irw to see what was being picked up by the sensor:

      $ irw
      connect: No such file or directory

    Then trying to run lircd gives:

      $ lircd start
      lircd: can't open or create /var/run/lirc/lircd.pid

    I look for any lirc devices and find there are none:

      $ ls /dev/li*
      ls: cannot access /dev/li*: No such file or directory

    Just to be sure, I check in /proc/bus/input/devices, which shows me two power buttons (not sure why), the keyboard and mouse devices, and the audio devices. Nothing for the IR receiver on the tuner card (which I thought was strange, because shouldn't the tuner show up here?).

      $ cat /proc/bus/input/devices
      I: Bus=0019 Vendor=0000 Product=0001 Version=0000
      N: Name="Power Button"
      P: Phys=PNP0C0C/button/input0
      S: Sysfs=/devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0
      U: Uniq=
      H: Handlers=kbd event0
      B: PROP=0
      B: EV=3
      B: KEY=10000000000000 0

      I: Bus=0019 Vendor=0000 Product=0001 Version=0000
      N: Name="Power Button"
      P: Phys=LNXPWRBN/button/input0
      S: Sysfs=/devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
      U: Uniq=
      H: Handlers=kbd event1
      B: PROP=0
      B: EV=3
      B: KEY=10000000000000 0

      I: Bus=0003 Vendor=099a Product=7202 Version=0111
      N: Name="Wireless Keyboard/Mouse"
      P: Phys=usb-0000:00:10.1-2/input0
      S: Sysfs=/devices/pci0000:00/0000:00:10.1/usb8/8-2/8-2:1.0/input/input2
      U: Uniq=
      H: Handlers=sysrq kbd event2
      B: PROP=0
      B: EV=120013
      B: KEY=1000000000007 ff9f207ac14057ff febeffdfffefffff fffffffffffffffe
      B: MSC=10
      B: LED=7

      I: Bus=0003 Vendor=099a Product=7202 Version=0111
      N: Name="Wireless Keyboard/Mouse"
      P: Phys=usb-0000:00:10.1-2/input1
      S: Sysfs=/devices/pci0000:00/0000:00:10.1/usb8/8-2/8-2:1.1/input/input3
      U: Uniq=
      H: Handlers=kbd mouse0 event3
      B: PROP=0
      B: EV=1f
      B: KEY=4837fff072ff32d bf54444600000000 70001 20c100b17c000 267bfad9415fed 9e168000004400 10000002
      B: REL=143
      B: ABS=100000000
      B: MSC=10

      I: Bus=0000 Vendor=0000 Product=0000 Version=0000
      N: Name="HD-Audio Generic Line"
      P: Phys=ALSA
      S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input4
      U: Uniq=
      H: Handlers=event4
      B: PROP=0
      B: EV=21
      B: SW=2000

      I: Bus=0000 Vendor=0000 Product=0000 Version=0000
      N: Name="HD-Audio Generic Front Mic"
      P: Phys=ALSA
      S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input5
      U: Uniq=
      H: Handlers=event5
      B: PROP=0
      B: EV=21
      B: SW=10

      I: Bus=0000 Vendor=0000 Product=0000 Version=0000
      N: Name="HD-Audio Generic Rear Mic"
      P: Phys=ALSA
      S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input6
      U: Uniq=
      H: Handlers=event6
      B: PROP=0
      B: EV=21
      B: SW=10

      I: Bus=0000 Vendor=0000 Product=0000 Version=0000
      N: Name="HD-Audio Generic Front Headphone"
      P: Phys=ALSA
      S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input7
      U: Uniq=
      H: Handlers=event7
      B: PROP=0
      B: EV=21
      B: SW=4

      I: Bus=0000 Vendor=0000 Product=0000 Version=0000
      N: Name="HD-Audio Generic Line-Out"
      P: Phys=ALSA
      S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input8
      U: Uniq=
      H: Handlers=event8
      B: PROP=0
      B: EV=21
      B: SW=40

    According to dmesg, the driver was registered, but it doesn't look like any device was associated with it:

      $ dmesg | grep irc
      [   10.631162] lirc_dev: IR Remote Control driver registered, major 249

    So far, I've seen a number of forum pages suggesting that I use some trick to create a link between /dev/lirc and some other device that is the real IR sensor, like /dev/event5, but those cases assume that the real device shows up in /proc/bus/input/devices, and I don't see any such device there. Any suggestions on how to fix or further diagnose this?
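    A hedged diagnostic sketch (module and device names are assumptions, not from the original question): on kernels of the Mythbuntu 12 era the HVR-1250's IR receiver is typically handled by the cx23885 driver through rc-core, so it would surface under /sys/class/rc and as a plain input device rather than as /dev/lirc0:

      dmesg | grep -iE 'cx23885|rc[0-9]|ir'   # did the tuner's IR part probe at all?
      ls /sys/class/rc/                       # rc-core receivers register here as rc0, rc1, ...
      sudo modprobe ir-kbd-i2c                # some Hauppauge IR parts need this helper module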

    Read the article

  • Accessing home directory hangs

    - by Jeff
    Occasionally my laptop will hang when trying to access my home directory. The only fix so far is to reboot, and then the problem goes away for a week. /var/log/kern.log has the following error:

      Nov 21 13:54:39 Laptop1 kernel: [231480.428107] INFO: task ls:10104 blocked for more than 120 seconds.
      Nov 21 13:54:39 Laptop1 kernel: [231480.428114] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      Nov 21 13:54:39 Laptop1 kernel: [231480.428120] ls D f5dbf6a0 0 10104 9964 0x00000004
      Nov 21 13:54:39 Laptop1 kernel: [231480.428130] f3edbd40 00000086 00000001 f5dbf6a0 00000000 00000001 c175dfe0 c1868ec0
      Nov 21 13:54:39 Laptop1 kernel: [231480.428145] c1868ec0 054eadaf 0000d250 f5005ec0 ee940cc0 f3e5b300 f3edbcf8 c10e8bfa
      Nov 21 13:54:39 Laptop1 kernel: [231480.428159] f3edbd10 f3edbd10 f3edbd10 c102b505 fffba7e8 089fa000 f3edbd38 c10fb528
      Nov 21 13:54:39 Laptop1 kernel: [231480.428173] Call Trace:
      Nov 21 13:54:39 Laptop1 kernel: [231480.428189] [<c10e8bfa>] ? lru_cache_add_lru+0x2a/0x50
      Nov 21 13:54:39 Laptop1 kernel: [231480.428199] [<c102b505>] ? __kunmap_atomic+0x75/0xa0
      Nov 21 13:54:39 Laptop1 kernel: [231480.428207] [<c10fb528>] ? do_anonymous_page+0x1f8/0x280
      Nov 21 13:54:39 Laptop1 kernel: [231480.428218] [<c152b656>] __mutex_lock_slowpath+0xc6/0x120
      Nov 21 13:54:39 Laptop1 kernel: [231480.428225] [<c152b304>] mutex_lock+0x24/0x40
      Nov 21 13:54:39 Laptop1 kernel: [231480.428246] [<f83ab87c>] cifs_reconnect_tcon+0x13c/0x2a0 [cifs]
      Nov 21 13:54:39 Laptop1 kernel: [231480.428255] [<c152fa00>] ? vmalloc_fault+0xee/0xee
      Nov 21 13:54:39 Laptop1 kernel: [231480.428262] [<c152fc2f>] ? do_page_fault+0x22f/0x4a0
      Nov 21 13:54:39 Laptop1 kernel: [231480.428276] [<f83abe3c>] smb_init+0x2c/0x90 [cifs]
      Nov 21 13:54:39 Laptop1 kernel: [231480.428285] [<c11aa42e>] ? ext4_htree_store_dirent+0x2e/0x120
      Nov 21 13:54:39 Laptop1 kernel: [231480.428301] [<f83b0941>] CIFSSMBQPathInfo+0x41/0x210 [cifs]
      Nov 21 13:54:39 Laptop1 kernel: [231480.428319] [<f83c39e4>] ? cifs_get_inode_info+0x224/0x390 [cifs]
      Nov 21 13:54:39 Laptop1 kernel: [231480.428336] [<f83c3a21>] cifs_get_inode_info+0x261/0x390 [cifs]
      Nov 21 13:54:39 Laptop1 kernel: [231480.428354] [<f83bb35d>] ? build_path_from_dentry+0xcd/0x250 [cifs]
      Nov 21 13:54:39 Laptop1 kernel: [231480.428362] [<c102b69e>] ? kmap_atomic_prot+0xde/0x100
      Nov 21 13:54:39 Laptop1 kernel: [231480.428370] [<c152c4cd>] ? _raw_spin_lock+0xd/0x10
      Nov 21 13:54:39 Laptop1 kernel: [231480.428388] [<f83c6378>] ? _GetXid+0x58/0x80 [cifs]
      Nov 21 13:54:39 Laptop1 kernel: [231480.428405] [<f83c4f81>] cifs_revalidate_dentry_attr+0x111/0x1a0 [cifs]
      Nov 21 13:54:39 Laptop1 kernel: [231480.428423] [<f83c50e2>] cifs_getattr+0x52/0x120 [cifs]
      Nov 21 13:54:39 Laptop1 kernel: [231480.428431] [<c112c5b2>] vfs_getattr+0x42/0x70
      Nov 21 13:54:39 Laptop1 kernel: [231480.428448] [<f83c5090>] ? cifs_revalidate_dentry+0x40/0x40 [cifs]
      Nov 21 13:54:39 Laptop1 kernel: [231480.428455] [<c112c647>] vfs_fstatat+0x67/0x80
      Nov 21 13:54:39 Laptop1 kernel: [231480.428461] [<c112c680>] vfs_lstat+0x20/0x30
      Nov 21 13:54:39 Laptop1 kernel: [231480.428468] [<c112c946>] sys_lstat64+0x16/0x30
      Nov 21 13:54:39 Laptop1 kernel: [231480.428475] [<c11341ed>] ? link_path_walk+0x79d/0x8a0
      Nov 21 13:54:39 Laptop1 kernel: [231480.428483] [<c152c8e4>] syscall_call+0x7/0xb
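    A hedged diagnostic sketch (mount point is a placeholder): every frame in the trace between vfs_getattr and the mutex wait is inside the cifs module, so a stale CIFS/SMB mount somewhere under the home directory looks like a likelier culprit than the local ext4 disk. Identifying and unwedging it without a reboot:

      grep cifs /proc/mounts                 # which paths are CIFS mounts?
      sudo umount -l /home/jeff/some_share   # a lazy unmount usually releases tasks stuck on a dead share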

    Read the article

  • Rotating WebLogic Server logs to avoid large files using WLST.

    - by adejuanc
    By default, when WebLogic Server instances are started in development mode, the server automatically renames (rotates) its local server log file as SERVER_NAME.log.n. For the remainder of the server session, log messages accumulate in SERVER_NAME.log until the file grows to a size of 500 kilobytes. Each time the server log file reaches this size, the server renames the log file and creates a new SERVER_NAME.log to store new messages. By default, the rotated log files are numbered in order of creation as filenamennnnn (for example, myserver.log00001), where filename is the name configured for the log file. You can configure a server instance to include a time and date stamp in the file name of rotated log files; for example, server-name-%yyyy%-%mm%-%dd%-%hh%-%mm%.log.

    By default, when server instances are started in production mode, the server rotates its server log file whenever the file grows to 5000 kilobytes in size. It does not rotate the local server log file when the server is started. For more information about changing the mode in which a server starts, see "Change to production mode" in the Administration Console Online Help.

    You can change these default settings for log file rotation. For example, you can change the file size at which the server rotates the log file, or you can configure a server to rotate log files based on a time interval. You can also specify the maximum number of rotated files that can accumulate. After the number of log files reaches this number, subsequent file rotations delete the oldest log file and create a new log file with the latest suffix.

    Note: WebLogic Server sets a threshold size limit of 500 MB before it forces a hard rotation to prevent excessive log file growth.

    To rotate via WLST:

      # invoke WLST
      C:\> java weblogic.WLST

      # connect WLST to an Administration Server
      wls:/offline> connect('username','password')

      # navigate to the ServerRuntime MBean hierarchy
      wls:/mydomain/serverConfig> serverRuntime()
      wls:/mydomain/serverRuntime> ls()

      # navigate to the server LogRuntimeMBean
      wls:/mydomain/serverRuntime> cd('LogRuntime/myserver')
      wls:/mydomain/serverRuntime/LogRuntime/myserver> ls()
      -r--   Name               myserver
      -r--   Type               LogRuntime
      -r-x   forceLogRotation   java.lang.Void :

      # force the immediate rotation of the server log file
      wls:/mydomain/serverRuntime/LogRuntime/myserver> cmo.forceLogRotation()
      wls:/mydomain/serverRuntime/LogRuntime/myserver>

    The server immediately rotates the file and prints the following message:

      <Mar 2, 2012 3:23:01 PM EST> <Info> <Log Management> <BEA-170017> <The log file C:\diablodomain\servers\myserver\logs\myserver.log will be rotated. Reopen the log file if tailing has stopped. This can happen on some platforms like Windows.>
      <Mar 2, 2012 3:23:01 PM EST> <Info> <Log Management> <BEA-170018> <The log file has been rotated to C:\diablodomain\servers\myserver\logs\myserver.log00001. Log messages will continue to be logged in C:\diablodomain\servers\myserver\logs\myserver.log.>

    To specify the location of the archived log files, pass the directory with the -Dweblogic.log.LogFileRotationDir Java startup option:

      java -Dweblogic.log.LogFileRotationDir=c:\foo
           -Dweblogic.management.username=installadministrator
           -Dweblogic.management.password=installadministrator
           weblogic.Server

    For more information, read the following documentation:

    Using the WebLogic Scripting Tool
    http://download.oracle.com/docs/cd/E13222_01/wls/docs103/config_scripting/using_WLST.html

    Configuring WebLogic Logging Services
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/logging/config_logs.html
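    The same forced rotation can also be scripted non-interactively; a hedged sketch reusing the commands shown above (file name and credentials are placeholders):

      cat > force_rotate.py <<'EOF'
      connect('username','password')
      serverRuntime()
      cd('LogRuntime/myserver')
      cmo.forceLogRotation()
      disconnect()
      EOF
      java weblogic.WLST force_rotate.py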

    Read the article

  • OS X, Mercurial and MediaTemple problem

    - by bschaeffer
    I've installed Mercurial per MT's knowledge base article here. Working with it server-side over SSH from my Mac works fine; I can initialize repositories and the like. But pulling from the server or pushing from my Mac produces an error I don't understand. Here's what I get when I call hg push from my local installation (hash marks represent my server number):

      remote: Traceback (most recent call last):
      remote: File "/home/#####/users/.home/data/mercurial-1.5/hg", line 27, in ?
      remote: mercurial.dispatch.run()
      remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 16, in run
      remote: sys.exit(dispatch(sys.argv[1:]))
      remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 21, in dispatch
      remote: u = _ui.ui()
      remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/ui.py", line 38, in __init__
      remote: for f in util.rcpath():
      remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1200, in rcpath
      remote: _rcpath = os_rcpath()
      remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1174, in os_rcpath
      remote: path = system_rcpath()
      remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 41, in system_rcpath
      remote: path.extend(rcfiles(os.path.dirname(sys.argv[0]) +
      remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 30, in rcfiles
      remote: rcs.extend([os.path.join(rcdir, f)
      remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 75, in __getattribute__
      remote: self._load()
      remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 47, in _load
      remote: mod = _origimport(head, globals, locals)
      remote: ImportError: No module named osutil
      abort: no suitable response from remote hg!

    Mercurial on my Mac is configured as follows:

      [ui]
      username = John Smith
      editor = te -w
      remotecmd = ~/data/mercurial-1.5/hg

    My local single repo is configured as follows (hash marks represent my server number):

      [paths]
      default = ssh://mysite.com@s#####.gridserver.com/domains/mysite.com/html

    Mercurial on the server is configured with just a username:

      [ui]
      username = John Smith

    The server .bash_profile is configured as follows (per the installation guide):

      # Aliases
      alias ls-a='ls -a -l'

      # Added this as suggested by the MediaTemple guide
      export PYTHONPATH=${HOME}/lib/python:$PYTHONPATH
      export PATH=${HOME}/bin:$PATH

    I understand this probably isn't a MediaTemple problem, but more likely an installation problem. I would really appreciate any assistance with this. Thanks in advance!
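    A hedged diagnostic sketch (not from the original question): the remote traceback dies in demandimport with "No module named osutil", which is what a remote hg can report when PYTHONPATH isn't set for the non-interactive SSH session (.bash_profile is generally read only by login shells). Two quick checks:

      ssh mysite.com@s#####.gridserver.com 'echo $PYTHONPATH'   # empty output = the export never ran
      # If it is empty, moving the export to ~/.bashrc on the server, or making the
      # remote command self-contained in the local hgrc, are common workarounds:
      #   remotecmd = PYTHONPATH=$HOME/lib/python $HOME/data/mercurial-1.5/hg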

    Read the article

  • How to solve "java.io.IOException: error=12, Cannot allocate memory" calling Runtime#exec()?

    - by Andrea Francia
    On my system I can't run a simple Java application that starts a process. I don't know how to solve this. Could you give me some hints? The program is:

      [root@newton sisma-acquirer]# cat prova.java
      import java.io.IOException;

      public class prova {
          public static void main(String[] args) throws IOException {
              Runtime.getRuntime().exec("ls");
          }
      }

    The result is:

      [root@newton sisma-acquirer]# javac prova.java && java -cp . prova
      Exception in thread "main" java.io.IOException: Cannot run program "ls": java.io.IOException: error=12, Cannot allocate memory
          at java.lang.ProcessBuilder.start(ProcessBuilder.java:474)
          at java.lang.Runtime.exec(Runtime.java:610)
          at java.lang.Runtime.exec(Runtime.java:448)
          at java.lang.Runtime.exec(Runtime.java:345)
          at prova.main(prova.java:6)
      Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
          at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
          at java.lang.ProcessImpl.start(ProcessImpl.java:81)
          at java.lang.ProcessBuilder.start(ProcessBuilder.java:467)
          ... 4 more

    Configuration of the system:

      [root@newton sisma-acquirer]# java -version
      java version "1.6.0_0"
      OpenJDK Runtime Environment (IcedTea6 1.5) (fedora-18.b16.fc10-i386)
      OpenJDK Client VM (build 14.0-b15, mixed mode)
      [root@newton sisma-acquirer]# cat /etc/fedora-release
      Fedora release 10 (Cambridge)

    EDIT: Solution. This solves my problem, though I don't know exactly why:

      echo 0 > /proc/sys/vm/overcommit_memory

    Up-votes for whoever is able to explain :)

    Additional information, top output:

      top - 13:35:38 up 40 min, 2 users, load average: 0.43, 0.19, 0.12
      Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
      Cpu(s): 1.5%us, 0.5%sy, 0.0%ni, 94.8%id, 3.2%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 1033456k total, 587672k used, 445784k free, 51672k buffers
      Swap: 2031608k total, 0k used, 2031608k free, 188108k cached

    Additional information, free output:

      [root@newton sisma-acquirer]# free
                   total       used       free     shared    buffers     cached
      Mem:       1033456     588548     444908          0      51704     188292
      -/+ buffers/cache:     348552     684904
      Swap:      2031608          0    2031608
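    A hedged explanation sketch (the commands are standard; applying them to this setup is an assumption): spawning a child goes through fork(), which momentarily needs to commit as much virtual memory as the parent JVM already holds, so under strict overcommit accounting (mode 2) a large JVM can fail to launch even "ls". Heuristic mode (0) lets the copy-on-write fork through:

      cat /proc/sys/vm/overcommit_memory   # 2 = strict accounting, 0 = heuristic, 1 = always allow
      echo 0 > /proc/sys/vm/overcommit_memory
      # A smaller heap can also sidestep the problem:
      java -Xmx64m -cp . prova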

    Read the article

  • Find does not work in Expect Send command

    - by Sharjeel Sayed
    I run this bash command to display the contents of somefile.cf in a WebLogic domain directory:

      find $(/usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print $2}' | sed -e 's/weblogic.policy//' -e 's/security\///' -e 's/dep\///' | awk -F'/' '{print "/"$2"/"$3"/"$4"/somefile.cf"}' | sort | uniq) 2> /dev/null -exec ls {} \; -exec cat {} \;

    I tried incorporating this in an expect script and also escaped some special characters that would throw an error in expect, but it's still not working:

      send "echo ; echo 'Weblogic somefile.cf:' ; find \$(/usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print \$2}' | sed -e 's/weblogic.policy//' -e 's/security\\///' -e 's/dep\\///' | awk -F'/' '{print "/"\$2"/"\$3"/"\$4"/somefile.cf"}' | sort | uniq) 2> /dev/null -exec ls {} \\; -exec cat {} \\; ; echo\r"

    I guess it needs some more escaping of special characters, or I didn't escape the existing ones correctly. Any help would be appreciated.
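    A hedged workaround sketch (not from the original post): inside a double-quoted Tcl string, $, [ ], and \ are all substituted by expect before the remote shell ever sees them, so staging the pipeline in a throwaway script keeps the send line free of nested escaping:

      cat > /tmp/show_somefile.sh <<'EOF'
      find $(/usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print $2}' | sed -e 's/weblogic.policy//' -e 's/security\///' -e 's/dep\///' | awk -F'/' '{print "/"$2"/"$3"/"$4"/somefile.cf"}' | sort | uniq) 2> /dev/null -exec ls {} \; -exec cat {} \;
      EOF
      # ...and in the expect script the line shrinks to:
      # send "sh /tmp/show_somefile.sh\r"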

    Read the article

  • Django Admin Page missing CSS

    - by super9
    I saw this question and the recommendation from Django Projects here, but I still can't get this to work. My Django admin pages are not displaying the CSS at all. This is my current configuration.

    settings.py:

      ADMIN_MEDIA_PREFIX = '/media/admin/'

    httpd.conf:

      <VirtualHost *:80>
          DocumentRoot /home/django/sgel
          ServerName ec2-***-**-***-***.ap-**********-1.compute.amazonaws.com
          ErrorLog /home/django/sgel/logs/apache_error.log
          CustomLog /home/django/sgel/logs/apache_access.log combined
          WSGIScriptAlias / /home/django/sgel/apache/django.wsgi
          <Directory /home/django/sgel/media>
              Order deny,allow
              Allow from all
          </Directory>
          <Directory /home/django/sgel/apache>
              Order deny,allow
              Allow from all
          </Directory>
          LogLevel warn
          Alias /media/ /home/django/sgel/media/
      </VirtualHost>

      <VirtualHost *:80>
          ServerName sgel.com
          Redirect permanent / http://www.sgel.com/
      </VirtualHost>

    In addition, I also ran the following to create (I think) the symbolic link:

      ln -s /home/djangotest/sgel/media/admin/ /usr/lib/python2.6/site-packages/django/contrib/admin/media/

    UPDATE: In my httpd.conf file:

      User django
      Group django

    When I run ls -l in my /media directory:

      drwxr-xr-x 2 root root 4096 Apr 4 11:03 admin
      -rw-r--r-- 1 root root    9 Apr 8 09:02 test.txt

    Should that root user be django instead?

    UPDATE 2: When I enter ls -la in my /media/admin folder:

      total 12
      drwxr-xr-x 2 root root 4096 Apr 13 03:33 .
      drwxr-xr-x 3 root root 4096 Apr  8 09:02 ..
      lrwxrwxrwx 1 root root   60 Apr 13 03:33 media -> /usr/lib/python2.6/site-packages/django/contrib/admin/media/

    The thing is, when I navigated to /usr/lib/python2.6/site-packages/django/contrib/admin/media/, the folder was empty. So I copied the CSS, IMG and JS folders from my Django installation into /usr/lib/python2.6/site-packages/django/contrib/admin/media/ and it still didn't work.
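    A hedged sketch (paths taken from the question; note that ln -s takes the target first, then the link name): Apache serves /media/ from /home/django/sgel/media, so /media/admin needs to point at Django's admin media, not the reverse. The question also mixes /home/djangotest and /home/django; assuming /home/django is the real document root:

      # remove the reversed link that UPDATE 2 shows inside media/admin
      rm /home/django/sgel/media/admin/media
      rmdir /home/django/sgel/media/admin
      # link the other way around
      ln -s /usr/lib/python2.6/site-packages/django/contrib/admin/media /home/django/sgel/media/admin
      # confirm the CSS is actually reachable through the link
      ls -lL /home/django/sgel/media/admin/css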

    Read the article

  • Perl Lingua giving weird error on install

    - by user299306
    I am trying to install Perl Lingua onto a Unix system (Ubuntu, latest version). Of course I am root. When I go into the package directory and run 'perl Makefile.PL', I get this dumb error:

      [root@csisl27 Lingua-Lid-0.01]# perl Makefile.PL
      /opt/ls//lib does not exist at Makefile.PL line 48.

    I have tried playing with the path on line 48, but nothing changes. Here is what lines 48-50 look like:

      Line 48: die "$BASE/lib does not exist" unless -d "$BASE/lib";
      Line 49: die "$BASE/include does not exist" unless -d "$BASE/include";
      Line 50: die "lid.h is missing in $BASE/include" unless -e "$BASE/includ/lid.h";

    The variable $BASE is declared as this:

      $BASE = "/opt/ls/" if ($^O eq "linux" or $^O eq "solaris");
      $BASE = "/usr/local/" if ($^O eq "freebsd");
      $BASE = $ENV{LID_BASE_DIR} if (defined $ENV{LID_BASE_DIR});

    Now the Perl program I am trying to write simply looks like this (just my base):

      #!/usr/bin/perl
      use Lingua::LinkParser;
      use strict;

      print "Hello world!\n";

    When I run this trying to use Lingua, here is my error:

      [root@csisl27 assign4]# ./perl_parser_1.pl
      Can't locate Lingua/LinkParser.pm in @INC (@INC contains: /usr/lib/perl5/site_perl/5.10.0/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.10.0 /usr/lib/perl5/vendor_perl/5.10.0/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.10.0 /usr/lib/perl5/5.10.0/x86_64-linux-thread-multi /usr/lib/perl5/5.10.0 /usr/lib/perl5/site_perl /usr/lib/perl5/vendor_perl .) at ./perl_parser_1.pl line 3.
      BEGIN failed--compilation aborted at ./perl_parser_1.pl line 3.

    I tried installing this from CPAN too; it still doesn't work properly.
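    A hedged sketch (LID_BASE_DIR comes straight from the quoted Makefile.PL; the install prefix is a placeholder): the die fires because /opt/ls/lib doesn't exist on Ubuntu, and the check can be pointed at wherever the lid C library actually lives:

      export LID_BASE_DIR=/path/to/lid     # must contain lib/ and include/lid.h
      perl Makefile.PL && make && make install
      # Separately, note the test script loads Lingua::LinkParser, which is a
      # different distribution from Lingua::Lid; each needs its own install:
      perl -MLingua::Lid -e 'print "Lingua::Lid ok\n"'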

    Read the article

  • Forking with Pipes

    - by Luke
    Hello, I have tried to do fork() and piping in main and it works perfectly fine, but when I try to implement it in a function, for some reason I don't get any output. This is my code:

      void cmd(int **pipefd, int count, int type, int last);

      int main(int argc, char *argv[])
      {
          int pipefd[3][2];
          int i, total_cmds = 3, count = 0;
          int in = 1;

          for (i = 0; i < total_cmds; i++) {
              pipe(pipefd[count++]);
              cmd(pipefd, count, i, 0);
          }

          /* Last command */
          cmd(pipefd, count, i, 1);
          exit(EXIT_SUCCESS);
      }

      void cmd(int **pipefd, int count, int type, int last)
      {
          int child_pid, i, i2;

          if ((child_pid = fork()) == 0) {
              if (count == 1) {
                  dup2(pipefd[count - 1][1], 1);    /* first command */
              } else if (last != 0) {
                  dup2(pipefd[count - 2][0], 0);    /* middle commands */
                  dup2(pipefd[count - 1][1], 1);
              } else if (last == 1) {
                  dup2(pipefd[count - 1][0], 0);    /* last command */
              }

              for (i = 0; i < count; i++) {         /* close pipes */
                  for (i2 = 0; i2 < 2; i2++) {
                      close(pipefd[i][i2]);
                  }
              }

              if (type == 0) {
                  execlp("ls", "ls", "-al", NULL);
              } else if (type == 1) {
                  execlp("grep", "grep", ".bak", NULL);
              } else if (type == 2) {
                  execl("/usr/bin/wc", "wc", NULL);
              } else if (type == 3) {
                  execl("/usr/bin/wc", "wc", "-l", NULL);
              }
              perror("exec");
              exit(EXIT_FAILURE);
          } else if (child_pid < 0) {
              perror("fork");
              exit(EXIT_FAILURE);
          }
      }

    I checked the file descriptors and it is opening the right ones; not sure what the problem could be.
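    A hedged reading (not from the original post): the chain appears to implement the pipeline below, and two things stand out. First, `int pipefd[3][2]` in main decays to `int (*)[2]`, not `int **`, so every `pipefd[i][j]` inside cmd() reinterprets the array memory as pointers; declaring the parameter as `void cmd(int (*pipefd)[2], ...)` would match the caller's layout. Second, since cmd() is called with last=0 for the middle commands, the `else if (last != 0)` branch only ever fires for the final call, leaving the middle children with no redirection at all.

      # the pipeline the fork/exec chain seems intended to build (types 0..3):
      ls -al | grep .bak | wc | wc -l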

    Read the article

  • Git-svn branch hoses dcommit when using an odd branch structure

    - by Chuck Vose
    I had a boss, past tense, who decided to put svn branches in the same folder as trunk. Normally this wouldn't affect me that much, but since I'm using git-svn things aren't going so well. After I did a fetch, it created a folder for each branch in my root folder, so I have three folders: drupal, trunk, and client. The drupal folder is git's master branch; client and trunk are the svn branches. Merging and committing work great; in fact, everything git-related is working superbly. However, dcommit is totally hosed: it's trying to commit a folder called client and one called trunk. I can't even imagine what havoc this would cause for svn later on. So my question is: what have I done wrong in my .git/config, and is there anything I can do to fix this, or am I going to have to suffer and go back to using svn? Please don't make me go back. I don't think I can take it anymore. Bastard boss knows how to leave a legacy.

      [svn-remote "svn"]
          url = https://svn.mydomain.com/svn/project_name
          fetch = trunk:refs/remotes/trunk
          branches = *:refs/remotes/*
          tags = tags/*:refs/remotes/tags/*

    Normally the branches line would look like this (when using --stdlayout):

      branches = branches/*:refs/remotes/branches/*

    ls output is thus:

      $ ls
      client/ docs/ drupal/ sql/ trunk/
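    A hedged sketch (folder names taken from the ls output; untested against this repository): with `branches = *:refs/remotes/*`, the glob also matches trunk and every other top-level folder, which would explain each one being treated as a branch. Mapping only the real branch folders explicitly, then re-fetching in a fresh clone, is one way out:

      # in .git/config, replace the catch-all glob with explicit mappings,
      # assuming client is the only real svn branch here:
      #   fetch = trunk:refs/remotes/trunk
      #   fetch = client:refs/remotes/client
      git svn fetch
      git branch -r    # expectation: only the mapped refs remain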

    Read the article

  • ruby gem not found although it is installed

    - by Eimantas
    I found some similar problems here on SO, but none seem to match my case (sorry if I overlooked one). Here's my problem: I installed the oauth-plugin gem into the system gems directory, but trying to use it in a Rails app tells me that it's not being found. Here's the output of the relevant commands.

    Installation:

      % s gem install oauth-plugin
      Successfully installed oauth-plugin-0.3.14
      1 gem installed
      Installing ri documentation for oauth-plugin-0.3.14...
      Installing RDoc documentation for oauth-plugin-0.3.14...

    gem which oauth-plugin output:

      % gem which oauth-plugin
      /usr/lib/ruby/gems/1.8/gems/oauth-plugin-0.3.14/lib/oauth-plugin.rb

    gem env output:

      % gem env
      RubyGems Environment:
        - RUBYGEMS VERSION: 1.3.6
        - RUBY VERSION: 1.8.7 (2009-12-24 patchlevel 248) [i686-darwin10.2.0]
        - INSTALLATION DIRECTORY: /usr/lib/ruby/gems/1.8
        - RUBY EXECUTABLE: /usr/bin/ruby
        - EXECUTABLE DIRECTORY: /usr/bin
        - RUBYGEMS PLATFORMS:
          - ruby
          - x86-darwin-10
        - GEM PATHS:
          - /usr/lib/ruby/gems/1.8
          - /Users/eimantas/.gem/ruby/1.8
        - GEM CONFIGURATION:
          - :update_sources => true
          - :verbose => true
          - :benchmark => false
          - :backtrace => true
          - :bulk_threshold => 1000
          - :gem => ["--no-ri", "--no-rdoc"]
          - :sources => ["http://gems.ruby.lt/", "http://rubygems.org/"]
        - REMOTE SOURCES:
          - http://gems.ruby.lt/
          - http://rubygems.org/

    Doing ls -l /usr/lib/ruby shows this:

      % ls -l /usr/lib/ruby
      lrwxr-xr-x 1 root wheel 76 Aug 14 2009 /usr/lib/ruby -> ../../System/Library/Frameworks/Ruby.framework/Versions/Current/usr/lib/ruby

    And the gem in question is in its intended location. This isn't the only gem that RubyGems fails to find, even though each one is located where it should be. Any guidance towards the solution is much appreciated.
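    A hedged diagnostic sketch (not from the original question): `gem which` only proves the CLI's RubyGems can see the gem; the Rails app may be running under a different Ruby or an older RubyGems. Checking from the exact interpreter the app uses:

      ruby -rubygems -e "gem 'oauth-plugin'; puts 'loadable'"
      which -a ruby gem     # more than one ruby/gem pair on PATH is a common culprit
      # In a Rails 2.x app the gem also has to be declared in config/environment.rb:
      #   config.gem 'oauth-plugin'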

    Read the article

  • Can't get SFTP to work in PHP

    - by Chris Kloberdanz
    I am writing a simple SFTP client in PHP because we have the need to programmatically retrieve files from n remote servers. I am using the PECL SSH2 extension. I have run up against a roadblock, though. The documentation on php.net suggests that you can do this:

      $stream = fopen("ssh2.sftp://$sftp/path/to/file", 'r');

    However, I have an ls method that attempts something similar:

      public function ls($dir)
      {
          $rd = "ssh2.sftp://{$this->sftp}/$dir";
          $handle = opendir($rd);

          if (!is_resource($handle)) {
              throw new SFTPException("Could not open directory.");
          }

          while (false !== ($file = readdir($handle))) {
              if (substr($file, 0, 1) != '.') {
                  print $file . "\n";
              }
          }

          closedir($handle);
      }

    I get the following error:

      PHP Warning: opendir(): Unable to open ssh2.sftp://Resource id #5/outgoing on remote host

    This makes perfect sense, because that's what happens when you cast a resource to a string. Is the documentation wrong? I tried replacing the resource with host, username, and password, and that didn't work either. I know the path is correct because I can run SFTP from the command line and it works fine. Has anyone else tried to use the SSH2 extension with SFTP? Am I missing something obvious here?

    UPDATE: I set up SFTP on another machine in-house and it works just fine. So there must be something about the server I am trying to connect to that isn't working.
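    A hedged workaround sketch (host and credentials are placeholders; intval() on a resource yields its numeric id): the wrapper chokes on the embedded space in "Resource id #5", and interpolating the resource's integer id instead is a commonly used approach:

      php -r '
          $conn = ssh2_connect("sftp.example.com", 22);   // placeholder host
          ssh2_auth_password($conn, "user", "pass");      // placeholder credentials
          $sftp = ssh2_sftp($conn);
          $dir  = "ssh2.sftp://" . intval($sftp) . "/outgoing";
          var_dump(opendir($dir));
      '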

    Read the article
