Search Results

Search found 1603 results on 65 pages for 'ls'.

Page 38 of 65

  • How do you determine the OWNER of an Oracle Database

    - by Kwang Mark Eleven
    When you install an Oracle database on a Unix server, the Unix user id you use for the installation becomes the OWNER of the database. What is the most reliable and general way of determining, in a shell script, which Unix user owns an Oracle installation? Can you grep a file created by the installation to find this information, or do I have to resort to running ls on a specific file in a specific directory? If the name of that file is also variable, I would need a way of determining its name and path. Thanks in advance for your time.
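
    One hedged approach (a sketch; $ORACLE_HOME and the bin/oracle path are assumptions about the install, not taken from the question): the oracle binary is created by the installing user, so its owner can be read directly.

        #!/bin/sh
        # Print the Unix owner of an Oracle installation (sketch).
        # Assumes ORACLE_HOME is set and the conventional bin/oracle binary exists.
        stat -c '%U' "$ORACLE_HOME/bin/oracle"               # GNU stat (Linux)
        # Where stat -c is unavailable (e.g. Solaris), ls + awk gives the same field:
        ls -ld "$ORACLE_HOME/bin/oracle" | awk '{print $3}'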

    Read the article

  • List/remove files, with filenames containing string that's "more than a month ago"?

    - by Martin Tóth
    I store some data in files which follow this naming convention:

        /interesting/data/filename-YYYY-MM-DD-HH-MM

    How do I find the ones whose date in the file name is older than now minus one month and delete them? Files may have changed since they were created, so going by last modification date is no good. What I'm doing now is filtering them in Python:

        import os
        import commands
        from datetime import datetime, timedelta

        prefix = '/interesting/data/filename-'
        names = commands.getoutput('ls {0}*'.format(prefix)).splitlines()
        all_files = map(lambda name: {
            'name': name,
            'date': datetime.strptime(name, '{0}%Y-%m-%d-%H-%M'.format(prefix))
        }, names)
        month = datetime.now() - timedelta(days=30)
        to_delete = filter(lambda item: item['date'] < month, all_files)
        map(lambda item: os.remove(item['name']), to_delete)  # remove by name, not the dict itself

    Is there a (one-liner) bash solution for this?
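
    A minimal bash sketch of one approach (assumes GNU date and exactly the filename layout above; since the timestamp format sorts lexicographically, a plain string comparison against a cutoff works):

        cutoff=$(date -d '30 days ago' +%Y-%m-%d-%H-%M)
        for f in /interesting/data/filename-*; do
            # compare the timestamp part of the name against the cutoff string
            [[ "${f##*filename-}" < "$cutoff" ]] && rm -- "$f"
        done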

    Read the article

  • Can't create a file even if rights allow and I've relogged in

    - by stiv
    I'm trying to create a file in a folder with group write access; user tomcat7 is in the group. Why isn't it working?

        skr@konrad~/data/asu$ sudo -u tomcat7 sh
        $ whoami
        tomcat7
        $ echo > /home/skr/data/asu/g.gz.index
        sh: 2: cannot create /home/skr/data/asu/g.gz.index: Permission denied
        $ ls -la /home/skr/data/asu/
        total 18708
        drwxrwxr-x  2 skr skr 4096 Sep 29 08:38 .
        drwxrwxr-x 85 skr skr 4096 Jul 30 00:42 ..
        $ grep ^skr /etc/group
        skr:x:1002:tomcat7:mail

    I tried logging out, but it doesn't help. Any ideas?

    Read the article

  • Linux file permissions seem right but I can't write to a directory

    - by CaseyB
    I believe that I have the permissions set correctly but I can't write to a directory. Here's my problem:

        cborders@Kraken:/var/www$ ls -la
        total 12
        drwxrwxr-x  2 webz webz 4096 2011-12-30 14:58 ./
        drwxr-xr-x 13 root root 4096 2011-12-30 14:58 ../
        -rw-rw-r--  1 webz webz  177 2011-12-30 14:58 index.html
        cborders@Kraken:/var/www$ id cborders
        uid=1000(cborders) gid=1000(cborders) groups=1000(cborders),4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),113(lpadmin),114(admin),1002(webz)
        cborders@Kraken:/var/www$ mkdir test
        mkdir: cannot create directory `test': Permission denied

    The owner of the directory is a user called webz and the permissions allow the user and group rwx access to it. I am in the webz group but I still can't make any changes. What am I doing wrong here?
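
    One thing worth checking (an assumption, not a diagnosis from the post): group membership recorded in /etc/group only takes effect in sessions started after the user was added, so the running shell may not carry the webz group yet even though id cborders lists it. A quick way to compare, and to try the operation with the group applied, is:

        groups                                # groups of the *current* session
        id cborders                           # groups recorded in the user database
        sg webz -c 'mkdir /var/www/test'      # run one command with webz as the group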

    Read the article

  • Change permission to /proc/net/ip_conntrack on Ubuntu server 9.10

    - by bjarkef
    Hi, I have a script that needs to extract certain information from the /proc/net/ip_conntrack file once in a while. I do not wish to run this script as the root user. The default permissions for the file are:

        $ ls -lah /proc/net/ip_conntrack
        -r--r----- 1 root root 0 2010-03-28 12:18 /proc/net/ip_conntrack

    I can change them with:

        sudo chmod o+r /proc/net/ip_conntrack

    But that does not stick after a reboot. Is there some configuration file for file permissions in the /proc directory in Ubuntu Server 9.10? Or do I just have to stick a chmod line in some startup script?
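
    A minimal sketch of the startup-script route (assuming the stock /etc/rc.local is still in place on this Ubuntu Server 9.10 install; /proc entries are recreated at every boot, so the permission genuinely has to be reapplied somewhere):

        #!/bin/sh -e
        # /etc/rc.local - runs at the end of each multi-user boot
        chmod o+r /proc/net/ip_conntrack
        exit 0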

    Read the article

  • Weird behaviour from a cron job

    - by The DOCTOR from TARDIS
    I have set the crontab like this:

        */5 0 * * * /www/permitChat.sh

    and /www/permitChat.sh is this:

        # We are setting the name of the file
        # in the variable, along with the complete path.
        sFilePath=`date +\/www\/ChatLogs\/%Y\/%m/%d_%m_%Y.txt`

        # First we set its permissions to
        # readable by all users, and then
        # modify them to be writable only by root.
        chmod a=r $sFilePath
        chmod u+w $sFilePath
        ls -lh $sFilePath

    The trouble I am facing is that the cron job gets executed after 12:00 PM every day, instead of running from 12:00 AM to 01:00 AM every 5 minutes. What could be wrong? All my system variables appear to be synced.
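
    A small check worth running (a sketch, assuming a syslog-style cron log exists on this system): the hour field 0 in */5 0 * * * covers 00:00 to 00:55, so runs landing after noon usually point at a clock or timezone mismatch between the system and the cron daemon.

        date                                    # what the system thinks the time is
        grep CRON /var/log/syslog | tail -n 5   # when cron actually fired (log path may differ)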

    Read the article

  • Using Cygwin in Windows 8, chmod 600 does not work as expected?

    - by Castaa
    I'm trying to change the permissions on my key file key.pem in Cygwin 1.7.11. It has the permission flags -rw-rw----. Running:

        chmod -c 600 key.pem

    reports:

        mode of 'key.pem' changed from 0660 (rw-rw----) to 0600 (rw-------)

    However, ls -l key.pem still shows the permission flags as -rw-rw----. The reason I'm asking is that ssh complains:

        Permissions 0660 for 'key.pem' are too open.

    when I try to ssh into my Amazon EC2 instance. Is this an issue with Cygwin & Windows 8 NTFS, or am I missing something?
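
    One setting that is often involved in behaviour like this (an assumption about the setup, not something confirmed in the post): when the filesystem holding the key is mounted with the noacl option, Cygwin stops mapping POSIX modes onto NTFS ACLs and chmod has little effect, so checking the mount flags and the derived ACL is a quick first step.

        mount                 # look for "noacl" on the filesystem holding key.pem
        getfacl key.pem       # the ACL Cygwin actually derives for the file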

    Read the article

  • GRUB2 UEFI booting from LVM on RAID (with XEN)

    - by pavian
    I'm experimenting with booting the root fs from an LVM volume inside a RAID array (mdraid superblock 1.x) via UEFI with GRUB2. I'm also using the Xen hypervisor. From the GRUB command line I can see my LVM volume (with the ls command), but I get a kernel panic due to "unable to mount root fs". I saw a note in this article saying it's probably impossible to boot a root fs from RAID via UEFI, but I don't understand why not. Is it possible to boot Linux with this configuration without an initramfs (which I don't want to use)?

    Read the article

  • New users' directories owned by root

    - by dotancohen
    On a CentOS server running Plesk, new users are added for each new domain. The users' home directories are in /var/www/vhosts/. New users' home directories are owned by root, and need to have an admin with root access come in and chown them:

        dotan@sh2:~$ echo $HOME
        /var/www/vhosts/someDomain.com
        dotan@sh2:~$ pwd
        /var/www/vhosts/someDomain.com
        dotan@sh2:~$ touch testFile
        touch: cannot touch `testFile': Permission denied
        dotan@sh2:~$ ls -la ../ | grep someDomain
        drwxr-xr-x 13 root root 4096 2012-08-07 19:47 someDomain.com
        dotan@sh2:~$ whoami
        dotan
        dotan@sh2:~$ chown dotan /var/www/vhosts/someDomain.com
        chown: changing ownership of `/var/www/vhosts/someDomain.com': Operation not permitted
        dotan@sh2:~$

    Why might the new users' directories be owned by root, and how might we fix this? Thanks.

    Read the article

  • Disable NSS LDAP IPv6 (AAAA) lookups

    - by pilcrow
    Question: How can I disable inet6 AAAA queries for my LDAP server during (LDAP-backed) NSS lookups on a CentOS (RHEL) 5 machine?

    Background: I have servers configured to consult ldap://ldap.internal for NSS passwd and group lookups. Every relevant NSS lookup (for example the getpwuid(3) implied by an ls -l, which needs to translate UIDs to network user names) performs the following DNS dance before connecting to the LDAP server:

        AAAA? ldap.internal          -> (no records)
        AAAA? ldap.internal.internal -> NXDomain
        A?    ldap.internal          -> 192.168.3.89

    I'd like to skip the first two queries completely. Configuration:

        [server]$ cat /etc/redhat-release
        CentOS release 5.4 (Final)
        [server]$ grep ^passwd /etc/nsswitch.conf
        passwd: files ldap
        [server]$ grep ^uri /etc/ldap.conf
        uri ldap://ldap.internal/

    For what it's worth, IPv6 support is otherwise disabled on these systems:

        [server]$ grep off /etc/modprobe.conf
        alias ipv6 off
        alias net-pf-10 off
        [server]$ echo "$(ip a | grep -c inet6) IPv6-enabled interfaces"
        0 IPv6-enabled interfaces
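
    One workaround sketch (an assumption about what fits this environment, not taken from the post): if ldap.internal is listed in /etc/hosts and the hosts line in nsswitch.conf checks files before dns, the resolver never issues any DNS query for that name, AAAA or otherwise.

        # /etc/hosts
        192.168.3.89   ldap.internal

        # /etc/nsswitch.conf
        hosts: files dns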

    Read the article

  • Logfiles filling with iptables logging

    - by Peter I
    OS: Debian 6 (server). I have several log files which are filling up:

        user@server:/var/log$ ls -lahS | head
        total 427G
        -rw-r--r-- 1 root root 267G Nov  2 17:29 bandwidth
        -rw-r----- 1 root adm   44G Nov  2 17:29 kern.log
        -rw-r----- 1 root adm   27G Nov  2 17:29 debug
        -rw-r----- 1 root adm   23G Oct 27 06:33 kern.log.1
        -rw-r----- 1 root adm   17G Nov  2 17:29 messages
        -rw-r----- 1 root adm   14G Oct 27 06:33 debug.1
        -rw-r----- 1 root adm   12G Nov  2 17:29 syslog
        -rw-r----- 1 root adm   12G Nov  1 06:26 syslog.1
        -rw-r----- 1 root adm  9.0G Oct 27 06:33 messages.1

    So I looked in /etc/iptables.up.rules, which has these lines in it:

        -A FORWARD -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT:
        -A FORWARD -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN:
        -A OUTPUT  -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT:
        -A INPUT   -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN:

    Deleting those lines would solve my problem, but how can I edit them without losing their functionality?
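
    A sketch of one way to keep the LOG rules but stop them flooding the main logs (assumes rsyslog, the default syslog daemon on Debian 6): divert messages carrying the BANDWIDTH_ prefix into their own file, discard them from everything else, and let logrotate keep that file in check.

        # /etc/rsyslog.d/bandwidth.conf
        :msg, contains, "BANDWIDTH_"  -/var/log/iptables-bandwidth.log
        & ~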

    Read the article

  • Alternatives to Crashplan for VPS?

    - by Chloe
    I use SFTP Net Drive to mount a remote VPS so I can back it up. However, it has taken 3+ days to scan! I ran 'ls -lR' from my desktop over the mounted network drive and it only took about 5 minutes to list all the files! There are only about 5000 files and 2 GB. I know Crashplan can run headless on the VPS itself, but that sounds like a pain to set up, and it takes a lot of memory on the server. The VPS doesn't have much memory to spare; it has less than my desktop. Is there another program that can speak the Crashplan backup protocol and has a command-line interface?

    Read the article

  • configuring cgi-bin using .htaccess

    - by Alexandru
    I'm trying to configure a directory as cgi-bin using .htaccess, but when I try to access the executables, the files are downloaded. I'm using apache2.2. What is the problem? My .htaccess looks like:

        # cat www/cgi-bin/.htaccess
        Options +ExecCGI
        AddHandler cgi-script cgi pl

    File permissions are:

        # ls -1la www/cgi-bin/
        total 60
        drwxr-xr-x 2 root root  4096 iun 10 19:22 .
        drwxr-xr-x 5 root root  4096 iun 10 19:18 ..
        -rw-r--r-- 1 root root    46 iun 10 19:23 .htaccess
        -rwxr-xr-x 1 root root 15358 iun 10 19:23 paperload.cgi
        -rwxr-xr-x 1 root root 12728 iun 10 19:23 papers.cgi
        -rwxr-xr-x 1 root root 12593 iun 10 19:23 paperview.cgi
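
    One server-side setting that has to agree with this .htaccess (a hedged sketch; the directives are standard Apache 2.2, the path is an assumption): the enclosing <Directory> block must permit .htaccess files to set Options and handlers. When AllowOverride is None, the .htaccess is never read at all, so the scripts fall through to the default handler and are served as plain files.

        # in the vhost / main server config covering this directory
        <Directory /path/to/www/cgi-bin>
            AllowOverride Options FileInfo
        </Directory>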

    Read the article

  • Reenabling the Spotlight Menubar item in Mac OS X 10.6

    - by Tim Visher
    I believe I followed the instructions here to disable Spotlight indexing and remove the menu bar item. I re-enabled indexing just fine, but when I changed the permissions back to 744, the Spotlight search position came back (as in the space it would normally occupy), but the actual icon and search box will not show up. If I click that portion of the screen I get a blue box, but I can't type anything into anything. Currently, permissions look like this:

        [~]$ ll /System/Library/CoreServices/Search.bundle.bak/Contents/MacOS/
        total 648
        -rwxr-xr-x 1 root wheel 835K Sep 17 14:48 Search*

    ll is an alias:

        ll='${LS_PREAMBLE} -hl'

    with $LS_PREAMBLE set to:

        [~]$ echo $LS_PREAMBLE
        ls -GF

    (Ignore the .bak extension. I decided that until I found a way to fully restore it, I would just remove it entirely, following the directions here.) That looks right to me, and obviously something is launching, but the UI elements aren't there. So how can I restore it? Thanks in advance!

    Read the article

  • Use msysgit/"Git for Windows" to navigate Windows shortcuts?

    - by Darthfett
    I use msysgit on Windows to use Git, but I often want to navigate through a Windows-style *.lnk shortcut. I typically manage my file structure through Windows Explorer, so using a different type of shortcut (such as creating a hard or soft link in Git) isn't feasible. How would I navigate through this type of shortcut? For example:

        PCUser@PCName ~
        $ cd Desktop

        PCUser@PCName ~/Desktop
        $ ls
        Scripts.lnk

        PCUser@PCName ~/Desktop
        $ cd Scripts.lnk
        sh.exe": cd: Scripts.lnk: Not a directory

    Is it possible to change this behavior, so that instead of getting an error, it just goes to the location of the directory? Alternatively, is there a command to get the path in a *.lnk file?

    Read the article

  • Linux - Multiple service statuses with one command

    - by Jimbo
    I'm trying to retrieve a list of multiple service statuses on Unix, using the service command. The service names all start with the string transmission-daemon, for example. I need to be able to list multiple services' statuses with a single command. Here is what I'm currently trying (and failing) with.

    Here I'm trying to grab a list of statuses using grep:

        service $(ls /etc/init.d | grep "transmission-daemon") status

    Here I'm trying to list all statuses and then grep for them:

        service --status-all | grep "transmission-daemon"

    This produces output which isn't much help. How can I effectively achieve what I need with a single command, so that I can then continue piping to awk for further customisation? Desired example output:

        transmission-daemon started
        transmission-daemon2 stopped
        transmission-daemon3 started
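
    A sketch of one way to get that shape of output (assumes the services are SysV init scripts under /etc/init.d and that each one's status subcommand prints a line containing "running" when it is up; the started/stopped wording is produced by the loop, not by service):

        for s in /etc/init.d/transmission-daemon*; do
            name=$(basename "$s")
            if service "$name" status 2>/dev/null | grep -q running; then
                echo "$name started"
            else
                echo "$name stopped"
            fi
        done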

    Read the article

  • One samba directory is very slow

    - by Tim Rosser
    We have a RHEL server running as a Samba server for our Windows network, which has been running fine for ages. All of a sudden this morning one specific folder became really slow and sometimes inaccessible. It's the /home/ directory (containing all of the user-specific stuff, like their Windows desktops, documents etc.). It's not only really slow over the network; when I try to use ls to view the directory it just hangs. I'm getting loads of messages like the following in /var/log/messages:

        Mar 20 09:53:32 zeus smbd[32378]: [2012/03/20 09:53:32, 0] smbd/service.c:set_current_service(184)
        Mar 20 09:53:32 zeus smbd[32378]:   chdir (/opt/shares/home/tim.rosser) failed

    Read the article

  • linux + automated rsync command

    - by Diana
    My goal is to copy /tmp/my_file from 10.10.10.1 to my Linux machine without a login and password. I set up the passwords file with the right password (secret123), so rsync should work; please advise why I get Permission denied. Note: 10.10.10.1 is a Linux machine running Red Hat 5.3.

        rsync -WavH --password-file=/tmp/passwords --progress [email protected]:/tmp/my_file .
        Permission denied.
        rsync: connection unexpectedly closed (0 bytes read so far)
        rsync error: error in rsync protocol data stream (code 12) at io.c(165)

        more /tmp/passwords
        secret123

        ls -ltr passwords
        -rwxr-xr-x 1 root root 10 Sep 12 17:32 passwords
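
    One hedged observation behind the sketch below: rsync's --password-file only applies to rsync daemon transfers (the host::module or rsync:// syntax); with the single-colon host:/path syntax shown above the transfer runs over ssh, the option is ignored, and ssh still wants a password. The usual password-less setup for the ssh form is key-based authentication, roughly (the user name here is a placeholder):

        # on the destination machine
        ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
        ssh-copy-id root@10.10.10.1
        rsync -WavH --progress root@10.10.10.1:/tmp/my_file .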

    Read the article

  • Email is not sending when the script is running by CRON

    - by Adam Blok
    I wrote a simple backup bash script, and at the end of it, it sends me an email saying that the backup is ready. Everything works perfectly when I run the script from a terminal (as root), but when the script is run by cron, the email is not sent :-/.

        #!/bin/sh
        filename=$(date +%d-%m-%Y)
        backup_dir="/mnt/backup/"
        email_from_name="BACKUP"
        email_to="my@email"
        email_subject="Backup is ready"
        email_body_file="/tmp/backup-email-body.txt"

        tar czf "$backup_dir$filename.tgz" "/home/www"

        echo "Subject: $email_subject" > $email_body_file
        ls $backup_dir -sh >> $email_body_file
        sendmail -F $email_from_name -t $email_to < $email_body_file
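
    A common culprit worth ruling out (an assumption, not confirmed by the post): cron runs with a minimal PATH, typically /usr/bin:/bin, and sendmail usually lives in /usr/sbin, so the last line can fail under cron even though it works in a root shell. A hedged tweak:

        # either set the PATH near the top of the script...
        PATH=/usr/sbin:/usr/bin:/bin
        # ...or call sendmail by its full path
        /usr/sbin/sendmail -F "$email_from_name" -t "$email_to" < "$email_body_file"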

    Read the article

  • Linux: Can I link multiple destinations via softlinks?

    - by kds1398
    I'm attempting to end up with something similar to this:

        $ ls -l
        lrwxrwxrwx 1 user group 4 Jun 28 2010 foo -> /home/bar
        lrwxrwxrwx 1 user group 4 Jun 29 2010 foo -> /etc/bar

    The intention is to be able to move a file to foo and have it go to both destination directories for now. The goal is to eventually unlink the /home/bar link after confirming there are no issues with moving the files to /etc/bar. I am restricted in that I am unable to change or add to the process that moves the files.

    Read the article

  • Deny directory browsing in a Proftpd / Ubuntu Installation

    - by skylarking
    I used this guide to set up a ProFTPD installation on an Ubuntu 8.04 server. It works well, but the generic user (userftp) can run ls and is able to change to any directory and browse freely on the server, from the root / on up. I added the line /bin/false to /etc/shells in the hope that it would prevent this. I really only want the userftp account to be able to upload to the generic /home/FTP-Shared directory, and to be able to do nothing else on the server. How is this accomplished? This is a headless Ubuntu box and I am using the CLI only; no GUI admin tools.
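
    The directive usually involved here is DefaultRoot, which chroots FTP sessions so they cannot browse above a given directory. A minimal hedged sketch for /etc/proftpd/proftpd.conf (keying it to a dedicated ftpusers group is an assumption about how this box is set up, not part of the guide):

        # jail members of the ftpusers group into the shared upload directory
        DefaultRoot /home/FTP-Shared ftpusers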

    Read the article

  • When can an FTP server close its passive connections?

    - by Don Kirkby
    Does the FTP protocol allow the server to close any of its passive connections while the client is still connected? Can it tell when the client is finished receiving and then close the connection? I'm including an FTP server in my application using the pyftpdlib Python project. I've got it to work in active and passive mode, but I'm a bit concerned about when it closes its passive connections. I've tried connecting to it with both FileZilla and the default ftp command in Ubuntu, and in both cases, I get a new passive port for every request. That is, if I sit in the root folder and type ls 10 times, I use up 10 ports. This means that I have to allocate a big block of passive ports for the FTP server to use so it won't run out. As soon as the client disconnects, the server releases all the passive connections associated with that client and those ports can be reused. However, a long-running connection could use up a lot of ports.

    Read the article

  • Xterm is not completely erasing field lines

    - by user26367
    We have an SSH tunnel to a remote Unix box from Windows clients using Cygwin. It launches a terminal program from the Unix box locally on the Windows box for data input. The xterm window is launched as follows:

        xterm -fn 10x20 -bg DodgerBlue4 -fg white -cr white -ls -geometry 90x30 -e program

    When a screen goes from read-only mode to edit mode, the edit fields show ____. When going back to read-only mode, a single-pixel artifact is left behind for each field:

        *readonly*        User:
        *edit*            User: ___________
        *after edit exit* User: .          <- this dot is left behind

    Any idea what we need to change to fix this?

    Read the article

  • Why standard, virtual host Drupal 7 config causes 403 (Forbidden) in Apache2?

    - by drupality
    This is the virtual host declaration causing the problem (source):

        <VirtualHost *:80>
            ServerAdmin admin@d7
            DocumentRoot /vagrant/d7
            ServerName www.d7.local
            ServerAlias d7.local

            RewriteEngine On
            RewriteOptions inherit

            <Directory /vagrant/d7>
                Order allow,deny
                Allow from all
            </Directory>

            <Directory /vagrant>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Error logs:

        [Mon Nov 04 12:23:11.947082 2013] [authz_core:error] [pid 2471] [client 10.0.2.2:58238] AH01630: client denied by server configuration: /vagrant/d7/

    I have no idea why this isn't working. With the above rules I get Forbidden on the Drupal site, and on the Apache welcome page (index.html) too. Output of ls -ld /vagrant/d7:

        drwxrwxrwx 1 vagrant vagrant 8192 Nov 4 10:05 /vagrant/d7
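
    A hedged observation that suggests a fix: the authz_core / AH01630 error is produced by Apache 2.4, where the 2.2-style Order/Allow directives no longer grant access on their own. If this really is a 2.4 server, the 2.4 equivalent is:

        <Directory /vagrant/d7>
            Require all granted
        </Directory>

        <Directory /vagrant>
            Require all granted
        </Directory>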

    Read the article

  • Lost Page Write I/O Errors on CentOS LVM setup

    - by Gregg Leventhal
    I have a CentOS 6 box with an LVM setup, and one of the PVs is a USB disk (I know). It is getting the error:

        Oct 30 10:57:07 alpha01 kernel: lost page write due to I/O error on dm-3
        Oct 30 10:57:07 alpha01 kernel: Buffer I/O error on device dm-3, logical block 4

    which is causing problems with all of the LVs on it. pvs shows the PV as an unknown device. I can ls the logical volumes and they show up in lvdisplay, but first I get a bunch of I/O errors. I made sure the cables between the USB drive and the machine are secure. What should I do to get this back up and running in the meantime? Should I unmount each LV and run fsck.ext4 on each one, like this?

        fsck.ext4 -y /dev/vg1/lv_logvolname

    Read the article
