Search Results

Search found 4783 results on 192 pages for 'bash completion'.

  • Joining an Ubuntu 14.04 machine to active directory with realm and sssd

    - by tubaguy50035
    I've tried following this guide to set up realmd and sssd with Active Directory: http://funwithlinux.net/2014/04/join-ubuntu-14-04-to-active-directory-domain-using-realmd/ When I run the command

        realm --verbose join domain.company.com --user-principal=c-u14-dev1/[email protected] --unattended

    everything seems to connect. My sssd.conf looks like the following:

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3

        [pam]
        reconnection_retries = 3

        [sssd]
        domains = DOMAIN.COMPANY.COM
        config_file_version = 2
        services = nss, pam

        [domain/DOMAIN.COMPANY.COM]
        ad_domain = DOMAIN.COMPANY.COM
        krb5_realm = DOMAIN.COMPANY.COM
        realmd_tags = manages-system joined-with-adcli
        cache_credentials = True
        id_provider = ad
        krb5_store_password_if_offline = True
        default_shell = /bin/bash
        ldap_id_mapping = True
        use_fully_qualified_names = True
        fallback_homedir = /home/%d/%u
        access_provider = ad

    My /etc/pam.d/common-auth looks like this:

        auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000
        auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass
        auth [success=1 default=ignore] pam_sss.so use_first_pass
        # here's the fallback if no module succeeds
        auth requisite pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth required pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth optional pam_cap.so

    However, when I try to SSH into the machine with my Active Directory user, I see the following in auth.log:

        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: Invalid user nwalke from myip
        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: input_userauth_request: invalid user nwalke [preauth]
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_krb5(sshd:auth): authentication failure; logname=nwalke uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): check pass; user unknown
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname user=nwalke
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): received for user nwalke: 10 (User not known to the underlying authentication module)
        Aug 21 10:36:12 c-u14-dev1 sshd[11285]: Failed password for invalid user nwalke from myip port 34455 ssh2

    What do I need to do to allow Active Directory users the ability to log in?
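    A hedged first check, not from the original post: with use_fully_qualified_names = True, sssd only resolves domain-qualified logins, so the short name nwalke would be unknown to NSS, which matches the "Invalid user" line above. A minimal sketch to verify this, assuming the domain shown in the config:

        # Does NSS resolve the account at all? Try both short and qualified forms.
        getent passwd nwalke
        getent passwd [email protected]
        # If only the qualified form resolves, log in with it
        # (or set use_fully_qualified_names = False and restart sssd):
        ssh [email protected]@c-u14-dev1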

  • Monit - script not being executed as expected

    - by Thaís
    I'm working on a Windows 7 machine, with a VM for Ubuntu (disk image: 12.04-desktop-i386.iso). On the VM I installed Monit 5.3.2 and configured some processes and applications. I created a script to run my application, which should display some content on the screen (I'm basically displaying two images, using Feh). The thing is: if I call the script from the command line, it runs fine and displays the images. When I run it through Monit, it seems to run fine, but it doesn't display the images. If I debug it (remote debug), I can see the images. So I supposed this could be some kind of configuration issue, but I didn't find out what (even using the -I option wouldn't work). I'm showing more details below.

    Piece of the Monit configuration:

        check program runMediaHandler with path "/usr/bin/runMediaHandler.sh"
            if status == 1 then alert

    runMediaHandler.sh:

        #!/bin/bash
        java -jar /home/thais/Desktop/MediaHandler_RC2.jar

    Summarizing:

    1. What works:
       - running java directly: java -jar /home/thais/Desktop/MediaHandler_RC2.jar
       - running the script directly: runMediaHandler.sh
       - remote debugging with a breakpoint where the image should be displayed
    2. What does not work:
       - putting that "check program" piece above into Monit (even when calling monit -I start runMediaHandler)

    Thank you in advance, Thaís
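    A hedged guess, an assumption rather than something from the post: processes spawned by the Monit daemon have no X session environment, so GUI programs such as Feh cannot find the display. A minimal sketch of the wrapper with the display exported, assuming the desktop session runs on :0:

        #!/bin/bash
        # Monit runs this outside the desktop session, so point it at the X display.
        export DISPLAY=:0
        # may also be needed if the X server restricts access (assumption):
        # export XAUTHORITY=/home/thais/.Xauthority
        java -jar /home/thais/Desktop/MediaHandler_RC2.jar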

  • no disk io but iowait very high

    - by Dan
    There is no disk I/O going on, per the results of iotop:

        Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
        TID  PRIO USER     DISK READ DISK WRITE SWAPIN IO<    COMMAND
        1    be/4 root     0.00 B/s  0.00 B/s   0.00 % 0.00 % init [3]
        1930 be/4 named    0.00 B/s  0.00 B/s   0.00 % 0.00 % named -u ~d/run-root
        1931 be/4 named    0.00 B/s  0.00 B/s   0.00 % 0.00 % named -u ~d/run-root
        1932 be/4 named    0.00 B/s  0.00 B/s   0.00 % 0.00 % named -u ~d/run-root
        1933 be/4 named    0.00 B/s  0.00 B/s   0.00 % 0.00 % named -u ~d/run-root
        1810 be/4 root     0.00 B/s  0.00 B/s   0.00 % 0.00 % sh /usr/b~user=mysql
        9795 be/4 apache   0.00 B/s  0.00 B/s   0.00 % 0.00 % httpd
        8004 be/4 apache   0.00 B/s  0.00 B/s   0.00 % 0.00 % httpd
        3226 be/4 postfix  0.00 B/s  0.00 B/s   0.00 % 0.00 % tlsmgr -l -t unix -u
        8154 be/4 apache   0.00 B/s  0.00 B/s   0.00 % 0.00 % httpd
        9759 be/4 root     0.00 B/s  0.00 B/s   0.00 % 0.00 % find -name php.ini
        9249 be/4 apache   0.00 B/s  0.00 B/s   0.00 % 0.00 % httpd
        1756 be/4 postfix  0.00 B/s  0.00 B/s   0.00 % 0.00 % psa-pc-re~@localhost
        1863 be/4 mysql    0.00 B/s  0.00 B/s   0.00 % 0.00 % mysqld --~mysql.sock
        3123 be/4 root     0.00 B/s  0.00 B/s   0.00 % 0.00 % crond
        1758 be/4 postfix  0.00 B/s  0.00 B/s   0.00 % 0.00 % psa-pc-re~@localhost
        1865 be/4 mysql    0.00 B/s  0.00 B/s   0.00 % 0.00 % mysqld --~mysql.sock
        1592 be/4 sw-cp-se 0.00 B/s  0.00 B/s   0.00 % 0.00 % sw-cp-ser~ver/config
        7612 be/4 root     0.00 B/s  0.00 B/s   0.00 % 0.00 % sshd: root@pts/0
        7614 be/4 root     0.00 B/s  0.00 B/s   0.00 % 0.00 % sftp-server
        7615 be/4 root     0.00 B/s  0.00 B/s   0.00 % 0.00 % -bash
        1602 be/4 root     0.00 B/s  0.00 B/s   0.00 % 0.00 % sshd
        8003 be/4 root     0.00 B/s  0.00 B/s   0.00 % 0.00 % httpd

    but iowait is very high? iostat reports:

        avg-cpu: %user %nice %system %iowait %steal %idle
                  0.83  0.00    0.13   13.83   0.00 85.20
        Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn

    The server runs like a snail. What could be wrong here? Thanks
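    A hedged diagnostic sketch, not from the thread: iowait counts processes blocked in uninterruptible sleep even when iotop shows no throughput, which can point at a stuck device or hung mount rather than real transfers. Assuming standard procps/sysstat tools are installed:

        # processes stuck in uninterruptible sleep ('D' state) drive iowait up
        ps -eo state,pid,wchan:30,cmd | awk '$1 == "D"'
        # per-device utilization and await, 5 samples 1s apart
        iostat -x 1 5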

  • Which Ubuntu-like Linux OSs work well on a flash drive?

    - by Evan Kroske
    I want a Linux OS that I can load on a flash drive, but I don't want to relearn an entire operating system, so I want to know which tiny Linux installations are most like Ubuntu. For example, I'd like to use the apt-get package manager, the Gedit text editor, and the bash shell. I'd like to use something that's already popular, stable, and highly compatible, but it needs to fit comfortably in one gig of my four-gig flash drive (just the essentials; I'll use the remaining three gigs to store installed programs and files). I have no preference for window managers; I just want something small and fast that works like Ubuntu. What is the most popular Ubuntu-like OS that can easily be run on a thumb drive?

    Edit: I'm not sure I understand how this works. I don't want to use a USB drive as a LiveCD; I want to plug in a USB stick and use the computer as if it were my own. In other words, I want to be able to install programs on the drive on one computer and use them on another. Do any of these OSs let me do that? Please forgive my ignorance.

  • High load without explanation

    - by Sebastian
    I have a very high load on my machine and don't know what is responsible or how to find out. The machine runs a JBoss app server and MySQL. Here is top output from the user at peak time:

        top - 16:23:01 up 101 days, 6:50, 1 user, load average: 23.42, 21.53, 24.73
        Tasks: 9 total, 1 running, 8 sleeping, 0 stopped, 0 zombie
        Cpu(s): 17.2%us, 1.6%sy, 0.0%ni, 80.4%id, 0.1%wa, 0.1%hi, 0.7%si, 0.0%st
        Mem: 16440784k total, 16263720k used, 177064k free, 151916k buffers
        Swap: 16780872k total, 30428k used, 16750444k free, 8963648k cached

        PID   USER PR NI VIRT  RES  SHR  S %CPU %MEM TIME+    COMMAND
        27344 b    40 0  16.0g 6.5g 14m  S 169  41.7 1184:09  java
        6047  b    40 0  11484 1232 1228 S 0    0.0  0:00.01  mysqld_safe
        6192  b    40 0  604m  182m 4696 S 0    1.1  93:30.40 mysqld
        7948  b    40 0  84036 1968 1176 S 0    0.0  0:00.07  sshd
        7949  b    40 0  14004 2900 1608 S 0    0.0  0:00.03  bash
        7975  b    40 0  8604  1044 840  S 0    0.0  0:00.44  top

    The CPU usage of the java process is normal. The peaks only show up when I deploy a certain web application. Could the resulting network traffic boost the load in such a way that I don't see it in top?
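    A hedged sketch, an assumption about the cause rather than something from the thread: a load average far above the visible CPU usage usually means runnable or uninterruptible tasks that top's per-process view hides, for example many busy Java threads or D-state waits. Checking threads and task states narrows it down:

        # show individual threads, not just processes (the java PID is taken from the top output above)
        top -H -p 27344
        # count runnable (R) and uninterruptible (D) tasks contributing to the load
        ps -eLo state | sort | uniq -c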

  • Problem using a public key when connecting to an SSH server running on Cygwin

    - by Deleted
    We have installed Cygwin on a Windows Server 2008 Standard server and it is working pretty well. Unfortunately we still have a big problem: we want to connect through SSH using a public key, which doesn't work; it always falls back to password login. We have appended our public key to ~/.ssh/authorized_keys on the server, and we have our private and public key in ~/.ssh/id_dsa and ~/.ssh/id_dsa.pub respectively on the client. When debugging the SSH login session we see that the key is offered but the server apparently rejects it for some unknown reason. The SSH output when connecting from an Ubuntu 9.10 desktop with debug information enabled:

        $ ssh -v 192.168.10.11
        OpenSSH_5.1p1 Debian-6ubuntu2, OpenSSL 0.9.8g 19 Oct 2007
        debug1: Reading configuration data /home/myuseraccount/.ssh/config
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to 192.168.10.11 [192.168.10.11] port 22.
        debug1: Connection established.
        debug1: identity file /home/myuseraccount/.ssh/identity type -1
        debug1: identity file /home/myuseraccount/.ssh/id_rsa type -1
        debug1: identity file /home/myuseraccount/.ssh/id_dsa type 2
        debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
        debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
        debug1: match: OpenSSH_5.3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '192.168.10.11' is known and matches the RSA host key.
        debug1: Found key in /home/myuseraccount/.ssh/known_hosts:12
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: /home/myuseraccount/.ssh/id_dsa
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Trying private key: /home/myuseraccount/.ssh/identity
        debug1: Trying private key: /home/myuseraccount/.ssh/id_rsa
        debug1: Next authentication method: keyboard-interactive
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: password
        [email protected]'s password:

    The version of Cygwin:

        $ uname -a
        CYGWIN_NT-6.0 servername 1.7.1(0.218/5/3) 2009-12-07 11:48 i686 Cygwin

    The installed packages:

        $ cygcheck -c
        Cygwin Package Information
        Package Version Status
        _update-info-dir 00871-1 OK alternatives 1.3.30c-10 OK arj 3.10.22-1 OK aspell 0.60.5-1 OK
        aspell-en 6.0.0-1 OK aspell-sv 0.50.2-2 OK autossh 1.4b-1 OK base-cygwin 2.1-1 OK
        base-files 3.9-3 OK base-passwd 3.1-1 OK bash 3.2.49-23 OK bash-completion 1.1-2 OK
        bc 1.06-2 OK bzip2 1.0.5-10 OK cabextract 1.1-1 OK compface 1.5.2-1 OK coreutils 7.0-2 OK
        cron 4.1-59 OK crypt 1.1-1 OK csih 0.9.1-1 OK curl 7.19.6-1 OK cvs 1.12.13-10 OK
        cvsutils 0.2.5-1 OK cygrunsrv 1.34-1 OK cygutils 1.4.2-1 OK cygwin 1.7.1-1 OK
        cygwin-doc 1.5-1 OK cygwin-x-doc 1.1.0-1 OK dash 0.5.5.1-2 OK diffutils 2.8.7-2 OK
        doxygen 1.6.1-2 OK e2fsprogs 1.35-3 OK editrights 1.01-2 OK emacs 23.1-10 OK
        emacs-X11 23.1-10 OK file 5.04-1 OK findutils 4.5.5-1 OK flip 1.19-1 OK
        font-adobe-dpi75 1.0.1-1 OK font-alias 1.0.2-1 OK font-encodings 1.0.3-1 OK
        font-misc-misc 1.1.0-1 OK fontconfig 2.8.0-1 OK gamin 0.1.10-10 OK gawk 3.1.7-1 OK
        gettext 0.17-11 OK gnome-icon-theme 2.28.0-1 OK grep 2.5.4-2 OK groff 1.19.2-2 OK
        gvim 7.2.264-1 OK gzip 1.3.12-2 OK hicolor-icon-theme 0.11-1 OK inetutils 1.5-6 OK
        ipc-utils 1.0-1 OK keychain 2.6.8-1 OK less 429-1 OK libaspell15 0.60.5-1 OK
        libatk1.0_0 1.28.0-1 OK libaudio2 1.9.2-1 OK libbz2_1 1.0.5-10 OK libcairo2 1.8.8-1 OK
        libcurl4 7.19.6-1 OK libdb4.2 4.2.52.5-2 OK libdb4.5 4.5.20.2-2 OK libexpat1 2.0.1-1 OK
        libfam0 0.1.10-10 OK libfontconfig1 2.8.0-1 OK libfontenc1 1.0.5-1 OK libfreetype6 2.3.12-1 OK
        libgcc1 4.3.4-3 OK libgdbm4 1.8.3-20 OK libgdk_pixbuf2.0_0 2.18.6-1 OK libgif4 4.1.6-10 OK
        libGL1 7.6.1-1 OK libglib2.0_0 2.22.4-2 OK libglitz1 0.5.6-10 OK libgmp3 4.3.1-3 OK
        libgtk2.0_0 2.18.6-1 OK libICE6 1.0.6-1 OK libiconv2 1.13.1-1 OK libidn11 1.16-1 OK
        libintl3 0.14.5-1 OK libintl8 0.17-11 OK libjasper1 1.900.1-1 OK libjbig2 2.0-11 OK
        libjpeg62 6b-21 OK libjpeg7 7-10 OK liblzma1 4.999.9beta-10 OK libncurses10 5.7-18 OK
        libncurses8 5.5-10 OK libncurses9 5.7-16 OK libopenldap2_3_0 2.3.43-1 OK
        libpango1.0_0 1.26.2-1 OK libpcre0 8.00-1 OK libpixman1_0 0.16.6-1 OK libpng12 1.2.35-10 OK
        libpopt0 1.6.4-4 OK libpq5 8.2.11-1 OK libreadline6 5.2.14-12 OK libreadline7 6.0.3-2 OK
        libsasl2 2.1.19-3 OK libSM6 1.1.1-1 OK libssh2_1 1.2.2-1 OK libssp0 4.3.4-3 OK
        libstdc++6 4.3.4-3 OK libtiff5 3.9.2-1 OK libwrap0 7.6-20 OK libX11_6 1.3.3-1 OK
        libXau6 1.0.5-1 OK libXaw3d7 1.5D-8 OK libXaw7 1.0.7-1 OK libxcb-render-util0 0.3.6-1 OK
        libxcb-render0 1.5-1 OK libxcb1 1.5-1 OK libXcomposite1 0.4.1-1 OK libXcursor1 1.1.10-1 OK
        libXdamage1 1.1.2-1 OK libXdmcp6 1.0.3-1 OK libXext6 1.1.1-1 OK libXfixes3 4.0.4-1 OK
        libXft2 2.1.14-1 OK libXi6 1.3-1 OK libXinerama1 1.1-1 OK libxkbfile1 1.0.6-1 OK
        libxml2 2.7.6-1 OK libXmu6 1.0.5-1 OK libXmuu1 1.0.5-1 OK libXpm4 3.5.8-1 OK
        libXrandr2 1.3.0-10 OK libXrender1 0.9.5-1 OK libXt6 1.0.7-1 OK links 1.00pre20-1 OK
        login 1.10-10 OK luit 1.0.5-1 OK lynx 2.8.5-4 OK man 1.6e-1 OK minires 1.02-1 OK
        mkfontdir 1.0.5-1 OK mkfontscale 1.0.7-1 OK openssh 5.4p1-1 OK openssl 0.9.8m-1 OK
        patch 2.5.8-9 OK patchutils 0.3.1-1 OK perl 5.10.1-3 OK rebase 3.0.1-1 OK run 1.1.12-11 OK
        screen 4.0.3-5 OK sed 4.1.5-2 OK shared-mime-info 0.70-1 OK tar 1.22.90-1 OK
        terminfo 5.7_20091114-13 OK terminfo0 5.5_20061104-11 OK texinfo 4.13-3 OK tidy 041206-1 OK
        time 1.7-2 OK tzcode 2009k-1 OK unzip 6.0-10 OK util-linux 2.14.1-1 OK vim 7.2.264-2 OK
        wget 1.11.4-4 OK which 2.20-2 OK wput 0.6.1-2 OK xauth 1.0.4-1 OK xclipboard 1.1.0-1 OK
        xcursor-themes 1.0.2-1 OK xemacs 21.4.22-1 OK xemacs-emacs-common 21.4.22-1 OK
        xemacs-sumo 2007-04-27-1 OK xemacs-tags 21.4.22-1 OK xeyes 1.1.0-1 OK xinit 1.2.1-1 OK
        xinput 1.5.0-1 OK xkbcomp 1.1.1-1 OK xkeyboard-config 1.8-1 OK xkill 1.0.2-1 OK
        xmodmap 1.0.4-1 OK xorg-docs 1.5-1 OK xorg-server 1.7.6-2 OK xrdb 1.0.6-1 OK
        xset 1.1.0-1 OK xterm 255-1 OK xz 4.999.9beta-10 OK zip 3.0-11 OK zlib 1.2.3-10 OK
        zlib-devel 1.2.3-10 OK zlib0 1.2.3-10 OK

    The SSH daemon configuration file:

        $ cat /etc/sshd_config
        # $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $

        # This is the sshd server system-wide configuration file. See
        # sshd_config(5) for more information.

        # This sshd was compiled with PATH=/bin:/usr/sbin:/sbin:/usr/bin

        # The strategy used for options in the default sshd_config shipped with
        # OpenSSH is to specify options with their default value where
        # possible, but leave them commented. Uncommented options change a
        # default value.

        Port 22
        #AddressFamily any
        #ListenAddress 0.0.0.0
        #ListenAddress ::

        # Disable legacy (protocol version 1) support in the server for new
        # installations. In future the default will change to require explicit
        # activation of protocol 1
        Protocol 2

        # HostKey for protocol version 1
        #HostKey /etc/ssh_host_key
        # HostKeys for protocol version 2
        #HostKey /etc/ssh_host_rsa_key
        #HostKey /etc/ssh_host_dsa_key

        # Lifetime and size of ephemeral version 1 server key
        #KeyRegenerationInterval 1h
        #ServerKeyBits 1024

        # Logging
        # obsoletes QuietMode and FascistLogging
        #SyslogFacility AUTH
        #LogLevel INFO

        # Authentication:
        #LoginGraceTime 2m
        #PermitRootLogin yes
        StrictModes no
        #MaxAuthTries 6
        #MaxSessions 10

        RSAAuthentication yes
        PubkeyAuthentication yes
        AuthorizedKeysFile .ssh/authorized_keys

        # For this to work you will also need host keys in /etc/ssh_known_hosts
        #RhostsRSAAuthentication no
        # similar for protocol version 2
        #HostbasedAuthentication no
        # Change to yes if you don't trust ~/.ssh/known_hosts for
        # RhostsRSAAuthentication and HostbasedAuthentication
        #IgnoreUserKnownHosts no
        # Don't read the user's ~/.rhosts and ~/.shosts files
        #IgnoreRhosts yes

        # To disable tunneled clear text passwords, change to no here!
        #PasswordAuthentication yes
        #PermitEmptyPasswords no

        # Change to no to disable s/key passwords
        #ChallengeResponseAuthentication yes

        # Kerberos options
        #KerberosAuthentication no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        #KerberosGetAFSToken no

        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes

        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication and
        # PasswordAuthentication. Depending on your PAM configuration,
        # PAM authentication via ChallengeResponseAuthentication may bypass
        # the setting of "PermitRootLogin without-password".
        # If you just want the PAM account and session checks to run without
        # PAM authentication, then enable this but set PasswordAuthentication
        # and ChallengeResponseAuthentication to 'no'.
        #UsePAM no

        AllowAgentForwarding yes
        AllowTcpForwarding yes
        GatewayPorts yes
        X11Forwarding yes
        X11DisplayOffset 10
        X11UseLocalhost no
        #PrintMotd yes
        #PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no
        UsePrivilegeSeparation yes
        #PermitUserEnvironment no
        #Compression delayed
        #ClientAliveInterval 0
        #ClientAliveCountMax 3
        #UseDNS yes
        #PidFile /var/run/sshd.pid
        #MaxStartups 10
        #PermitTunnel no
        #ChrootDirectory none

        # no default banner path
        #Banner none

        # override default of no subsystems
        Subsystem sftp /usr/sbin/sftp-server

        # Example of overriding settings on a per-user basis
        #Match User anoncvs
        #    X11Forwarding yes
        #    AllowTcpForwarding yes
        #    ForceCommand cvs server

    I hope this information is enough to solve the problem. In case any more is needed, please comment and I'll add it. Thank you for reading!
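    A hedged first check, based on common Cygwin sshd setups rather than this thread: when sshd runs as a Windows service account, it must be able to read the user's authorized_keys, and Windows ACLs can break that even with StrictModes no; Cygwin's sshd also usually logs the reason a key was rejected. A minimal sketch, run on the server in a Cygwin shell:

        # tighten ownership/permissions on the key material (server side)
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        chown -R "$(whoami)" ~/.ssh
        # then watch the server-side log while retrying the login
        tail -f /var/log/sshd.log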

  • PS/2 to USB adapter doesn't work with Model M keyboard

    - by mickburkejnr
    I bought a server about 3 months ago from a friend, and I have only had time to tinker with it in the last week. I noticed that this server doesn't have any PS/2 ports, which meant configuring it was near impossible. I don't have any USB keyboards in the house, I only have an IBM Model M keyboard (built 1994) and another IBM keyboard that was built circa 2001. Both of them have PS/2 connections. I bought an adapter off eBay, and when I used it with the Model M keyboard the three lights on the keyboard flashed for a split second, but then the keyboard is then unresponsive. I can bash away at the keys for ages and nothing will happen. The same applies to the later built IBM keyboard. What could I do to make the adapter work? I am getting the loan of a USB keyboard in two weeks time, but I'd like a more permanent solution without having to rely on getting the loan of a keyboard every time I have to perform maintenance on the server. And as I already have two keyboards which work fine and I like using, I don't really want to have to buy another keyboard just for use on the server.

  • /etc/crontab or any user crontab is not being executed

    - by ian
    My server is CentOS 5. When I edit /etc/crontab, or edit any user's crontab (including root's) via the "crontab -e" command, it just adds "(system) RELOAD (/etc/crontab)" or "(admin) RELOAD (cron/admin)" to the log. No CMD lines appear in /var/log/cron. Sample entries in /var/log/cron:

        Aug 10 10:21:33 localhost crontab[31688]: (root) BEGIN EDIT (root)
        Aug 10 10:21:42 localhost crontab[31688]: (root) REPLACE (root)
        Aug 10 10:21:42 localhost crontab[31688]: (root) END EDIT (root)
        Aug 10 10:22:01 localhost crond[2688]: (root) RELOAD (cron/root)

    Result of "service crond status":

        crond (pid 1345) is running...

    The command "cat /var/log/messages | grep cron" does not return anything. Contents of /etc/cron.allow:

        admin
        root

    Contents of /etc/crontab:

        SHELL=/bin/bash
        PATH=/sbin:/bin:/usr/sbin:/usr/bin
        MAILTO=root
        HOME=/

        # run-parts
        01 * * * * root run-parts /etc/cron.hourly
        02 4 * * * root run-parts /etc/cron.daily
        22 4 * * 0 root run-parts /etc/cron.weekly
        42 4 1 * * root run-parts /etc/cron.monthly
        * * * * * root run-parts /bin/date >> /data/date.txt

    Result of "ps aux | grep cron":

        root 1345 0.0 0.1 5268 1204 ? Ss 11:43 0:00 crond

    Contents of admin's crontab:

        * * * * * /bin/date >> /data/date.txt

    Note that it's not only admin's crontab that's not running: no cron jobs are running at all. Any ideas why they aren't running?
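    A hedged diagnostic sketch, assumptions rather than facts from the thread: on CentOS 5 the usual culprits for "RELOAD logged but no CMD ever runs" are SELinux blocking crond from reading the reloaded crontabs, or a crond that needs a full restart. Note also that the last /etc/crontab line passes /bin/date to run-parts, which expects a directory:

        # is SELinux interfering with crond?
        getenforce
        grep -i cron /var/log/audit/audit.log | tail
        # restart the daemon so it re-reads everything
        service crond restart
        # note: 'run-parts /bin/date' will not run /bin/date; run-parts takes a directory.
        # The direct form would be:
        # * * * * * root /bin/date >> /data/date.txt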

  • Start and shutdown tomcat via ssh

    - by bxshi
    Updated: I found out that the JDK path it is using is wrong:

        eval: 1: /opt/Java/jdk1.6.0_25/jre/bin/java: not found

    The Java should be a lower-case java; how did that happen? When I run this script directly on the server, it is just okay.

    I'm trying to start or shut down Tomcat from a remote client. On my server, I've got 3 different Tomcats: tomcat1, tomcat2, and tomcat3. Firstly, I tried to run tomcat_path/bin/shutdown.sh to stop it via ssh, with the command

        ssh [email protected] "cd /home/jake/tomcat2/bin;exec bash ./shutdown.sh"

    Both " and ' were tried, but it does not work; the output is

        eval: 1: /opt/Java/jdk1.6.0_25/jre/bin/java: not found

    It seems that the shell script runs on my local client, because the file does exist on the server. Is there any way to run a shell script on a remote server correctly?

    Updated: I've run

        ssh [email protected] "sh -x /home/jake/tomcat/bin/shutdown.sh > /home/jake/tomcat.log 2>&1"

    and the output in tomcat.log is:

        + PRG=/home/jake/tomcat/bin/shutdown.sh
        + [ -h /home/jake/tomcat/bin/shutdown.sh ]
        + dirname /home/jake/tomcat/bin/shutdown.sh
        + PRGDIR=/home/jake/tomcat/bin
        + EXECUTABLE=catalina.sh
        + [ ! -x /home/jake/tomcat/bin/catalina.sh ]
        + exec /home/jake/tomcat/bin/catalina.sh stop
        eval: 1: /opt/Java/jdk1.6.0_25/jre/bin/java: not found
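    A hedged explanation, an assumption from the symptoms: a non-interactive ssh command does not source ~/.bash_profile, so JAVA_HOME/JRE_HOME end up different from an interactive login; the capitalized /opt/Java path is probably coming from a setenv.sh or profile file that the interactive shell overrides. A minimal sketch (file names are guesses):

        # compare what the two environments see
        ssh [email protected] 'echo $JAVA_HOME; grep -n "JAVA_HOME\|JRE_HOME" /home/jake/tomcat/bin/setenv.sh /home/jake/.bashrc /home/jake/.bash_profile 2>/dev/null'
        # or set the correct JDK explicitly for the remote command
        # (the lower-case /opt/java path is an assumption):
        ssh [email protected] 'export JAVA_HOME=/opt/java/jdk1.6.0_25; /home/jake/tomcat/bin/shutdown.sh'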

  • How can I have puppet deploy ssh keys for virtual users?

    - by Pheezy
    I am trying to get puppet to assign authorized ssh keys for virtual users but I keep getting the following error:

        err: Could not retrieve catalog: Could not parse for environment production: Syntax error at 'user'; expected '}' at /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp:9

    I believe my configuration is correct (listed below), but is there a syntax error or scoping issue I am missing? I would simply like to assign users to nodes and have those users automagically have their ssh keys installed. Is there maybe a better way to do this and I'm just overthinking it?

        # /etc/puppet/modules/users/virtual.pp
        class user::virtual {
            @user { "user":
                home       => "/home/user",
                ensure     => "present",
                groups     => ["root","wheel"],
                uid        => "8001",
                password   => "SCRAMBLED",
                comment    => "User",
                shell      => "/bin/bash",
                managehome => "true",
            }

        # /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp
        ssh_authorized_key { "user":
            ensure => "present",
            type   => "ssh-dss",
            key    => "AAAAB....",
            user   => "user",
        }

        # /etc/puppet/modules/users/init.pp
        import "users.pp"
        import "ssh_authorized_keys.pp"
        class user::ops inherits user::virtual {
            realize(
                User["user"],
            )
        }

        # /etc/puppet/manifests/modules.pp
        import "sudo"
        import "users"

        # /etc/puppet/manifests/nodes.pp
        node basenode {
            include sudo
        }
        node 'testbox' inherits basenode {
            include user::ops
        }

        # /etc/puppet/manifests/site.pp
        import "modules"
        import "nodes"
        # The filebucket option allows for file backups to the server
        filebucket { main: server => 'puppet' }
        # Set global defaults - including backing up all files to the main filebucket and adds a global path
        File { backup => main }
        Exec { path => "/usr/bin:/usr/sbin/:/bin:/sbin" }
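    A hedged observation, an assumption from the paste above: class user::virtual never appears to be closed with a brace before ssh_authorized_keys.pp begins, which would produce exactly this "expected '}'" error. Validating each manifest in isolation confirms it; newer Puppet versions ship a validate subcommand, older ones use --parseonly (which applies here is an assumption):

        # validate every manifest in the module (newer puppet)
        puppet parser validate /etc/puppet/modules/users/manifests/*.pp /etc/puppet/modules/users/*.pp
        # on older installs:
        puppet --parseonly /etc/puppet/modules/users/virtual.pp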

  • Upgrading from MVC 1.0 to MVC2 in Visual Studio 2010 and VS2008.

    - by Sam Abraham
    With MVC2 officially released, I was involved in a few conversations regarding the feasibility of upgrading existing MVC 1.0 projects to quickly leverage the newly introduced MVC features. Luckily, Microsoft has proactively addressed this question for both Visual Studio 2008 and 2010, and many online resources discussing the upgrade process are a "Bing/Google search" away. As I happen to be speaking about MVC2 and Visual Studio 2010 at the Ft. Lauderdale ArcSig .Net User Group meeting on April 20th, 2010 (check http://www.fladotnet.com for more info), I decided to include a quick demo on upgrading the NerdDinner project (which I consider the "Hello MVC World" project) from MVC 1.0 to MVC2 using Visual Studio 2010, to demonstrate how simple the upgrade process is. In the next few lines, I will briefly touch on upgrading to MVC2 in Visual Studio 2008, then discuss, in more detail, the upgrade process using Visual Studio 2010, while highlighting the advantage of its multi-targeting support.

    Using Visual Studio 2008 SP1

    For upgrading to MVC2 using VS2008 SP1, a Microsoft white paper [1] presents two approaches: using a provided automated upgrade tool, or manually upgrading the project. I personally prefer using the automated tool, although it comes with an "AS IS" disclaimer. For those brave souls, or those who end up with no luck using the tool, detailed manual upgrade steps are also provided as a second option. Backing up the project in question is a must regardless of which route one takes to upgrade.

    Using Visual Studio 2010

    Life is much easier for developers who have already adopted Visual Studio 2010. Simply opening the MVC 1.0 solution file brings up the upgrade wizard, as shown in figures 1 through 4. As we proceed with the upgrade process, the wizard requests confirmation on whether we choose to upgrade our target framework version to .Net 4.0 or keep the existing .Net 3.5 (figure 5). VS2010 does a good job with multi-targeting: we can still develop .Net 3.5 applications while leveraging all the new bells and whistles that VS2010 brings to the table (multi-targeting enables us to develop with frameworks as early as .Net 2.0 in VS2010).

    Figure 1 - Open Solution File Using VS2010
    Figure 2 - VS2010 Conversion Wizard
    Figure 3 - Ready To Convert To VS2010 Confirmation Screen
    Figure 4 - VS2010 Solution Conversion Progress
    Figure 5 - Confirm Target Framework Upgrade

    In an attempt to make my demonstration realistic, I decided to keep the project targeted to the .Net 3.5 framework. After the successful completion of the conversion process, a quick sanity check revealed that the NerdDinner project is still targeted to the .Net 3.5 framework, as shown in figure 6. Inspecting the Web.config revealed that the MVC DLL version our code compiles against has been successfully upgraded to 2.0 (figure 7), and hence we should now be able to leverage the newly introduced features in MVC2 and VS2010 with no effort or time invested in modifying existing code.

    Figure 6 - Confirm Target Framework Remained .Net 3.5
    Figure 7 - Confirm MVC DLL Version Has Been Upgraded

    In conclusion, Microsoft has empowered developers with the tools necessary to quickly and seamlessly upgrade their MVC solutions to the newly released MVC2. The multi-targeting feature in Visual Studio 2010 enables us to adopt this latest and greatest development tool while supporting development in frameworks as early as .Net 2.0.

    References:

    [1] "Upgrading an ASP.NET MVC 1.0 Application to ASP.NET MVC 2", http://www.asp.net/learn/whitepapers/aspnet-mvc2-upgrade-notes

  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync, which works pretty well provided directory names and/or files remain the same. However, when files are renamed or moved about within subdir1, rsync will copy them over to the server again, creating duplicates. I then have to manually find and delete extraneous files/folders that had been left on the server during previous syncs. Note that I cannot use rsync's --delete flag, because a sync from any one workstation would then mirror that particular folder tree instead of merging them all onto the server. Visual diagram:

        Server:        Workstation1:  Workstation2:  Workstation(n):
        Folder*        Folder*        Folder*        Folder*
        -subdir1       -subdir1       -subdir1       -subdir(n)
         -file1         -file1         -file2         -file(n)
         -file2
         -file(n)

    Is there a simple script (preferably in bash, nothing fancy) that can accomplish the deletion of the extraneous files/folders in the event a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and in a much simpler manner? I have looked at unison, but I did not like the fact that it keeps a local database of the syncing info. Any tips at all as to how I am supposed to tackle this? Thank you in advance for your help.

    EDIT: I tried unison recently and I can safely say it is out of the question now. unison is a bi-directional synchronization tool and, from my testing, it mirrors the files existing on the server to all workstations. That is unwanted. Preferably, I would want files/folders to stay within their respective workstations and just merge onto the server; that is, uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by Kyle, but I am still unsure if they are fit for the job.
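    One hedged workaround, an assumption rather than an answer from the thread: give every workstation its own subtree on the server. Within a per-workstation subtree, --delete becomes safe because nothing else writes there, so renames and moves no longer leave duplicates behind. A minimal sketch run on each workstation (host, paths, and the server login are hypothetical):

        #!/bin/bash
        # each workstation mirrors into its own directory on the server,
        # so --delete only prunes that workstation's stale copies
        HOST=$(hostname -s)
        rsync -az --delete /path/to/Folder/ backup@server:/srv/backups/$HOST/Folder/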

  • Can't Install msysgit/tortoisegit

    - by Jay
    I ran msysGit-netinstall-1.7.0.2-preview20100407-2.exe (http://code.google.com/p/msysgit/downloads/list). Then I ran TortoiseGit-1.4.4.0-64bit.msi (http://code.google.com/p/tortoisegit/downloads/list). msysGit was installed in C:\; TortoiseGit appears to have been installed in C:\Program Files\TortoiseGit. I have "Git Clone...", "Git Create repository here" and "TortoiseGit" in the Explorer context menu. When I try to clone, I get "git have not installed" [sic]. I have tried setting the MSysGit path in the TortoiseGit settings to everything imaginable; nothing works. Neither C:\Program Files nor C:\Program Files (x86) has a Git folder. The git command gives "command not found" from both cmd.exe and the bash that msysGit installed. I don't see msysGit in Control Panel - Programs - Program Features, but I do see TortoiseGit in there. I would like a procedure for verifying that msysGit is properly installed. A procedure for uninstalling msysGit would be an added bonus. I would also like a procedure for getting TortoiseGit to work. I am running Windows 7 on a MacBook Pro.
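    A hedged check, with assumptions about the default install layout: the net installer builds msysGit from source into its install directory (commonly C:\msysgit), so git.exe may exist even though nothing was added to PATH or the registry, which would also explain why it is absent from Programs and Features. From the msysGit bash prompt, a sketch:

        # locate git.exe under the install root (the path is an assumption)
        find /c/msysgit -name git.exe 2>/dev/null
        # whichever .../bin directory shows up is what TortoiseGit's
        # Settings > General > "MSysGit path" should point to,
        # e.g. C:\msysgit\bin

    Since the net installer does not register itself with Programs and Features, uninstalling it is presumably just deleting the install folder (again, an assumption).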

  • NIS user not being added to NIS group

    - by Brian
    I have set up a NIS server and several NIS clients. I have a user and a group on the NIS server like so:

    /etc/passwd:

        myself:x:5000:5000:,,,:/home/myself:/bin/bash

    /etc/group:

        fishy:x:3001:otheruser,etc,myself,moreppl

    I imported the users and groups on the NIS client by adding "+::::::" to /etc/passwd and "+:::" to /etc/group. I can log in to the NIS client, but when I run groups, fishy is not listed. But getent group fishy shows that it was imported correctly and lists me as a member. And if I do sudo su - myself, then suddenly groups says I am in the group! I also had nscd installed, and the groups worked correctly for a while. It seemed like after being logged in for a while, I would silently be dropped out of the group. If I restarted nscd and logged in again, then the groups worked correctly...for a while. There are no UID or GID conflicts with local users or groups.

    Update: contents of /etc/nsswitch.conf:

        passwd:    compat
        group:     compat
        shadow:    compat

        hosts:     files nis dns
        networks:  files

        protocols: db files
        services:  db files
        ethers:    db files
        rpc:       db files

        netgroup:  nis
        aliases:   nis files
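    A hedged pointer, an assumption from the symptoms: supplementary groups are resolved at login by initgroups(), and nscd caches those lookups, so a stale or expiring group cache would match the "works for a while, then silently drops" behavior. Disabling just nscd's group cache is a cheap test:

        # in /etc/nscd.conf, turn off group caching, then restart nscd:
        #   enable-cache  group  no
        sudo sed -i 's/^\([[:space:]]*enable-cache[[:space:]]*group[[:space:]]*\)yes/\1no/' /etc/nscd.conf
        sudo service nscd restart
        # log in again and compare
        id myself
        getent group fishy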

  • Copying email with qmail and Plesk

    - by Greg
    I need to keep a copy of all outgoing and incoming email (for a single domain if possible) using qmail or Plesk. I can't recompile qmail, so qmailtap is out of the question, as is setting QUEUE_EXTRA in extra.h. I'm pretty sure it should be possible with Plesk's mailmng utility, aka Mail Handlers, but I'm having trouble getting them to work. I've registered 2 hooks.

    Incoming hook:

        ./mailmng --add-handler --handler-name=incoming --recipient-domain=example.com \
            --executable=/xxx/incoming.sh --context=/xxx/incoming/ --hook=before-local

    incoming.sh:

        #!/bin/bash
        # The email is passed on stdin - grab it to a variable
        e=`cat -`
        # $1 = context (/xxx/incoming)
        # $3 = recipient ([email protected])
        # Create /xxx/incoming/[email protected]
        mkdir -p $1$3
        # Save the email to /xxx/incoming/[email protected]/0123456789.txt
        echo "$e" > $1$3/`date +%s%N`.txt
        # Echo PASS to stderr
        echo 'PASS' >&2
        # Echo the email to stdout
        echo "$e"

    Outgoing hook:

        ./mailmng --add-handler --handler-name=outgoing --sender-domain=holidaysplease.com \
            --executable=/xxx/outgoing.sh --context=/xxx/outgoing/ --hook=before-remote

    The outgoing.sh file is the same as incoming.sh, except $3 (recipient) is replaced with $2 (sender). The incoming hook does work, but saves 2 copies of each email: one before and one after SpamAssassin has run. The outgoing hook doesn't seem to get called at all. So finally, my questions are:

    1. How can I make the incoming hook save only a single copy (preferably after SpamAssassin has run)?
    2. How can I get the outgoing hook to work?
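    A hedged idea for the duplicate-copy problem; Plesk's exact hook ordering is not documented in the question, so this works around it instead: deduplicate inside the handler on the Message-ID header, so a second invocation of the same message overwrites rather than adds a file. A sketch of the changed save step in incoming.sh:

        # derive a stable filename from the Message-ID header (falls back to a timestamp)
        id=$(printf '%s\n' "$e" | awk 'tolower($1)=="message-id:" {print $2; exit}' | tr -cd 'A-Za-z0-9.@_-')
        [ -z "$id" ] && id=$(date +%s%N)
        mkdir -p "$1$3"
        printf '%s\n' "$e" > "$1$3/$id.txt"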

  • Error installing new rails version. Failed to build gem native extension.

    - by davidcmolina
    I am trying to build my first Ruby on Rails app using the following guide (http://ruby.railstutorial.org/chapters/a-demo-app#code-demo_gemfile_sqlite_version_redux) and have run into a few obstacles. The first: receiving errors when upgrading to the latest Rails version, 3.2.8.

        bash-3.2$ gem install rails
        Building native extensions.  This could take a while...
        ERROR:  Error installing rails:
                ERROR: Failed to build gem native extension.

        /Users/davidmolina/.rvm/rubies/ruby-1.9.3-p194/bin/ruby extconf.rb
        creating Makefile

        make
        compiling generator.c
        make: /usr/bin/gcc-4.2: No such file or directory
        make: *** [generator.o] Error 1

        Gem files will remain installed in /Users/davidmolina/.rvm/gems/ruby-1.9.3-p194/gems/json-1.7.5 for inspection.
        Results logged to /Users/davidmolina/.rvm/gems/ruby-1.9.3-p194/gems/json-1.7.5/ext/json/ext/generator/gem_make.out

    Even when trying to install from the rails app, "gem install rails" fails with the same output. When trying to bundle install:

        $ bundle install
        Could not locate Gemfile

    Background details: Mac OS X 10.8.2, Ruby 1.9.3, Rails 2.3.4. I'm wondering if there is a direct one-liner or a gem that is missing?
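    A hedged fix, with assumptions not stated in the question: on OS X 10.8 the Xcode toolchain no longer ships /usr/bin/gcc-4.2, so native extensions configured for that compiler fail to build; pointing the build at clang (or installing an apple-gcc42 package via Homebrew) is a common workaround. Also, bundle install must be run from the app directory that contains the Gemfile:

        # expose a gcc-4.2 (workaround; assumes the Xcode Command Line Tools are installed)
        sudo ln -s "$(which clang)" /usr/bin/gcc-4.2
        gem install rails
        # run bundler from inside the generated app, where the Gemfile lives
        cd ~/my_app && bundle install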

  • OpenLDAP Authentication UID vs CN issues

    - by user145457
    I'm having trouble authenticating services using uid for authentication, which I thought was the standard method for authenticating a user. Basically, my users are added in LDAP like this:

        # jsmith, Users, example.com
        dn: uid=jsmith,ou=Users,dc=example,dc=com
        uidNumber: 10003
        loginShell: /bin/bash
        sn: Smith
        mail: [email protected]
        homeDirectory: /home/jsmith
        displayName: John Smith
        givenName: John
        uid: jsmith
        gecos: John Smith
        gidNumber: 10000
        cn: John Smith
        title: System Administrator

    But when I try to authenticate using a typical webapp or service with jsmith / password, I get:

        $ ldapsearch -x -h ldap.example.com -D "cn=jsmith,ou=Users,dc=example,dc=com" -W -b "dc=example,dc=com"
        Enter LDAP Password:
        ldap_bind: Invalid credentials (49)

    But if I use:

        $ ldapsearch -x -h ldap.example.com -D "uid=jsmith,ou=Users,dc=example,dc=com" -W -b "dc=example,dc=com"

    it works. HOWEVER, most webapps and authentication methods seem to use another method: on a webapp I'm using, nothing works unless I specify the user as uid=smith,ou=users,dc=example,dc=com, yet in the webapp I just need users to put jsmith in the user field. Keep in mind my LDAP is using the "new" cn=config method of storing settings. So if someone has an obvious ldif I'm missing, please provide it. Let me know if you need further info. This is OpenLDAP on Ubuntu 12.04. Thanks, Dave
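    A hedged note, an assumption about the webapp's behavior: the DN cn=jsmith,ou=Users,... simply does not exist (the entries' RDN is uid=...), so binds against it fail; webapps typically handle short login names via search+bind, i.e. a configurable user filter that first maps the typed name to a DN. Verifying what such a filter would find:

        # find the DN for a short login name; this is what a "(uid=%s)"-style
        # user filter in the webapp would do before binding
        ldapsearch -x -h ldap.example.com -b "dc=example,dc=com" "(uid=jsmith)" dn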

  • Is it possible to install ffmpeg and x264 on a Synology Diskstation 209?

    - by Kieran Benton
    Hi, complete Linux novice here! :) I'm trying to get my brilliant DS209 NAS box to do some transcoding for me of a few AVI videos to a format suitable for my Apple iPod touch. Yes, I could do it with another machine and Handbrake, but it would be really useful to offload some of this to the NAS to do overnight. I've managed to install ipkg onto my DS209 NAS box and have played around with installing some packages (binutils, mono, bash etc). I've even managed to install ffmpeg from ipkg and put together the correct command-line profile to do the encoding, as a .sh file:

        time ffmpeg -y -i $1 -f mp4 -title $2 -vcodec libx264 -level 21 -s 426x320 \
            -b 512k -bt 512k -bufsize 4M -maxrate 4M -g 250 -coder 0 -threads 0 \
            -acodec libfaac -ac 2 -ab 64k $3

    However, running this I get a missing dependency on libx264. I've tried building it from the latest source in git, but I get errors during the make process that I just don't understand (way out of my depth):

        encoder/set.c: In function 'x264_sei_version_write':
        encoder/set.c:491: error: 'X264_VERSION' undeclared (first use in this function)
        encoder/set.c:491: error: (Each undeclared identifier is reported only once
        encoder/set.c:491: error: for each function it appears in.)
        make: *** [encoder/set.o] Error 1

    Can anyone else try building it or give me a pointer as to what I can do to get this going? It's been a good learning experience so far! Thanks.
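    A hedged pointer, an assumption from the error rather than a tested fix: X264_VERSION is defined in a header that x264's configure step generates, so a checkout where configure was skipped (or a snapshot without the git metadata its version script expects) hits exactly this undeclared-identifier error. A sketch of a clean build:

        # rebuild from a clean tree so configure regenerates the version header
        make distclean 2>/dev/null
        ./configure --enable-shared    # CPU/cross flags for the DS209 are omitted here (assumption)
        make && make install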

  • Use Case Actors - Primary versus Secondary

    - by Dave Burke
    The Unified Modeling Language (UML) [1] defines an Actor (from Use Cases) as: "An actor specifies a role played by a user or any other system that interacts with the subject."

    In Alistair Cockburn's book "Writing Effective Use Cases" [2], Actors are further defined as follows:

    Primary Actor: The primary actor of a use case is the stakeholder that calls on the system to deliver one of its services. It has a goal with respect to the system, one that can be satisfied by its operation. The primary actor is often, but not always, the actor who triggers the use case.

    Supporting Actors: A supporting actor in a use case is an external actor that provides a service to the system under design. It might be a high-speed printer, a web service, or humans that have to do some research and get back to us.

    In a 2006 article [3], Cockburn refined the definitions slightly to read:

    Primary Actors: The Actor(s) using the system to achieve a goal. The Use Case documents the interactions between the system and the actors to achieve the goal of the primary actor.

    Secondary Actors: Actors that the system needs assistance from to achieve the primary actor's goal.

    Finally, the Oracle Unified Method (OUM) concurs with the UML definition of Actors, along with Cockburn's refinement, but OUM also includes the following: secondary actors may or may not have goals that they expect to be satisfied by the use case; the primary actor always has a goal, and the use case exists to satisfy the primary actor.

    Now that we are on the same "page", let's consider two examples:

    1. A bank loan officer wants to review a loan application from a customer, and part of the process involves a real-time credit rating check.

       Use Case Name: Review Loan Application
       Primary Actor: Loan Officer
       Secondary Actors: Credit Rating System

    2. A Human Resources manager wants to change the job code of an employee and, as part of the process, automatically notify several other departments within the company of the change.

       Use Case Name: Maintain Job Code
       Primary Actor: Human Resources Manager
       Secondary Actors: None

    The first example is quite straightforward; we need to define the Secondary Actor because without the Credit Rating System we cannot successfully complete the Use Case. In other words, the goal of the Primary Actor is to successfully complete the Loan Application, but they need the explicit "help" of the Secondary Actor (the Credit Rating System) to achieve this goal.

    The second example is where people sometimes get confused. Within OUM we would not include the "other departments" as Secondary Actors, and therefore would not include them on the Use Case diagram, for the following reasons: the other departments are not required for the successful completion of the Use Case, and we are not expecting any response from them (at least within the bounds of the Use Case under discussion). Having said that, within the detail of the Use Case Specification Main Success Scenario, we would include something like: "The system sends a notification to the related department heads (ref. Business Rule BR101)."

    Now let's consider one final example. A Procurement Manager wants to place a "bid" for some goods using an On-Line Trading Community (a B2B version of eBay).

       Use Case Name: Create Bid
       Primary Actor: Procurement Manager
       Secondary Actors: On-Line Trading Community

    You might wonder why the Trading Community is listed as a Secondary Actor: if all we are going to do is place a bid for a specific quantity of goods at a given price and send that off to the Trading Community, then why would the Trading Community need to "assist" in that Use Case? Well, once again, it comes back to the "User Experience" and how we want to optimize it when we think about our Use Case, and ultimately when the developer comes to assembling some code. In this final example, the Procurement Manager cannot successfully complete the "Create Bid" Use Case until they receive an affirmative confirmation back from the Trading Community that the Bid has been accepted. Therefore, the Trading Community must become a Secondary Actor and be referenced both on the Use Case diagram and in the Use Case Specification. Any astute readers who are wondering about the "single sitting" rule will have to wait for a follow-up blog entry to find out how that consideration can be factored in!

    Happy Use Case writing!

    References:

    [1] OMG Unified Modeling Language (OMG UML), Superstructure Version 2.4.1
    [2] Cockburn, A., 2000, Writing Effective Use Cases, Addison-Wesley Professional; 1st edition
    [3] Cockburn, A., 2006, "Use case fundamentals", viewed 20th March 2012, http://alistair.cockburn.us/Use+case+fundamentals

  • Linux: prevent VNC from swapping like mad

    - by Weezy
    I'm accessing a Mac mini (with Mac OS X 10.4) from my Linux machine using VNC, and there's an issue that is driving me crazy. My Linux machine has 4 GB of RAM, I run a lot of various apps on it, and I've got no issue at all: it's all snappy and I don't hear the hard disk swapping/reading/writing too often. With VNC, though, the hard disk is swapping like mad when I'm moving things on the OS X desktop. So I was thinking of creating a ramdisk and forcing the temp VNC files to go into that ramdisk, but the problem is I can't find any temp files. I've attempted to do this:

        #!/bin/bash
        while [ true ]
        do
            lsof | grep vnc
        done

    and eyeball-parse the output to try to find some temp file: no luck. The VNC version I'm using is this one:

        $ vncviewer -version
        VNC Viewer Free Edition 4.1.1 for X - built Jan 30 2009 19:33:16
        Copyright (C) 2002-2005 RealVNC Ltd.

    No matter how much data is coming from the Mac, there should be plenty of memory (4 GB of RAM), so there's really no reason to swap like crazy. Any help as to how I could solve this problem is most welcome, because this is literally driving me nuts.
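    A hedged diagnostic, with assumptions: before hunting for temp files, it is worth confirming the disk noise is really swap rather than cache writeback, and seeing which processes actually hold swapped-out pages. A sketch, assuming a kernel new enough to report VmSwap in /proc:

        # is the system actually swapping? watch the si/so columns
        vmstat 1 5
        # rank processes by swapped-out memory (VmSwap needs kernel >= 2.6.34)
        for f in /proc/[0-9]*/status; do
            awk '/^Name:/{n=$2} /^VmSwap:/{print $2, n}' "$f"
        done | sort -nr | head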

  • Openldap with ppolicy

    - by nitins
    We have a working installation of OpenLDAP version 2.4 which is using shadowAccount attributes. I want to enable the ppolicy overlay. I have gone through the steps provided in the OpenLDAP and ppolicy howto: I made the changes to slapd.conf and imported the password policy. On restart, OpenLDAP works fine and I can see the password policy when I do an ldapsearch. The user object looks like this:

        # extended LDIF
        #
        # LDAPv3
        # base <dc=xxxxx,dc=in> with scope subtree
        # filter: uid=testuser
        # requesting: ALL
        #

        # testuser, People, xxxxxx.in
        dn: uid=testuser,ou=People,dc=xxxxx,dc=in
        uid: testuser
        cn: testuser
        objectClass: account
        objectClass: posixAccount
        objectClass: top
        objectClass: shadowAccount
        shadowMax: 90
        shadowWarning: 7
        loginShell: /bin/bash
        uidNumber: 569
        gidNumber: 1005
        homeDirectory: /data/testuser
        userPassword:: xxxxxxxxxxxxx
        shadowLastChange: 15079

    The password policy is given below:

        # default, policies, xxxxxx.in
        dn: cn=default,ou=policies,dc=xxxxxx,dc=in
        objectClass: top
        objectClass: device
        objectClass: pwdPolicy
        cn: default
        pwdAttribute: userPassword
        pwdMaxAge: 7776002
        pwdExpireWarning: 432000
        pwdInHistory: 0
        pwdCheckQuality: 1
        pwdMinLength: 8
        pwdMaxFailure: 5
        pwdLockout: TRUE
        pwdLockoutDuration: 900
        pwdGraceAuthNLimit: 0
        pwdFailureCountInterval: 0
        pwdMustChange: TRUE
        pwdAllowUserChange: TRUE
        pwdSafeModify: FALSE

    I do not know what should be done after this. How can the shadowAccount attributes be replaced with the password policy?
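    A hedged sketch of the cut-over, based on assumptions rather than the quoted howto: once the overlay's default policy applies, the pwd* attributes take over the role the shadow attributes were playing, so the per-user shadow expiry attributes can simply be removed, for example:

        ldapmodify -x -D "cn=admin,dc=xxxxxx,dc=in" -W <<'EOF'
        dn: uid=testuser,ou=People,dc=xxxxx,dc=in
        changetype: modify
        delete: shadowMax
        -
        delete: shadowWarning
        EOF

    Note, also an assumption: clients enforcing expiry through pam_unix/shadow would need to authenticate through LDAP (e.g. pam_ldap) for the ppolicy rules to actually be applied.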

  • Trying to delete a directory stored on a Windows server, mounted on a Mac

    - by AdamG
    I am trying to delete a directory stored on a Windows 2008 R2 server, mounted on a Mac as a network home (10.8.5). The directory was created by Safari and stores temporary internet files. I need to be able to delete this folder on logout from a Mac bash script. The Terminal on the Mac shows the directory as empty:

        36W-FacRm-02:History lwickham$ cd /home/lwickham/Library/Caches/Metadata/Safari/History
        36W-FacRm-02:History lwickham$ ls -al
        total 0
        drwx------ 1 lwickham CGPS\Domain Users 264 Nov 8 09:24 .
        drwx------ 1 lwickham CGPS\Domain Users 264 Nov 8 09:28 ..

    However, on the Windows server it has a single 0 kB file that doesn't start with a "." and yet is invisible to the Mac:

        E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History>dir
         Volume in drive E is FacultyUsers2
         Volume Serial Number is 8C17-4EF3

         Directory of E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History

        11/08/2013  09:24 AM    <DIR>          .
        11/08/2013  09:24 AM    <DIR>          ..
        11/07/2013  04:28 PM                 0 http?%2F%2Fwww.google.com%2Furl?sa=t&rct=j&q=&esrc=s&source=web&cd=6&ved=0CFsQFjAF&url=http%253A%252F%252Fwww.usbanklocations.com%252Fhsbc-bank-usa-96th-street-branch.html&ei=5vR7UtmXEPjfsATe0YCIBA&usg=AFQjCNF9ypKbpYbXRng00FY3W8Y6cF1Tiw&bvm=bv.56146854,d.
                       1 File(s)              0 bytes
                       2 Dir(s)  514,231,967,744 bytes free

    All my attempts to delete the directory from the Mac have failed:

        36W-FacRm-02:History lwickham$ rm -fr /home/lwickham/Library/Caches/Metadata/Safari/History/*
        36W-FacRm-02:History lwickham$ rm -frd /home/lwickham/Library/Caches/
        rm: /home/lwickham/Library/Caches//Metadata/Safari/History: Directory not empty
        rm: /home/lwickham/Library/Caches//Metadata/Safari: Directory not empty
        rm: /home/lwickham/Library/Caches//Metadata: Directory not empty
        rm: /home/lwickham/Library/Caches/: Directory not empty
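    A hedged reading, an assumption from the listing above: the file name contains characters such as "?" that are stored on NTFS but that the Mac's network client will not list, so the Mac never sees the entry it would need to delete. One hedged workaround sketch, run from any machine with smbclient available (server, share, and credentials here are hypothetical), bypassing the Mac's mount and talking to the share directly:

        # list and remove the raw entry over SMB (names are assumptions)
        smbclient //winserver/FacultyHome2 -U lwickham -c \
          'cd lwickham/Library/Caches/Metadata/Safari/History; ls; rm "http*"; cd ..; rmdir History'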

  • Conky starts above windows in Ubuntu Maverick

    - by DesertIvy
    Hey guys, I did not run into this problem until I upgraded my Ubuntu box to Maverick Meerkat (10.10). Basically, whenever I start my computer, conky runs as expected, except it gets drawn over any windows that I load (see screenshot). To fix this for a single session, I simply restart conky by running killall conky; conky in a terminal. Conky gets re-drawn below active windows (namely, only appearing on my desktop) and does not have the border/drop-shadow, but I have to do this every time I start a new session. Is there a simple way to fix this? I have a small shell script that I run on startup, but it does not seem to solve the problem.

        #!/bin/bash
        sleep 10 && conky;
        sleep 5 && killall conky;
        conky;

    Below is the non-text part of my .conkyrc file.

        # Conky settings #
        background yes
        update_interval 1
        cpu_avg_samples 2
        net_avg_samples 2
        override_utf8_locale yes
        double_buffer yes
        no_buffers yes
        text_buffer_size 2048
        #imlib_cache_size 0
        temperature_unit fahrenheit

        # Window specifications #
        own_window yes
        own_window_type override
        own_window_transparent yes
        own_window_hints undecorate,sticky,skip_taskbar,skip_pager,below
        border_inner_margin 0
        border_outer_margin 0
        minimum_size 200 250
        maximum_width 200
        alignment tr
        gap_x 220
        gap_y 280

        # Graphics settings #
        draw_shades no
        draw_outline no
        draw_borders no
        draw_graph_borders no

        # Text settings #
        use_xft yes
        xftfont caviar dreams:size=8
        xftalpha 0.5
        uppercase no
        temperature_unit celsius
        default_color FFFFFF

        # Lua Load #
        lua_load ~/.lua/scripts/clock_rings.lua
        lua_draw_hook_pre clock_rings
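    A hedged workaround, built on the assumption that the window manager is not ready when conky first draws (which is why a later restart behaves): delay a single conky start until the session has settled, instead of the start/kill/restart dance in the script above:

        #!/bin/bash
        # start one conky only after the desktop has settled (the delay is a guess)
        killall -q conky
        sleep 20
        exec conky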

  • Script errors when run by launchd at startup, but not when run in Terminal

    - by Mechcozmo
    Hello. I'm attempting to create a RAM disk that loads its previous contents when the system starts up, and every six hours writes the contents to a disk image. Currently, when you run the script from the terminal ("sudo bash LogToRAM.sh") everything works fine, but when run from launchd during startup, it doesn't work. Here are the lines from the log; the first line just gives some idea as to where in the boot process we are:

        SecurityAgent[202] Showing Login Window
        com.mechcozmo.LogToRAM[51] + /Developer/usr/bin/SetFile -a V /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] ERROR: File Not Found. (-43) on file: /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] + /usr/sbin/asr -source '/Library/Application Support/LogToRAM/RAMdisk_store.dmg' -target /Volumes/LogfileRAMdisk/ -noverify

    Here is the script and plist file in question. Note that 'set -vx' is up at the top of the script; it gives a lot of information about what is happening in the script. My current theory is that the /Volumes directory does not exist at this stage of the boot process, but that seems unlikely, to be honest.
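    A hedged reading of the log, an assumption since the script itself is not shown here: SetFile fails with -43 because /Volumes/LogfileRAMdisk has not been created and mounted yet at that point in boot, so the fix is to create the RAM disk before anything touches it. A minimal sketch:

        #!/bin/bash
        # create and mount the RAM disk first (size is in 512-byte sectors; ~512 MB here)
        dev=$(hdiutil attach -nomount ram://1048576)
        diskutil erasevolume HFS+ "LogfileRAMdisk" "$dev"
        # only then restore the saved image onto it
        /usr/sbin/asr -source '/Library/Application Support/LogToRAM/RAMdisk_store.dmg' \
            -target /Volumes/LogfileRAMdisk -noverify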

  • Rsync when run in cron doesn't work: rsync between Mac OS X Server and Linux CentOS

    - by Brady
    I have a working rsync setup between a Mac OS X Server and Linux CentOS when run manually in a terminal: I enter the rsync command, it asks for the password, I enter it, and off it goes, runs and completes. Now that I know it works, I set out to fully automate it via cron. First off, I created an SSH authorized key by running this command on the Mac server:

        ssh-keygen -t dsa -b 1024 -f /Users/admin/Documents/Backup/rsync-key

    entering the password and then confirming it. I then copied the rsync-key.pub file across to the Linux server, placed it in the rsync user's .ssh folder, and renamed it to authorized_keys:

        /home/philosophy/.ssh/authorized_keys

    I then made sure that the authorized_keys file was chmod 600 and the folder chmod 700. I then set up a shell script for cron to run:

        #!/bin/bash
        RSYNC=/usr/bin/rsync
        SSH=/usr/bin/ssh
        KEY=/Users/admin/Documents/Backup/rsync-key
        RUSER=philosophy
        RHOST=example.com
        RPATH=data/
        LPATH="/Volumes/G Technology G Speed eS/Backup"
        $RSYNC -avz --delete --progress -e "$SSH -i $KEY" "$LPATH" $RUSER@$RHOST:$RPATH

    then gave the shell file execute permissions and added the following to the crontab using crontab -e:

        29 12 * * * /Users/admin/Documents/Backup/backup.sh

    I checked my crontab log file after the above should have run, and I get this in the log and nothing else:

        Feb 21 12:29:00 fileserver /usr/sbin/cron[80598]: (admin) CMD (/Users/admin/Documents/Backup/backup.sh)

    So I assume everything has run as it should. But when I check the remote server, no files have been copied across. If I run the backup.sh file in a terminal as normal, it still prompts for a password, but this time through the Mac keychain system rather than typing into the console window. With the Mac keychain I can set it to save the password so that it doesn't ask again, but I'm sure this saved password isn't picked up when run from cron. This is where I assume rsync in cron is failing: it needs a password to connect, but I thought the whole idea of making the SSH keys was to prevent the use of a password. Have I missed a step or done something wrong here? Thanks, Scott
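    A hedged diagnosis, an assumption suggested by the keychain prompt: ssh-keygen was given a passphrase, so the private key itself is encrypted and ssh prompts for that passphrase; cron has no keychain or agent, so the connection silently fails. Regenerating the key without a passphrase (or stripping it from the existing key) should fix it:

        # regenerate without a passphrase (-N ""), overwriting the old key...
        ssh-keygen -t dsa -b 1024 -N "" -f /Users/admin/Documents/Backup/rsync-key
        # ...or strip the passphrase from the existing key instead:
        # ssh-keygen -p -N "" -f /Users/admin/Documents/Backup/rsync-key
        # verify that no prompt appears:
        ssh -i /Users/admin/Documents/Backup/rsync-key [email protected] true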
