Search Results

Search found 6637 results on 266 pages for 'usr'.

Page 226/266

  • Cannot find FIS partition 'initramfs' - need help

    - by vikramtheone
    Hi guys, I have Ubuntu 9.04 running on Freescale's i.MX515 (ARM Cortex based) board. There were about 250 updates pending and I applied them today; some of the updates failed with the infamous errors:

      E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
      E: Couldn't rebuild package cache
      E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

    So, when I run 'sudo dpkg --configure -a' I get new errors related to the FIS partition:

      Cannot find FIS partition 'initramfs'
      User postinst hook script [/usr/sbin/flash-kernel] exited with value 1
      dpkg: error processing linux-image-2.6.28-18-imx51 (--configure):
       subprocess post-installation script returned error exit status 1
      dpkg: dependency problems prevent configuration of linux-image-imx51:
       linux-image-imx51 depends on linux-image-2.6.28-18-imx51; however:
        Package linux-image-2.6.28-18-imx51 is not configured yet.
      dpkg: error processing linux-image-imx51 (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of linux-imx51:
       linux-imx51 depends on linux-image-imx51 (= 2.6.28.18.23); however:
        Package linux-image-imx51 is not configured yet.
      dpkg: error processing linux-imx51 (--configure):
       dependency problems - leaving unconfigured
      Processing triggers for initramfs-tools ...
      update-initramfs: Generating /boot/initrd.img-2.6.28-18-imx51
      Cannot find FIS partition 'initramfs'
      dpkg: subprocess post-installation script returned error exit status 1

    What is going wrong here? I need help; I'm a newbie. Regards, Vikram
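
    A minimal sketch of a common workaround, assuming the board does not actually boot from a RedBoot FIS directory and the kernel can safely be written by other means (the path and package names come from the error output above; verify them on the board first). The idea is to neutralize the failing flash-kernel postinst hook so dpkg can finish configuring, then restore it:

      # divert the failing hook and stub it out (assumption: skipping the flash step is safe here)
      sudo dpkg-divert --local --rename --add /usr/sbin/flash-kernel
      sudo ln -s /bin/true /usr/sbin/flash-kernel
      # let dpkg finish the interrupted configuration
      sudo dpkg --configure -a
      # restore the original hook afterwards
      sudo rm /usr/sbin/flash-kernel
      sudo dpkg-divert --local --rename --remove /usr/sbin/flash-kernel

    This only sidesteps the hook; the new kernel and initramfs still have to be installed wherever the board actually boots from, which depends on the bootloader in use.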

    Read the article

  • FTP Server on Centos 5.8 - Transfer fails randomly

    - by Diego
    I have ProFTPD running on a brand new CentOS 5.8 server with Plesk, and its behaviour is inconsistent at best. I tried to transfer a directory from my PC, and every time the transfer fails on a random file. It's never the same one that fails, it just fails. Sometimes it's a .gif, sometimes it's a .css, sometimes it's a JPG. Of several hundred files, a dozen always fail for no apparent reason. The error that I get is the following:

      COMMAND:> [27/11/2012 11:43:52] STOR main_border.gif
               [27/11/2012 11:43:53] 500 Invalid command: try being more creative
      ERROR:>  [27/11/2012 11:43:53] Syntax error: command unrecognized.

    The above is just an example; the "command unrecognized" occurs with LIST and other commands as well. Here's the ProFTPD configuration, just in case:

      ServerName "ProFTPD"
      #ServerType standalone
      ServerType inetd
      DefaultServer on
      <Global>
        DefaultRoot ~ psacln
        AllowOverwrite on
      </Global>
      DefaultTransferMode binary
      UseFtpUsers on
      TimesGMT off
      SetEnv TZ :/etc/localtime
      Port 21
      Umask 022
      MaxInstances 30
      ScoreboardFile /var/run/proftpd/scoreboard
      TransferLog /usr/local/psa/var/log/xferlog
      # Change default group for new files and directories in vhosts dir to psacln
      <Directory /var/www/vhosts>
        GroupOwner psacln
      </Directory>
      # Enable PAM authentication
      AuthPAM on
      AuthPAMConfig proftpd
      IdentLookups off
      UseReverseDNS off
      AuthGroupFile /etc/group
      Include /etc/proftpd.include

    Note: the file /etc/proftpd.include is blank. The above is the default configuration set by Plesk 11. I don't know much about why it is that way; my knowledge of Linux system administration is very basic and my knowledge of ProFTPD is a complete zero. Thanks in advance for the help.

    Update: the issue occurs with both CuteFTP and FileZilla.

    Update: I replaced ProFTPD with PureFTPd and the issue persists. Sometimes I get "command unrecognized", sometimes "failed to establish data connection". I'm starting to think that it could be a network issue, but I have completely zero knowledge of networking.
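
    Random "command unrecognized" errors on otherwise valid commands often point at a firewall or NAT helper mangling the FTP control connection rather than at the FTP daemon itself, which would also explain why swapping ProFTPD for PureFTPd changed nothing. A sketch of one way to take the guesswork out of the data connections, assuming the clients use passive mode (the port range is an example, and the directive belongs in the ProFTPD configuration):

      # proftpd.conf: pin passive data connections to a known range
      PassivePorts 49152 50000

      # open the control port and that range in iptables, and load the FTP helper
      iptables -I INPUT -p tcp --dport 21 -j ACCEPT
      iptables -I INPUT -p tcp --dport 49152:50000 -j ACCEPT
      modprobe ip_conntrack_ftp   # nf_conntrack_ftp on newer kernels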

    Read the article

  • YUM error. Is this a cert error?

    - by Julia Roberts
      Nov 13 13:38:57 host abrt: detected unhandled Python exception in '/usr/bin/yum'
      Nov 13 13:38:57 host abrtd: New client connected
      Nov 13 13:38:57 host abrt-server[3508]: Saved Python crash dump of pid 3151 to /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151
      Nov 13 13:38:57 host abrtd: Directory 'pyhook-2012-11-13-13:38:57-3151' creation detected
      Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
      Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-former
      Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-release
      Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-rhx
      Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
      Nov 13 13:38:57 host abrtd: Package 'yum' isn't signed with proper key
      Nov 13 13:38:57 host abrtd: 'post-create' on '/var/spool/abrt/pyhook-2012-11-13-13:38:57-3151' exited with 1
      Nov 13 13:38:57 host abrtd: Corrupted or bad directory /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151, deleting

    There is also nothing in the crash dump file. Ideas?

      yum update
      Loaded plugins: fastestmirror, rhnplugin, security
      An error has occurred: Internal Server Error
      See /var/log/up2date for more information

    Is yum broken?

    Read the article

  • bash script with permanent ssh connection

    - by samuelf
    Hi, I use a bash script which runs

      /usr/bin/ssh -f -N -T -L8888:127.0.0.1:3306 [email protected]

    However, when I run the bash script, it waits. I see the connection coming up, but the script doesn't exit; it's like it's waiting for the SSH process to finish, because when I manually kill it the bash script finishes as well. Any ideas how to resolve this?

    UPDATE: I have put this script in cron, and the cron process is the one that becomes a zombie; the actual script runs just fine, sorry about that. With ps -auxf I get:

      root       597  0.0  0.7  2372   912 ?  Ss  Jul12  0:00 cron
      root      2595  0.0  0.8  2552  1064 ?  S   02:09  0:00  \_ CRON
      1001      2597  0.0  0.0     0     0 ?  Zs  02:09  0:00      \_ [sh] <defunct>
      1001      2603  0.0  0.0     0     0 ?  Z   02:09  0:00          \_ [cron] <defunct>

    and when I kill the ssh process the defunct entries disappear. Why would they become defunct?
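
    One common explanation, offered as a sketch rather than a definite diagnosis: cron considers the job finished only when the pipes it opened for the job's stdout/stderr are closed, and a backgrounded ssh -f inherits those descriptors, so the job (and its defunct children) hangs around until the tunnel dies. Detaching ssh from cron's descriptors usually clears it up:

      # crontab entry (sketch; tunnel spec copied from the question, schedule is an example)
      0 * * * * /usr/bin/ssh -f -N -T -L8888:127.0.0.1:3306 [email protected] </dev/null >/dev/null 2>&1

    If the tunnel needs to survive drops, a supervisor such as autossh is the usual next step.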

    Read the article

  • CentOS tftp server is broken

    - by Mike Pennington
    I'm trying to run tftpd from xinetd on CentOS 6; however, I can only tftp from localhost. I have a file in /opt/tftpboot/fw.test.conf that I can retrieve if I tftp to localhost:

      [mpenning@localhost ~]$ tftp localhost
      tftp> get fw.test.conf
      tftp> quit
      [mpenning@localhost ~]$ ls
      fw.test.conf
      [mpenning@localhost ~]$

    However, I cannot receive this file if I tftp to eth1 on this server (the address on eth1 is 172.16.1.4).

      [mpenning@localhost ~]$ sudo tshark -i eth1 udp and host 172.16.1.5
      Running as user "root" and group "root". This could be dangerous.
      Capturing on eth1
        0.000000 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
        5.000133 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
       10.000184 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
       15.000297 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
       20.000331 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
      ^C5 packets captured
      [mpenning@localhost ~]$

    I have the following xinetd configuration:

      [root@localhost mpenning]# cat /etc/xinetd.d/tftp
      # default: off
      # description: The tftp server serves files using the trivial file transfer \
      #       protocol. The tftp protocol is often used to boot diskless \
      #       workstations, download configuration files to network-aware printers, \
      #       and to start the installation process for some operating systems.
      service tftp
      {
              socket_type             = dgram
              protocol                = udp
              wait                    = yes
              user                    = root
              server                  = /usr/sbin/in.tftpd
              server_args             = -s /opt/tftpboot
              disable                 = no
              per_source              = 11
              cps                     = 100 2
              flags                   = IPv4
      }
      [root@localhost mpenning]#
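
    The capture shows the read requests arriving on eth1 but never being answered, so on CentOS 6 the usual suspects are iptables (UDP port 69 plus the server's replies from a random port) and SELinux, not xinetd itself. A diagnostic sketch, assuming the default firewall and SELinux setup:

      # allow TFTP through the firewall and let conntrack match the reply traffic
      iptables -I INPUT -p udp --dport 69 -j ACCEPT
      modprobe nf_conntrack_tftp
      service iptables save

      # check whether SELinux is denying in.tftpd access to /opt/tftpboot
      getenforce
      grep tftp /var/log/audit/audit.log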

    Read the article

  • Missing libcurl.so.3 when updating to PHP 5.2.13

    - by exentric
    Hi, I am trying to update my PHP to 5.2.13; however, when I run yum update it gives me this dependency error:

      php-5.2.13-jason.1.i386 from utterramblings has depsolving problems
        --> Missing Dependency: libcurl.so.3 is needed by package php-5.2.13-jason.1.i386 (utterramblings)
      Error: Missing Dependency: libcurl.so.3 is needed by package php-cli-5.2.13-jason.1.i386 (utterramblings)
      Error: Missing Dependency: libcurl.so.3 is needed by package php-5.2.13-jason.1.i386 (utterramblings)

    I believe this problem was caused by my updating libcurl some time ago (to version 7.16.4-8.el5), but I have no idea how to solve this dependency issue. Some time ago my friend asked me about a missing libcurl.so.3 when running some script. I can't say I remember the details, but he did say he managed to solve it (at least on his end), so I paid no more attention to the libcurl.so.3 issue. But now when I try to update my PHP, the problem arises again. This file does indeed exist (and is presumably what solved my friend's issue):

      /usr/lib/libcurl.so.3

    Any thoughts on this matter? I'm using CentOS 5.3, PHP 5.2.11 and LightTPD. Regards
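
    Yum resolves library dependencies from the RPM database, not from files sitting on disk, so the presence of /usr/lib/libcurl.so.3 only helps if some installed package actually provides that soname. A sketch of how one might check what, if anything, satisfies the dependency before deciding on a compat package or a rebuilt PHP:

      # which installed package claims to provide the soname yum is asking for?
      rpm -q --whatprovides 'libcurl.so.3'
      # which available package could provide it?
      yum provides 'libcurl.so.3'
      # which package owns the file that is already on disk?
      rpm -qf /usr/lib/libcurl.so.3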

    Read the article

  • Cannot change PostgreSQL port

    - by Jerec TheSith
    I run PostgreSQL 8.4 as a service on a CentOS 6.2 server. I set port = 21444 and listen_addresses = '*' in /var/lib/pgsql/data/postgresql.conf and I changed 5432 to 21444 in postmaster.opts and restarted postgres, but when I run netstat -lntp postgresql is still listening on port 5432:

      tcp   0   0 0.0.0.0:5432   0.0.0.0:*   LISTEN   20276/postmaster

    When I restart postgresql I get a write error warning on /proc/self/oom_adj, but the service starts anyway. I read that we could get this error when using virtualized servers, but I don't really know whether it has any impact on the port postgresql listens on. The correct pgsql config file is loaded from /var/lib/pgsql/data:

      [root@srv02 ~]# ps -ef | grep postgres
      root      1358 22140  0 09:42 pts/0   00:00:00 grep postgres
      postgres  9519     1  0 Mar16 ?       00:00:01 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
      postgres  9573  9519  0 Mar16 ?       00:00:00 postgres: logger process
      postgres  9575  9519  0 Mar16 ?       00:00:05 postgres: writer process
      postgres  9576  9519  0 Mar16 ?       00:00:03 postgres: wal writer process
      postgres  9577  9519  0 Mar16 ?       00:00:01 postgres: autovacuum launcher process
      postgres  9578  9519  0 Mar16 ?       00:00:01 postgres: stats collector process

    Any thoughts? Thanks, Jerec
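
    The ps output shows the running postmaster was started with an explicit "-p 5432", which overrides postgresql.conf (postmaster.opts only records the options used at the last start; it is not read as configuration). On the stock Red Hat / CentOS packaging the init script takes the port from /etc/sysconfig/pgsql, so a sketch, assuming that init script is in use:

      # tell the init script which port to pass to the postmaster, then restart cleanly
      echo 'PGPORT=21444' >> /etc/sysconfig/pgsql/postgresql
      service postgresql restart
      netstat -lntp | grep postmaster   # should now show 0.0.0.0:21444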

    Read the article

  • Where to get glib-config for Kubuntu?

    - by Carl Smotricz
    I'm trying to compile Midnight Commander on a Kubuntu 9.10 (Karmic) box with no root access. I've set up a directory under $HOME, downloaded the mc source package and various stuff required for building, such as autotools. I've unpacked the CONTENTS of all those packages into this working directory such that I have the usual ./usr, ./lib, ./etc hierarchy. I manage to get configure through a lot of tests, but I can't seem to fool it into finding glib.

      checking for glib-2.0... checking for glib-config... no
      checking for glib12-config... no
      checking for glib-config... no
      checking for GLIB - version >= 1.2.6... no
      *** The glib-config script installed by GLIB could not be found
      *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in
      *** your path, or set the GLIB_CONFIG environment variable to the
      *** full path to glib-config.
      configure: error: Test for glib failed. GNU Midnight Commander requires glib 1.2.6 or above.

    My system has glib installed:

      /lib/libglib-2.0.so.0
      /lib/libglib-2.0.so.0.2200.3

    ... and I've also downloaded and unpacked the glib package into my working directory:

      libglib2.0-0_2.22.2-0ubuntu1_i386.deb
      libglib2.0-dev_2.22.2-0ubuntu1_i386.deb

    ... but still the elusive glib-config is nowhere to be found. It's not in any Debian package for Karmic, either. So I'd appreciate any help getting over this hurdle. Please note, again, that I don't have root, so I can't just merrily apt-get stuff.
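
    For what it's worth, glib-config only ever shipped with glib 1.x; glib 2.x is discovered through pkg-config, so recent mc releases look for a glib-2.0.pc file rather than a glib-config script. A sketch of pointing configure at the unpacked libglib2.0-dev files without root (the directory name is an assumption; use wherever the .deb contents were extracted):

      # make the locally unpacked glib-2.0.pc visible to pkg-config
      export PKG_CONFIG_PATH="$HOME/mc-build/usr/lib/pkgconfig:$PKG_CONFIG_PATH"
      pkg-config --modversion glib-2.0      # sanity check
      ./configure --prefix="$HOME/mc-install"

    If the particular mc source really insists on glib 1.2 (as the message suggests), the alternative is to build a private copy of glib 1.2 under $HOME so that its glib-config ends up on the PATH.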

    Read the article

  • ssh connection operation timed out using rsync

    - by Mark Molina
    I use rsync to back up my remote server on my local device, but when I combine it with a cron job my ssh times out. Just to be clear: the data is stored on a remote server and I want it stored on my local server, so the backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it in a terminal like this:

      rsync -chavzP --stats USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    but when I combine it with a cron job like this:

      10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    the ssh connection times out. When the cron job executes it sends a mail to the root user with output like this:

      From local.xx.xx.xx Tue Jul  2 11:20:17 2013
      X-Original-To: username
      Delivered-To: [email protected]
      From: [email protected] (Cron Daemon)
      To: [email protected]
      Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <PATH=/usr/bin:/bin>
      X-Cron-Env: <LOGNAME=username>
      X-Cron-Env: <USER=username>
      X-Cron-Env: <HOME=/Users/username>
      Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST)

      ssh: connect to host IP_ADDRESS port XX: Operation timed out
      rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
      rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9]

    So the rsync command works when typed in a terminal but not when run from a cron job. Can anybody explain this?
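
    Cron runs the job with the minimal environment visible in those mail headers (PATH=/usr/bin:/bin, plain sh), so anything the interactive shell provides, such as an ssh-agent, a keychain-unlocked key, or a ~/.ssh/config alias, is absent. A diagnostic sketch that rules out environment differences and records the failure (the key name is a placeholder for a passphrase-free key that is authorized on the remote side):

      # crontab entry (sketch)
      10 11 * * * /usr/bin/rsync -chavzP --stats \
          -e "/usr/bin/ssh -i $HOME/.ssh/backup_key -o BatchMode=yes -o ConnectTimeout=30" \
          USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP >> $HOME/rsync-backup.log 2>&1

    If the log still shows "Operation timed out", the problem is reachability at that time of day (routing, VPN, remote firewall) rather than authentication.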

    Read the article

  • NGINX server_name issues

    - by Unai
    I have the following simple server block in NGINX:

      server {
          listen 80;
          listen 8090;
          server_name domain.com;
          autoindex on;
          root /home/docroot;
          location ~ \.php$ {
              include /usr/local/nginx/conf/fastcgi_params;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /home/docroot$fastcgi_script_name;
              fastcgi_pass 127.0.0.1:9000;
          }
      }

    After I add the relevant entries to my hosts file I get the following (unexpected) behavior:

      1. http: //domain.com/ and http: //domain.com:8090/ work fine;
      2. http: //domain.com:8090/future-cell-phone-technology-01-150x150.jpg works;
      3. http: //domain.com/future-cell-phone-technology-01-150x150.jpg - ERROR! "The connection was reset"

    (Note: a space was added after "http:" to avoid link protection; this is not really promoting anything.) I've been troubleshooting (3) for a couple of hours and I'm unable to identify the culprit. I'm running NGINX 1.0.10 (latest stable) on Debian 6.0.2 32-bit. This NGINX instance runs another 40 or 50 sites with no problems.
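
    Since the very same file is served correctly on port 8090 but the connection is reset on port 80, a plausible suspect is that some other server block or daemon is answering port 80 for that request. A diagnostic sketch (hostnames and paths are taken from the question):

      # what is actually bound to port 80?
      netstat -lntp | grep ':80 '
      # ask this nginx instance directly, bypassing DNS/hosts ambiguity
      curl -v -H 'Host: domain.com' http://127.0.0.1/future-cell-phone-technology-01-150x150.jpg -o /dev/null
      # look for other vhosts that also listen on 80 or match this server_name
      grep -rnE 'listen|server_name' /usr/local/nginx/conf/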

    Read the article

  • How do I solve this error: ssh_exchange_identification: Connection closed by remote host

    - by bernie
      data@server01:~$ ssh [email protected] -vvv
      OpenSSH_5.5p1 Debian-6, OpenSSL 0.9.8o 01 Jun 2010
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to 10.7.4.1 [10.7.4.1] port 22.
      debug1: Connection established.
      debug3: Not a RSA1 key file /home/data/.ssh/id_rsa.
      debug2: key_type_from_name: unknown key type '-----BEGIN'
      debug3: key_read: missing keytype
      debug3: key_read: missing whitespace
      (the previous "key_read: missing whitespace" line repeats roughly two dozen times)
      debug2: key_type_from_name: unknown key type '-----END'
      debug3: key_read: missing keytype
      debug1: identity file /home/data/.ssh/id_rsa type 1
      debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
      debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
      debug1: identity file /home/data/.ssh/id_rsa-cert type -1
      debug1: identity file /home/data/.ssh/id_dsa type -1
      debug1: identity file /home/data/.ssh/id_dsa-cert type -1
      ssh_exchange_identification: Connection closed by remote host
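
    The client-side trace shows the TCP connection being established and then closed before the SSH banner exchange, which normally points at the server end: tcp_wrappers (hosts.deny), a MaxStartups limit, or a broken sshd. A diagnostic sketch to run on the remote host, assuming some other form of access to it:

      # is sshd refusing the client before authentication even starts?
      grep -i ssh /etc/hosts.deny /etc/hosts.allow
      tail -n 50 /var/log/auth.log        # /var/log/secure on RHEL-type systems
      # run a second sshd in debug mode on a spare port and connect to that instead
      /usr/sbin/sshd -d -p 2222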

    Read the article

  • my X server doesn't load a module called "glx", but my video drivers seem to be installed

    - by rumtscho
    I just got a new, very wide monitor (2560x1440) and there is no sense maximizing my applications, so I installed Compiz Config Settings Manager and enabled Grid. Nothing happened; the shortcuts don't move application windows. I went to System - Preferences - Appearance, the Visual Effects tab. It was set to "disabled". When I try to set it to "normal" or "extra", a message box appears telling me that it's searching for video drivers, then disappears, and I get an error message "Desktop effects could not be enabled". I opened Xorg.0.log and found these errors:

      (EE) Failed to load /usr/lib/xorg/modules/extensions//libglx.so
      (II) UnloadModule: "glx"
      (EE) Failed to load module "glx" (loader failed, 7)
      (II) LoadModule: "extmod"

    Going to Administration - System - Hardware Drivers, it said that there are no available and/or installed hardware drivers. But apt-get said that it cannot install nvidia-glx-185, as it is already installed. Googling my error message suggested that I install and run something called EnvyNG. This let me install the nvidia drivers again, and now I can see in the Hardware Drivers window that they are installed and active. But the error message in Xorg.0.log remains, and I still cannot enable the Compiz effects or use Grid. Now, I don't have enough Linux experience to understand whether this is a single cause-and-effect chain of problems, or three independent ones, but I'd appreciate help with any of them. I am running Ubuntu 9.10; the video card is a GeForce 7600GS.

    Read the article

  • APC uptime 0 because of FastCGI

    - by demlasjr
    I have a VPS using Parallels/Plesk (11.0.9 Update #22, last updated Oct 31, 2012 03:33 AM; CentOS 6.3 (Final) x86_64). I have Apache (CGI/FastCGI) installed and nginx as a reverse proxy. Everything is working just fine. I installed APC for caching, but the issue is that the uptime is always 0: it restarts every 15 seconds or so. I checked everywhere and can't find a solution to fix it. The server has graceful restart enabled, but only every 6 hours, which shouldn't influence the APC uptime. Searching Google I found that this could be related to Apache running with mod_fcgid instead of FastCGI. Plesk/Apache is using this config file:

      /usr/local/psa/admin/conf/templates/default/service/php_over_fastcgi.php

    whose content is:

      <IfModule mod_fcgid.c>
      <Files ~ (\.php)>
      SetHandler fcgid-script
      FCGIWrapper <?php echo $VAR->server->webserver->apache->phpCgiBin ?> .p$
      Options +ExecCGI
      allow from all
      </Files>

    Is the issue here, or elsewhere? How can I fix this to work with FastCGI and make APC work properly? I forgot to mention that even though the uptime is below one minute, APC is doing a pretty good job of caching (92% are hits).
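
    With mod_fcgid every PHP worker process carries its own APC shared-memory segment, and workers are recycled after a limited number of requests, so the "uptime" APC reports resets whenever a request lands on a freshly spawned worker; the cache still works, which matches the 92% hit rate. A sketch of the usual mitigation, letting each worker live longer (directive values are examples, not Plesk defaults, and belong in the Apache/mod_fcgid configuration):

      # keep PHP FastCGI workers around longer so their APC segments persist
      FcgidMaxRequestsPerProcess 5000
      FcgidIdleTimeout 1800
      FcgidProcessLifeTime 7200

    If the PHP wrapper script sets PHP_FCGI_MAX_REQUESTS, it should be at least as large as FcgidMaxRequestsPerProcess so PHP does not exit first.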

    Read the article

  • SQUID proxy - open FTP (and other ports)

    - by gaffcz
    How can I open ports other than HTTP and HTTPS using a SQUID proxy? I have the latest version of Squid running on Fedora 10, but I'm not able to open the FTP port. Part of my squid.conf:

      acl manager proto cache_object
      acl localhost src 127.0.0.1/32 ::1
      acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
      acl ftp proto FTP
      acl ftp_port port 21
      always_direct allow FTP
      acl SSL_ports port 443 20 21 22
      acl Safe_ports port 20          # ftp
      acl Safe_ports port 21          # ftp
      acl Safe_ports port 22          # sftp
      acl Safe_ports port 80          # http
      acl Safe_ports port 280         # http-mgmt
      acl Safe_ports port 443         # https
      acl Safe_ports port 1025-65535  # unregistered ports
      acl CONNECT method CONNECT
      http_access allow manager localhost
      http_access deny manager
      # USER privileges (encoded in file passwd)
      auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
      acl AUTHUSERS proxy_auth REQUIRED
      # BLACKLIST (in file denied.conf)
      acl denied_domains dstdomain "/etc/squid/DNDdomains.conf"
      acl denied_regex url_regex "/etc/squid/DNDregex.conf"
      http_access deny denied_regex
      http_access deny denied_domains
      http_access allow AUTHUSERS
      http_access deny !Safe_ports
      http_access deny CONNECT !SSL_ports
      http_access allow ftp_port CONNECT
      http_access allow ftp
      http_access allow localhost
      http_access deny all
      #http_reply_access allow all
      #http_access allow all
      http_port 3128
      hierarchy_stoplist cgi-bin ?
      cache_dir ufs /var/spool/squid 10000 16 256
      coredump_dir /var/spool/squid
      refresh_pattern ^ftp:            1440   20%   10080
      refresh_pattern -i (/cgi-bin/|\?)   0    0%       0
      refresh_pattern .                   0   20%    4320

    I've tried to add "acl ftp proto FTP" / "acl ftp_port port 21" with "http_access allow ftp", added and removed ports 20 and 21 from the SSL_ports list, and set up iptables, but nothing helped. Is it even possible to use a new version of Squid for FTP transfers?
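
    Two things worth keeping in mind, stated as general Squid behaviour rather than a guaranteed fix: Squid is an HTTP proxy, so it only fetches ftp:// URLs on behalf of clients that speak HTTP to it (for example a browser with a proxy setting); a native FTP client pointed at port 3128 will not work and needs either NAT or a dedicated FTP proxy (frox is one example). Also, http_access rules are evaluated top to bottom with the first match winning, so "http_access allow AUTHUSERS" already matches before the port rules below it ever run. A sketch of the conventional ordering, reusing the acl names from the config above:

      http_access deny !Safe_ports
      http_access deny CONNECT !SSL_ports
      http_access deny denied_regex
      http_access deny denied_domains
      http_access allow AUTHUSERS
      http_access allow localhost
      http_access deny all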

    Read the article

  • Rebuild Fedora 19 ISO adding Kickstart for USB install

    - by dooffas
    I am attempting to edit a Fedora 19 DVD ISO to add a kickstart file. I then need this ISO burnt to a USB stick for installation. The error I get when booting is:

      Warning: Could not boot.
      Warning: /dev/root does not exist

    To try and determine which part of the process is failing I have broken the process down into separate stages.

    Step 1: Burn the original ISO "Fedora-19-x86_64-DVD.iso" (available here) to a pendrive and see if that will install.

      dd if=/path/to/iso of=/dev/sdc

    Burning this image was successful and it installed without issue.

    Step 2: Extract the ISO, repackage it, burn it to a pendrive and see if that will install. PLEASE NOTE: the final command in this section has been broken into multiple lines for ease of reading; in fact it was run as a single command on one line.

      mkdir -p /mnt/linux
      mount -o loop /tmp/linux-install.iso /mnt/linux
      cd /mnt/
      tar -cvf - linux | (cd /var/tmp/ && tar -xf - )
      cd /var/tmp/linux
      xorriso -as mkisofs -R -J -V "NewFedoraImage" -o ouput/file.iso \
        -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table \
        -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .

    This ISO was then burnt to a pendrive as before:

      dd if=/path/to/iso of=/dev/sdc

    This ISO burnt to the pendrive with no problem and will boot, and I then see the Fedora options screen. After choosing either "Install Fedora 19" or "Test this media & install Fedora 19" I receive the errors highlighted above. This means the kickstart file is not to blame, but the repackaging of the ISO. Is there something I am missing in the repackaging process? Any input would be great!

    NOTE: If it is of any help, I attempted Step 2 with an Ubuntu Server ISO and the process was successful.
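
    One detail that commonly causes exactly this "/dev/root does not exist" failure: the boot entries on the Fedora 19 media pass inst.stage2=hd:LABEL=<volume label> on the kernel command line, so if the rebuilt ISO gets a different volume id ("NewFedoraImage") the installer can no longer find its stage2 image. A sketch of checking and reusing the original label (the label shown below is an assumption; read it off the original ISO first):

      # what label does the original ISO carry, and what does its isolinux.cfg expect?
      blkid /tmp/linux-install.iso
      grep stage2 /var/tmp/linux/isolinux/isolinux.cfg

      # rebuild with that label instead of "NewFedoraImage"
      xorriso -as mkisofs -R -J -V "Fedora 19 x86_64" -o output/file.iso \
        -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table \
        -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .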

    Read the article

  • Puppet inventory service using puppetdb

    - by Oli
    I have 3 servers set up: a puppet master using Passenger (puppet-server1), the dashboard using Passenger (puppet-server2) and puppetdb (puppet-server3). I cannot get the inventory service working in the dashboard. The puppet master is able to sign certs and hand out manifests, and the nodes have checked in to the dashboard OK. PuppetDB appears to be working; its log files look like this:

      2012-12-13 17:53:10,899 INFO  [command-proc-74] [puppetdb.command] [8490148f-865a-45c8-b5b5-2c8824d753dd] [replace facts] puppet-server3.test.net
      2012-12-13 17:53:11,041 INFO  [command-proc-74] [puppetdb.command] [dfcc5168-06df-41d4-9a97-77b4cd3f4a2b] [replace catalog] puppet-server3.test.net
      2012-12-13 17:55:28,600 INFO  [command-proc-74] [puppetdb.command] [b2cc0a96-0404-49f5-96ad-19c778508d3d] [replace facts] puppet-client2.test.net
      2012-12-13 17:55:28,729 INFO  [command-proc-74] [puppetdb.command] [4dc4b8f3-06df-4dad-a89a-92ac80447b99] [replace catalog] puppet-client2.test.net

    The puppet master has the following configured in puppet.conf:

      [master]
      certname = puppet-server1.test.net
      storeconfigs = true
      storeconfigs_backend = puppetdb
      reports = store, http
      reporturl = http://puppet-server2.test.net/reports/upload

    The puppet master has the following configured in auth.conf:

      # access for puppet dashboard facts
      path /facts
      auth yes
      method find, search
      allow dashboard

    The puppet dashboard has this configured in /usr/share/puppet-dashboard/config/settings.yml:

      # Hostname of the inventory server.
      inventory_server: 'puppet-server3.test.net'
      # Port for the inventory server.
      inventory_port: 8081

    The inventory is on, as I can see a link to the inventory in the dashboard. But I am getting this error:

      Inventory
      Could not retrieve facts from inventory service: SSL_connect SYSCALL returned=5 errno=0 state=SSLv3 read finished A

    Clearly an SSL error, but I have followed the documentation and have no idea how to fix it. Can anyone help please? Oli

    Read the article

  • publickey authentication only works with existing ssh session

    - by aaron
    Publickey authentication only works for me if I've already got one ssh session open. I am trying to log into a host running Ubuntu 10.10 desktop with publickey authentication, and it fails when I first log in:

      [me@my-laptop:~]$ ssh -vv host
      ...
      debug1: Next authentication method: publickey
      debug1: Offering public key: /Users/me/.ssh/id_rsa
      ...
      debug2: we did not send a packet, disable method
      debug1: Next authentication method: password
      me@hosts's password:

    And the /var/log/auth.log output:

      Jan 16 09:57:11 host sshd[1957]: reverse mapping checking getaddrinfo for cpe-70-114-155-20.austin.res.rr.com [70.114.155.20] failed - POSSIBLE BREAK-IN ATTEMPT!
      Jan 16 09:57:13 host sshd[1957]: pam_sm_authenticate: Called
      Jan 16 09:57:13 host sshd[1957]: pam_sm_authenticate: username = [astacy]
      Jan 16 09:57:13 host sshd[1959]: Passphrase file wrapped
      Jan 16 09:57:15 host sshd[1959]: Error attempting to add filename encryption key to user session keyring; rc = [1]
      Jan 16 09:57:15 host sshd[1957]: Accepted password for astacy from 70.114.155.20 port 42481 ssh2
      Jan 16 09:57:15 host sshd[1957]: pam_unix(sshd:session): session opened for user astacy by (uid=0)
      Jan 16 09:57:20 host sudo: astacy : TTY=pts/0 ; PWD=/home/astacy ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/auth.log

    The strange thing is that once I've got this first login session, I run the exact same ssh command and publickey authentication works:

      [me@my-laptop:~]$ ssh -vv host
      ...
      debug1: Server accepts key: pkalg ssh-rsa blen 277
      ...
      [me@host:~]$

    And the /var/log/auth.log output is:

      Jan 16 09:59:11 host sshd[2061]: reverse mapping checking getaddrinfo for cpe-70-114-155-20.austin.res.rr.com [70.114.155.20] failed - POSSIBLE BREAK-IN ATTEMPT!
      Jan 16 09:59:11 host sshd[2061]: Accepted publickey for astacy from 70.114.155.20 port 39982 ssh2
      Jan 16 09:59:11 host sshd[2061]: pam_unix(sshd:session): session opened for user astacy by (uid=0)

    What do I need to do to make publickey authentication work on the first login?

    NOTE: When I installed Ubuntu 10.10, I checked the 'encrypt home folder' option. I'm wondering if this has something to do with the log message "Error attempting to add filename encryption key to user session keyring".
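
    The "encrypt home folder" note is very likely the key: with an ecryptfs-encrypted home, ~/.ssh/authorized_keys lives inside a mount that is only unlocked after the first successful (password) login, so sshd cannot read the key on the initial attempt but can once a session already exists. A sketch of the standard workaround, keeping the keys outside the encrypted home (run on the server; the paths are conventional, not mandatory):

      sudo mkdir -p /etc/ssh/authorized-keys
      sudo cp ~/.ssh/authorized_keys /etc/ssh/authorized-keys/$USER
      sudo chmod 644 /etc/ssh/authorized-keys/$USER

      # /etc/ssh/sshd_config
      AuthorizedKeysFile /etc/ssh/authorized-keys/%u

      # then reload sshd:  sudo service ssh restart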

    Read the article

  • When should we choose simple PHP mail() and when SMTP with login+password?

    - by user43353
    Hi, my case: a web application that needs to send 1,000 messages per day to a main Gmail account. (It only needs to send email, not receive it; it is not an email client.)

    Option 1 - use the PHP mail() function + sendmail + php.ini configuration.

    PHP example:

      <?php
      $to      = '[email protected]';
      $subject = 'the subject';
      $message = 'hello';
      $headers = 'From: [email protected]' . "\r\n" .
                 'Reply-To: [email protected]' . "\r\n" .
                 'X-Mailer: PHP/' . phpversion();
      mail($to, $subject, $message, $headers);
      ?>

    php.ini config (Ubuntu):

      sendmail_path = /usr/sbin/sendmail -t -i

    Pros: no email account needed, easy to set up. Cons: ?

    Option 2 - use Zend_Mail + an SMTP transport with username + password.

    PHP example (needs the Zend_Mail classes):

      $config = array('auth' => 'login',
                      'username' => 'myusername',
                      'password' => 'password');
      $transport = new Zend_Mail_Transport_Smtp('mail.server.com', $config);
      $mail = new Zend_Mail();
      $mail->setBodyText('This is the text of the mail.');
      $mail->setFrom('[email protected]', 'Some Sender');
      $mail->addTo('[email protected]', 'Some Recipient');
      $mail->setSubject('TestSubject');
      $mail->send($transport);

    Pros: ? Cons: ?

    Questions: Can option 1 be filtered by the Gmail server as spam? Please can you add pros and cons to the options above. Thanks
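
    Regarding the spam question: mail sent with option 1 leaves the web server's own IP unauthenticated, so deliverability to Gmail depends on that IP's reputation, reverse DNS and SPF/DKIM records, while option 2 hands those concerns to the SMTP provider you authenticate against. Whichever library is used, Gmail expects authenticated submission over TLS (port 587 with STARTTLS, or 465 with SSL), so a quick reachability check from the server is worthwhile before blaming the code; a sketch:

      # can the web server reach Gmail's submission port and negotiate STARTTLS?
      openssl s_client -connect smtp.gmail.com:587 -starttls smtp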

    Read the article

  • Rsync push files from Linux to Windows - ssh issue: connection refused

    - by piyush c
    For some reason I want to run a script to move files from a Linux machine to Windows. I have installed cwRsync on my Windows machine and am able to connect to the Linux machine. When I execute the following command:

      rsync -e "ssh -l "piyush"" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    where 10.0.0.60 is my Windows machine and I am running the above command on Linux (CentOS 5.5), I get the following error message:

      ssh: connect to host 10.0.0.60 port 22: Connection refused
      rsync: connection unexpectedly closed (0 bytes received so far) [sender]
      rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8]
      [root@localhost sync]# ssh [email protected]
      ssh: connect to host 10.0.0.60 port 22: Connection refused

    I have modified my firewall settings on Windows to allow all ports. I think this issue is due to an SSH daemon not being present on my Windows machine, so I tried installing OpenSSH and running ssh-agent, but that didn't help. I tried a similar command on my Windows machine to pull files from Linux and it works fine. For some reason I want the command to run on the Linux machine so that I can embed it in a shell script. Can you tell me if I am missing anything?

    Read the article

  • What is the difference between the Linux and Linux LVM partition type?

    - by ujjain
    Fdisk shows multiple partition types. What is the difference between choosing 83) Linux and 8e) Linux LVM? Choosing 83) Linux also works fine for using LVM; even creating a physical volume on /dev/sdb without a partition table works. Does picking a partition type in fdisk really matter? What is the difference in picking Linux or Linux LVM as partition type?

      [root@tst-01 ~]# fdisk /dev/sdb

      WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
               switch off the mode (command 'c') and change display units to
               sectors (command 'u').

      Command (m for help): l

       0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
       1  FAT12           39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
       2  XENIX root      3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
       3  XENIX usr       40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
       4  FAT16 <32M      41  PPC PReP Boot   85  Linux extended  c7  Syrinx
       5  Extended        42  SFS             86  NTFS volume set da  Non-FS data
       6  FAT16           4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
       7  HPFS/NTFS       4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
       8  AIX             4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
       9  AIX bootable    50  OnTrack DM      93  Amoeba          e1  DOS access
       a  OS/2 Boot Manag 51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
       b  W95 FAT32       52  CP/M            9f  BSD/OS          e4  SpeedStor
       c  W95 FAT32 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
       e  W95 FAT16 (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  GPT
       f  W95 Ext'd (LBA) 55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
      10  OPUS            56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
      11  Hidden FAT12    5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
      12  Compaq diagnost 61  SpeedStor       a9  NetBSD          f4  SpeedStor
      14  Hidden FAT16 <3 63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
      16  Hidden FAT16    64  Novell Netware  af  HFS / HFS+      fb  VMware VMFS
      17  Hidden HPFS/NTF 65  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE
      18  AST SmartSleep  70  DiskSecure Mult b8  BSDI swap       fd  Linux raid auto
      1b  Hidden W95 FAT3 75  PC/IX           bb  Boot Wizard hid fe  LANstep
      1c  Hidden W95 FAT3 80  Old Minix       be  Solaris boot    ff  BBT
      1e  Hidden W95 FAT1

      Command (m for help):
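
    The type byte is advisory metadata for partitioning tools and installers: the kernel and the LVM tools do not require 8e, but setting it documents intent and some automation (installers, recovery scripts) keys off it. A sketch showing that LVM works on a partition regardless of its type code (device names are examples, and these commands destroy any data on /dev/sdb1):

      pvcreate /dev/sdb1
      vgcreate vg_data /dev/sdb1
      lvcreate -n lv_data -L 10G vg_data
      mkfs.ext4 /dev/vg_data/lv_data

      # optionally set the type to 8e afterwards for clarity:
      # fdisk /dev/sdb  ->  t, select the partition, 8e, w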

    Read the article

  • eAccelerator Issue - Cache Directory Empty.

    - by Tom
    Hi all, hoping someone can give me a hand with this. I've recently installed eAccelerator 0.9.6.1 on a CentOS LAMP server. I had it working fine, using /tmp/accelerator as the cache directory.

    php.ini setup:

      zend_extension="/usr/local/lib/php/extensions/no-debug-non-zts-20060613/eaccelerator.so"
      eaccelerator.shm_size="200"
      eaccelerator.cache_dir="/var/cache/eaccelerator"
      eaccelerator.enable="1"
      eaccelerator.optimizer="1"
      eaccelerator.check_mtime="1"
      eaccelerator.debug="0"
      eaccelerator.filter=""
      eaccelerator.shm_max="0"
      eaccelerator.shm_ttl="3600"
      eaccelerator.shm_prune_period="180"
      eaccelerator.shm_only="1"
      eaccelerator.compress="1"
      eaccelerator.compress_level="9"

    php -v output:

      PHP 5.2.12 (cli) (built: Feb  3 2010 00:34:28)
      Copyright (c) 1997-2009 The PHP Group
      Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies
          with eAccelerator v0.9.6.1, Copyright (c) 2004-2010 eAccelerator, by eAccelerator
          with the ionCube PHP Loader v3.3.20, Copyright (c) 2002-2010, by ionCube Ltd.

    I had to remove the cache directory as I was testing something. I remade it, re-set permissions and found that eAccelerator was no longer creating cache files within the folder. I thought it might be down to ownership rights on the folder, so I chown'd it apache:apache, and this made no difference. I recreated the directory in /var/cache instead, edited php.ini to point to the new cache dir location, chmod'd, chown'd etc., and still eAccelerator is not creating any of the cache files in the directory (it stays empty). Could someone suggest what I might be doing incorrectly here? I've read through numerous pages to try and troubleshoot the issue, to no avail. Any help appreciated.
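
    One configuration detail that would fully explain the empty directory: with eaccelerator.shm_only="1" the compiled-script cache is kept purely in shared memory and is never written to cache_dir, so no files appear there by design. A sketch of the change, assuming on-disk caching is actually wanted:

      ; php.ini
      eaccelerator.shm_only="0"
      eaccelerator.cache_dir="/var/cache/eaccelerator"
      ; the directory must exist and be writable by the web server user, e.g.
      ;   mkdir -p /var/cache/eaccelerator
      ;   chown apache:apache /var/cache/eaccelerator && chmod 0700 /var/cache/eaccelerator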

    Read the article

  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, whether CrashPlan has backed up a particular file, including the current updates to that file - i.e., that the current contents of the file are backed up.

    It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy I could compare the backup time with the last modification time of the file, but that method appears to be somewhat dubious. A couple of methods I could think of, but don't know how to carry out:

      1. Compare the current file to CrashPlan's metadata about that file. This needs knowledge of the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable.
      2. Restore the file to a temporary directory and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.

    I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs is backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet.

    I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the archive_command from "copy to archive directory" into "confirm CrashPlan has backed up the current version" and do away with the archive directory completely.)

    (And, yes, I'm doing regular pg_dumpall's in addition to the above.)

    Read the article

  • Almost every Inkscape extension yields an error in Mac OS X

    - by andyvn22
    I've run the latest few versions of Inkscape (currently landed on "0.47+devel"), and have been having trouble with the Extensions menu. So far, in every version of Inkscape I've tried, nearly every extension yields the following error:

      The fantastic lxml wrapper for libxml2 is required by inkex.py and therefore this extension.
      Please download and install the latest version from http://cheeseshop.python.org/pypi/lxml/, or
      install it through your package manager by a command like: sudo apt-get install python-lxml

    I've tried the instructions listed there, of course, with no effect. I've also found many references to this issue in forums, bug trackers, etc., and as such also tried:

      sudo easy_install lxml

      cd /Applications/Inkscape.app/Contents/Resources/lib
      mv libxml2.2.dylib libxml2.2.dylib.old
      ln -s /usr/lib/libxml2.dylib

    and a few similar solutions. Nothing has produced any change in Inkscape's behavior. Does anyone know A) what's really going on here? Because from what I gather the error is not describing the actual problem. And of course B) a simple solution? I need those features! :)

    Read the article

  • Tripwire help Required

    - by ramaperumal
    I have created the policy file in Tripwire, and I have also created the rules mentioned below:

      /opt/jboss/server/gis/conf -> $(SEC_CONFIG) +aipm +c+g+a+i+s+t+u+l+M;
      /usr/local/gtech/eseries/ -> $(SEC_CONFIG) +a+c+g+i+s+t+u+l+M ;

    After running the integrity check, the output should cover a (access timestamp), c (inode timestamp, create/modify), g (file owner's group ID), i (inode number), s (file size), t (timestamp), u (file owner's user ID), l (file is increasing in size, a "growing file") and M (MD5 hash value). I am getting the output below:

      [root@xxsi1242 tripwire]# tripwire --check
      Parsing policy file: /etc/tripwire/tw.pol
      *** Processing Unix File System ***
      Performing integrity check...
      Wrote report file: /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr

      Open Source Tripwire(R) 2.4.1 Integrity Check Report

      Report generated by:          root
      Report created on:            Wed 06 Nov 2013 05:38:12 AM EST
      Database last updated on:     Wed 06 Nov 2013 05:31:17 AM EST

      ===============================================================================
      Report Summary:
      ===============================================================================
      Host name:                    xxsi1242.gtk.gtech.com
      Host IP address:              156.24.65.171
      Host ID:                      None
      Policy file used:             /etc/tripwire/tw.pol
      Configuration file used:      /etc/tripwire/tw.cfg
      Database file used:           /var/lib/tripwire/xxsi1242.gtk.gtech.com.twd
      Command line used:            tripwire --check

      ===============================================================================
      Rule Summary:
      ===============================================================================
      -------------------------------------------------------------------------------
        Section: Unix File System
      -------------------------------------------------------------------------------
        Rule Name                        Severity Level    Added    Removed  Modified
        ---------                        --------------    -----    -------  --------
        Invariant Directories            66                0        0        0
        Temporary directories            33                0        0        0
      * Tripwire Data Files              100               0        0        1
        Tech Stack                       100               0        0        0
        User binaries                    66                0        0        0
        Tripwire Binaries                100               0        0        0
      * CLPS bins                        100               0        0        2
        CLPS Configuration files         100               0        0        0
        ESCommon                         100               0        0        0
        Shell Binaries                   100               0        0        0
        OS executables and libraries     100               0        0        0
        Security Control                 100               0        0        0
        ESCommon Configuration           100               0        0        0
        (/etc/gtech/escommon)

      Total objects scanned:  12358
      Total violations found:  3

      ===============================================================================
      Object Summary:
      ===============================================================================
      -------------------------------------------------------------------------------
      # Section: Unix File System
      -------------------------------------------------------------------------------
      -------------------------------------------------------------------------------
      Rule Name: Tripwire Data Files (/etc/tripwire/tw.pol)
      Severity Level: 100
      -------------------------------------------------------------------------------
      Modified:
      "/etc/tripwire/tw.pol"

      -------------------------------------------------------------------------------
      Rule Name: CLPS bins (/opt/jboss/server)
      Severity Level: 100
      -------------------------------------------------------------------------------
      Modified:
      "/opt/jboss/server/esapps1/data/hypersonic/localDB.lck"
      "/opt/jboss/server/gis/data/hypersonic/localDB.lck"

      ===============================================================================
      Error Report:
      ===============================================================================
      No Errors
      -------------------------------------------------------------------------------
      *** End of report ***

    Note: in the output I only get the names of the files which were modified. I need the detailed per-attribute output, but unfortunately I am not getting what I expected. Please help me to proceed further.
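
    The report pasted above is the summary view; Open Source Tripwire stores the full detail in the .twr file, and the amount printed is governed by a report level (0 to 4). If your build supports report levels, reprinting the saved report at the highest level should show the expected and observed values of each attribute for every modified object; a sketch using the report file named in the output above:

      twprint --print-report -t 4 \
          -r /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr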

    Read the article

  • Mac updated just now, postgres now broken

    - by Dave
    I run PostgreSQL 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local development. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx. 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working. When I start my thin server (mirroring Heroku cedar) as normal and then browse to my Rails app, I get:

      PG::Error
      could not connect to server: Permission denied
      Is the server running locally and accepting
      connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

    What happened? After browsing around a few questions I'm still confused, but here's some extra info:

      - Running psql from the command line gives the same error
      - I can run pgAdmin 3, connect via it and run SQL with no problems
      - Running which psql shows the version as /usr/bin/psql
      - I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well.
      - I know it's rubbish, but apart from a note on passwords, I don't have any extra info on how I configured Postgres - though the obvious implication is that I did not use the _postgres user.

    Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks.

    Edit: playing around based on this question and answer: http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x, see this string of commands:

      $ sudo su postgreSQL
      bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
      pg_ctl: another server might be running; trying to start server anyway
      server starting
      bash-3.2$ 2012-04-08 19:03:39 GMT FATAL:  lock file "postmaster.pid" already exists
      2012-04-08 19:03:39 GMT HINT:  Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
      bash-3.2$ exit
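
    Reading the error together with the transcript: the client found by "which psql" (Apple's /usr/bin/psql, whose behaviour the pg gem mirrors) looks for the socket in /var/pgsql_socket, while the EnterpriseDB server under /Library/PostgreSQL/9.1 is already running (the postmaster with PID 68) and listens on TCP and on its own socket directory; a frequent report after this particular OS X update is that permissions on /var/pgsql_socket changed. Two low-risk checks/workarounds, sketched with the paths shown above:

      # is the EnterpriseDB server answering over TCP?
      /Library/PostgreSQL/9.1/bin/psql -h localhost -p 5432 -U postgres -l

      # point the Rails app at TCP instead of the socket (config/database.yml):
      #   host: localhost
      #   port: 5432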

    Read the article
