Search Results

Search found 7807 results on 313 pages for 'ricardo h bin'.

  • Corrupt install of Ruby?

    - by wilhelmtell
    I'm having issues requiring 'digest/sha1' after building Ruby from source:

        ~$ ./configure --prefix=$HOME/usr --program-suffix=19 --enable-shared
        ~$ make
        ~$ make install
        ~$ irb19
        irb(main):001:0> require 'digest/sha1'
        LoadError: dlopen(/Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle, 9): Symbol not found: _rb_Digest_SHA1_Finish
          Referenced from: /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
          Expected in: flat namespace
         - /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
            from (irb):1:in `require'
            from (irb):1
            from /Users/matan/usr/bin/irb19:12:in `<main>'
        irb(main):002:0>

    Some standard modules require fine while others don't; a plain require 'digest' works. Just a couple of days ago I uninstalled Ruby and re-installed it, the same way as the first time: from source, prefixed at my local $HOME/usr directory. I tried removing each and every file that make install installs, then re-installing, but that didn't help. Do you have an idea what the issue is and how to resolve it?
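
    One hedged guess, given that the failure is a "Symbol not found" inside a compiled extension: a stale sha1.bundle left over from the earlier install is being loaded against the new libruby. A cleanup sketch (paths taken from the error above; the source directory is hypothetical):

        # remove the old compiled digest extensions, then rebuild and reinstall
        rm -rf ~/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest
        cd ~/src/ruby-1.9.1        # hypothetical: wherever the Ruby source tree lives
        make clean && make && make install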

  • Mailman delivery troubles

    - by stanigator
    I'm not sure if this is a good place to ask this question. It's about the mailing list manager Mailman from GNU. The details: hosting provider Vlexofree; domain www.sysil.com, with Google Apps; mailing list created from the hosting cpanel: [email protected]. I registered a list of subscribers and tried sending an email to [email protected]. I got the following error message:

        Delivery to the following recipient failed permanently: [email protected]

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was:
        550 550-5.1.1 The email account that you tried to reach does not exist. Please try
        550-5.1.1 double-checking the recipient's email address for typos or
        550-5.1.1 unnecessary spaces. Learn more at
        550 5.1.1 http://mail.google.com/support/bin/answer.py?answer=6596 23si6479194ewy.44 (state 14).

        ----- Original message -----
        MIME-Version: 1.0
        Received: by 10.216.90.136 with SMTP id e8mr1469147wef.110.1264220118960; Fri, 22 Jan 2010 20:15:18 -0800 (PST)
        Date: Fri, 22 Jan 2010 20:15:18 -0800
        Message-ID: <[email protected]>
        Subject:
        From: Stanley Lee <[email protected]>
        To: [email protected]
        Content-Type: multipart/alternative; boundary=0016e6dab0931bccc3047dcd2f1e

    Is there any way of fixing this problem? I would like this mailing list to work through my hosting and domain. Thanks in advance.
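
    A hedged reading of the bounce: mail for sysil.com is routed by the domain's MX records, and since the domain uses Google Apps, Google receives the message and correctly reports that [email protected] is not a Google account; the message never reaches the cpanel host where Mailman lives. One way to confirm where the MX points (a sketch):

        dig +short MX sysil.com

    If it returns Google's mail servers, the list address has to be handled on the Google side (for example via a Google Group) or via an MX/subdomain that routes to the Vlexofree host.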

  • Ubuntu init.d script not being called on startup

    - by Mike
    I've got a script on Ubuntu 9.04 in /etc/init.d that I've set to run on startup with update-rc.d init_test defaults 99. All of the symlinks are there and the permissions appear to be correct:

        -rwxr-xr-x 1 root root 642 2010-10-28 16:44 init_test

        mike@xxxxxxxxxx:~$ find /etc -name S99* | grep init_test
        find: /etc/rc5.d/S99init_test
        find: /etc/rc4.d/S99init_test
        find: /etc/rc2.d/S99init_test
        find: /etc/rc3.d/S99init_test

    The script runs fine through source and ./ and behaves correctly. Here is the source of the script:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          init test script
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO

        start() {
            echo "hi"
            echo "start called" >> /tmp/test.log
            return
        }

        stop() {
            echo "Stopping"
        }

        echo "Script called" >> /tmp/test.log

        case "$1" in
          start)
            start
            ;;
          stop)
            stop
            ;;
          *)
            echo "Usage: {start|stop|restart}"
            exit 1
            ;;
        esac
        exit $?

    When the machine starts, I don't see "Script called" or "start called" in test.log at all. Is there anything obvious I'm messing up?
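
    One hedged thing worth checking: the LSB header's Provides: field is conventionally the script's own name, not a free-text description, and the boot-time dependency ordering parses that field. A sketch that renames it and regenerates the links:

        sed -i 's/^# Provides:.*/# Provides:          init_test/' /etc/init.d/init_test
        update-rc.d -f init_test remove
        update-rc.d init_test defaults 99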

  • Compiling PHP with GD and libjpeg support

    - by Robin Winslow
    I compile my own PHP, partly to learn more about how PHP is put together, and partly because I'm always finding I need modules that aren't available by default; this way I have control over that. My problem is that I can't get JPEG support into PHP. This is CentOS 5.6. Here are my configuration options when compiling PHP 5.3.8:

        './configure' '--enable-fpm' '--enable-mbstring' '--with-mysql' '--with-mysqli' '--with-gd' '--with-curl' '--with-mcrypt' '--with-zlib' '--with-pear' '--with-gmp' '--with-xsl' '--enable-zip' '--disable-fileinfo' '--with-jpeg-dir=/usr/lib/'

    The ./configure output says:

        checking for GD support... yes
        checking for the location of libjpeg... no
        checking for the location of libpng... no
        checking for the location of libXpm... no

    And then we can see that GD is installed, but JPEG support isn't there:

        # php -r 'print_r(gd_info());'
        Array
        (
            [GD Version] => bundled (2.0.34 compatible)
            [FreeType Support] =>
            [T1Lib Support] =>
            [GIF Read Support] => 1
            [GIF Create Support] => 1
            [JPEG Support] =>
            [PNG Support] => 1
            [WBMP Support] => 1
            [XPM Support] =>
            [XBM Support] => 1
            [JIS-mapped Japanese Font Support] =>
        )

    I know that PHP needs to be able to find libjpeg, and it obviously can't find a version it's happy with. I would have thought /usr/lib/libjpeg.so or /usr/lib/libjpeg.so.62 would be what it needs, but I supplied it with the lib directory (--with-jpeg-dir=/usr/lib/) and it doesn't pick them up, so I guess they can't be the right versions. rpm says libjpeg is installed. Should I yum remove and reinstall it, along with all its dependent packages? Might that fix the problem? Here's a pastebin with a collection of hopefully useful system information: http://pastebin.com/ied0kPR6
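
    A hedged suggestion: configure needs the libjpeg headers, not just the shared library, and --with-jpeg-dir expects an installation prefix rather than the lib directory itself. A sketch:

        yum install libjpeg-devel
        ./configure --with-gd --with-jpeg-dir=/usr    # prefix: configure then looks in /usr/lib and /usr/include
                                                      # (plus your other options from above)

    On 64-bit CentOS the library also lives in /usr/lib64, which is another reason pointing at /usr/lib/ can fail.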

  • Linux commands show different results

    - by ClydeFrog
    I'm really having a hard time processing these results on my Ubuntu server. I have a major problem with my JBoss server, where I get FileNotFoundExceptions along with "No space left on device" errors. I thought "maybe I'm out of disk space" and used df to figure out how much I have left:

        root@ubuntu1:/# df -h
        Filsystem                 Storlek Anvnt Tillg Anv% Monterat på
        /dev/mapper/ubuntu1-root      36G   13G   21G  38% /
        none                         2,0G  192K  2,0G   1% /dev
        none                         2,0G     0  2,0G   0% /dev/shm
        none                         2,0G   64K  2,0G   1% /var/run
        none                         2,0G     0  2,0G   0% /var/lock
        /dev/sda1                    228M   23M  193M  11% /boot
        /dev/mapper/vgdata-lvdata     79G  9,2G   66G  13% /data

    As you can see, I have plenty of space left. I also checked whether I'm out of inodes:

        root@ubuntu1:/# df -i
        Filsystem                   Inoder   IAnv    IFria IAnv% Monterat på
        /dev/mapper/ubuntu1-root   2346512  61992  2284520    3% /
        none                        505380    773   504607    1% /dev
        none                        507383      1   507382    1% /dev/shm
        none                        507383     30   507353    1% /var/run
        none                        507383      2   507381    1% /var/lock
        /dev/sda1                   124496    230   124266    1% /boot
        /dev/mapper/vgdata-lvdata 10486784 233945 10252839    3% /data

    But then I ran du:

        root@ubuntu1:/# du -s -h /*
        7,5M  /bin
        23M   /boot
        19G   /data
        192K  /dev
        11G   /eniro
        5,3M  /etc
        112K  /home
        0     /initrd.img
        183M  /lib
        0     /lib64
        16K   /lost+found
        12K   /media
        4,0K  /mnt
        4,0K  /opt
        du: kan inte komma åt "/proc/20452/task/20452/fd/3": Filen eller katalogen finns inte
        du: kan inte komma åt "/proc/20452/task/20452/fdinfo/3": Filen eller katalogen finns inte
        du: kan inte komma åt "/proc/20452/fd/3": Filen eller katalogen finns inte
        du: kan inte komma åt "/proc/20452/fdinfo/3": Filen eller katalogen finns inte
        0     /proc
        18M   /root
        8,2M  /sbin
        4,0K  /selinux
        8,0K  /srv
        0     /sys
        40K   /tmp
        691M  /usr
        1,2G  /var
        0     /vmlinuz

    (The du warnings are Swedish for "cannot access /proc/.../fd/3: No such file or directory"; those are just transient /proc entries.) Notice that /data and /eniro are 30G combined! How is that possible? Do I have a leak somewhere, or is it something else?

    ----- EDIT 1 -----

    OK, I figured out that /data has its own mount, so it's not meaningful to combine /data and /eniro; they aren't on the same filesystem. But how come the first command says 9,2G used for /data when the third says 19G for the same directory?
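
    Two hedged avenues for a df-vs-du mismatch. When df reports more used space than du can find, the classic cause is deleted files still held open by a process (rotated JBoss logs are a frequent offender); du misses them because they no longer have a directory entry. For the opposite case seen here (du larger than df), files hidden under a mount point, or du crossing into another filesystem, are worth ruling out. A sketch:

        lsof +L1        # open files with link count 0, i.e. deleted but still held open
        du -shx /data   # -x keeps du on /data's own filesystem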

  • How to get automatic upgrades to work on Ubuntu Server?

    - by J. Pablo Fernández
    I followed the documentation for enabling automatic upgrades on Ubuntu servers, but it's not really updating anything at all. My /etc/apt/apt.conf.d/50unattended-upgrades looks almost like the default:

        // Automatically upgrade packages from these (origin, archive) pairs
        Unattended-Upgrade::Allowed-Origins {
                "Ubuntu karmic-security";
                "Ubuntu karmic-updates";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        //      "vim";
        //      "libc6";
        //      "libc6-dev";
        //      "libc6-i686";
        };

        // Send email to this address for problems or packages upgrades
        // If empty or unset then no email is sent, make sure that you
        // have a working mail setup on your system. The package 'mailx'
        // must be installed or anything that provides /usr/bin/mail.
        Unattended-Upgrade::Mail "[email protected]";

        // Automatically reboot *WITHOUT CONFIRMATION* if
        // the file /var/run/reboot-required is found after the upgrade
        //Unattended-Upgrade::Automatic-Reboot "false";

    The directory /var/log/unattended-upgrades/ is empty, and running /etc/init.d/unattended-upgrades start is not very informative:

        root@mozart:~# /etc/init.d/unattended-upgrades start
        Checking for running unattended-upgrades:
        root@mozart:~#

    Something seems to be broken, but I'm not sure what. I have pending updates and they are not being applied:

        root@mozart:~# aptitude safe-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        The following packages will be upgraded:
          linux-libc-dev
        1 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B/743kB of archives. After unpacking 4096B will be used.
        Do you want to continue? [Y/n/?]

    On all the servers I have, unattended upgrades seem to have been disabled:

        root@mozart:~# apt-config shell UnattendedUpgradeInterval APT::Periodic::Unattended-Upgrade
        root@mozart:~#

    Any ideas what I am missing?
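
    The empty apt-config output is the telling part: the APT::Periodic values that make the daily apt cron job invoke unattended-upgrade are unset, so 50unattended-upgrades never gets a chance to run. A hedged fix sketch; these two lines conventionally live in /etc/apt/apt.conf.d/10periodic or 20auto-upgrades:

        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";

    Running dpkg-reconfigure unattended-upgrades may write the same settings for you, depending on the package version.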

  • Not able to run Firefox on a headless Ubuntu server 9.10

    - by Julio J.
    I need to run Firefox on my server in order to execute some Selenium tests from Hudson. I would love not to have to install a complete GUI, so I installed Xvfb to fake one (that's how I understand it; correct me if my assumptions are wrong). After some time trying to make it work, I'm stuck in the following situation:

        $ sudo Xvfb -ac :99 &
        [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)

        $ firefox
        [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
        [config/dbus] couldn't register object path
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        Xlib:  extension "RANDR" missing on display ":99.0".
        GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: /bin/dbus-launch terminated abnormally without any error message)

    I'm running Firefox without installing it from the repositories, and I'm getting a socket timeout when I try to run the Selenium tests, so I guess the problem is between Firefox and Xvfb. I have already installed the following package, which some forums suggest as a fix, but in my case it doesn't work:

        i   gconf-defaults-service   - GNOME configuration database system (system defaults service)

    Any explanation of the problem, and ways of solving it without installing a full GUI, would be very helpful.
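
    A hedged sketch for this class of failure: the GConf error usually means Firefox can't reach a D-Bus session bus, and Selenium's socket timeout follows from Firefox never finishing startup. Wrapping the browser in its own session bus, with DISPLAY pointed at Xvfb, often gets past it:

        export DISPLAY=:99
        dbus-launch --exit-with-session firefox &

    xvfb-run (shipped with the xvfb package) is a tidier way to do the DISPLAY plumbing: xvfb-run -a firefox.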

  • Jenkins CI - Cannot allocate memory

    - by Programmieraffe
    I tested jenkins-ci successfully on Ubuntu 10.4 (with VMware Fusion) on my local computer. Now I want to install and use it on my virtual server at Hosteurope. The basic installation was no problem, but now I have problems with my build project. After pulling a Mercurial update from a repository, ant is invoked and throws the following error in my build project:

        Buildfile: /var/lib/jenkins/workspace/concrete5-seed-clean/build.xml
        [property] java.io.IOException: Cannot run program "/usr/bin/env": java.io.IOException: error=12, Cannot allocate memory

    There is a known problem with heap size on virtual servers at Hosteurope (http://faq.hosteurope.de/index.php?cpid=13918), so I tried to set the heap size manually:

        # for ant
        export ANT_OPTS="-Xms512m -Xmx512m"

        # for jenkins: edited /etc/default/jenkins, added the line
        JAVA_ARGS="-Xms512m -Xmx512m"

        # restarted jenkins via
        /etc/init.d/jenkins restart

    After setting this for ant, "ant -diagnostics" runs through without an error, but the error still occurs when I try to build the project.

    Server details: http://www.hosteurope.de/produkt/Virtual-Server-Linux-L, Ubuntu 10.4 LTS, RAM 1GB / dynamic 2GB.

    My questions: is 1GB enough for Jenkins, or do I have to upgrade the server? And is this error caused by ant or by Jenkins?

    Update: I got it running with the ant options -Xmx128m -Xms128m, but sometimes the error occurs again (this freaks me out, because I cannot reproduce it by now :/). Help much appreciated! Cheers, Matthias
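
    A hedged explanation: error=12 comes from fork(), not from the heap itself. When the Jenkins JVM spawns /usr/bin/env, fork() momentarily needs to commit roughly as much memory again as the parent JVM, and VPS containers without swap or overcommit reject that. This would explain why shrinking -Xmx helps and why the failure is intermittent. A quick check sketch:

        free -m                              # how much memory is actually committed
        cat /proc/sys/vm/overcommit_memory   # 2 = strict accounting; forking a big JVM can then fail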

  • nmap installation issue

    - by daasf
    Vanilla CentOS with latest updates, installed gcc, and after ./configure:

        ....
        Configuration complete. Type make (or gmake on some *BSD machines) to compile.

        [root@winxp nmap-5.51]# make
        Makefile:375: makefile.dep: No such file or directory
        g++ -MM -I./liblua -I./libdnet-stripped/include -I./libpcre -I./libpcap -I./nbase -I./nsock/include -DHAVE_CONFIG_H -DNMAP_NAME=\"Nmap\" -DNMAP_URL=\"http://nmap.org\" -DNMAP_PLATFORM=\"x86_64-unknown-linux-gnu\" -DNMAPDATADIR=\"/usr/local/share/nmap\" -D_FORTIFY_SOURCE=2 main.cc nmap.cc targets.cc tcpip.cc nmap_error.cc utils.cc idle_scan.cc osscan.cc osscan2.cc output.cc payload.cc scan_engine.cc timing.cc charpool.cc services.cc protocols.cc nmap_rpc.cc portlist.cc NmapOps.cc TargetGroup.cc Target.cc FingerPrintResults.cc service_scan.cc NmapOutputTable.cc MACLookup.cc nmap_tty.cc nmap_dns.cc traceroute.cc portreasons.cc xml.cc nse_main.cc nse_utility.cc nse_nsock.cc nse_dnet.cc nse_fs.cc nse_nmaplib.cc nse_debug.cc nse_pcrelib.cc nse_binlib.cc nse_bit.cc > makefile.dep
        /bin/sh: g++: command not found
        make: *** [makefile.dep] Error 127

        [root@winxp nmap-5.51]# yum install g++ -y
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.ash.fastserv.com
         * base: centos.mirror.choopa.net
         * extras: mirror.trouble-free.net
         * updates: mirror.nexcess.net
        Setting up Install Process
        No package g++ available.
        Nothing to do
        [root@winxp nmap-5.51]#
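
    On CentOS the C++ compiler ships in the gcc-c++ package, not a package called g++, so the fix should be as simple as this sketch:

        yum install -y gcc-c++
        make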

  • MySQL Config on Large Machine

    - by Jonathon
    We have a Windows 2003 Enterprise Edition server (64-bit) running only MySQL 5.1.45 64-bit. It has 16G RAM and 10T of hard-drive space in RAID 10. We are having horrible performance from mysqld (85-100% CPU utilization). We were running a smaller machine with better performance, so I am assuming our my.ini file is not correct for our current machine. The my.ini file is as follows:

        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        [mysqld]
        port=3306
        basedir="D:/MySQL/"
        datadir="D:/MySQL/data"
        default-character-set=latin1
        default-storage-engine=MYISAM
        sql-mode=""
        skip-innodb
        skip-locking
        max_allowed_packet = 1M
        max_connections=800
        myisam_max_sort_file_size=5G
        myisam_sort_buffer_size=500M
        table_open_cache = 512
        table_cache=8000
        tmp_table_size=30M
        query_cache_size=50M
        thread_cache_size=128
        key_buffer_size=3072M
        read_buffer_size=2M
        read_rnd_buffer_size=16M
        sort_buffer_size=2M

        #replication settings (this is the master)
        log-bin=log
        server-id = 1

    Does anyone see anything wrong with this setup? For a machine with this much RAM, why in the world would mysqld eat up so much CPU? I know we can optimize some queries, etc., but it did run okay on a smaller machine, so I am pretty sure it is the config. Thanks in advance for any help.
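
    A hedged place to start before touching buffers: confirm what the CPU is actually doing. A sketch:

        mysql> SHOW FULL PROCESSLIST;
        mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
        mysql> SHOW GLOBAL STATUS LIKE 'Select_scan';

    One thing in the file worth a second look: table_open_cache = 512 and table_cache = 8000 set the same variable twice (table_cache is the pre-5.1 name), so whichever comes later wins, which is probably not what was intended.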

  • systemd: enabling cherokee service as a `unit file`

    - by Calvin Cheng
    So I am learning how to use systemd to initialize my services automatically on server reboot. Of course, I first make sure I have systemd and some optional systemd-related packages installed:

        pacman -S systemd initscripts-systemd

    Installation seems to go well, and checking, I can see that systemd and its dependency libsystemd are installed, as is the optional initscripts-systemd package:

        [root@li280-195 ~]# pacman -Ss systemd
        extra/libsystemd 44-5 [installed]
            systemd client libraries
        extra/systemd 44-5 [installed]
            system and service manager
        extra/systemd-sysvcompat 2-2
            sysvinit compat symlinks for systemd
        community/initscripts-systemd 20120412-1 [installed]
            Arch specific systemd initialization/bootup scripts for systemd
        community/systemd-arch-units 20120412-2
            Arch specific Systemd unit files

    Next, I ensure that systemd is loaded when my server reboots, via grub in /boot/grub/menu.lst:

        kernel /boot/vmlinuz root=/dev/xvda ro init=/bin/systemd

    Rebooting my server to check, everything loads up well and I can verify that systemd is operational via:

        systemctl list-unit-files

    However, I don't see my cherokee initialization script (which was simply created at /etc/rc.d/cherokee when I installed cherokee earlier via pacman -S cherokee) listed as one of my unit files. So the question is: how do I do that? How do I put my cherokee initialization script under systemd's control?
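
    A minimal unit-file sketch; the binary path and flags are assumptions, so check which cherokee and its daemon options first:

        # /etc/systemd/system/cherokee.service
        [Unit]
        Description=Cherokee web server
        After=network.target

        [Service]
        # path is an assumption; verify with `which cherokee`
        ExecStart=/usr/sbin/cherokee
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target

    Then enable and start it:

        systemctl enable cherokee.service
        systemctl start cherokee.service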

  • libsasl2 change paths

    - by mk_89
    I have been following the tutorial https://help.ubuntu.com/community/Postfix for installing Postfix on Ubuntu. I'm stuck at the authentication section of the tutorial, where you change paths to live in the false root. If you look at the link above, I have a file (/etc/default/saslauthd) which is pretty much the same as the one from the tutorial:

        # This needs to be uncommented before saslauthd will be run automatically
        START=yes

        PWDIR="/var/spool/postfix/var/run/saslauthd"
        PARAMS="-m ${PWDIR}"
        PIDFILE="${PWDIR}/saslauthd.pid"

        # You must specify the authentication mechanisms you wish to use.
        # This defaults to "pam" for PAM support, but may also include
        # "shadow" or "sasldb", like this:
        # MECHANISMS="pam shadow"
        MECHANISMS="pam"

        # Other options (default: -c)
        # See the saslauthd man page for information about these options.
        #
        # Example for postfix users: "-c -m /var/spool/postfix/var/run/saslauthd"
        # Note: See /usr/share/doc/sasl2-bin/README.Debian
        #OPTIONS="-c"
        # make sure you set the options here otherwise it ignores params above and will not work
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

    When I run the following command:

        dpkg-statoverride --force --update --add root sasl 755 /var/spool/postfix/var/run/saslauthd

    I get the following error:

        dpkg-statoverride: warning: An override for '/var/spool/postfix/var/run/saslauthd' already exists, but --force specified so will be ignored.
        dpkg-statoverride: warning: --update given but /var/spool/postfix/var/run/saslauthd does not exist

    I don't know why this is happening. I literally followed the tutorial step by step and have installed all the packages necessary. What could be the problem? Do I have to create the directory manually?
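
    A hedged reading of the second warning: dpkg-statoverride records the override fine, but --update can only apply the mode to a path that exists, and saslauthd has not yet created its run directory inside the Postfix chroot. A sketch:

        mkdir -p /var/spool/postfix/var/run/saslauthd
        dpkg-statoverride --force --update --add root sasl 755 /var/spool/postfix/var/run/saslauthd
        /etc/init.d/saslauthd restart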

  • Thumbnail generation with imagemagick doesn't render the correct colors

    - by Bastien
    Generating thumbnails of PDFs with ImageMagick sometimes renders incorrect colors. We're using an old version of ImageMagick (6.5.7-8, the version installed on the Heroku servers). Here is the command we're currently using:

        convert -size "725x1200>" -colorspace RGB -flatten -density 300 -quality 100 input.pdf output.jpg

    I've tried different colorspaces like sRGB and YIQ, but none of them render the colors correctly. Using ImageMagick 6.7.7-6 locally works, so I tried bundling the convert command within my application's /bin directory; the command runs, but the result is still wrong. So it seems the problem comes either from another ImageMagick command used by convert or from another library. Here's an example of the outputs; correct: http://i.stack.imgur.com/gf9eG.jpg, wrong: http://i.stack.imgur.com/imUeD.jpg. Strangely, with some pages of the same PDF the output is always correct. Any idea which library or command could be the issue, or whether there is a proper set of options to pass to ImageMagick to always get it right?
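
    Two hedged observations. PDF rasterisation is delegated to Ghostscript, so bundling convert alone doesn't change the actual renderer; a different gs version on the servers would explain why only some pages (plausibly those with CMYK objects) go wrong. Also, convert applies settings to the images read after them, so -density 300 only takes effect when placed before the PDF. A sketch:

        gs --version    # check which Ghostscript the servers delegate to
        convert -density 300 input.pdf -resize "725x1200>" -colorspace RGB -flatten -quality 100 output.jpg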

  • Help setting up a secondary authoritative DNS server

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus.

    Authoritative servers:
        DNS1 - Windows 2003
        DNS2 - old Red Hat (being replaced with a newer version)
        DNS3 - Windows 2008 (I installed)

    Caching and recursive resolvers:
        Server1 - Windows 2003
        Server2 - CentOS 5.2 (I installed)
        Server3 - CentOS 5.3 (I installed)

    I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching and Windows authoritative servers, but not a Linux secondary authoritative server. I have a Perl script from the original server that pulls data from our DNS1 server. We use djbdns and tinydns on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to do caching, but the only instructions I can find are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track with the instructions I was using, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to eliminate it as an issue.

        ps -aux | grep dns
        avahi     3493  0.0  0.2   2600  1272 ?      Ss   Apr24   0:05 avahi-daemon: running [newdns2.local]
        root      5254  0.0  0.1   3920   680 pts/0  R+   09:56   0:00 grep dns
        root      6451  0.0  0.0   1528   308 ?      S    Apr29   0:00 supervise tinydns
        dnslog    6454  0.0  0.0   1540   308 ?      S    Apr29   0:00 multilog t ./main
        tinydns   9269  0.0  0.0   1652   308 ?      S    Apr29   0:00 /usr/local/bin/tinydns
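
    A hedged checklist for a silent tinydns: it answers UDP queries only on the single IP configured in its service directory, so a mismatch there is the usual cause of "no response". Assuming the conventional daemontools layout (the paths and zone are assumptions):

        svstat /service/tinydns         # is supervise keeping it up?
        cat /service/tinydns/env/IP     # must be the address you are querying
        dig @1.2.3.4 example.edu A      # hypothetical address and zone; test a record you host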

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    Hi,

    We have a Ubuntu 10.04 server with a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba.

    The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both in a group called Staff. UserA creates a file that is readable/writeable by the group (i.e. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission; however, they can't, as Samba won't allow it.

    Is there a way to force Samba to allow users within the same group to remove the "Read Only" flag from a file not created by them?

    Edit: the share is defined in smb.conf as:

        [global]
        log file = /var/log/samba/log.%m
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        obey pam restrictions = yes
        map to guest = bad user
        encrypt passwords = true
        passwd program = /usr/bin/passwd %u
        passdb backend = tdbsam
        dns proxy = no
        netbios name = ubsrv
        server string = ubsrv
        unix password sync = yes
        os level = 20
        syslog = 0
        usershare allow guests = yes
        panic action = /usr/share/samba/panic-action %d
        max log size = 1000
        pam password change = yes
        workgroup = workgroup

        [Projects]
        valid users = @Staff
        writeable = yes
        user = @Staff
        create mode = 0777
        path = /srv/samba/Projects
        directory mode = 0777
        store dos attributes = Yes

    The folder itself looks like this:

        ls -l /srv/samba/
        drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects

    Thanks in advance, Matt
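
    A hedged smb.conf tweak to test: with dos filemode enabled, Samba lets any user who has write access to a file (including via the group) change its permission bits, which is closer to the Windows semantic being asked for here. A sketch:

        # add under [Projects], then restart smbd
        dos filemode = yes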

  • Why does this preseed for gitolite fail?

    - by troutwine
    I'm installing gitolite on a Debian Squeeze box with the following preseed:

        gitolite gitolite/gituser string git
        gitolite gitolite/adminkey string ssh-rsa AAAAB3ECT
        gitolite gitolite/gitdir string /var/lib/git

    On installation:

        # debconf-set-selections /var/cache/debconf/gitolite.preseed
        # apt-get install gitolite
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          git-daemon-run gitweb
        The following NEW packages will be installed:
          gitolite
        0 upgraded, 1 newly installed, 0 to remove and 26 not upgraded.
        Need to get 0 B/114 kB of archives.
        After this operation, 348 kB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package gitolite.
        (Reading database ... 24715 files and directories currently installed.)
        Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ...
        Setting up gitolite (1.5.4-2+squeeze1) ...
        adduser: The home dir must be an absolute path.
        dpkg: error processing gitolite (--configure):
         subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
         gitolite
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Why? The preseed was extracted from a manually configured installation, per here, and works without issue on another machine.
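
    The adduser complaint suggests gitolite/gitdir reached the postinst empty, so a hedged first step is to confirm what debconf actually stored. A sketch:

        debconf-show gitolite                       # values the package will use
        debconf-get-selections | grep ^gitolite     # needs the debconf-utils package

    If gitdir comes back empty, formatting quirks in the preseed file (field separators, stray whitespace, line endings) are a common culprit.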

  • Cannot start Postgres daemon after installing with Yum

    - by Sean the Bean
    I was trying to install Postgres 9.1.4 on Fedora 17 using yum. If I do:

        sudo yum install postgres-libs
        sudo yum install postgres
        sudo yum install postgis

    all the installs appear to complete successfully (i.e., no errors), but I cannot start the Postgres daemon using:

        service postgresql initdb

    as the official Postgres download guide says to do (http://www.postgresql.org/download/linux/redhat/). The error says "Unknown operation initdb". RPM tells me that it installed psql to /usr/bin/, which I confirmed. It turns out that only a few components installed correctly (psql, pg_dump, pg_config, and a few others), but most are missing (e.g., pg_ctl and postgres). I've tried several different configurations and had several of my coworkers (with more Linux experience than me) look at it, but so far nothing has worked. Two of them have also run into similar issues installing Postgres using apt-get on Ubuntu, which makes me think the RPM isn't doing its job. It seems the only solution is to build it from source, which is more robust anyway, but of course it takes longer. I'm wondering, though, if anyone else has run into this issue and/or has successfully installed Postgres on either Fedora or Ubuntu using a package manager like yum or apt-get? Is the RPM broken?
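
    Two hedged points. The client tools and the server ship in separate packages, so pg_ctl and the postgres binary being absent usually just means postgresql-server was never pulled in (the package names start with "postgresql", not "postgres"). And on Fedora 17, initdb moved out of the SysV service wrapper into its own helper. A sketch:

        sudo yum install postgresql-server
        sudo postgresql-setup initdb
        sudo systemctl start postgresql.service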

  • How do I ensure a process is running, even if it kills itself? (it needs to be restarted then)

    - by le_me
    I'm using Linux. I want a process (an IRC bot) to run every time I start the computer. But I've got a problem: the network is bad and it disconnects often, so I need to manually restart the bot a few times a day. How do I automate that?

    Additional information:

    - The bot creates a pid file, called bot.pid.
    - The bot reconnects itself, but only a few times; the network is so bad that the bot sometimes kills itself because it gets no response.

    What I do currently (aka my approach ;)): I have a cron job executing startbot.rb every 5 minutes (the script itself is in the same directory as the bot). The script:

        #!/usr/bin/ruby
        require 'fileutils'

        if File.exists?(File.expand_path('tmp/bot.pid'))
          @pid = File.read(File.expand_path('tmp/bot.pid')).chomp!.to_i
          begin
            raise "ouch" if Process.kill(0, @pid) != 1
          rescue
            puts "Removing abandoned pid file"
            FileUtils.rm(File.expand_path('tmp/bot.pid'))
            puts "Starting the bot!"
            Kernel.exec(File.expand_path('./bot.rb'))
          else
            puts "Bot up and running!"
          end
        else
          puts "Starting the bot!"
          Kernel.exec(File.expand_path('./bot.rb'))
        end

    What this does: it checks whether the pid file exists; if so, it checks whether kill -s 0 BOT_PID == 1 (i.e. whether the bot is running), and it starts the bot if either check fails. My approach seems quite dirty, so how do I do it better?
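
    A hedged alternative to polling from cron: run the bot under a tiny supervisor loop started once at boot (from /etc/rc.local or an @reboot cron entry), so every exit, crash, or self-kill leads straight to a restart. A sketch, with the bot path assumed:

        #!/bin/sh
        # keep-bot.sh: restart the bot whenever it exits
        while true; do
            /path/to/bot.rb    # hypothetical path
            sleep 10           # back off so a crash loop cannot spin
        done

    Tools like daemontools, runit, or monit do the same job more robustly (pid handling, logging, backoff) if installing a supervisor is an option.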

  • Vagrant doesn't detect chef-solo unless re-installed

    - by nightowl
    I am using Vagrant to test my Chef recipes in Amazon AWS, and I am encountering an irritating issue. I initially assumed that Vagrant would install Chef itself (as it does when using VirtualBox as the provider), but it seems this needs to be done via the cloud-init script. However, even after I successfully installed the chef gem via cloud-init, I was still getting the following error:

        The chef binary (either `chef-solo` or `chef-client`) was not found

    A quick google of this error suggested three probable causes: Chef had failed to install; it had installed, but the directory was not in the $PATH environment variable; or it had installed and was in the $PATH but with incorrect permissions. I logged in and double-checked: chef-solo and chef-client were installed, the PATH variable for the user, sudo, and root all included /usr/local/bin, and permissions were all fine.

    I managed to solve the problem by uninstalling and reinstalling the gem using sudo gem install chef. I don't understand why this should resolve the issue, and it is a bit of a problem if I have to ssh into a test box and manually install the gem every time. Does anyone have any suggestions why this might be happening?
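
    A hedged line of investigation: gem install chef run as different users can drop its binstubs in different places (a per-user gem directory vs /usr/local/bin), and Vagrant probes through a non-interactive shell whose PATH may differ from the one you see over ssh. Comparing the two is a quick test. A sketch:

        gem environment | grep -i 'EXECUTABLE DIRECTORY'   # where the binstubs really went
        ssh user@testbox 'echo $PATH; which chef-solo'     # non-interactive PATH; hypothetical host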

  • What steps to take when CPAN installation fails?

    - by pythonic metaphor
    I have used CPAN to install Perl modules on quite a few occasions, but I've been lucky enough to just have it work. Unfortunately, I was trying to install Thread::Pool today, and one of the required dependencies, Thread::Conveyor::Monitored, failed its tests:

        Test Summary Report
        -------------------
        t/Conveyor-Monitored02.t (Wstat: 65280 Tests: 89 Failed: 0)
          Non-zero exit status: 255
          Parse errors: Tests out of sequence.  Found (2) but expected (4)
                        Tests out of sequence.  Found (4) but expected (5)
                        Tests out of sequence.  Found (5) but expected (6)
                        Tests out of sequence.  Found (3) but expected (7)
                        Tests out of sequence.  Found (6) but expected (8)
        Displayed the first 5 of 86 TAP syntax errors.
        Re-run prove with the -p option to see them all.
        Files=3, Tests=258,  6 wallclock secs ( 0.07 usr  0.03 sys +  4.04 cusr  1.25 csys =  5.39 CPU)
        Result: FAIL
        Failed 1/3 test programs. 0/258 subtests failed.
        make: *** [test_dynamic] Error 255
        ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
        /usr/bin/make test -- NOT OK
        //hint// to see the cpan-testers results for installing this module, try:
          reports ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
        Running make install
          make test had returned bad status, won't install without force
        Failed during this command:
          ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz : make_test NO

    What steps do you take to start seeing why an installation failed? I'm not even sure how to begin tracking down what's wrong.
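
    A hedged triage sequence for a failed CPAN install: read the raw test output, check whether your perl supports what the module needs (these are threaded tests, so a non-threaded perl is suspect), and only then decide between reporting and forcing. A sketch:

        perl -V:usethreads                      # 'define' means threads are available
        cpan> look Thread::Conveyor::Monitored  # drop into a shell in the unpacked build dir
        prove -pv t/Conveyor-Monitored02.t      # re-run the failing test with parse errors shown
        cpan> force install Thread::Pool        # last resort, if the failure looks benign

    Out-of-sequence TAP with zero failed subtests often just means threaded tests wrote their numbers interleaved, a test-harness quirk rather than broken code.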

  • Permission issue for apache

    - by Aamir Adnan
    Environment details: Amazon EC2, Ubuntu 12.04, Django + mod_wsgi + Python 2.6, web server Apache2.

    I have mounted a 10GB EBS volume to an instance at /mnt/ebs1/. After mounting the volume and formatting it, I placed all my project files in /mnt/ebs1/project. The wsgi file is at /mnt/ebs1/project/apache/django.wsgi, and its content is:

        import os, sys
        sys.path.insert(0, '/mnt/ebs1/project')
        sys.path.insert(1, '/mnt/ebs1')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.common.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    My httpd.conf file looks like this:

        LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
        WSGIPythonHome /usr/bin/python2.6
        WSGIScriptAlias / /mnt/ebs1/project/apache/django.wsgi

        <Directory /mnt/ebs1/project>
            Order allow,deny
            Allow from all
        </Directory>

        <Directory /mnt/ebs1/project/apache>
            Order allow,deny
            Allow from all
        </Directory>

        Alias /static/ /mnt/ebs1/project/static/

        <Directory /mnt/ebs1/project/static>
            Order deny,allow
            Allow from all
        </Directory>

    The above configuration gives me: Forbidden: You don't have permission to access / on this server. I found the user running Apache with ps aux: it is www-data, with group www-data. I tried changing the ownership of /mnt/ebs1 and its subdirectories using chown -R www-data:www-data /mnt/ebs1, but that still does not solve the problem. Can anyone tell me what I am doing wrong or have missed?
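
    A hedged checklist. Ownership is not enough if a directory on the way lacks the execute (traverse) bit for www-data, and mount points created by root are a common place for that to happen; namei shows the permissions of every path component. Separately, WSGIPythonHome expects a Python installation prefix (e.g. /usr), not the interpreter binary, so that line is worth a second look. A sketch:

        namei -l /mnt/ebs1/project/apache/django.wsgi   # inspect each component's mode and owner
        sudo chmod o+x /mnt/ebs1                        # if the traverse bit turns out to be missing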

  • Replacing compiz/metacity with openbox reduces workspaces to 1

    - by Brian
    I like to use the GNOME desktop, but I prefer to replace its window manager with openbox, with 4 workspaces. However, when I run openbox --replace, the number of workspaces available drops to 1. If I go into obconf, workspaces is still configured to be 4 (~/.config/openbox/rc.xml shows the same). I can get the workspaces to reappear by changing the value in obconf to anything else, and then back to 4.

    I have been dealing with this problem since Ubuntu 9.04 (now up to 10.10); since I don't reboot very often, I've lived with it, but it's really annoying to have to reset my workspaces whenever I do reboot. Changing the value in rc.xml and running openbox --reconfigure does not seem to have any effect. So what is obconf doing that I'm not (sending a D-Bus message, perhaps? [EDIT: watching with dbus-monitor, I see no messages when changing the workspaces value in obconf])?

    I was hoping there would be a cleaner way to change the window manager than just running openbox --replace at login. So my questions are:

    - Is there a better way to specify an alternate window manager (i.e. a way that doesn't cause the workspaces to break)?
    - If not, how can I automatically set the number of workspaces back to 4?

    Update: I finally got around to trying what I commented on MrShunz's answer (adding WINDOW_MANAGER=/usr/bin/openbox to ~/.gnomerc). But the effect is the same as openbox --replace.
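
    For the second question, a hedged workaround is to push the desktop count back to 4 right after openbox takes over, using an EWMH pager tool. A sketch (wmctrl is in the Ubuntu repositories):

        openbox --replace &
        sleep 2        # give the WM a moment to claim the screen
        wmctrl -n 4    # set the number of desktops via EWMH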

  • Unattended Kickstart Install

    - by Eric
    I've looked around quite a bit and have seen similar setups and questions, but none seem to work for me. I'm using the following command to create a custom ISO:

        /usr/bin/livecd-creator --config=/usr/share/livecd-tools/test.ks --fslabel=TestAppliance --cache=/var/cache/live

    This works great, and it creates the ISO with all of the packages and configs I want on it. My issue is that I want the install to be unattended. However, every time I boot the CD, it asks for all of the info such as keyboard, time zone, root password, etc. These are the basic settings I have in my kickstart script prior to the packages section:

        cdrom
        install
        autopart
        autostep
        xconfig --startxonboot
        rootpw testpassword
        lang en_US.UTF-8
        keyboard us
        timezone --utc America/New_York
        auth --useshadow --enablemd5
        selinux --disabled
        services --enabled=iptables,rsyslog,sshd,ntpd,NetworkManager,network --disabled=sendmail,cups,firstboot,ip6tables
        clearpart --all

    After looking around, I was told that I need to modify my isolinux.cfg file to use either ks=http://X.X.X.X/location/to/test.ks or ks=cdrom:/test.ks. I've tried both methods and it still forces me through the install process. When I tail the Apache logs on the server, I see that the ISO never even tries to fetch the file. Below is the exact syntax I'm trying in isolinux.cfg:

        label http
        menu label HTTP
        kernel vmlinuz0
        append initrd=initrd0.img ks=http://192.168.56.101/files/test.ks ksdevice=eth0

        label localks
        menu label LocalKS
        kernel vmlinuz0
        append initrd=initrd0.img ks=cdrom:/test.ks

        label install0
        menu label Install
        kernel vmlinuz0
        append initrd=initrd0.img root=live:CDLABEL=PerimeterAppliance rootfstype=auto ro liveimg liveinst noswap rd_NO_LUKS rd_NO_MD rd_NO_DM
        menu default
        EOF_boot_menu

    The first two give me a "dracut: fatal: no or empty root=" error until I give them a root= option, and then they just skip the kickstart completely. The last one is my default option, which works fine but requires a lot of user input. Any help would be greatly appreciated.
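
    A hedged explanation of the dracut error: on a live ISO the kernel line must still carry the live root= arguments, since dracut consumes those to find the image; ks= only matters once the installer starts. A sketch merging the working boot entry with the kickstart pointer:

        label autoinstall
        menu label AutoInstall
        kernel vmlinuz0
        append initrd=initrd0.img root=live:CDLABEL=TestAppliance rootfstype=auto ro liveimg liveinst noswap rd_NO_LUKS rd_NO_MD rd_NO_DM ks=cdrom:/test.ks

    Whether liveinst honors ks= depends on the anaconda version on the image, and the CDLABEL must match the --fslabel passed to livecd-creator.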

  • Using OSX home directories from linux

    - by Steffen
    I'm running an OS X (Snow Leopard) server with Open Directory, which is nothing more than a modified OpenLDAP with some Apple-specific schemas. However, I want to reuse this directory on some of my Linux (Debian Squeeze) boxes. It's no problem to authenticate against OS X's LDAP server; this works fine already. What I struggle with is the way the home folders are specified in OS X. If I query the passwd database on one of my Linux machines, the entries imported from OS X look like this:

        myaccount:x:1034:1026:Firstname Lastname:/Network/Servers/hostname.example.com/Volumes/MyShare/Users/myaccount:/bin/bash

    While those network home folders might be fine for OS X clients, I don't want those server-based paths on my Linux machines. I saw that there is an NFSHomeDirectory attribute in the OS X user inspector, but if I change it, the whole user home path gets changed. Since my users should be able to log in on both systems, OS X and Linux, this is not what I want. Does anyone have an idea how I can configure OS X to make my Linux machines use home folders like /net/myaccount while leaving the configuration for OS X clients untouched?
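
    Rather than changing anything on the OS X side, a hedged option is to rewrite the home directory attribute on the Linux side of the NSS lookup. With libnss-ldapd on Squeeze, nslcd can map the attribute per machine. A sketch:

        # /etc/nslcd.conf (assuming libnss-ldapd/nslcd is doing the LDAP lookups)
        map passwd homeDirectory "/net/$uid"

    The classic libnss-ldap has a similar knob, nss_override_attribute_value homeDirectory, in its own config file.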
