Search Results

Search found 15674 results on 627 pages for 'bash date'.


  • Optimize shell and awk script

    - by bryan
    I am using a combination of a shell script, an awk script and a find command to perform multiple text replacements in hundreds of files. The file sizes vary between a few hundred bytes and 20 kbytes. I am looking for a way to speed up this script. I am using Cygwin.

    The shell script:

        #!/bin/bash
        if [ $# = 0 ]; then
            echo "Argument expected"
            exit 1
        fi
        while [ $# -ge 1 ]
        do
            if [ ! -f $1 ]; then
                echo "No such file as $1"
                exit 1
            fi
            awk -f ~/scripts/parse.awk $1 > ${1}.$$
            if [ $? != 0 ]; then
                echo "Something went wrong with the script"
                rm ${1}.$$
                exit 1
            fi
            mv ${1}.$$ $1
            shift
        done

    The awk script (simplified):

        #! /usr/bin/awk -f
        /HHH.Web/{
            if ( index($0,"Email") == 0) {
                sub(/HHH.Web/,"HHH.Web.Email");
            }
            printf("%s\r\n",$0);
            next;
        }

    The command line:

        find . -type f | xargs ~/scripts/run_parser.sh
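
    On Cygwin the dominant cost is usually process creation: the loop above forks one awk plus one mv per file. A minimal sketch of an alternative (an assumption, not from the original post: it requires GNU awk 4.1+ for the inplace extension, and it reuses the same parse.awk logic):

        # Let a small number of gawk processes rewrite every file in place,
        # instead of one awk + one mv per file (GNU awk 4.1+ only):
        find . -type f -print0 | xargs -0 gawk -i inplace -f ~/scripts/parse.awk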


  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file:

        # Andy's super awesome VM kickstart file
        install
        url --url=http://mirrors.kernel.org/centos/6/os/x86_64
        lang en_US.UTF-8
        keyboard us
        text
        %include /tmp/network.ks
        rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
        firewall --service=ssh
        authconfig --enableshadow --passalgo=sha512
        selinux --disabled
        timezone America/Los_Angeles
        bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"
        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        zerombr
        clearpart --all --drives=vda --initlabel
        part /boot --fstype=ext4 --size=500
        part pv.253002 --grow --size=1
        volgroup vg1 --pesize=4096 pv.253002
        logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
        logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032
        repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
        repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
        repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
        repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api
        %packages
        @core
        @server-policy
        puppet
        facter
        %end
        %pre --erroronfail
        #!/bin/bash
        for x in `cat /proc/cmdline`; do
            case $x in
                SERVERNAME*)
                    eval $x
                    echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" >> /tmp/network.ks
                    ;;
            esac
        done
        %end
        %post
        puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
        %end
        reboot

    My kernel arguments are in the following virt-install command that I use to start the install:

        virt-install -n zabbix -r 2048 --vcpus=2 -l http://mirrors.kernel.org/centos/6/os/x86_64 --disk /dev/vg_inf1/zabbix --network bridge=br85 --initrd-inject=/home/ashinn/vm_kickstart --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" --autostart

    During the install, I can pull up a console on the second terminal and verify the contents of /tmp/network.ks are:

        network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com

    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?
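
    One thing worth trying (an assumption, not confirmed in the thread): Anaconda needs the network up to fetch the install tree before %pre ever writes /tmp/network.ks, so the TCP/IP dialog can only be pre-answered on the kernel command line. A sketch:

        # same virt-install invocation, with the network answers added to the boot line
        virt-install ... --extra-args "ks=file:/vm_kickstart ksdevice=eth0 ip=dhcp SERVERNAME=zabbix"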


  • Error when starting qpidd as a service

    - by Sparks
    I have recently swapped from CentOS 5 to Fedora 17. Previously I have created my own init.d scripts successfully (albeit not for qpidd); however, in Fedora I cannot get it to work. I have created the following script (called qpidd) in the init.d directory:

        #!/bin/bash
        #
        # /etc/rc.d/init.d/qpidd
        #
        # QPID/AMQP Broker scripts
        #
        #
        # chkconfig: 2345 20 80
        # description: QPID/AMQP Broker service
        # processname: qpidd
        # pidfile: /var/lock/subsys/qpidd

        # Source function library.
        . /etc/init.d/functions

        SERVICENAME=qpidd

        start() {
            echo -n "Starting $SERVICENAME: "
            daemon qpidd -d &
            retval=$?
            touch /var/lock/subsys/$SERVICENAME
            return $retval
        }

        stop() {
            echo -n "Shutting down $SERVICENAME: "
            qpidd -q &
            retval=$?
            rm -f /var/lock/subsys/$SERVICENAME
            return $retval
        }

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            status)
                status qpidd
                ;;
            restart)
                stop
                start
                ;;
            condrestart)
                [ -f /var/lock/subsys/$SERVICENAME ] && restart || :
                ;;
            *)
                echo "Usage: $SERVICENAME {start|stop|status|restart}"
                exit 1
                ;;
        esac
        exit $?

    After this, I ran chkconfig --add qpidd; however, now when I run sudo service qpidd start I get the following message:

        Starting qpidd (via systemctl):  Job failed. See system journal and 'systemctl status' for details.

    If I then run systemctl status qpidd I get the following message:

        Failed to issue method call: Unit name qpidd is not valid.

    I am now lost. I have searched the web and Stack Overflow but cannot find anybody with a similar problem; any help or direction to a website that can help would be much appreciated. Sparks :)
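
    On Fedora 17, service calls are translated to systemd, which addresses SysV scripts by their full unit name. A debugging sketch (an assumption about the fix, not a confirmed answer): make systemd re-read the script and use the .service suffix explicitly.

        sudo systemctl daemon-reload
        sudo systemctl status qpidd.service
        sudo systemctl start qpidd.service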


  • Advanced Linux file permission question (ownership change during write operation)

    - by Kent
    By default the umask is 0022:

        usera@cmp$ touch somefile; ls -l
        total 0
        -rw-r--r-- 1 usera usera 0 2009-09-22 22:30 somefile

    The directory /home/shared/ is meant for shared files and should be owned by root and the shared group. Files created here by any user are automatically owned by the shared group. There is a cron job taking care of changing the owning user and owning group (of any moved files) once per day:

        usera@cmp$ cat /etc/cron.daily/sharedscript
        #!/bin/bash
        chown -R root:shared /home/shared/
        chmod -R 770 /home/shared/

    I was writing a really large file to the shared directory. It had me (usera) as owning user and the shared group as group owner. During the write operation the cron job was run, and I still had no problem completing the write process. You see, I thought this would happen:

    I am writing the file. The file permissions and ownership data for the file look like this:

        -rw-r--r-- usera shared

    The cron job kicks in! The chown line is processed and now the file is owned by the root user and the shared group. As the owning group only has read access to the file, I get a file write error! Boom! End of story.

    Why did the operation succeed? A link to some kind of reference documentation to back up the reason would be very welcome (as I could use it to study more details).
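
    The short answer is that POSIX checks permissions when a file is opened, not on every write: an already-open file descriptor keeps its access rights even if ownership or mode change afterwards (see the open(2) and chmod(2) man pages). A quick demonstration sketch:

        # Shell 1: open fd 3 for appending; access is checked here, once
        exec 3>> /home/shared/bigfile

        # Shell 2: take all access away mid-write
        sudo chown root:root /home/shared/bigfile
        sudo chmod 600 /home/shared/bigfile

        # Shell 1: writes through the already-open descriptor still succeed
        echo "still writing" >&3
        exec 3>&-   # close the descriptor; reopening would now fail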


  • Millions of files in PHP's tmp directory - how to delete them?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I could ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; command but that didn't work. ls 'sess_0*' | xargs rm didn't work either. I'm currently running rm -rf tmp but after two hours the folder appears to be the same size.

    REFERENCE INFO: I suddenly encountered an error where sessions could no longer be written to disk:

        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md0              457G  126G  308G  29% /
        tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
        udev                   10M  664K  9.4M   7% /dev
        tmpfs                 1.8G     0  1.8G   0% /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

        kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This led me to think of a full folder, obviously, but since my web folder only has 60k files (having counted them), I guessed it was the tmp folder (the local one, for this instance of php) that messed things up. Some commands I ran:

        $ sudo ls sess_a* | xargs rm -f
        bash: /usr/bin/sudo: Argument list too long

        find . -exec rm {} \;
        rm: cannot remove directory '.'
        find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, php5, ISPConfig, SuEXEC and Fast-CGI.
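
    For directories this size, the usual advice (a suggestion, not from the post) is to stream deletions without ever building an argument list or forking a process per file:

        # GNU find: unlink files as they are found, no rm fork per file
        find /var/www/clients/client1/web1/tmp -type f -name 'sess_*' -delete

        # Alternative often reported fastest on ext3: overwrite the full
        # directory with an empty one via rsync
        mkdir /tmp/empty
        rsync -a --delete /tmp/empty/ /var/www/clients/client1/web1/tmp/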


  • Cannot connect to Solaris Server when Oracle GoldenGate process uses the port

    - by Abdallah Ghrb
    I'm trying to test Oracle GoldenGate to replicate data between two databases on the same server, so I installed two databases and two GoldenGate homes on the same machine. The GoldenGate processes are started from each home and they are responsible for the replication: the process from home 1 is configured on port 7809, and the process from home 2 is configured on port 7810.

    For a successful replication, processes started from GoldenGate home 1 should communicate with processes started from GoldenGate home 2. But for some reason, this is not happening. The GoldenGate log file has the following error:

        OGG-01223 Oracle GoldenGate Capture for Oracle, exthrr.prm: TCP/IP error 131 (Connection reset by peer).

    "Googling" for this error, it said that the connection occurred but the host terminated it. I tried to telnet the machine on the used port and it gave the following:

        bash-3.00$ telnet 10.10.3.124 7810
        Trying 10.10.3.124...
        Connected to 10.10.3.124.
        Escape character is '^]'.
        Connection to 10.10.3.124 closed by foreign host.

    Here the communication occurs, but it is closed by the host after only around 3 seconds, which matches the explanation of the error in the GoldenGate log file. I tried to change the port but still got the same error. The problem happens only when the GoldenGate process is using the port; when another process is using the same port, I can telnet successfully.
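
    To confirm which process actually owns port 7810 when the connection drops, a Solaris-flavoured sketch (scanning pfiles output is a common substitute for the netstat -p that older Solaris lacks):

        netstat -an | grep 7810
        for pid in $(ls /proc); do
            pfiles $pid 2>/dev/null | grep -q 'port: 7810' && echo "PID $pid has port 7810 open"
        done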


  • Login with Enterprise Principal Name using sssd AD backend in Ubuntu 14.04 LTS

    - by Vinícius Ferrão
    I’m running sssd version 1.11 with the AD backend in Ubuntu 14.04 LTS (1.11.5-1ubuntu3) to authenticate users from Active Directory running on Windows Server 2012 R2, and I’m trying to achieve logins with the User Principal Name for all users of the domain. But the UPNs are always Enterprise Principal Names. Let me illustrate the problem with my user account:

        Domain: local.example.com
        sAMAccountName: ferrao
        UPN: [email protected] (there’s no local in the UPN)

    I can successfully log in with the sAMAccountName attribute, which is fine, but I can’t log in with [email protected], which is my UPN. The optimum solution for me is to allow logins from both the sAMAccountName and the UPN (User Principal Name). If that’s not possible, the UPN should be the right way, instead of the sAMAccountName.

    Another annoyance is the homedir pattern with these options in sssd.conf:

        default_shell = /bin/bash
        fallback_homedir = /home/%d/%u

    What I would like to achieve is separate home directories derived from the EPN. For example:

        /home/example.com/user
        /home/whatever.example.com/user

    But with this pattern I can’t map things the way I would like. I’ve looked through man pages and was unable to find any answers for these issues. Thanks,
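
    Two knobs worth checking in sssd.conf (assumptions to verify against man sssd-krb5 and man sssd-ldap for the installed build, not a confirmed fix):

        [domain/local.example.com]
        # let Kerberos resolve the short UPN suffix as an enterprise principal
        krb5_use_enterprise_principal = True
        # make sure sssd reads each user's UPN from AD
        ldap_user_principal = userPrincipalName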


  • How to configure a shortcut for an SSH connection through an SSH tunnel

    - by Simone Carletti
    My company's production servers (FOO, BAR...) are located behind two gateway servers (A, B). In order to connect to server FOO, I have to open an SSH connection with server A or B with my username JOHNDOE; then from A (or B) I can access any production server by opening an SSH connection with a standard username (let's call it WEBBY). So, each time I have to do something like:

        ssh johndoe@a
        ...
        ssh webby@foo
        ...
        # now I can work on the server

    As you can imagine, this is a hassle when I need to use scp or if I need to quickly open multiple connections. I have configured an SSH key and I'm also using .ssh/config for some shortcuts. I was wondering if I can create some kind of SSH configuration in order to type ssh foo and let SSH open/forward all the connections for me. Is it possible?

    Edit: womble's answer is exactly what I was looking for, but it seems right now I can't use netcat because it's not installed on the gateway server.

        weppos:~ weppos$ ssh foo -vv
        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /Users/xyz/.ssh/config
        debug1: Applying options for foo
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Executing proxy command: exec ssh a nc -w 3 foo 22
        debug1: permanently_drop_suid: 501
        debug1: identity file /Users/xyz/.ssh/identity type -1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_rsa type 1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_dsa type 2
        bash: nc: command not found
        ssh_exchange_identification: Connection closed by remote host
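
    A netcat-free variant of the same idea (a sketch: ssh -W requires OpenSSH 5.4 or newer on the client, and the debug output above shows 5.1, so this assumes an upgrade):

        # ~/.ssh/config
        Host a
            User johndoe

        Host foo
            User webby
            # tunnel stdin/stdout to foo:22 through gateway a; no nc needed
            ProxyCommand ssh a -W %h:%p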


  • Need help with custom init script

    - by churnd
    I'm trying to set up an init script for a process on Red Hat Linux:

        #!/bin/sh
        #
        # Startup script for Conquest
        #
        # chkconfig: 345 85 15
        # (start or stop process definition within the boot process)
        # description: Conquest DICOM Server
        # processname: conquest
        # pidfile: /var/run/conquest.pid

        # Source function library. This creates the operating environment
        # for the process to be started
        . /etc/rc.d/init.d/functions

        CONQ_DIR=/usr/local/conquest

        case "$1" in
            start)
                echo -n "Starting Conquest DICOM server: "
                # daemon starts only one process of a given name
                cd $CONQ_DIR && daemon --user mruser ./dgate -v
                echo
                touch /var/lock/subsys/conquest
                ;;
            stop)
                echo -n "Shutting down Conquest DICOM server: "
                killproc conquest
                echo
                rm -f /var/lock/subsys/conquest
                # only if the process generates this file
                rm -f /var/run/conquest.pid
                ;;
            status)
                status conquest
                ;;
            restart)
                $0 stop
                $0 start
                ;;
            reload)
                echo -n "Reloading process-name: "
                killproc conquest -HUP
                echo
                ;;
            *)
                echo "Usage: $0 {start|stop|restart|reload|status}"
                exit 1
        esac
        exit 0

    However, the cd $CONQ_DIR is getting ignored, because the script errors out:

        # ./conquest start
        Starting Conquest DICOM server: -bash: ./dgate: No such file or directory  [FAILED]

    For some reason, I have to run dgate as ./dgate; I cannot specify the full path /usr/local/conquest/dgate. The software came with an init script for a Debian system, so that script uses start-stop-daemon with the --chdir option to change to where dgate is, but I haven't found a way to do this with the Red Hat daemon function.
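
    One workaround sketch (an assumption about RHEL's daemon() helper: it passes its command string through a fresh shell via runuser, so the directory change can be folded into that string):

        # inside the start) branch, replace the cd + daemon pair with:
        daemon --user mruser "cd $CONQ_DIR && ./dgate -v"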


  • Apache 2.2 + mod_fcgid + PHP 5.4: (104) Connection reset by peer

    - by Michele Piccirillo
    On a Debian 6 VPS, I'm running PHP 5.4 via mod_fcgid on a couple of different virtual hosts, managed by Virtualmin GPL. At random, I get 500 Internal Server Errors; restarting Apache brings everything back to normality. Examining the logs, I find messages of this kind:

        [Thu Oct 04 15:39:35 2012] [warn] [client 173.252.100.117] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Thu Oct 04 15:39:35 2012] [error] [client 173.252.100.117] Premature end of script headers: index.php

    Any ideas about what is happening?

    UPDATE: I found a similar question and the author reported to have solved the problem disabling APC. I tried following the advice, but I'm still getting the same errors.

    VirtualHost configuration:

        SuexecUserGroup "#1000" "#1000"
        ServerName example.com
        DocumentRoot /home/example/public_html
        ScriptAlias /cgi-bin/ /home/example/cgi-bin/
        DirectoryIndex index.html index.htm index.php index.php4 index.php5
        <Directory /home/example/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            AddHandler fcgid-script .php5
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php5
        </Directory>
        <Directory /home/example/cgi-bin>
            allow from all
        </Directory>
        RemoveHandler .php
        RemoveHandler .php5
        IPCCommTimeout 61
        FcgidMaxRequestLen 1073741824

    php5.fcgi:

        #!/bin/bash
        PHPRC=$PWD/../etc/php5
        export PHPRC
        umask 022
        export PHP_FCGI_CHILDREN
        PHP_FCGI_MAX_REQUESTS=99999
        export PHP_FCGI_MAX_REQUESTS
        SCRIPT_FILENAME=$PATH_TRANSLATED
        export SCRIPT_FILENAME
        exec /usr/bin/php5-cgi

    Package versions:

        webmin-virtual-server/virtualmin-universal 3.94.gpl-2
        apache2/squeeze 2.2.16-6+squeeze8
        libapache2-mod-fcgid/squeeze 1:2.3.6-1+squeeze1
        php5 5.4.7-1~dotdeb.0
        php5-apc 5.4.7-1~dotdeb.0
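
    One mismatch worth ruling out (an assumption, not confirmed for this setup): PHP exiting on its own request cap while mod_fcgid still considers the process alive can show up exactly as "Connection reset by peer". A common sketch is to keep the two limits aligned:

        # vhost / fcgid configuration: recycle processes on Apache's side first
        FcgidMaxRequestsPerProcess 5000

        # php5.fcgi: let PHP serve at least as many requests as mod_fcgid expects
        PHP_FCGI_MAX_REQUESTS=5000
        export PHP_FCGI_MAX_REQUESTS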


  • mysql command line not working

    - by Sandeepan Nath
    I have MySQL running on my Fedora system. I have XAMPP set up on the system, and the PHP projects present in the webspace are working fine. phpMyAdmin is working fine. Echoing phpinfo() in a PHP script also shows MySQL enabled. But running the mysql connect command:

        mysql -u[username] -p[password]

    gives this:

        bash: mysql: command not found

    How do I fix that? Any pointers? I guess I need to do some pointing (define some path in some file) so that my system knows that MySQL is installed. What exactly do I have to do?

    Additional details: this system was someone else's and he is not available here. Maybe PHP/MySQL was already set up on the system. I just freshly extracted XAMPP for Linux into /opt/lampp/ and have put all the above mentioned things (PHP projects and phpMyAdmin) there. After doing that I had a socket problem (phpMyAdmin was not working and showing this):

        #2002 - The server is not responding (or the local MySQL server's socket is not correctly configured)

    I restarted lampp using ./lampp restart but the problem remained. Then after turning on the system today, I started lampp and everything worked just fine. No project issues anymore; only command-line MySQL is not working.
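
    A sketch of the usual fix (paths assumed from the stock XAMPP for Linux layout, where the client binary lives under /opt/lampp/bin):

        # one-off invocation
        /opt/lampp/bin/mysql -u root -p

        # or put XAMPP's binaries on the PATH permanently
        echo 'export PATH=/opt/lampp/bin:$PATH' >> ~/.bash_profile
        source ~/.bash_profile
        mysql -u root -p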


  • Can MS Services for Unix be deployed and accessed from a shared drive?

    - by Ian C.
    I'm interested in experimenting with replacing our dependency on MKS with MS' Services for Unix toolset. I was wondering if anyone has any experience with deploying SFU on a shared drive? We like to, wherever possible, host our dev tools on one central NAS and call to the NAS to access the tools instead of rolling stuff out to each and every desktop. I'm not interested in the NFS support or ActiveState Perl. Really, none of the daemon technology is required here. I'm looking for replacements for the coreutils/binutils stuff you find in Linux (and MKS on Windows): sed, awk, csh, bash, grep, ls, find -- the meat-and-potatoes command line apps that our build and test scripts are built around. If I limit the install to just the Interix GNU components (and maybe the Remote Connectivity components), will it run nicely from a shared location?

    To head off some questions: Yes, I've looked at Cygwin. Unfortunately its performance in our build and test environment is poor. It runs considerably slower than MKS and it's not a direct drop-in replacement for MKS (thanks to its internal pathing and limitations with commands like 'ps'), so it's a tougher sell. Yes, I'm looking at the MinGW offering in parallel to this.


  • Ubuntu init.d script not being called on startup

    - by Mike
    I've got a script in Ubuntu 9.04 in init.d that I've set to run on startup with update-rc.d, using update-rc.d init_test defaults 99. All of the symlinks are there and the permissions appear to be correct:

        -rwxr-xr-x 1 root root 642 2010-10-28 16:44 init_test

        mike@xxxxxxxxxx:~$ find /etc -name S99* | grep init_test
        find: /etc/rc5.d/S99init_test
        find: /etc/rc4.d/S99init_test
        find: /etc/rc2.d/S99init_test
        find: /etc/rc3.d/S99init_test

    The script runs through source and ./ without issue and behaves correctly. Here is the source of the script:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          init test script
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO

        start() {
            echo "hi"
            echo "start called" >> /tmp/test.log
            return
        }

        stop() {
            echo "Stopping"
        }

        echo "Script called" >> /tmp/test.log

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            *)
                echo "Usage: {start|stop|restart}"
                exit 1
                ;;
        esac
        exit $?

    When the machine starts, I don't see "Script called" or "start called" in the test.log at all. Is there anything obvious I'm messing up?
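
    A debugging sketch (suggestions, not a confirmed diagnosis): confirm which runlevel the machine actually boots into and run the exact link rc would run, which separates "link never executed" from "script failed silently":

        runlevel                              # e.g. "N 2"
        sudo /etc/rc2.d/S99init_test start    # simulate what rc 2 does
        cat /tmp/test.log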


  • Can't pin modified shortcuts to the Windows 7 task bar

    - by Coder
    I have a shortcut to a .bat file which I pin to the task bar using a workaround (using another icon), and this seems to work. Now I make a copy of that shortcut, point it to a different .bat file, rename it, and I can't pin this one to the task bar. I have to find some other new unused icon to pin, pin it, then modify it manually.

    The other problem this causes is that Windows seems to track which icons were pinned even if they are modified after the fact. As such, if I use Media Player as my dummy icon, pin it, then alter its name and shortcut to point to a .bat file, I can't re-pin Windows Media Player, and if I select unpin from Windows Media Player, it unpins my shortcut to my .bat file.

    I can't believe how ridiculous this is. Is there a way to pin anything I want to the taskbar (i.e. a .bat file in my case) that does not cause problems like this? Is there an easy way I can copy an existing shortcut, modify it, and re-pin it to the taskbar? The reason I want to copy it is that I start a .bat file (in particular git bash) and I set properties on the window like quick edit, increase the screen buffer and set its position and size manually. I don't want to have to do this to every single icon I want to pin, since they will be identical aside from the shortcut URL.


  • Get the desktop/viewport of a window in enlightenment?

    - by Zorf
    Okay, so, given a XID of a window I need to get its desktop or viewport, as well as the currently active one. Enlightenment does not seem to properly respond to wmctrl, which leads to:

        ***@note:~ > wmctrl -lG
        0x01e00002 -1 21 395 310 146 note Conky (note)    # it places conky windows on -1 for some reason?
        0x01c00002 -1 65 655 230 158 note Conky (note)
        0x01a00002 -1 25 215 230 182 note Conky (note)
        0x01800002 -1 25 550 310 110 note Conky (note)
        0x01600002 -1 685 145 230 120 note Conky (note)
        0x01400002 -1 1120 245 280 206 note Conky (note)
        0x01200002 -1 1095 35 230 186 note Conky (note)
        0x01000002 -1 1145 470 250 266 note Conky (note)
        0x00c00002 -1 40 34 230 182 note Conky (note)
        0x00e00029 0 0 0 1440 900 note ~ : bash – Konsole    # desktop 2, fullscreen
        0x03a00060 0 505 231 899 642 note Downloads – 'Dolphin'    # desktop 0
        0x0480001a 0 206 222 958 526 note Lifelover - Kärlek - becksvart melankoli    # desk 2
        0x034000e6 0 116 32 984 767 note clemctrl – Kate    # desk 0
        0x02c01b78 0 309 314 549 520 note *************    # desk 1
        0x04e00062 0 104 31 990 619 note XChat: *** @ Free / #*** (+Ccnt)    # desk 1
        0x05c00112 0 22 35 1396 834 note StarCraft on Reddit - Chromium    # desk 3
        0x02c0f292 0 453 356 549 520 note ***    # desk 1
        0x02c000c0 0 860 216 557 645 note Buddy List    # desk 1

    As can be seen, all windows are on desk 0 in wmctrl except the conky windows. Furthermore, the geometry-viewport trick that works in some WMs doesn't seem to work either. Are there any other tricks to get the viewport/desktop a window is on? There has to be some way to get it, right?
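
    One more angle to try (a suggestion; it depends on how much of EWMH this Enlightenment version implements): query the window and root properties directly instead of going through wmctrl:

        xprop -id 0x03a00060 _NET_WM_DESKTOP      # desktop of one window by XID
        xprop -root _NET_CURRENT_DESKTOP          # currently active desktop
        xprop -root _NET_DESKTOP_VIEWPORT         # viewport origin(s)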


  • init.d script runs correctly but the process doesn't stay alive after a full boot

    - by thetrompf
    I have a problem with an init.d script:

        #!/bin/bash
        ES_HOME="/var/es/current"
        PID=$(ps ax | grep elasticsearch | grep $ES_HOME | grep -v grep | awk '{print $1}')
        #echo $PID
        #exit 0
        case "$1" in
            start)
                if [ -z "$PID" ]; then
                    echo "Starting Elasticsearch"
                    echo "Starting Elasticsearch" >> /var/tmp/elasticsearch
                    su -m elasticsearch -c "${ES_HOME}/bin/elasticsearch"
                    exit 0;
                else
                    echo "Elasticsearch already running"
                    echo "Elasticsearch already running" >> /var/tmp/elasticsearch
                    exit 0;
                fi
                ;;
            stop)
                if [ -n "$PID" ]; then
                    echo "Stopping Elasticsearch"
                    kill ${PID}
                    echo "Stopped Elasticsearch"
                    exit 0;
                else
                    echo "Elasticsearch is not running"
                    exit 0;
                fi
                ;;
        esac

    The script runs just fine; as I can see in /var/tmp/elasticsearch, a new line is added after every boot. But if I run:

        /etc/init.d/elasticsearch stop

    just after the server is booted, I get "Elasticsearch is not running"; ergo, somehow the process does not stay alive. My question is why, and what am I doing wrong? Thanks in advance.
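
    A likely culprit (an assumption: older elasticsearch releases stay in the foreground unless told to daemonize, so the process dies with the boot shell): start it detached and record a pidfile, which also makes the stop branch deterministic:

        # start) branch: -d daemonizes, -p records the pid
        su -m elasticsearch -c "${ES_HOME}/bin/elasticsearch -d -p ${ES_HOME}/es.pid"

        # stop) branch: kill by pidfile instead of grepping ps
        kill $(cat ${ES_HOME}/es.pid)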


  • Mongrel Cluster on Ubuntu Server Karmic

    - by trobrock
    I am trying to get Mongrel Cluster working on my Ubuntu Server Karmic box in preparation for setting up Capistrano. I've been trying to get the two to work all day and finally decided to completely remove Capistrano and see if I can just get Mongrel Cluster to work. I ran this to install Mongrel Cluster:

        gem install mongrel mongrel_cluster

    Everything installed fine. When I change into my app's directory...

        # mongrel_rails
        -bash: mongrel_rails: command not found

    I can run it from its install location:

        # /var/lib/gems/1.8/bin/mongrel_rails
        Usage: mongrel_rails <command> [options]
        Available commands are:
        ...

    It lets me build the cluster configuration file fine, but when I run the cluster::start command:

        # /var/lib/gems/1.8/bin/mongrel_rails cluster::start
        starting port 8000
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8000 -P tmp/pids/mongrel.8000.pid -l log/mongrel.8000.log
        starting port 8001
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8001 -P tmp/pids/mongrel.8001.pid -l log/mongrel.8001.log
        starting port 8002
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8002 -P tmp/pids/mongrel.8002.pid -l log/mongrel.8002.log

    It seems it isn't calling it from the right directory after that command. What can I do to fix this? I tried setting the path previously when trying to set up Capistrano, but the path didn't stay set when Capistrano used ssh to run the commands.
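
    A sketch of the usual PATH fix (the gem bin directory is taken from the output above; the assumption is that cluster::start fails only because it re-invokes mongrel_rails by bare name):

        # make the gem bin directory visible to the shell
        echo 'export PATH=/var/lib/gems/1.8/bin:$PATH' >> ~/.bashrc
        source ~/.bashrc

        # verify that cluster::start can now find mongrel_rails by name
        mongrel_rails cluster::start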


  • Using OSX home directories from linux

    - by Steffen
    I'm running an OSX (Snow Leopard) Server with Open Directory, which is nothing other than a modified OpenLDAP with some Apple-specific schemas. However, I want to reuse this directory on some of my Linux (Debian Squeeze) boxes. It's no problem to authenticate against OSX's LDAP server; this works fine already.

    What I struggle with is the way the home folders are specified in OSX. If I query the passwd config on one of my Linux machines, the OSX-imported entries look like this:

        myaccount:x:1034:1026:Firstname Lastname:/Network/Servers/hostname.example.com/Volumes/MyShare/Users/myaccount:/bin/bash

    While those network home folders might be fine for OSX clients, I don't want those server-based paths on my Linux machines. I saw that there is an NFSHomeDirectory attribute in the OSX user inspector, but if I change this, the whole user home path gets changed. Since my users should be able to log in on both systems, OSX and Linux, this is not what I want.

    Does anyone have an idea how I must configure OSX to make my Linux machines use home folders like /net/myaccount and leave the configuration for OSX clients untouched?
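
    An alternative that leaves OSX alone entirely (an assumption: the Debian boxes can switch their NSS/PAM stack to sssd, whose override_homedir rewrites the path locally at lookup time):

        # /etc/sssd/sssd.conf (sketch; server name from the example above)
        [domain/od]
        id_provider = ldap
        ldap_uri = ldap://hostname.example.com
        override_homedir = /net/%u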


  • Scriptable BitTorrent clients?

    - by James McMahon
    In an effort to further automate all the little computer housekeeping tasks that can waste my time, I am looking into BitTorrent clients that have the ability to script common tasks. I've done some Googling and it looks like Transmission might have some of said capabilities, but their site wasn't very clear on the details. Things I am looking to do:

      - Prioritize and label torrents based on trackers
      - Set seed length based on trackers and file size
      - Set additional seed time when a torrent's seed time expires, based on a number of factors like time spent seeding, remaining disk space and ratio
      - Move torrents to appropriate places post-seeding, based on labels and tracker

    Basically, while I could write Python or Bash scripts for things like moving torrents around and other simple actions, I need a way to talk to the client to figure out things like the torrent's seed time, tracker, labels, file size, etc. Is there any client out there that would allow me all or a subset of these actions? I have access to Linux, Mac and Windows and am not tied to any particular torrent client. I am a programmer, so I have no problem writing scripts, but examples of torrent scripting would also be helpful.
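
    As a concrete starting point, Transmission ships a command-line remote that exposes most per-torrent state over its RPC interface (a sketch; verify the flags against the installed version's man page):

        transmission-remote -l                      # ids, status, ratio, name
        transmission-remote -t 42 --info            # tracker, seeding time, location
        transmission-remote -t 42 -sr 2.0           # raise the seed-ratio limit
        transmission-remote -t 42 --move /done/tv   # relocate after seeding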


  • How can I both pipe and display output in Windows' command line?

    - by Bob
    I have a process I need to run within a batch file. This process produces some output. I need to both display this output on the screen and send (pipe) it to another program. The bash method uses tee:

        echo 'ee' | tee /dev/tty | foo

    Is there an equivalent for Windows? I am happy to use PowerShell if necessary. There are tee ports for Windows, but there does not appear to be an equivalent for /dev/tty, which complicates matters.

    The specific use case here: I have a program (launch4j) that I need to run, displaying output to the user. At the same time, I need to be able to detect success or failure in the script. Unfortunately, this program does not set an exit code, and I cannot force it to do so. My current workaround involves piping to find, to search the output (launch4j config.xml | find "Successfully created") -- however, that swallows the output I need to display. Therefore, I need some way to both display to the screen and send the output to a command -- and this command should be able to set ERRORLEVEL (it cannot run asynchronously).
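
    A PowerShell sketch (an assumption that moving the wrapper to PowerShell is acceptable: Tee-Object displays output on the console while keeping a copy to test, and exit sets the ERRORLEVEL a calling batch file sees):

        & launch4j config.xml 2>&1 | Tee-Object -FilePath out.log
        if (Select-String -Path out.log -Pattern "Successfully created" -Quiet) {
            exit 0     # success, visible to the caller as ERRORLEVEL 0
        } else {
            exit 1
        }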


  • Force Juniper-network client to use split routing

    - by craibuc
    I'm using the Juniper client for OSX ('Network Connect') to access a client's VPN. It appears that the client is configured not to use split routing. The client's VPN host is not willing to enable split routing. Is there a way for me to override this configuration, or to do something on my workstation to get the non-client network traffic to bypass the VPN? This wouldn't be a big deal, but none of my streaming radio stations (e.g. XM) work while connected to their VPN. Apologies for any inaccuracies in the terminology.

    ** edit **

    The Juniper client changes my system's resolv.conf file from:

        nameserver 192.168.0.1

    to:

        search XXX.com [redacted]
        nameserver 10.30.16.140
        nameserver 10.30.8.140

    I've attempted to restore my preferred DNS entry to the file:

        $ sudo echo "nameserver 192.168.0.1" >> /etc/resolv.conf

    but this results in the following error:

        -bash: /etc/resolv.conf: Permission denied

    How does the super-user account not have access to this file? Is there a way to prevent the Juniper client from making changes to this file?
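
    On the resolv.conf error specifically: the redirection is performed by the calling shell before sudo ever runs, so only the echo was privileged. Two standard workarounds (generic shell idioms, not Juniper-specific):

        # run the whole command line, redirection included, as root
        sudo sh -c 'echo "nameserver 192.168.0.1" >> /etc/resolv.conf'

        # or let a privileged tee do the appending
        echo "nameserver 192.168.0.1" | sudo tee -a /etc/resolv.conf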


  • Can I recover a zpool after it's been exported, given that devices have not been reallocated?

    - by cali-spc
    I had a zpool we'll call 'testpool'. testpool had 3 devices included in it, and a single zfs called 'test'. I needed to move 'test' to a new, smaller pool, and I wanted to name the new pool the same name, 'testpool'. Basically I did the following:

        zfs send testpool@backup > /tmp/test-dump
        zpool export -f testpool
        zpool create -f testpool newdevice
        zfs receive -F testpool < /tmp/test-dump

    Unfortunately I found out that the testpool@backup snapshot was the wrong snapshot. Too old. I have yet to reallocate the three devices that were in the OLD testpool. (None of these 3 devices are 'newdevice'; they are a separate 3.) Is there any way I can recover the data on those devices? I'm thinking that since I named the new, smaller pool the same as the old zpool, I'm pretty much SOL. But if not, that would be nice to know.

    Edit: More info. I did a 'zpool import' and got this:

        bash-3.00# zpool import
          pool: testpool
            id: 14781458723915654709
         state: ONLINE
        action: The pool can be imported using its name or numeric identifier.
        config:

                testpool    ONLINE
                  c5t8d0    ONLINE
                  c5t9d0    ONLINE
                  c5t10d0   ONLINE

    So I'm guessing I just need the syntax to import this zpool using its numeric identifier, while giving it a new name. S.
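
    The import-and-rename syntax asked for (standard zpool usage; the new pool name below is just an example):

        # import the exported pool by numeric id, giving it a fresh name
        zpool import 14781458723915654709 oldtestpool
        zfs list -r oldtestpool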


  • qsub: How can I find out what DRM middleware exactly is installed on a cluster?

    - by gojira
    I have a user account on a very big cluster. I have previous experience with Grid Engine and want to use the cluster for array jobs. The documentation tells me to use "qsub" for load balancing / submission of many jobs; therefore I assumed this meant the cluster has Grid Engine. However, all my Grid Engine scripts failed to run. I checked the documentation and it is a bit weird. Now I slowly suspect that this cluster does not actually have Grid Engine; maybe it's running something called Torque (?!).

    The whole terminology in the man pages is a bit weird for me as a Grid Engine user; for example, they talk about "bulk jobs" instead of "array jobs". There is no reference to variables I rely on, like SGE_TASK_ID etc. Instead they refer to variables starting with PBS_. Still, there are qsub and qstat commands. Also, qsub behaves differently; apparently it is not possible to specify the command-line parameters in bash-script comments etc.

    There is documentation for the cluster system, but it does not say what the DRM middleware actually is -- it refers to the entire DRM system simply as "qsub". I tried:

        qsub --version
        qsub: 1.2 2010/8/17

    I am not sure what I am actually running when I invoke qsub on that cluster! My question is, how can I find out if I am running Grid Engine or Torque (or whatever it is), and which version?
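
    A few fingerprinting commands (a sketch; each exists in only one of the two stacks, so which ones succeed identifies the middleware):

        echo $SGE_ROOT                                 # set on Grid Engine clusters
        qconf -help 2>/dev/null && echo "Grid Engine"  # qconf is SGE-only
        pbsnodes -a 2>/dev/null && echo "Torque/PBS"   # pbsnodes is PBS/Torque-only
        qstat --version 2>/dev/null                    # Torque's qstat accepts --version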


  • Cannot load from RAID with GRUB

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS and my /dev/sda HDD was replaced several days ago. I used these commands to replace it:

        # go to superuser
        sudo bash
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded"

        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shutdown computer
        shutdown now
        # physically replace old disk by new
        # start system again

        # see partitions
        fdisk -l
        # copy partitions from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # recreate id for sda
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded, recovering"
        # to see status you can use
        cat /proc/mdstat

    After the rebuilding completed, "fdisk -l" says that I have no valid partition table on /dev/md0. So:

    1) "update-grub" finds only the /sda and /sdb Linux installs, not /md0
    2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0"

    I cannot load my system except from /sdb1 and /sda1, and only in DEGRADED mode... This is my partial fdisk -l output:

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2       940910985   976768064    17928540    5  Extended
        /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Can anybody resolve this issue? I have a big headache with this.
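
    For what it's worth, /dev/md0 holding a filesystem directly (and therefore no partition table) is normal for this layout; the usual recovery sketch (not verified against this particular box) is to install GRUB into the MBR of each member disk rather than into the array device:

        # either RAID1 member should then be able to boot the machine
        grub-install /dev/sda
        grub-install /dev/sdb
        update-grub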


  • Cutting Ubuntu to the bone for Virtualbox VM

    - by user32853
    I've been looking around for a Linux variant which will install only the software I need, rather than everything Ubuntu (for example) puts in by default. This is to create a virtual machine in VirtualBox which has bash, apache, python, perl, SQLite, openssh and a few other programs, but nothing else. I'd prefer to go with Ubuntu if possible, but another modern distro would do as well (I like using apt-get and yum rather than downloading/compiling etc).

    So far, I've tried:

      - SuseStudio.com, which is probably the best so far.
      - Pressing F4 to get the boot options on Ubuntu 9.10, but there is no minimal installation (I think there was once).
      - Arch Linux: slightly confusing install procedure, but I might go back and try again.
      - Gentoo: started well, but fairly soon the HD on the virtual machine went to 2Gb, even before the installation had started in earnest (I'd only partitioned the disks).

    I realise there are various "small" Linuxes around like Puppy, Feather, DSL, etc., but they seem to be aimed at desktop users or as a techie's toolkit, and I want a small-as-possible server distro which can be managed with tools like apt or yum or similar. TIA for any advice you can offer! -- Monty
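
    One more route in the same apt-managed spirit (a suggestion beyond the list above): debootstrap builds a bare Debian/Ubuntu root containing only the base system, after which apt-get adds exactly the packages wanted. A sketch:

        # bootstrap a minimal karmic root into a mounted VM disk
        sudo debootstrap --variant=minbase karmic /mnt/vmroot http://archive.ubuntu.com/ubuntu
        # then install only what the VM needs
        sudo chroot /mnt/vmroot apt-get install apache2 python perl sqlite3 openssh-server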

