Search Results

Search found 5228 results on 210 pages for 'bash alias'.

  • mocking collection behavior with Moq

    - by Stephen Patten
    Hello, I've read through some of the discussions on the Moq user group but haven't been able to find an example that matches my scenario. Here is my question and code:

        // 6 periods
        var schedule = new List<PaymentPlanPeriod>()
        {
            new PaymentPlanPeriod(1000m, args.MinDate.ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(1).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(2).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(3).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(4).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(5).ToString())
        };

        // Now the proxy is correct with the schedule
        helper.Setup(h => h.GetPlanPeriods(It.IsAny<String>(), schedule));

    Then in my tests I use Periods, but the mocked _PaymentPlanHelper never populates the collection. See below for the usage:

        public IEnumerable<PaymentPlanPeriod> Periods
        {
            get
            {
                if (CanCalculateExpression())
                    _PaymentPlanHelper.GetPlanPeriods(this.ToString(), _PaymentSchedule);
                return _PaymentSchedule;
            }
        }

    Now if I change the mocked object to use another overload of GetPlanPeriods that returns a List, like so:

        var schedule = new List<PaymentPlanPeriod>()
        {
            // ... the same six periods as above ...
        };

        helper.Setup(h => h.GetPlanPeriods(It.IsAny<String>())).Returns(new List<PaymentPlanPeriod>(schedule));
        List<PaymentPlanPeriod> result = _PaymentPlanHelper.GetPlanPeriods(this.ToString());

    this works as expected. Any pointers would be awesome, as long as you don't bash my architecture... :) Thank you, Stephen

  • Make seems to think a prerequisite is an intermediate file, removes it

    - by James
    For starters, this exercise in GNU make was admittedly just that: an exercise rather than a practicality, since a simple bash script would have sufficed. However, it brought up interesting behavior I don't quite understand.

    I've written a seemingly simple Makefile to handle generation of SSL key/cert pairs as necessary for MySQL. My goal was for make <name> to result in <name>-key.pem, <name>-cert.pem, and any other necessary files (specifically, the CA pair if any of it is missing or needs updating, which leads into another interesting follow-up exercise of handling reverse deps to reissue any certs that had been signed by a missing/updated CA cert). After executing all rules as expected, make seems to be too aggressive at identifying intermediate files for removal; it removes a file I thought would be "safe", since it should have been generated as a prereq of the main rule I'm invoking. (Humbly translated, I have likely misinterpreted make's documented behavior to suit my expectation, but don't understand how. ;-)

    Edited (thanks, Chris!): Adding %-cert.pem to .PRECIOUS does, of course, prevent the deletion. (I had been using the wrong syntax.)

    Makefile:

        OPENSSL = /usr/bin/openssl
        # Corrected, thanks Chris!
        .PHONY: clean

        default: ca

        clean:
                rm -I *.pem

        %: %-key.pem %-cert.pem
                @# Placeholder (to make this implicit create a rule and not cancel one)

        Makefile:
                @# Prevent the catch-all from matching Makefile

        ca-cert.pem: ca-key.pem
                $(OPENSSL) req -new -x509 -nodes -days 1000 -key ca-key.pem $@

        %-key.pem:
                $(OPENSSL) genrsa 2048 $@

        %-cert.pem: %-csr.pem ca-cert.pem ca-key.pem
                $(OPENSSL) x509 -req -in $ $@

    Output:

        $ make host1
        /usr/bin/openssl genrsa 2048 ca-key.pem
        /usr/bin/openssl req -new -x509 -nodes -days 1000 -key ca-key.pem ca-cert.pem
        /usr/bin/openssl genrsa 2048 host1-key.pem
        /usr/bin/openssl req -new -days 1000 -nodes -key host1-key.pem host1-csr.pem
        /usr/bin/openssl x509 -req -in host1-csr.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 host1-cert.pem
        rm host1-csr.pem host1-cert.pem

    This is driving me crazy, and I'll happily try any suggestions and post the results. If I'm just totally noobing out on this one, feel free to jibe away. You can't possibly hurt my feelings. :)

  • Why is the Clojure Hello World program so slow compared to Java and Python?

    - by viksit
    Hi all, I'm reading "Programming Clojure" and I was comparing some languages I use for some simple code. I noticed that the Clojure implementation was the slowest in each case. For instance:

    Python - hello.py

        def hello_world(name):
            print "Hello, %s" % name

        hello_world("world")

    and the result:

        $ time python hello.py
        Hello, world

        real    0m0.027s
        user    0m0.013s
        sys     0m0.014s

    Java - hello.java

        import java.io.*;

        public class hello {
            public static void hello_world(String name) {
                System.out.println("Hello, " + name);
            }

            public static void main(String[] args) {
                hello_world("world");
            }
        }

    and the result:

        $ time java hello
        Hello, world

        real    0m0.324s
        user    0m0.296s
        sys     0m0.065s

    and finally, Clojure - hellofun.clj

        (defn hello-world [username]
          (println (format "Hello, %s" username)))

        (hello-world "world")

    and the results:

        $ time clj hellofun.clj
        Hello, world

        real    0m1.418s
        user    0m1.649s
        sys     0m0.154s

    That's a whole, gargantuan 1.4 seconds! Does anyone have pointers on what the cause of this could be? Is Clojure really that slow, or are there JVM tricks et al. that need to be used in order to speed up execution? More importantly, isn't this huge difference in performance going to be an issue at some point? (I mean, let's say I was using Clojure for a production system - the gain I get from using a Lisp seems completely offset by the performance issues I can see here.)

    The machine used here is a 2007 MacBook Pro running Snow Leopard, with a 2.16 GHz Intel Core 2 Duo and 2 GB of DDR2 SDRAM. BTW, the clj script I'm using is from here and looks like:

        #!/bin/bash

        JAVA=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java
        CLJ_DIR=/opt/jars
        CLOJURE=$CLJ_DIR/clojure.jar
        CONTRIB=$CLJ_DIR/clojure-contrib.jar
        JLINE=$CLJ_DIR/jline-0.9.94.jar
        CP=$PWD:$CLOJURE:$JLINE:$CONTRIB

        # Add extra jars as specified by `.clojure` file
        if [ -f .clojure ]
        then
            CP=$CP:`cat .clojure`
        fi

        if [ -z "$1" ]; then
            $JAVA -server -cp $CP jline.ConsoleRunner clojure.lang.Repl
        else
            scriptname=$1
            $JAVA -server -cp $CP clojure.main $scriptname -- $*
        fi
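    Most of that 1.4 seconds is almost certainly JVM startup plus loading the Clojure runtime, not the hello-world itself. A rough, hedged way to confirm that from the shell (the jar path matches the clj script above; adjust it if yours differs):

        # 1. Bare JVM startup, doing no work at all:
        time java -version

        # 2. JVM startup plus bootstrapping Clojure and evaluating the same program:
        time java -server -cp /opt/jars/clojure.jar clojure.main -e '(println "Hello, world")'

        # If (2) is close to the clj timing and (1) accounts for a good chunk of it,
        # the cost is per-process launch overhead. It is paid once per JVM, so a
        # long-running server or a REPL that stays up does not pay it per request.

    In other words, the comparison penalizes Clojure for process startup; for a production system that keeps one JVM alive, the steady-state performance question is separate from this measurement.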

  • Why does Fabric display the disconnect from server message for almost 2 minutes?

    - by Matthew Rankin
    Fabric displays "Disconnecting from username@server... done." for almost 2 minutes before showing a new command prompt whenever I issue a fab command. This problem exists for Fabric commands issued to both an internal server and a Rackspace cloud server. Below I've included the auth.log from the server; I didn't see anything relevant in the logs on my MacBook. Any thoughts as to what the problem is?

    Server's SSH auth.log with LogLevel VERBOSE:

        Apr 21 13:30:52 qsandbox01 sshd[19503]: Accepted password for mrankin from 10.10.100.106 port 52854 ssh2
        Apr 21 13:30:52 qsandbox01 sshd[19503]: pam_unix(sshd:session): session opened for user mrankin by (uid=0)
        Apr 21 13:30:52 qsandbox01 sudo: mrankin : TTY=unknown ; PWD=/home/mrankin ; USER=root ; COMMAND=/bin/bash -l -c apache2ctl graceful
        Apr 21 13:30:53 qsandbox01 sshd[19503]: pam_unix(sshd:session): session closed for user mrankin

    Server configuration:

        OS: Ubuntu 9.10
        OpenSSH: Ubuntu package version 1.5.1p1-6ubuntu2

    Client configuration:

        OS: Mac OS X 10.6.3
        Fabric 0.9
        virtualenv 1.4.7
        pip 0.7

    Thoughts on the cause of the issue: I don't know how long the problem has existed; however, I know that at one point I didn't have it. What has changed since then is that I have recreated my virtualenvs using virtualenv 1.4.7, virtualenvwrapper 2.1, and pip 0.7. Not sure if this is related, but it's a thought, since I run my fabfiles from within a virtualenv.

  • Getting svn: E170000: Unrecognized URL scheme for my custom Svn Gradle plugin

    - by Ip Doh
    I wrote a custom Gradle plugin in Groovy to do basic svn tasks like checkout, clean, tag, etc. The Groovy class calls the svn command-line client to do these operations. It works fine when I run it on my Windows system, but the same plugin gives the following error when I run it on a Linux system (CentOS):

        svn: E170000: Unrecognized URL scheme for '%22https://source.mycompany.net/svn/MyProject/trunk%22'

    I am able to make the same calls to the command-line client from a command prompt or a shell script without any issues, so what is the difference? Here is my code sample:

        String command = String.format("svn co -r %d --non-interactive --trust-server-cert --username %s --password %s --depth infinity \"%s\" \"%s\"",
                getRevision(), getUserName(), getUserPassword(), getSrcUrl(), getDir());
        Process svnProcess = Runtime.getRuntime().exec(command);
        BufferedReader stdInput = new BufferedReader(new InputStreamReader(svnProcess.getInputStream()));
        BufferedReader stdError = new BufferedReader(new InputStreamReader(svnProcess.getErrorStream()));

        String statusOutputLine = ""
        while ((statusOutputLine = stdInput.readLine()) != null) {
            logger.quiet(" " + statusOutputLine);
        }
        while ((statusOutputLine = stdError.readLine()) != null) {
            logger.error(statusOutputLine)
            throw new Exception(statusOutputLine)
        }
        logger.quiet("Successfully checked out the work space")

    I do have Neon installed on the system:

        -bash-4.1$ svn --version
        svn, version 1.6.11 (r934486)
           compiled Jun 25 2011, 11:30:15

        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).

        The following repository access (RA) modules are available:

        * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
          - handles 'http' scheme
          - handles 'https' scheme
        * ra_svn : Module for accessing a repository using the svn network protocol.
          - with Cyrus SASL authentication
          - handles 'svn' scheme
        * ra_local : Module for accessing a repository on local disk.
          - handles 'file' scheme
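    The %22 in the error is an encoded double quote, which is a strong hint about what differs between the two platforms: Runtime.exec(String) only splits the command on whitespace and does not interpret quote characters, so on Linux the literal " characters become part of the URL argument and svn no longer recognizes the scheme. (On Windows the child process re-parses its own command line, which is most likely why the same string works there.) The usual fix is to pass the arguments as an array, e.g. via Runtime.exec(String[]) or ProcessBuilder, and drop the embedded \" escapes. The shell equivalent makes the failure easy to reproduce (the URL is the one from the question, used purely for illustration):

        # Quotes passed through literally, the way exec(String) delivers them -- the
        # scheme svn sees is '"https', which it rejects:
        svn co '"https://source.mycompany.net/svn/MyProject/trunk"' wc

        # Quotes consumed by the shell, the way an interactive prompt behaves -- works:
        svn co "https://source.mycompany.net/svn/MyProject/trunk" wc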

  • Submitting R jobs using PBS

    - by Tony
    I am submitting a job using qsub that runs parallelized R. My intention is to have the R program run on 4 cores rather than 8. Here are some of my settings in the PBS file:

        #PBS -l nodes=1:ppn=4
        ....
        time R --no-save < program1.R > program1.log

    I am issuing the command ta job_id and I see that 4 cores are listed. However, the job occupies a large amount of memory (31944900k used vs 32949628k total). If I were to use 8 cores, the job would hang due to the memory limitation.

        top - 21:03:53 up 77 days, 11:54,  0 users,  load average: 3.99, 3.75, 3.37
        Tasks: 207 total,   5 running, 202 sleeping,   0 stopped,   0 zombie
        Cpu(s): 30.4%us,  1.6%sy,  0.0%ni, 66.8%id,  0.0%wa,  0.0%hi,  1.2%si,  0.0%st
        Mem:  32949628k total, 31944900k used,  1004728k free,   269812k buffers
        Swap:  2097136k total,     8360k used,  2088776k free,  6030856k cached

    Here is a snapshot taken when issuing the command ta job_id:

        PID  USER PR NI VIRT  RES  SHR  S %CPU %MEM  TIME+   COMMAND
        1794 x    25  0 6247m 6.0g 1780 R 99.2 19.1  8:14.37 R
        1795 x    25  0 6332m 6.1g 1780 R 99.2 19.4  8:14.37 R
        1796 x    25  0 6242m 6.0g 1784 R 99.2 19.1  8:14.37 R
        1797 x    25  0 6322m 6.1g 1780 R 99.2 19.4  8:14.33 R
        1714 x    18  0 65932 1504 1248 S  0.0  0.0  0:00.00 bash
        1761 x    18  0 63840 1244 1052 S  0.0  0.0  0:00.00 20016.hpc
        1783 x    18  0  133m 7096 1128 S  0.0  0.0  0:00.00 python
        1786 x    18  0  137m  46m 2688 S  0.0  0.1  0:02.06 R

    How can I prevent other users from using the other 4 cores? I would like to somehow mark my job as using all 8 cores while leaving 4 of them idle. Could anyone kindly help me out with this? Can it be solved using PBS? Many thanks.
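    One hedged way to get that effect is to ask PBS for the whole node while only running R workers on half of it; the scheduler then has no free slots to give to other jobs on that node. A sketch of the PBS file (the walltime is a placeholder, and this assumes the node really has 8 cores and your site permits whole-node requests):

        #!/bin/bash
        #PBS -l nodes=1:ppn=8        # reserve all 8 slots so nothing else lands here
        #PBS -l walltime=24:00:00    # placeholder; set to whatever your run needs

        cd "$PBS_O_WORKDIR"

        # R should still start only 4 workers (however your parallel package is
        # configured); the 4 extra reserved slots simply stay idle, which also keeps
        # the node's full memory available to this job.
        time R --no-save < program1.R > program1.log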

  • Uncompress OpenOffice files for better storage in version control

    - by Craig McQueen
    I've heard discussion about how OpenOffice (ODF) files are compressed zip files of XML and other data, so making a tiny change to the file can potentially change the data completely, and delta compression doesn't work well in version control systems. I've done basic testing on an OpenOffice file, unzipping it and then rezipping it with zero compression. I used the Linux zip utility for my testing. OpenOffice will still happily open it.

    So I'm wondering if it's worth developing a small utility to run on ODF files each time, just before I commit to version control. Any thoughts on this idea? Possible better alternatives?

    Secondly, what would be a good and robust way to implement this little utility? A bash shell script that calls zip (probably Linux only)? Python? Any gotchas you can think of? Obviously I don't want to accidentally mangle a file, and there are several ways that could happen. Possible gotchas I can think of:

    - Insufficient disk space
    - Some other permissions issue that prevents writing the file or temporary files
    - ODF document is encrypted (probably should just leave these alone; the encryption probably also causes large file changes and thus prevents efficient delta compression)
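    For the bash-script route, a minimal sketch of the unzip/rezip step might look like the following. It assumes the Info-ZIP zip/unzip tools and only handles the happy path; the disk-space, permission, and encryption gotchas above still need real handling:

        #!/bin/bash
        # Repack an ODF file with stored (uncompressed) entries so VCS delta
        # compression sees stable byte runs. Usage: ./odf-store.sh document.odt
        set -e

        doc=$1
        tmp=$(mktemp -d)
        trap 'rm -rf "$tmp"' EXIT

        unzip -q "$doc" -d "$tmp"

        (
            cd "$tmp"
            # ODF expects the 'mimetype' entry first and uncompressed, so add it
            # before everything else; -0 stores, -X omits extra file attributes.
            zip -q -0 -X repacked.zip mimetype
            zip -q -0 -X -r repacked.zip . -x mimetype -x repacked.zip
        )

        # Replace the original only after the repack succeeded.
        mv "$tmp/repacked.zip" "$doc"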

  • mount not working properly on Cygwin

    - by Code Dance
    I have a WinXP box with Cygwin installed on it. There are many network drives mapped in Windows. When I execute the mount command in Windows (which uses the same mount executable as Cygwin), I get the list of network-mapped drives; but when I do the same through Cygwin, I see that only C: is mapped.

    At the Windows command prompt:

        C:\CodeDance> mount
        C:\cygwin\bin on /usr/bin type system (textmode)
        C:\cygwin\lib on /usr/lib type system (textmode)
        C:\cygwin on / type system (textmode)
        c:\Own on /own type system (binmode)
        v: on /cygdrive/v type system (binmode)
        c: on /cygdrive/c type system (textmode,noumount)
        k: on /cygdrive/k type system (textmode,noumount)
        l: on /cygdrive/l type system (textmode,noumount)
        m: on /cygdrive/m type system (textmode,noumount)
        o: on /cygdrive/o type system (textmode,noumount)
        x: on /cygdrive/x type system (textmode,noumount)
        y: on /cygdrive/y type system (textmode,noumount)
        z: on /cygdrive/z type system (textmode,noumount)

    In Cygwin, from bash:

        code@DANCE /cygdrive
        $ mount
        C:\cygwin\bin on /usr/bin type system (textmode)
        C:\cygwin\lib on /usr/lib type system (textmode)
        C:\cygwin on / type system (textmode)
        c:\Own on /own type system (binmode)
        v: on /cygdrive/v type system (binmode)
        c: on /cygdrive/c type system (textmode,noumount)

    The /cygdrive/v that is shown as mounted above is not accessible either.
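    Drive letters mapped in one Windows logon session are not always visible to a shell started under different credentials or elevation, so one thing worth trying from Cygwin's bash is to address the shares directly by UNC path rather than by letter. The server and share names below are made up for illustration, and the exact mount syntax may differ between Cygwin releases:

        # Access the share without any drive letter at all:
        ls //fileserver/projects

        # Or give it a POSIX mount point of its own:
        mkdir -p /projects
        mount //fileserver/projects /projects
        ls /projects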

  • mod_cgi, mod_fastcgi, mod_scgi, mod_wsgi, mod_python, FLUP. I don't know how many more. what is mo

    - by claws
    I recently learnt Python. I liked it. I just wanted to use it for web development, and that thought caused all the trouble. But I like this kind of trouble. :) Coming from the PHP world, where there is only one standardized way, I expected the same and searched for "python & apache".

    http://stackoverflow.com/questions/449055/setting-up-python-on-windows-apache says: "Stay away from mod_python. One common misleading idea is that mod_python is like mod_php, but for python. That is not true." So what is the equivalent of mod_php in Python?

    I need a little clarification on this one: http://stackoverflow.com/questions/219110/how-python-web-frameworks-wsgi-and-cgi-fit-together says that CGI, FastCGI and SCGI are language agnostic - you can write CGI scripts in Perl, Python, C, bash, or even Assembly :). So I guess mod_cgi, mod_fastcgi and mod_scgi are their corresponding Apache modules. Right?

    WSGI is some kind of optimized/improved - in short, efficient - version designed specifically for the Python language. In order to use it, mod_wsgi is the way to go. Right? That leaves out mod_python. What is it, then?

    "Apache - mod_fastcgi - FLUP (via CGI protocol) - Django (via WSGI protocol)"; "Flup is another way to run with WSGI for any webserver that can speak FCGI, SCGI or AJP." What is FLUP? What is AJP? How did Django come into the picture?

    These questions raise questions about PHP too. How is it actually running? What technology is it using? mod_php & mod_python - what are the differences? In the future, if I want to use Perl or Java, will I have to get confused all over again? Can someone kindly explain things clearly and give a complete picture?

  • unix at command pass variable to shell script?

    - by Andrew
    Hi, I'm trying to set up a simple timer that gets started from a Rails application. This timer should wait out its duration and then start a shell script that will start up ./script/runner and complete the initial request. I need script/runner because I need access to ActiveRecord. Here are my test lines in Rails:

        output = `at #{(Time.now + 60).strftime("%H:%M")} < #{Rails.root}/lib/parking_timer.sh STRING_VARIABLE`
        return render :text => output

    Then my parking_timer.sh looks like this:

        #!/bin/sh
        ~/PATH_TO_APP/script/runner -e development ~/PATH_TO_APP/lib/ParkingTimer.rb $1
        echo "All Done"

    Finally, ParkingTimer.rb reads the passed variable with:

        ARGV.each do |a|
          puts "Argument: #{a}"
        end

    The problem is that the Unix at command doesn't seem to like variables and only wants to deal with filenames. I get one of two errors depending on how I position the quotes. If I put quotes around the right-hand side, like so:

        "~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE"

    I get:

        -bash: ~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE: No such file or directory

    If I leave the quotes out, I get:

        at: garbled time

    This is all happening on a Mac OS X 10.6 box running Rails 2.3 & Ruby 1.8.6. I've already messed around with BackgrounDRb and decided it's a total PITA. I need to be able to cancel the job at any time before it is due.
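    The underlying issue is that at does not take a command and its arguments on its own command line; it reads the job to run from standard input (or from a file given with -f). So the variable can simply be baked into the text that is piped in. A hedged sketch, reusing the placeholder paths and argument from above (the times are arbitrary examples):

        # One-liner form:
        echo "~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE" | at 14:30

        # Here-document form with a relative time:
        at now + 1 minute <<'EOF'
        ~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE
        EOF

        # Pending jobs can be listed and cancelled before they fire, which covers
        # the "cancel at any time" requirement:
        atq        # list queued jobs and their numbers
        atrm 42    # remove job 42 (whatever number atq reported)

    From Rails, that means building the echo ... | at ... pipeline inside the backticks instead of the < redirection used above.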

  • Set up Gitosis, but can't clone

    - by Tim Rupe
    I've set up Gitosis on a remote Ubuntu box, which I will refer to as linuxserver in the following commands. I'm connecting from a Windows box using Cygwin. I followed the instructions at: http://scie.nti.st/2007/11/14/hosting-git-repositories-the-easy-and-secure-way

    I had no problems up until I needed to clone the gitosis-admin repository to my local machine:

        git clone git@linuxserver:gitosis-admin.git

    When I do this, the command executes but just hangs, displaying nothing until I ctrl-c to get back to a command prompt. No messages are displayed at all. I'm pretty sure I have my ssh keys set up properly, because logging into my regular account with "ssh linuxserver" works perfectly without asking for a password.

    Edit: Over the weekend I set up a nearly identical Ubuntu box at home and had no problem setting up Gitosis. The only difference was that I was connecting from OS X instead of Cygwin.

    Edit: I've also discovered that when using the bash shell provided with "Git Extensions" I have no problems, so the issue definitely seems to be some kind of Cygwin conflict.

    Edit: Just an update - about a month after posting this question, I switched to Mercurial and found that I prefer it much more than git. Thanks for the suggestions, but I don't plan on going back to git to try any of them out.
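    A few hedged things to check from the Cygwin side when a clone hangs silently like this (the hostname is the poster's; nothing below changes any state):

        # Talk to the gitosis account directly; a working setup answers and closes
        # the connection immediately instead of hanging, and -v shows where in the
        # SSH handshake things stall:
        ssh -v git@linuxserver

        # Make git narrate what it is doing during the clone:
        GIT_TRACE=1 git clone git@linuxserver:gitosis-admin.git

        # Confirm Cygwin's git is really using Cygwin's ssh and the key you expect:
        which git ssh
        ssh-add -l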

  • Standardizing a Release/Tools group on a specific language

    - by grahzny
    I'm part of a six-member build and release team for an embedded software company. We also support a lot of developer tools, such as Atlassian's FishEye, Jira, etc., Perforce, Bugzilla, AnthillPro, and a couple of homebrew tools (like my Django release-notes generator).

    Most of the time, our team just writes little plugins for larger apps (e.g. customizing workflows in Anthill), long-term utility scripts (packaging up a release for QA), or things like Perforce triggers (don't let people check into a specific branch unless their change description includes a bug number; authenticate against Active Directory instead of Perforce's internal passwords). That's about the scale of our problems, although we sometimes tackle something slightly more sizable.

    My boss, who is reasonably technical, has asked us to standardize on one or two languages so we can more easily substitute for each other. He's advocating bash scripts and Perl, due to their universality and simplicity. I can see his point - we mostly write "glue", so why not use glue languages rather than saddle ourselves with something designed for much larger projects? Since some of the tools we work with are Java-based, we do need to use something that speaks JVM sometimes. (The path of least resistance for those projects is BeanShell and Groovy.)

    I feel a tremendous itch toward language advocacy, but I'm trying to avoid saying "We should use Python 'cause I like it and Perl is gross." Instead, I'm trying to come up with a good approach to defining our problem set:

    - What problems do we solve with scripts?
    - Would we benefit from a library of common functions written by our team, or are most of our projects more isolated?
    - What is it reasonable to expect my co-workers to learn?
    - Which languages give us the most ease of development and ease of modification?

    Can you folks suggest some useful ways to approach this problem, both for my own thinking process and to help me facilitate some brainstorming among my coworkers?

  • Script throwing unexpected operator when using mysqldump

    - by Astron
    A portion of a script I use to back up MySQL databases stopped working correctly after upgrading a Debian box to 6.0 Squeeze. I have tested the backup code via the CLI and it works fine. I believe the problem is in the selection of the databases before the backup occurs, possibly something to do with the $skipdb variable. If there is a better way to perform this function, I'm willing to try something new. Any insight would be greatly appreciated.

        $ sudo ./script.sh
        [: 138: information_schema: unexpected operator
        [: 138: -1: unexpected operator
        [: 138: mysql: unexpected operator
        [: 138: -1: unexpected operator

    Using bash -x script, here is one of the iterations:

        + for db in '$DBS'
        + skipdb=-1
        + '[' test '!=' '' ']'
        + for i in '$IGGY'
        + '[' mysql == test ']'
        + :
        + '[' -1 == -1 ']'
        ++ /bin/date +%F
        + FILE=/backups/hostname.2011-03-20.mysql.mysql.tar.gz
        + '[' no = yes ']'
        + /usr/bin/mysqldump --single-transaction -u root -h localhost '-ppassword' mysql
        + /bin/tar -czvf /backups/hostname.2011-03-20.mysql.mysql.tar.gz mysql.sql
        mysql.sql
        + rm -f mysql.sql

    Here is the code:

        if [ $MYSQL_UP = "yes" ]; then
            echo "MySQL DUMP" >> /tmp/update.log
            echo "--------------------------------" >> /tmp/update.log
            DBS="$($MYSQL -u $MyUSER -h $MyHOST -p"$MyPASS" -Bse 'show databases')"
            for db in $DBS
            do
                skipdb=-1
                if [ "$IGGY" != "" ] ; then
                    for i in $IGGY
                    do
                        [ "$db" == "$i" ] && skipdb=1 || :
                    done
                fi
                if [ "$skipdb" == "-1" ] ; then
                    FILE="$DEST$HOST.`$DATE +"%F"`.$db.mysql.tar.gz"
                    if [ $ENCRYPT = "yes" ]; then
                        $MYSQLDUMP -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf - $db.sql | $OPENSSL enc -aes-256-cbc -salt -out $FILE.enc -k $ENC_PASS && rm -f $db.sql
                    else
                        $MYSQLDUMP --single-transaction -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf $FILE $db.sql && rm -f $db.sql
                    fi
                fi
            done
        fi
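    The "[: 138: ...: unexpected operator" messages are characteristic of dash, which became the default /bin/sh on Debian 6: the == operator inside [ ] is a bashism that dash's test builtin rejects, and the trace above shows exactly those == comparisons. A hedged sketch of the two usual fixes (variable names follow the script above):

        # Fix 1: keep the bashisms but make sure bash runs the script -- first line:
        #!/bin/bash
        # and invoke it as ./script.sh rather than "sh script.sh".

        # Fix 2: make the comparisons POSIX-clean so dash is happy too:
        [ "$db" = "$i" ] && skipdb=1      # single '=' instead of '=='
        if [ "$skipdb" = "-1" ] ; then
            :
        fi

        # A case statement is another dash-safe way to skip databases:
        case "$db" in
            information_schema|mysql) skipdb=1 ;;
        esac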

  • How can I write such a typical unit test?

    - by Malcom.Z
    This is the basic structure of my project:

        MyAPP/
            note/
                __init__.py
                views.py
                urls.py
                test.py
                models.py
            auth/
                ...
            template/
                auth/
                    login.html
                    register.html
                note/
                    noteshow.html
            media/
                css/
                    ...
                js/
                    ...
            settings.py
            urls.py
            __init__.py
            manage.py

    I want to write a unit test that checks whether the noteshow page is working properly or not. The code:

        from django.test import TestCase

        class Note(TestCase):
            def test_noteshow(self):
                response = self.client.get('/note/')
                self.assertEqual(response.status_code, 200)
                self.assertTemplateUsed(response, '/note/noteshow.html')

    The problem is that my project includes an auth mod which forces users who are not logged in to be redirected to the login.html page when they visit noteshow.html. So when I run my unit test, it raises a failure in the shell because response.status_code is always 302 instead of 200. Granted, this result does show that the auth mod is working, but it is not what I want the test to check. So the question is: how can I write another unit test that checks whether my noteshow template is used or not? Thanks to all.

    django version: 1.1.1, python version: 2.6.4, using Eclipse on Mac OS

  • Get current PHP executable from within script?

    - by benizi
    I want to run a PHP CLI program from within PHP CLI. On some machines where this will run, both php4 and php5 are installed. If I run the outer program as php5 outer.php, I want the inner script to be run with the same PHP version. In Perl, I would use $^X to get the perl executable. It appears there's no such variable in PHP?

    Right now I'm using $_SERVER['_'], because bash (and zsh) set the environment variable $_ to the last-run program. But I'd rather not rely on a shell-specific idiom.

    UPDATE: Version differences are only one problem. If PHP isn't in PATH, for example, or isn't the first version found in PATH, suggestions to look at the version information won't help. Additionally, csh and its variants appear not to set the $_ environment variable for their processes, so that workaround isn't applicable there.

    UPDATE 2: I was using $_SERVER['_'] until I discovered that it doesn't do the right thing under xargs (which makes sense... zsh sets it to the command it ran, which is xargs, not php5, and xargs doesn't change the variable). Falling back to using:

        $version = explode('.', phpversion());
        $phpcli = "php{$version[0]}";
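    One more avenue worth noting: on Linux, a process can ask the kernel which binary it is running via /proc/self/exe, independent of PATH or of which shell (if any) launched it; from PHP that would be readlink('/proc/self/exe'). It is Linux-specific, so it would still need the phpversion() fallback elsewhere. A quick shell illustration of the mechanism:

        # The running shell's own executable, resolved by the kernel:
        ls -l /proc/$$/exe

        # The same idea for whichever process reads the link ("self"):
        readlink /proc/self/exe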

  • Unidentifiable Vim Keymap

    - by asdf.qwer
    Hi, I'm trying to get rid of a pesky key mapping in vim, namely \c. The mapping is only loaded for LaTeX files, so it should be related to latex-suite. It's annoying because I can't type \cite without this mapping ruining everything. I can unmap it "manually" by typing:

        :unmap! \c

    But this doesn't work when I put it into my ~/.vimrc file, because vim says there's no such keymap. I think this is because the mapping is loaded after .vimrc, although I'm not sure.

    I've used locate in bash to find all files on my system that have "vim" in their filename, and then grep keyword $filename to find all relevant references. The keyword I search for is "Traditional", because that's what the mapping is called (that's what I find by typing :map! in vim normal mode). It finds some entries that contain "Traditional", but nothing that corresponds to \c, except in the file:

        ~/.gnome2/gvim-sA9LOO-session.vim

    But this file is not used by vim when starting up, as far as I know. Does anyone know a fix?

  • send arrow keys using ganymed ssh java

    - by José Ramón Pérez Rubio
    I am using Ganymed SSH-2 for Java to connect to a remote machine, and apart from sending commands I need to send the arrow keys (left and right). I can send commands, but when I send the arrow keys nothing happens. This is what I have:

        public boolean createShell() throws Exception {
            try {
                // ...
                m_session = connection.openSession();
                m_commandWriter = new OutputStreamWriter(m_session.getStdin());
                String encoding = m_commandWriter.getEncoding(); // encoding is UTF8
                m_errorPipe = new SSHSyncPipe(m_session.getStderr());
                m_outputPipe = new SSHSyncPipe(m_session.getStdout());
                m_outputPipe.start();
                m_errorPipe.start();
                // m_session.requestPTY("bash");
                m_session.requestDumbPTY();
                m_session.startShell();
                m_shellCreated = true;
                return true;
            }
        }

    So if I use

        m_commandWriter.write("ls\r\n");
        m_commandWriter.flush();

    it works, but

        m_commandWriter.write(37); // 37 is the key code for the left arrow
        m_commandWriter.flush();

    doesn't work. Does anyone know what I am doing wrong? Thank you.

  • NHibernate IQueryable Collection as Property of Root

    - by Khalid Abuhakmeh
    Hello, and thank you for taking the time to read this. I have a root object with a property that is a collection. For example, I have a Shelf object that has Books:

        // now
        public class Shelf
        {
            public ICollection<Book> Books { get; set; }
        }

        // want
        public class Shelf
        {
            public IQueryable<Book> Books { get; set; }
        }

    What I want to accomplish is to return a collection that is IQueryable, so that I can run paging and filtering off the collection directly from the parent:

        var shelf = shelfRepository.Get(1);
        var filtered = from book in shelf.Books
                       where book.Name == "The Great Gatsby"
                       select book;

    I want that query to be executed by NHibernate specifically, and not as a get-all query that loads the whole collection and then filters it in memory (which is what currently happens when I use ICollection). The reasoning behind this is that my collection could be huge, tens of thousands of records, and a get-all query could bash my database. I would like this to happen implicitly, so that when NHibernate sees an IQueryable on my class it knows what to do. I have looked at NHibernate's LINQ provider, and currently I have decided to take large collections and split them into their own repositories so that I can make explicit calls for filtering and paging. LINQ to SQL offers something similar to what I'm talking about.

  • Perl: remove relative path components?

    - by jnylen
    I need to get Perl to remove relative path components from a Linux path. I've found a couple of functions that almost do what I want, but:

    - File::Spec->rel2abs does too little. It does not resolve ".." into a directory properly.
    - Cwd::realpath does too much. It resolves all symbolic links in the path, which I do not want.

    Perhaps the best way to illustrate how I want this function to behave is to post a bash log where FixPath is a hypothetical command that gives the desired output:

        '/tmp/test'$ mkdir -p a/b/c1 a/b/c2
        '/tmp/test'$ cd a
        '/tmp/test/a'$ ln -s b link
        '/tmp/test/a'$ ls
        b link
        '/tmp/test/a'$ cd b
        '/tmp/test/a/b'$ ls
        c1 c2
        '/tmp/test/a/b'$ FixPath .           # rel2abs works here
        ===> /tmp/test/a/b
        '/tmp/test/a/b'$ FixPath ..          # realpath works here
        ===> /tmp/test/a
        '/tmp/test/a/b'$ FixPath c1          # rel2abs works here
        ===> /tmp/test/a/b/c1
        '/tmp/test/a/b'$ FixPath ../b        # realpath works here
        ===> /tmp/test/a/b
        '/tmp/test/a/b'$ FixPath ../link/c1  # neither one works here
        ===> /tmp/test/a/link/c1
        '/tmp/test/a/b'$ FixPath missing     # should work for nonexistent files
        ===> /tmp/test/a/b/missing

  • Best terminal environment for Cygwin/Windows?

    - by Anders Sandvig
    Today I run Cygwin with rxvt using the following startup line:

        rxvt -bg black -sl 8192 -fg white -sr -g 150x56 -fn "Fixedsys" -e /usr/bin/bash --login -i

    This gives me a resizeable native Windows window, which is much better than the standard "DOS box" the default cygwin.bat provides. However, the current configuration does have a couple of issues:

    - I am not able to enter non-ASCII characters in the terminal window (i.e. æ, ø, å and Æ, Ø, Å, which I use semi-frequently). In fact, the terminal will not even accept them when I paste them into the window. If I paste a string like "bølle" (Norwegian for "bully"), all I get is "blle".
    - I am not able to render UTF-8 characters; they only show as ?, even when they are supported by the font (i.e. when rendering the same characters in ISO-8859-1 they show just fine).

    I am running English Windows Vista with the locale and keyboard layout set to Norwegian (ISO-8859-1 character set?), but I've had the exact same issue on Windows 2000 and XP. Does anyone know how to fix this (i.e. a better way to configure rxvt)? Apart from the issues mentioned above I'm very happy with rxvt, so if I find a way to resolve them I'd like to continue using it. However, if the issues are not (easily) solvable, are there any other good terminal solutions for Cygwin?

    Update: The solution provided by Andy and Mattias (editing the .inputrc file) solved the input problem, but output rendering is still an issue. Output is fine when I render in ISO-8859-1, but when using UTF-8 I only get ? for non-ASCII characters. This behavior is consistent across rxvt, urxvt (under the Cygwin XFree X server), mintty and PuttyCyg. Is there a similar configuration file where the output encoding can be set (i.e. the equivalent of setting the output locale on a Linux system)?
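    For reference, the readline-side fix mentioned in the update is typically a handful of ~/.inputrc settings, and the output side usually comes down to a locale exported from the shell startup files. A hedged sketch of both halves (older Cygwin releases had very limited locale/charset support, so how much of this helps depends on the Cygwin version; the locale values are examples):

        # ~/.inputrc: let readline accept and echo 8-bit / multi-byte input
        cat >> ~/.inputrc <<'EOF'
        set meta-flag on
        set input-meta on
        set output-meta on
        set convert-meta off
        EOF

        # ~/.bashrc: tell bash and child programs which character set to emit
        cat >> ~/.bashrc <<'EOF'
        export LANG=en_US.UTF-8    # or a Latin-1 locale to stay with ISO-8859-1
        export LC_ALL=$LANG
        EOF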

  • std::locale breakage on MacOS 10.6 with LANG=en_US.UTF-8

    - by fixermark
    I have a C++ application that I am porting to Mac OS X (specifically, 10.6). The app makes heavy use of the C++ standard library and Boost. I recently observed some breakage in the app that I'm having difficulty understanding: basically, the Boost filesystem library throws a runtime exception when the program runs. With a bit of debugging and googling, I've reduced the offending call to the following minimal program:

        #include <locale>

        int main ( int argc, char *argv [] )
        {
            std::locale::global(std::locale(""));
            return 0;
        }

    This program fails when I compile it with g++ and run the resulting binary in an environment where LANG=en_US.UTF-8 is set (which on my computer is part of the default bash session when I create a new console window). Clearing the environment variable (setenv LANG=) allows the program to run without issues. But I'm surprised I'm seeing this breakage in the default configuration.

    My questions are:

    - Is this expected behavior for this code on Mac OS 10.6?
    - What would a proper workaround be? I can't really rewrite the function, because the version of the Boost libraries we are using executes this statement internally as part of the filesystem library.

    For completeness, I should point out that the program from which this code was synthesized crashes when launched via the 'open' command (or from the Finder), but not when Xcode runs it in Debug mode.

    Edit: The error given by the above code on 10.6.1 is:

        $ ./locale
        terminate called after throwing an instance of 'std::runtime_error'
          what():  locale::facet::_S_create_c_locale name not valid
        Abort trap
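    This is a known limitation rather than something specific to the app: the GNU libstdc++ that Apple's g++ uses only supports the "C"/"POSIX" locales, so std::locale("") throws whenever the environment names any other locale, such as en_US.UTF-8. Hedged shell-level workarounds (they only mask the problem for a given invocation; a code-level fix would mean catching the exception or avoiding the named-locale call):

        # Force the plain C locale for a single run; LC_ALL overrides LANG:
        LC_ALL=C ./locale

        # Or drop the setting from the current shell session before launching:
        unset LANG
        ./locale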

  • rake task via cron problem loading rubygems

    - by Matenia Rossides
    I have managed to get a cron job to run a rake task by doing the following:

        cd /home/myusername/approotlocation/ && /usr/bin/rake sendnewsletter RAILS_ENV=development

    I have checked with which ruby and which rake to make sure the paths are correct (from bash). The job looks like it wants to run, as I get the following email from the cron daemon when it completes:

        Missing these required gems:
          chronic
          whenever
          searchlogic
          adzap-ar_mailer
          twitter
          gdata
          bitly
          ruby-recaptcha

        You're running:
          ruby 1.8.7.22 at /usr/bin/ruby
          rubygems 1.3.5 at /home/myusername/gems, /usr/lib/ruby/gems/1.8

        Run `rake gems:install` to install the missing gems.
        (in /home/myusername/approotlocation)

    My custom rake file within lib/tasks is as follows:

        task :sendnewsletter => :environment do
          require 'rubygems'
          require 'chronic'
          require 'whenever'
          require 'searchlogic'
          require 'adzap-ar_mailer'
          require 'twitter'
          require 'gdata'
          require 'bitly'
          require 'ruby-recaptcha'
          @recipients = Subscription.all(:conditions => {:active => true})
          for user in @recipients
            Email.send_later(:deliver_send_newsletter, user)
          end
        end

    With or without the require lines, it still gives me the same error. Can anyone shed some light on this? Or alternatively, advise me on how to make a custom file within the script directory that will run this function? (I already have a cron job working that runs and processes all my delayed_jobs.) Cheers!
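    Since the same command works from an interactive bash login, the usual suspect is cron's minimal environment: it knows nothing about the gem path under /home/myusername/gems that the error output mentions. A hedged sketch of a crontab entry that borrows the login environment (the schedule and log path are placeholders):

        # m h dom mon dow command
        0 6 * * * /bin/bash -lc 'cd /home/myusername/approotlocation && rake sendnewsletter RAILS_ENV=development' >> /home/myusername/sendnewsletter.log 2>&1

        # Alternatively, export the gem locations explicitly at the top of the
        # crontab instead of relying on a login shell:
        # GEM_HOME=/home/myusername/gems
        # GEM_PATH=/home/myusername/gems:/usr/lib/ruby/gems/1.8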

  • How do I launch background jobs w/ paramiko?

    - by sophacles
    Here is my scenario: I am trying to automate some tasks using Paramiko. The tasks need to be started in this order (using the notation (host, task)): (A, 1), (B, 2), (C, 2), (A, 3), (B, 3) -- essentially starting servers and clients for some testing in the correct order. Further, because the networking may get mucked up during the tests, and because I need some of the output from them, I would like to just redirect output to a file.

    In similar scenarios the common response is to use 'screen -m -d' or 'nohup'. However, with paramiko's exec_command, nohup doesn't actually exit. Using:

        bash -c -l nohup test_cmd &

    doesn't work either; exec_command still blocks until the process ends. In the screen case, output redirection doesn't work very well (actually, doesn't work at all, as best I can figure out).

    So, after all that explanation, my question is: is there an easy, elegant way to detach processes and capture output in such a way as to end paramiko's exec_command blocking?

    Update: The dtach command works nicely for this!
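    For anyone hitting the same wall: exec_command generally keeps "blocking" for as long as the remote side holds the session's stdout/stderr open, so the remote command has to be detached from all three standard streams, not just backgrounded. A hedged sketch of both the plain-shell form and the dtach form mentioned in the update (command and file names are placeholders):

        # Fully detached nohup: stdin, stdout and stderr all point away from the session
        nohup test_cmd > /tmp/test_cmd.out 2>&1 < /dev/null &

        # dtach equivalent: start test_cmd in a new detached session backed by a socket
        dtach -n /tmp/test_cmd.socket test_cmd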

  • Rails 3.0 console won't run

    - by Waheedi
        bash-3.2# rails console
        /opt/local/lib/ruby1.9/1.9.1/irb/completion.rb:9:in `require': dlopen(/opt/local/lib/ruby1.9/1.9.1/i386-darwin10/readline.bundle, 9): Library not loaded: /opt/local/lib/libncurses.5.dylib (LoadError)
          Referenced from: /opt/local/lib/ruby1.9/1.9.1/i386-darwin10/readline.bundle
          Reason: no suitable image found.  Did find:
            /opt/local/lib/libncurses.5.dylib: no matching architecture in universal wrapper
            /usr/lib/libncurses.5.dylib: no matching architecture in universal wrapper
            - /opt/local/lib/ruby1.9/1.9.1/i386-darwin10/readline.bundle
            from /opt/local/lib/ruby1.9/1.9.1/irb/completion.rb:9:in `<top (required)>'
            from /opt/local/lib/ruby1.9/gems/1.9.1/gems/railties-3.0.0.beta3/lib/rails/commands/console.rb:3:in `require'
            from /opt/local/lib/ruby1.9/gems/1.9.1/gems/railties-3.0.0.beta3/lib/rails/commands/console.rb:3:in `<top (required)>'
            from /opt/local/lib/ruby1.9/gems/1.9.1/gems/railties-3.0.0.beta3/lib/rails/commands.rb:32:in `require'
            from /opt/local/lib/ruby1.9/gems/1.9.1/gems/railties-3.0.0.beta3/lib/rails/commands.rb:32:in `<top (required)>'
            from script/rails:9:in `require'
            from script/rails:9:in `<main>'

    Meanwhile "rails server" works pretty well. Any help would be appreciated.
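    The dlopen message is saying that the libncurses dylibs it can find do not contain the CPU architecture that the MacPorts-built readline.bundle wants. A hedged way to investigate and, if MacPorts is in play, to fix it by rebuilding the library as a universal binary (port names and variants may need adjusting for your installation):

        # See which architectures each side actually contains:
        file /opt/local/lib/libncurses.5.dylib
        file /opt/local/lib/ruby1.9/1.9.1/i386-darwin10/readline.bundle

        # Rebuild ncurses with both 32- and 64-bit slices via the +universal variant:
        sudo port upgrade --enforce-variants ncurses +universal

        # If the mismatch is on the Ruby side instead, the ruby port may need the
        # same treatment so readline.bundle matches the libraries it links against.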

  • GIT clone repo across local file system

    - by Jon
    Hi all, I am a complete noob when it comes to Git; I have just been taking my first steps over the last few days. I set up a repo on my laptop and pulled down the trunk from an SVN project (I had some issues with branches and haven't got them working), but all seems OK there.

    I now want to be able to pull or push between the laptop and my main desktop. The reason is that the laptop is handy on the train, as I spend 2 hours a day travelling and can get some good work done, but my main machine at home is great for development. So I want to be able to push/pull from the laptop to the main computer when I get home.

    I thought the simplest way of doing this would be to just have the code folder shared out across the LAN and do:

        git clone file://192.168.10.51/code

    Unfortunately this doesn't seem to be working for me. I open a git bash prompt in C:\code (the shared folder on both machines) and type the above command. This is what I get back:

        Initialized empty Git repository in C:/code/code/.git/
        fatal: 'C:/Program Files (x86)/Git/code' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    How can I share the repository between the two machines in the simplest way possible? There will be other locations that act as official storage points and places where the other devs, the CI server, etc. will pull from; this is just so that I can work on the same repo across two machines. Thanks
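    The error suggests the file:// URL is not being read the way it looks: git expects file URLs of the form file:///path (with an empty or "localhost" host part), so file://192.168.10.51/code ends up treated as a local path, which msys then resolves relative to its own install directory - hence the odd "C:/Program Files (x86)/Git/code" in the message. Two hedged alternatives that usually work from Git Bash on Windows (the IP and share name are the ones from the question):

        # 1. Use the UNC path directly, with no file:// scheme at all:
        git clone //192.168.10.51/code mycode

        # 2. Or map the share to a drive letter first and clone by path:
        net use z: '\\192.168.10.51\code'
        git clone /z/code mycode

    For two-way work between the machines, cloning over ssh (or running git daemon on the desktop) is the more conventional setup, but a plain shared-folder clone like the above is enough to move work between your own clones; pulling from each side avoids the usual caveats about pushing into a non-bare checkout.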
