Search Results

Search found 2272 results on 91 pages for 'ru sh'.


  • playframework auto-test Jenkins CI wait for completion?

    - by notbrain
    I am trying to set up Jenkins CI for a playframework.org application but am having trouble properly launching play after the auto-test command is run. The tests all run fine, but it seems as though my script is launching both play auto-test and play start --%ci at the same time. When the play start --%ci command runs, it gets a pid and everything, but it's not running.

    FILE: auto-test.sh (Jenkins runs this with "execute shell"):

        #!/bin/bash
        # pwd is jenkins workspace dir
        # change into approot dir
        cd customer-portal;

        # kill any previous play launches
        if [ -e "server.pid" ]
        then
            kill `cat server.pid`;
            rm -rf server.pid;
        fi

        # drop and re-create the DB
        mysql --user=USER --password=PASS --host=HOSTNAME < ../setupdb.sql

        # auto-test the most recent build
        /usr/local/lib/play/play auto-test;

        # this is inadequate for waiting for auto-test to complete?
        # how to wait for actual process completion?
        # sleep 60;
        wait;

        # Conditional start based on tests
        # Launch normal on pass, test on fail
        #
        if [ -e "./test-result/result.passed" ]
        then
            /usr/local/lib/play/play start --%ci;
            exit 0;
        else
            /usr/local/lib/play/play test;
            exit 1;
        fi
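
    One possible way to make the script actually wait (a hedged sketch, not from the original post): poll for the marker file that auto-test writes instead of relying on wait or a fixed sleep. The result.passed name comes from the question; result.failed and the timeout values are my assumptions.

        # wait until auto-test has written its result marker, up to a limit
        timeout=600
        elapsed=0
        while [ ! -e "./test-result/result.passed" ] && [ ! -e "./test-result/result.failed" ]
        do
            sleep 5
            elapsed=$((elapsed + 5))
            if [ "$elapsed" -ge "$timeout" ]
            then
                echo "auto-test did not finish within ${timeout}s" >&2
                exit 1
            fi
        done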


  • Can't connect to local MySQL server through socket

    - by Martin
    I was trying to tune the performance of a running mysql-server by running this command:

        mysqld_safe --key_buffer_size=64M --table_cache=256 --sort_buffer_size=4M --read_buffer_size=1M &

    After this I'm unable to connect to mysql from the server where mysql is running. I get this error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    However, luckily I can still connect to mysql remotely, so all my webservers still have access to mysql and are running without any problems. Because of this, though, I don't want to try to restart the mysql server, since that will probably break everything. Now, I know that mysqld_safe starts the mysql server, and since the mysql server was already running I guess it's some kind of problem with two mysql servers running and listening on the same port. Is there some way to solve this problem without restarting the initial mysql server?

    UPDATE: This is what ps xa | grep "mysql" says:

        11672 ?      S      0:00 /bin/sh /usr/bin/mysqld_safe
        11780 ?      Sl   175:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
        11781 ?      S      0:00 logger -t mysqld -p daemon.error
        12432 pts/0  R+     0:00 grep mysql
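
    A hedged workaround, not from the original post: while the unix socket file is unavailable, the local client can be forced onto TCP, which is the same path the remote webservers are already using.

        mysql --protocol=TCP --host=127.0.0.1 --port=3306 -u root -p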


  • Unable to set variables in bash script in Mac OSX

    - by cohortq
    Hello! I am attempting to automate moving files from a folder to a new folder every night, using a bash script run from AppleScript on a schedule. I am attempting to write a bash script on Mac OS X, and it keeps failing. In short this is what I have (all my ECHOs are for error checking):

        !/bin/bash
        folder = "ABC"
        useracct = 'test'
        day = date "+%d"
        month = date "+%B"
        year = date "+%Y"
        folderToBeMoved = "/users/$useracct/Documents/Archive/Primetime.eyetv"
        newfoldername = "/Volumes/Media/Network/$folder/$month$day$year"
        ECHO "Network is $network" $network
        ECHO "day is $day"
        ECHO "Month is $month"
        ECHO "YEAR is $year"
        ECHO "source is $folderToBeMoved"
        ECHO "dest is $newfoldername"
        mkdir $newfoldername
        cp -R $folderToBeMoved $newfoldername
        if [-f $newfoldername/Primetime.eyetv]; then rm $folderToBeMoved; fi

    Now my first problem is that I cannot set variables at all, even literal ones where I just make the variable equal some literal. All my echoes come out blank. I cannot grab the day, month, or year either; they come out blank as well. I get an error saying that -f is not found, and an error saying there is an unexpected end of file. I made the file and did a chmod u+x scriptname.sh. I'm not sure why nothing is working at all. I am very new to bash scripting on OS X and only have experience with Windows VBScript. Any help would be great, thanks!
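
    For comparison, a corrected sketch of the same script (my adjustments, not part of the question): no spaces around "=" in assignments, command substitution for date, lowercase echo, spaces inside the [ ] test, and quoting of paths that may contain spaces.

        #!/bin/bash
        folder="ABC"
        useracct="test"
        day=$(date "+%d")       # command substitution, not a bare word
        month=$(date "+%B")
        year=$(date "+%Y")
        folderToBeMoved="/users/$useracct/Documents/Archive/Primetime.eyetv"
        newfoldername="/Volumes/Media/Network/$folder/$month$day$year"
        echo "day is $day"
        echo "Month is $month"
        echo "YEAR is $year"
        echo "source is $folderToBeMoved"
        echo "dest is $newfoldername"
        mkdir -p "$newfoldername"
        cp -R "$folderToBeMoved" "$newfoldername"
        # the .eyetv package is a directory, so test with -e and remove with -rf
        if [ -e "$newfoldername/Primetime.eyetv" ]; then
            rm -rf "$folderToBeMoved"
        fi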


  • SQL SERVER 2008 JOIN hints

    - by Nai
    Hi all, recently I was trying to optimise this query:

        UPDATE Analytics
        SET UserID = x.UserID
        FROM Analytics z
        INNER JOIN UserDetail x ON x.UserGUID = z.UserGUID

    The estimated execution plan showed 57% on the Table Update and 40% on a Hash Match (Aggregate). I did some snooping around and came across the topic of JOIN hints, so I added a LOOP hint to my inner join and WA-ZHAM! The new execution plan shows 38% on the Table Update and 58% on an Index Seek. So I was about to start applying LOOP hints to all my queries until prudence got the better of me. After some googling, I realised that JOIN hints are not very well covered in BOL. Therefore:

    1. Can someone please tell me why applying LOOP hints to all my queries is a bad idea? I read somewhere that a LOOP JOIN is the default JOIN method for the query optimiser but couldn't verify the validity of that statement.
    2. When are JOIN hints used? When the sh*t hits the fan and ghost busters ain't in town?
    3. What's the difference between the LOOP, HASH and MERGE hints? BOL states that MERGE seems to be the slowest, but what is the application of each hint?

    Thanks for your time and help, people! I'm running SQL Server 2008, BTW. The statistics mentioned above are ESTIMATED execution plans.


  • Xapian gem failed to install on Mac OS X Snow Leopard + macports

    - by goodwill
    I have installed xapian-core + xapian-bindings with macports on snow leopard, then trying to install xapian gem fails: Building native extensions. This could take a while... ERROR: Error installing xapian: ERROR: Failed to build gem native extension. /opt/ruby-enterprise/bin/ruby extconf.rb ./configure --with-ruby checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... ./install-sh -c -d checking for gawk... no checking for mawk... no checking for nawk... no checking for awk... awk checking whether make sets $(MAKE)... yes checking how to create a ustar tar archive... gnutar checking build system type... i386-apple-darwin10.3.0 checking host system type... i386-apple-darwin10.3.0 checking for style of include used by make... GNU checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking for a sed that does not truncate output... /usr/bin/sed checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for ld used by gcc... /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld checking if the linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) is GNU ld... no checking for /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm checking whether ln -s works... yes checking how to recognize dependent libraries... pass_all checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking for g++... g++ checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking dependency style of g++... gcc3 checking how to run the C++ preprocessor... g++ -E checking the maximum length of command line arguments... 196608 checking command to parse /usr/bin/nm output from gcc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking for dsymutil... dsymutil checking for nmedit... nmedit checking for -single_module linker flag... yes checking for -exported_symbols_list linker flag... yes checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fno-common checking if gcc PIC flag -fno-common works... yes checking if gcc static flag -static works... no checking if gcc supports -c -o file.o... yes checking whether the gcc linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) supports shared libraries... yes checking dynamic linker characteristics... darwin10.3.0 dyld checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... 
yes checking whether to build shared libraries... yes checking whether to build static libraries... no checking whether we are using the GNU C++ compiler... (cached) yes checking whether g++ accepts -g... (cached) yes checking dependency style of g++... (cached) gcc3 checking for xapian-config... /opt/local/bin/xapian-config checking /opt/local/bin/xapian-config works... yes checking whether to enable maintainer-specific portions of Makefiles... no checking for ruby1.8... no checking for ruby... /opt/ruby-enterprise/bin/ruby checking /opt/ruby-enterprise/bin/ruby version... ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.2.0], MBARI 0x6770, Ruby Enterprise Edition 2009.10 checking for /opt/ruby-enterprise/lib/ruby/1.8/i686-darwin10.2.0/ruby.h... yes checking ruby/io.h... no checking whether to use -fvisibility=hidden... yes configure: creating ./config.status config.status: creating Makefile config.status: creating xapian-version.h config.status: creating python/Makefile config.status: creating python/docs/Makefile config.status: creating php/Makefile config.status: creating php/docs/Makefile config.status: creating java/Makefile config.status: creating java/native/Makefile config.status: creating java/org/xapian/Makefile config.status: creating java/org/xapian/errors/Makefile config.status: creating java/org/xapian/examples/Makefile config.status: creating java-swig/Makefile config.status: creating tcl8/Makefile config.status: creating tcl8/docs/Makefile config.status: creating tcl8/pkgIndex.tcl config.status: creating csharp/Makefile config.status: creating csharp/docs/Makefile config.status: creating csharp/AssemblyInfo.cs config.status: creating ruby/Makefile config.status: creating ruby/docs/Makefile config.status: creating xapian-bindings.spec config.status: creating python/generate-python-exceptions config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands *** Building bindings for languages: ruby make make all-recursive Making all in ruby make all-recursive Making all in docs make all-am make[5]: Nothing to be done for `all-am'. /bin/sh ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I. -I.. -I/opt/ruby-enterprise/lib/ruby/1.8/i686-darwin10.2.0 -I/opt/ruby-enterprise/lib/ruby/1.8/i686-darwin10.2.0 -fno-strict-aliasing -Wall -Wno-unused -Wno-uninitialized -fvisibility=hidden -I/opt/local/include -g -O2 -MT xapian_wrap.lo -MD -MP -MF .deps/xapian_wrap.Tpo -c -o xapian_wrap.lo xapian_wrap.cc ../libtool: line 393: /bin/sed: No such file or directory ../libtool: line 393: /bin/sed: No such file or directory ../libtool: line 792: /bin/sed: No such file or directory : ignoring unknown tag ../libtool: line 792: /bin/sed: No such file or directory *** Warning: inferring the mode of operation is deprecated. *** Future versions of Libtool will require --mode=MODE be specified. 
    ../libtool: line 1103: /bin/sed: No such file or directory
        (the line above is repeated about twenty times in the original output)
    ../libtool: line 1156: /bin/sed: No such file or directory
    : compile: cannot determine name of library object from `'
    make[4]: *** [xapian_wrap.lo] Error 1
    make[3]: *** [all-recursive] Error 1
    make[2]: *** [all] Error 2
    make[1]: *** [all-recursive] Error 1
    make: *** [all] Error 2
    extconf.rb:3:in `system!': unhandled exception
    from extconf.rb:6
    Gem files will remain installed in /opt/ruby-enterprise/lib/ruby/gems/1.8/gems/xapian-1.0.15 for inspection.
    Results logged to /opt/ruby-enterprise/lib/ruby/gems/1.8/gems/xapian-1.0.15/gem_make.out

    Any idea, pal?


  • Automatically stashing

    - by Readonly
    The section "Last links in the chain: Stashing and the reflog" in http://ftp.newartisans.com/pub/git.from.bottom.up.pdf recommends stashing often to take snapshots of your work in progress. The author goes as far as recommending that you can use a cron job to stash your work regularly, without having to do a stash manually.

    The beauty of stash is that it lets you apply unobtrusive version control to your working process itself: namely, the various stages of your working tree from day to day. You can even use stash on a regular basis if you like, with something like the following snapshot script:

        $ cat <<EOF > /usr/local/bin/git-snapshot
        #!/bin/sh
        git stash && git stash apply
        EOF
        $ chmod +x $_
        $ git snapshot

    There's no reason you couldn't run this from a cron job every hour, along with running the reflog expire command every week or month.

    The problem with this approach is:

    1. If there are no changes to your working copy, the "git stash apply" will cause your last stash to be applied over your working copy.
    2. There could be race conditions between when the cron job executes and the user working on the working copy. For example, "git stash" runs, then the user opens the file, then the script's "git stash apply" is executed.

    Does anybody have suggestions for making this automatic stashing work more reliably?
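
    One possible refinement (a hedged sketch of my own, not from the cited document): skip the snapshot entirely when the working tree and index are clean, so an idle hour never re-applies the previous stash.

        #!/bin/sh
        # git-snapshot: stash-and-reapply only when there is something to stash
        if git diff --quiet && git diff --cached --quiet; then
            exit 0    # nothing has changed since the last commit; do nothing
        fi
        git stash && git stash apply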


  • perl issuing os command with defined variables

    - by Vinnie Biros
    I am adding functionality to my scripts so that they can use Kerberos authentication to run automatically and use secure protocols when executing. I have the functionality working in shell scripts that do exactly what I want; however, I am having issues porting it to Perl to work within my Perl scripts, as I am new to Perl. Here is my working shell code; I am trying to get the same functionality in Perl:

        #!/bin/sh
        ticketFileName=`basename $0-$$`          #set filename variable to name of script plus the PID
        krb5CacheLocation=/tmp/$ticketFileName   #set ticket cache location to /tmp + script name
        /usr/share/centrifydc/kerberos/bin/kinit -c $krb5CacheLocation -kt /root/.ssh/someaccount.keytab someaccount   #get TGT and specify ticket cache location on kinit
        export KRB5CCNAME=$krb5CacheLocation     #set the KRB5CCNAME variable to tell ssh where to look

    What I have attempted in Perl:

        #!/usr/bin/perl
        my $ticketFileName = `basename $0-$$`;
        my $krb5CacheLocation = '/tmp/'.$ticketFileName;
        `export KRB5CCNAME=$krb5CacheLocation`;
        `/usr/share/centrifydc/kerberos/bin/kinit -c $krb5CacheLocation -kt /root/.ssh/unixmap0000.keytab unixmap0000`;

    It seems it is not liking the passed variable that I am referencing in the OS command. Does anyone have any ideas or suggestions?


  • autoconf libtool library linker path incorrect (need drive-letter) for MinGW ld.exe in Cygwin

    - by Tam Toucan
    I use autoconf, and when the target is MinGW I was using the -mno-cygwin flag. This has been removed, so I'm trying to use the MinGW tool chain. The problem is that the linker isn't finding my libraries:

        /bin/sh ../../../libtool --tag=CXX --mode=link mingw32-g++ -g -Wall -pedantic -DNOMINMAX -D_REENTRANT -DWIN32 -I /usr/local/include/w32api -L/usr/local/lib/w32api -o testRandom.exe testRandom.o -L../../../lib/Random -lRandom
        libtool: link: mingw32-g++ -g -Wall -pedantic -DNOMINMAX -D_REENTRANT -DWIN32 -I /usr/local/include/w32api -o .libs/testRandom.exe testRandom.o -L/usr/local/lib/w32api -L/home/Tam/src/3DS_Games/lib/Random -lRandom
        D:\cygwin\opt\MinGW\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ld.exe: cannot find -lRandom

    To link this from the command line using the MinGW linker, the -L path needs the drive letter, i.e.

        mingw32-ld testRandom.o -LD:/home/Tam/src/3DS_Games/lib/Random -lRandom

    works. The -L path is generated from the Makefile.am's, which have

        LDADD = -L$(top_builddir)/lib/Random -lRandom

    However, I can't find how to set top_builddir to a relative path or to start it with the drive letter (my autoconf skills are weak). As a temporary "solution" I have removed the use of libtool. I could hack a $(DRIVE_LETTER) in front of every -L option, but I'd like to find something better.


  • Data in linux FIFO seems lost

    - by Utoah
    Hi, I have a bash script which wants to do some work in parallel. I did this by putting each job in a subshell which is run in the background. Since the number of jobs running simultaneously should stay under some limit, I achieve this by first putting some lines in a FIFO; then, just before forking a subshell, the parent script is required to read a line from this FIFO. Only after it gets a line can it fork the subshell. Up to now, everything works fine. But when I tried to read a line from the FIFO in the subshell, it seems that only one subshell can get a line, even if there are apparently more lines in the FIFO. So I wonder why the other subshell(s) cannot read a line even when there are more lines in the FIFO. My testing code looks something like this:

        #!/bin/sh
        fifo_path="/tmp/fy_u_test2.fifo"
        mkfifo $fifo_path
        #open fifo for r/w at fd 6
        exec 6<> $fifo_path
        process_num=5
        #put $process_num lines in the FIFO
        for ((i=0; i<${process_num}; i++)); do
            echo "$i"
        done >&6

        delay_some(){
            local index="$1"
            echo "This is what u can see. $index \n"
            sleep 20;
        }

        #In each iteration, try to read 2 lines from FIFO, one from this shell,
        #the other from the subshell
        for i in 1 2
        do
            date >> /tmp/fy_date
            #If a line can be read from FIFO, run a subshell in bk, otherwise, block.
            read -u6
            echo " $$ Read --- $REPLY --- from 6 \n" >> /tmp/fy_date
            {
                delay_some $i
                #Try to read a line from FIFO
                read -u6
                echo " $$ This is in child # $i, read --- $REPLY --- from 6 \n" >> /tmp/fy_date
            } &
        done

    And the output file /tmp/fy_date has content of:

        Mon Apr 26 16:02:18 CST 2010
         32561 Read --- 0 --- from 6 \n
        Mon Apr 26 16:02:18 CST 2010
         32561 Read --- 1 --- from 6 \n
         32561 This is in child # 1, read --- 2 --- from 6 \n


  • shell scripting: search/replace & check file exist

    - by johndashen
    I have a perl script (or any executable) E which will take a file foo.xml and write a file foo.txt. I use a Beowulf cluster to run E for a large number of XML files, but I'd like to write a simple job server script in shell (bash) which doesn't overwrite existing txt files. I'm currently doing something like:

        #!/bin/sh
        PATTERN="[A-Z]*0[1-2][a-j]";    # this matches foo in all cases
        todo=`ls *.xml | grep $PATTERN -o`;
        isdone=`ls *.txt | grep $PATTERN -o`;
        whatsleft=todo - isdone;        # what's the unix magic?
        #tack on the .xml prefix with sed or something
        #and then call the job server;
        jobserve E "$whatsleft";

    and then I don't know how to get the difference between $todo and $isdone. I'd prefer using sort/uniq to something like a for loop with grep inside, but I'm not sure how to do it (pipes? temporary files?). As a bonus question, is there a way to do lookahead search in bash grep?

    To clarify: the simplest way to do what I'm asking is (in pseudocode)

        for i in `/bin/ls *.xml`
        do
            replace xml suffix with txt
            if [that file exists]
                add to whatsleft list
            end
        done
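
    A hedged sketch of the set difference (my assumptions: sorted, newline-separated stems and a bash shell, since process substitution is a bashism; jobserve and E are placeholders from the question):

        #!/bin/bash
        PATTERN="[A-Z]*0[1-2][a-j]"
        todo=$(ls *.xml | grep -o "$PATTERN" | sort -u)
        isdone=$(ls *.txt | grep -o "$PATTERN" | sort -u)
        # comm -23 keeps lines that appear only in the first input
        whatsleft=$(comm -23 <(printf '%s\n' "$todo") <(printf '%s\n' "$isdone"))
        for stem in $whatsleft; do
            jobserve E "$stem.xml"
        done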


  • rake test:units fails with status ()

    - by ander163
    New user, haven't been building tests as I go, so I'm an idiot. The application is running, but the tests fail. Here is what appears to be relevant:

        ....
        ** Execute test:units
        /usr/local/bin/ruby -I"lib:test" "/usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/event_test.rb" "test/unit/helpers/calendar1_helper_test.rb" "test/unit/helpers/events_helper_test.rb" "test/unit/helpers/homepage_helper_test.rb" "test/unit/helpers/main_helper_test.rb" "test/unit/helpers/mobile_helper_test.rb" "test/unit/helpers/notes_helper_test.rb" "test/unit/helpers/password_resets_helper_test.rb" "test/unit/helpers/projects_helper_test.rb" "test/unit/helpers/search_helper_test.rb" "test/unit/helpers/start_helper_test.rb" "test/unit/helpers/superadmin_helper_test.rb" "test/unit/helpers/tasks_helper_test.rb" "test/unit/helpers/user_sessions_helper_test.rb" "test/unit/helpers/users_helper_test.rb" "test/unit/note_test.rb" "test/unit/notifier_test.rb" "test/unit/project_test.rb" "test/unit/task_test.rb" "test/unit/user_session_test.rb" "test/unit/user_test.rb"
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        /usr/lib/ruby/gems/1.8/gems/hpricot-0.6.164/lib/universal-java1.6/fast_xs.bundle: [BUG] Segmentation fault
        ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.2.0]
        rake aborted!
        Command failed with status (): [/usr/local/bin/ruby -I"lib:test" "/usr/loc...]
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:995:in `sh'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1010:in `call'


  • populate a tree view with an xml file

    - by syedsaleemss
    I'm using a .NET Windows Forms application. I have an XML file and I want to populate a tree view with data from that XML file. I am doing this using the following code:

        private void button1_Click(object sender, EventArgs e)
        {
            try
            {
                this.Cursor = System.Windows.Forms.Cursors.WaitCursor;
                //string strXPath = "languages";
                string strRootNode = "Treeview Sample";
                OpenFileDialog Dlg = new OpenFileDialog();
                Dlg.Filter = "All files(*.*)|*.*|xml file (*.xml)|*.txt";
                Dlg.CheckFileExists = true;
                string xmlfilename = "";
                if (Dlg.ShowDialog() == DialogResult.OK)
                {
                    xmlfilename = Dlg.FileName;
                }
                // Load the XML file.
                //XmlDocument dom = new XmlDocument();
                //dom.Load(xmlfilename);
                XmlDocument doc = new XmlDocument();
                doc.Load(xmlfilename);
                string rootName = doc.SelectSingleNode("/*").Name;
                textBox4.Text = rootName.ToString();
                //XmlNode root = dom.LastChild;
                //textBox4.Text = root.Name.ToString();
                // Load the XML into the TreeView.
                this.treeView1.Nodes.Clear();
                this.treeView1.Nodes.Add(new TreeNode(strRootNode));
                TreeNode tNode = new TreeNode();
                tNode = this.treeView1.Nodes[0];
                XmlNodeList oNodes = doc.SelectNodes(textBox4.Text);
                XmlNode xNode = oNodes.Item(0).ParentNode;
                AddNode(ref xNode, ref tNode);
                this.treeView1.CollapseAll();
                this.treeView1.Nodes[0].Expand();
                this.Cursor = System.Windows.Forms.Cursors.Default;
            }
            catch (Exception ex)
            {
                this.Cursor = System.Windows.Forms.Cursors.Default;
                MessageBox.Show(ex.Message, "Error");
            }
        }

        private void AddNode(ref XmlNode inXmlNode, ref TreeNode inTreeNode)
        {
            // Recursive routine to walk the XML DOM and add its nodes to a TreeView.
            XmlNode xNode;
            TreeNode tNode;
            XmlNodeList nodeList;
            int i;
            // Loop through the XML nodes until the leaf is reached.
            // Add the nodes to the TreeView during the looping process.
            if (inXmlNode.HasChildNodes)
            {
                nodeList = inXmlNode.ChildNodes;
                for (i = 0; i <= nodeList.Count - 1; i++)
                {
                    xNode = inXmlNode.ChildNodes[i];
                    inTreeNode.Nodes.Add(new TreeNode(xNode.Name));
                    tNode = inTreeNode.Nodes[i];
                    AddNode(ref xNode, ref tNode);
                }
            }
            else
            {
                inTreeNode.Text = inXmlNode.OuterXml.Trim();
            }
        }

    My XML file is this: "hello.xml" - - abc hello how ru - def i m fine - ghi how abt u

    Now, after using the above code I am able to populate the tree view. But I don't want to populate the complete XML file; I should get only as far as languages / language / key / value. I don't want "abc", "how are you", etc. (I mean to say the leaf nodes). Please help me.


  • Performance of java on different hardware?

    - by tangens
    In another SO question I asked why my Java programs run faster on AMD than on Intel machines. But it seems that I'm the only one who has observed this. Now I would like to invite you to share the numbers of your local Java performance with the SO community. I observed a big performance difference when watching the startup of JBoss on different hardware, so I set this program as the base for this comparison. For participation, please download JBoss 5.1.0.GA and run:

        jboss-5.1.0.GA/bin/run.sh    (or run.bat)

    This starts a standard configuration of JBoss without any extra applications. Then look for the last line of the start procedure, which looks like this:

        [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221634)] Started in 25s:264ms

    Please repeat this procedure until the printed time is somewhat stable and post this line together with some comments on your hardware (I used CPU-Z to get the info) and operating system, like this:

        java version: 1.6.0_13
        OS: Windows XP
        Board: ASUS M4A78T-E
        Processor: AMD Phenom II X3 720, 2.8 GHz
        RAM: 2*2 GB DDR3 (labeled 1333 MHz)
        GPU: NVIDIA GeForce 9400 GT
        disc: Seagate 1.5 TB (ST31500341AS)

    Use your votes to bring the fastest configuration to the top. I'm very curious about the results.

    EDIT: Up to now only a few members have shared their results. I'd really be interested in the results obtained with some other architectures. If someone works with a Mac (desktop) or runs an Intel i7 with less than 3 GHz, please start JBoss once and share your results. It will only take a few minutes.


  • stdexcept On Android

    - by David R.
    I'm trying to compile SoundTouch on Android. I started with this configure line:

        ./configure CPPFLAGS="-I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/" LDFLAGS="-Wl,-rpath-link=/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/lib -L/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/lib -nostdlib -lc" --host=arm-eabi --enable-shared=yes CFLAGS="-nostdlib -O3 -mandroid" host_alias=arm-eabi --no-create --no-recursion

    Because the Android NDK targets ARM, I also had to change the Makefile to remove the -msse2 flags to progress. When I run 'make', I get:

        /bin/sh ../../libtool --tag=CXX --mode=compile arm-eabi-g++ -DHAVE_CONFIG_H -I. -I../../include -I../../include -I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/ -O3 -fcheck-new -I../../include -g -O2 -MT FIRFilter.lo -MD -MP -MF .deps/FIRFilter.Tpo -c -o FIRFilter.lo FIRFilter.cpp
        libtool: compile: arm-eabi-g++ -DHAVE_CONFIG_H -I. -I../../include -I../../include -I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/ -O3 -fcheck-new -I../../include -g -O2 -MT FIRFilter.lo -MD -MP -MF .deps/FIRFilter.Tpo -c FIRFilter.cpp -o FIRFilter.o
        FIRFilter.cpp:46:21: error: stdexcept: No such file or directory
        FIRFilter.cpp: In member function 'virtual void soundtouch::FIRFilter::setCoefficients(const soundtouch::SAMPLETYPE*, uint, uint)':
        FIRFilter.cpp:177: error: 'runtime_error' is not a member of 'std'
        FIRFilter.cpp: In static member function 'static void* soundtouch::FIRFilter::operator new(size_t)':
        FIRFilter.cpp:225: error: 'runtime_error' is not a member of 'std'
        make[2]: *** [FIRFilter.lo] Error 1
        make[1]: *** [all-recursive] Error 1
        make: *** [all-recursive] Error 1

    This isn't very surprising, since the -nostdlib flag was required. Android seems to have neither stdexcept nor stdlib. How can I get past this block of compiling SoundTouch? At a guess, there may be some flag I don't know about that I should use. I could refactor the code not to use stdexcept. There may be a way to pull in the original stdexcept source and reference that. I might be able to link to a precompiled stdexcept library.


  • Script throwing unexpected operator when using mysqldump

    - by Astron
    A portion of a script I use to back up MySQL databases has stopped working correctly after upgrading a Debian box to 6.0 Squeeze. I have tested the backup code via the CLI and it works fine. I believe the problem is in the selection of the databases before the backup occurs, possibly something to do with the $skipdb variable. If there is a better way to perform the function then I'm willing to try something new. Any insight would be greatly appreciated.

        $ sudo ./script.sh
        [: 138: information_schema: unexpected operator
        [: 138: -1: unexpected operator
        [: 138: mysql: unexpected operator
        [: 138: -1: unexpected operator

    Using bash -x script, here is one of the iterations:

        + for db in '$DBS'
        + skipdb=-1
        + '[' test '!=' '' ']'
        + for i in '$IGGY'
        + '[' mysql == test ']'
        + :
        + '[' -1 == -1 ']'
        ++ /bin/date +%F
        + FILE=/backups/hostname.2011-03-20.mysql.mysql.tar.gz
        + '[' no = yes ']'
        + /usr/bin/mysqldump --single-transaction -u root -h localhost '-ppassword' mysql
        + /bin/tar -czvf /backups/hostname.2011-03-20.mysql.mysql.tar.gz mysql.sql
        mysql.sql
        + rm -f mysql.sql

    Here is the code:

        if [ $MYSQL_UP = "yes" ]; then
            echo "MySQL DUMP" >> /tmp/update.log
            echo "--------------------------------" >> /tmp/update.log
            DBS="$($MYSQL -u $MyUSER -h $MyHOST -p"$MyPASS" -Bse 'show databases')"
            for db in $DBS
            do
                skipdb=-1
                if [ "$IGGY" != "" ] ; then
                    for i in $IGGY
                    do
                        [ "$db" == "$i" ] && skipdb=1 || :
                    done
                fi
                if [ "$skipdb" == "-1" ] ; then
                    FILE="$DEST$HOST.`$DATE +"%F"`.$db.mysql.tar.gz"
                    if [ $ENCRYPT = "yes" ]; then
                        $MYSQLDUMP -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf - $db.sql | $OPENSSL enc -aes-256-cbc -salt -out $FILE.enc -k $ENC_PASS && rm -f $db.sql
                    else
                        $MYSQLDUMP --single-transaction -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf $FILE $db.sql && rm -f $db.sql
                    fi
                fi
            done
        fi
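
    A hedged observation, not from the original post: on Debian 6, /bin/sh is provided by dash, whose [ builtin does not understand the bash-specific == operator, which matches the "unexpected operator" messages above. A minimal sketch of the two usual fixes:

        # POSIX test uses a single "=" for string equality:
        if [ "$skipdb" = "-1" ]; then
            echo "dumping $db"
        fi
        # alternatively, keep "==" and run the script under bash explicitly
        # by making the first line of script.sh:
        #   #!/bin/bash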


  • Modify shell script to monitor/ping multiple ip addresses

    - by Alex
    Alright, so I need to constantly monitor multiple routers and computers to make sure they remain online. I have found a great script here that will notify me via Growl (so I can get instant notifications on my phone) if a single IP cannot be pinged. I have been attempting to modify the script to ping multiple addresses, with little luck. I'm having trouble trying to figure out how to ping a down server while the script keeps watching the online servers. Any help would be greatly appreciated. I haven't done much shell scripting, so this is quite new to me. Thanks.

        #!/bin/sh
        #Growl my Router alive!
        #2010 by zionthelion73 [at] gmail . com
        #use it for free
        #redistribute or modify but keep these comments
        #not for commercial purposes

        iconpath="/path/to/router/icon/file/internet.png"
        # path must be absolute or in "./path" form but relative to growlnotify position
        # document icon is used, not document content

        # Put the IP address of your router here
        localip=192.168.1.1

        clear
        echo 'Router avaiability notification with Growl'

        #variable
        avaiable=false
        com="################"    #comment prefix for logging porpouse

        while true; do
            if $avaiable
            then
                echo "$com 1) $localip avaiable $com"
                echo "1"
                while ping -c 1 -t 2 $localip
                do
                    sleep 5
                done
                growlnotify -s -I $iconpath -m "$localip is offline"
                avaiable=false
            else
                echo "$com 2) $localip not avaiable $com"
                #try to ping the router untill it come back and notify it
                while !(ping -c 1 -t 2 $localip)
                do
                    echo "$com trying.... $com"
                    sleep 5
                done
                echo "$com found $localip $com"
                growlnotify -s -I $iconpath -m "$localip is online"
                avaiable=true
            fi
            sleep 5
        done
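
    A minimal sketch of one way to watch several hosts at once (my assumption, not part of the original script): wrap the per-address check in a function and start one background copy per address, so a host that is down never blocks the checks on the others. The address list is only an example, and the growlnotify options are trimmed compared with the script above.

        #!/bin/sh
        monitor_host() {
            localip="$1"
            up=false
            while true; do
                if ping -c 1 -t 2 "$localip" > /dev/null 2>&1; then
                    if [ "$up" = false ]; then
                        growlnotify -s -m "$localip is online"
                        up=true
                    fi
                else
                    if [ "$up" = true ]; then
                        growlnotify -s -m "$localip is offline"
                        up=false
                    fi
                fi
                sleep 5
            done
        }

        for ip in 192.168.1.1 192.168.1.10 192.168.1.20
        do
            monitor_host "$ip" &
        done
        wait    # keep the parent script alive while the monitors run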


  • How to combine twill and python into one code that could be run on "Google App Engine"?

    - by brilliant
    Hello everybody! I have installed twill on my computer (having previously installed Python 2.5) and have been using it recently. Python is installed on disk C on my computer (C:\Python25), and the twill folder ("twill-0.9") is located at E:\tmp\twill-0.9. Here is a code that I've been using in twill:

        go "some website's sign-in page URL"
        formvalue 2 userid "my login"
        formvalue 2 pass "my password"
        submit
        go "URL of some other page from that website"
        save_html result.txt

    This code helps me to log in to one website, in which I have an account, record the HTML code of some other page of that website (that I can access only after logging in), and store it in a file named "result.txt". (Of course, before using this code I first need to replace "my login" with my real login, "my password" with my real password, "some website's sign-in page URL" and "URL of some other page from that website" with real URLs of that website, and the number 2 with the number of the form that is used as the sign-in form on that website's log-in page.)

    I store this code in a "test.twill" file located in my "twill-0.9" folder (E:\tmp\twill-0.9\test.twill) and run it from my command prompt:

        python twill-sh test.twill

    Now, I have also installed the "Google App Engine SDK" from "Google App Engine" and have been using it for a while. For example, I've been using this code:

        import hashlib

        m = hashlib.md5()
        m.update("Nobody inspects")
        m.update(" the spammish repetition ")
        print m.hexdigest()

    This code helps me transform the phrase "Nobody inspects the spammish repetition" into an md5 digest. Now, how can I put these two pieces of code together into one Python script that I could run on "Google App Engine"? Let's say I want my code to log in to a website from "Google App Engine", go to another page on that website, record its HTML code (that's what my twill code does) and then transform this HTML code into its md5 digest (that's what my second code does). So, how can I combine those two codes into one Python code? I guess it should be done somehow by importing twill, but how can it be done? Can a Python code - the one that is being run by "Google App Engine" - import twill from somewhere on the internet? Or, perhaps, twill is already installed on "Google App Engine"?


  • Translating CURL to FLEX HTTPRequests

    - by Joshua
    I am trying to convert some cURL code to Flex/ActionScript. Since I am 100% ignorant about cURL, 50% ignorant about Flex, and 90% ignorant about HTTP in general, I'm having some significant difficulty. The following cURL code is from http://code.google.com/p/ga-api-http-samples/source/browse/trunk/src/v2/accountFeed.sh and I have every reason to believe that it's working correctly.

        USER_EMAIL="[email protected]"   #Insert your Google Account email here
        USER_PASS="secretpass"          #Insert your password here

        googleAuth="$(curl https://www.google.com/accounts/ClientLogin -s \
          -d Email=$USER_EMAIL \
          -d Passwd=$USER_PASS \
          -d accountType=GOOGLE \
          -d source=curl-accountFeed-v2 \
          -d service=analytics \
          | awk /Auth=.*/)"

        feedUri="https://www.google.com/analytics/feeds/accounts/default\
        ?prettyprint=true"

        curl $feedUri --silent \
          --header "Authorization: GoogleLogin $googleAuth" \
          --header "GData-Version: 2"

    The following is my abortive attempt to translate the above cURL to AS3:

        var request:URLRequest=new URLRequest("https://www.google.com/analytics/feeds/accounts/default");
        request.method=URLRequestMethod.POST;
        var GoogleAuth:String="$(curl https://www.google.com/accounts/ClientLogin -s " +
            "-d [email protected] " +
            "-d Passwd=secretpass " +
            "-d accountType=GOOGLE " +
            "-d source=curl-accountFeed-v2" +
            "-d service=analytics " +
            "| awk /Auth=.*/)";
        request.requestHeaders.push(new URLRequestHeader("Authorization", "GoogleLogin " + GoogleAuth));
        request.requestHeaders.push(new URLRequestHeader("GData-Version", "2"));
        var loader:URLLoader=new URLLoader();
        loader.dataFormat=URLLoaderDataFormat.BINARY;
        loader.addEventListener(Event.COMPLETE, GACompleteHandler);
        loader.addEventListener(IOErrorEvent.IO_ERROR, GAErrorHandler);
        loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, GAErrorHandler);
        loader.load(request);

    This probably provides you all with a good laugh, and that's okay, but if you can find any pity on me, please let me know what I'm missing. I readily admit functional ineptitude, therefore letting me know how stupid I am is optional.


  • Unknown user 'app' with capistrano

    - by trobrock
    This is my first time trying to set up Capistrano to deploy a Rails application. I am deploying from my local machine to my remote server, which has the repo, web, app, and mysql servers all on the same machine. I am following this walkthrough: http://www.capify.org/index.php/From_The_Beginning

    I get to the command:

        cap deploy:start

    Then I get this error:

        *** [err :: example.com] sudo: unknown user: app
        command finished
        failed: "sh -c 'cd /var/www/example/current && sudo -p '\\''sudo password: '\\'' -u app nohup script/spin'" on example.com

    Am I supposed to add an 'app' user, or is there a way of changing what user the command runs as? This is my deploy.rb:

        set :application, "example"
        set :repository, "[email protected]:example.git"
        set :user, "trobrock"
        set :branch, 'master'
        set :deploy_to, "/var/www/example"
        set :scm, :git # Or: `accurev`, `bzr`, `cvs`, `darcs`, `git`, `mercurial`, `perforce`, `subversion` or `none`

        role :web, "example.com" # Your HTTP server, Apache/etc
        role :app, "example.com" # This may be the same as your `Web` server
        role :db,  "example.com", :primary => true # This is where Rails migrations will run

    And obviously everywhere it says example.com is my server's hostname, and everywhere it just says example is the app name.


  • Android ANR keyDispatchingTimedOut Error while continuous tapping on screen.

    - by user519846
    Hi all, I am getting an Application Not Responding (ANR) dialog while continuously tapping on the screen. There is no view on the screen where I am tapping. The frequency of this issue is low, but I am still not able to remove it completely. Here I am attaching the log of what I caught during this error:

        ERROR/ActivityManager(1322): ANR in com.test.mj.and.ui (com.test.mj.and.ui/.TermsAndCondActivity)
        ERROR/ActivityManager(1322): Reason: keyDispatchingTimedOut
        ERROR/ActivityManager(1322): Parent: com.test.mj.and.ui/.SplashActivity
        ERROR/ActivityManager(1322): Load: 6.59 / 6.37 / 5.21
        ERROR/ActivityManager(1322): CPU usage from 11430ms to 2196ms ago:
        ERROR/ActivityManager(1322): rtal.mj.and.ui: 9% = 7% user + 1% kernel / faults: 649 minor
        ERROR/ActivityManager(1322): system_server: 4% = 2% user + 2% kernel / faults: 10 minor
        ERROR/ActivityManager(1322): logcat: 3% = 1% user + 1% kernel / faults: 675 minor 1 major
        ERROR/ActivityManager(1322): synaptics_wq: 1% = 0% user + 1% kernel
        ERROR/ActivityManager(1322): ami304d: 1% = 0% user + 0% kernel
        ERROR/ActivityManager(1322): .process.lghome: 1% = 0% user + 0% kernel / faults: 47 minor
        ERROR/ActivityManager(1322): sync_supers: 0% = 0% user + 0% kernel
        ERROR/ActivityManager(1322): droid.DunServer: 0% = 0% user + 0% kernel / faults: 6 minor
        ERROR/ActivityManager(1322): events/0: 0% = 0% user + 0% kernel
        ERROR/ActivityManager(1322): oid.inputmethod: 0% = 0% user + 0% kernel / faults: 2 minor
        ERROR/ActivityManager(1322): m.android.phone: 0% = 0% user + 0% kernel / faults: 2 minor
        ERROR/ActivityManager(1322): ndroid.settings: 0% = 0% user + 0% kernel
        ERROR/ActivityManager(1322): sh: 0% = 0% user + 0% kernel / faults: 110 minor
        ERROR/ActivityManager(1322): -flush-179:0: 0% = 0% user + 0% kernel
        ERROR/ActivityManager(1322): TOTAL: 19% = 13% user + 6% kernel
        WARN/WindowManager(1322): Continuing to wait for key to be dispatched
        WARN/WindowManager(1322): No window to dispatch pointer action 1

    Can anyone please help me to solve this issue? Thanks in advance.


  • write to fifo/pipe from shell, with timeout

    - by Tim
    I have a pair of shell programs that talk over a named pipe. The reader creates the pipe when it starts and removes it when it exits. Sometimes, the writer will attempt to write to the pipe between the time that the reader stops reading and the time that it removes the pipe.

    reader:

        while condition; do read data <$PIPE; do_stuff; done

    writer:

        echo $data >>$PIPE

    reader:

        rm $PIPE

    When this happens, the writer will hang forever trying to open the pipe for writing. Is there a clean way to give it a timeout, so that it won't stay hung until killed manually? I know I can do

        #!/bin/sh
        # timed_write <timeout> <file> <args>
        # like "echo <args> >> <file>" with a timeout
        TIMEOUT=$1
        shift;
        FILENAME=$1
        shift;
        PID=$$
        (X=0; # don't do "sleep $TIMEOUT", the "kill %1" doesn't kill the sleep
         while [ "$X" -lt "$TIMEOUT" ]; do sleep 1; X=$(expr $X + 1); done;
         kill $PID) &
        echo "$@" >>$FILENAME
        kill %1

    but this is kind of icky. Is there a shell builtin or command to do this more cleanly (without breaking out the C compiler)?
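
    A hedged alternative, not from the original post: if GNU coreutils' timeout command is available, it can bound the blocking open without the background-killer dance. The function below is a sketch with the same argument order as timed_write.

        #!/bin/sh
        # timed_write <timeout> <file> <args...>
        timed_write() {
            t="$1"; file="$2"; shift 2
            # timeout kills the child shell if opening/writing the pipe blocks too long
            timeout "$t" sh -c 'echo "$1" >> "$2"' _ "$*" "$file"
        }

        timed_write 5 "$PIPE" "some data"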


  • Cygwin Cruisecontrol cannot execute commands

    I'm having what I hope to be a simple problem. However, it's had me stumped all day. I'm working with CruiseControl on Windows, being set up through Cygwin. I have some CC experience on the Linux platform, and much of what I'm doing is very similar. However, almost any command I try to execute in the config.xml file's schedule section gives an error. Here's the exception:

        ExecBuilder - Could not execute command: /cygdrive/d/Program\ Files/Subversion/bin/svn
        net.sourceforge.cruisecontrol.CruiseControlException: Encountered an IO exception while attempting to execute 'net.sourceforge.cruisecontrol.builders.ExecScript@b80f1c'. CruiseControl cannot continue.
        at net.sourceforge.cruisecontrol.builders.ScriptRunner.runScript(ScriptRunner.java:133)

    Here are some examples of commands I've tried to run which give this type of error:

        <exec command="${CCLoc}/projects/${project.name}/IOSdllScript"/>

    Runs a script that I tested outside of cruisecontrol.bat, and it runs; it includes #!/bin/sh as the first line.

        <exec command="${CCLoc}/projects/${project.name}/EmptyFile"/>

    Essentially an empty text file, proving that the problem has nothing to do with my script.

        <exec command="/cygdrive/d/Program\ Files/Subversion/bin/svn" args="cleanup" workingdir="${svndir}"/>

    Tries svn cleanup on a directory. I double-checked the pathing and spelling. One command that I tested worked and didn't give this error:

        <exec command="touch" args="ABC.txt"/>

    I'm not sure why only touch seems to work and nothing else does. Thank you for your help.


  • How do I process the configure file when cross-compiling with mingw?

    - by vy32
    I have a small open source program that builds with an autoconf configure script. I ran configure, then tried to compile with:

        make CC="/opt/local/bin/i386-mingw32-g++"

    That didn't work, because the configure script had found include files that were not available to the mingw system. So then I tried:

        ./configure CC="/opt/local/bin/i386-mingw32-g++"

    But that didn't work either; the configure script gives me this error:

        ./configure: line 5209: syntax error near unexpected token `newline'
        ./configure: line 5209: ` *_cv_*'

    because of this code:

        # The following way of writing the cache mishandles newlines in values,
        # but we know of no workaround that is simple, portable, and efficient.
        # So, we kill variables containing newlines.
        # Ultrix sh set writes to stderr and can't be redirected directly,
        # and sets the high bit in the cache file unless we assign to the vars.
        (
          for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do
            eval ac_val=\$$ac_var
            case $ac_val in #(
            *${as_nl}*)
              case $ac_var in #(
              *_cv_*
        fi

    This code is generated when AC_OUTPUT is called. Any thoughts? Is there a correct way to do this?
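
    For what it's worth, a hedged note on the invocation itself (an observation of mine, not from the original question): CC is conventionally a C compiler, and cross builds are usually requested through --host so that configure picks the matching gcc/g++ pair on its own. Something along these lines, with the triplet adjusted to whatever the local MinGW tools are actually named:

        ./configure --host=i386-mingw32 CC=i386-mingw32-gcc CXX=i386-mingw32-g++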


  • Safe way to set computed environment variables

    - by sfink
    I have a bash script that I am modifying to accept key=value pairs from stdin. (It is spawned by xinetd.) How can I safely convert those key=value pairs into environment variables for subprocesses? I plan to only allow keys that begin with a predefined prefix "CMK_", to avoid IFS or any other "dangerous" variable getting set. But the simplistic approach

        function import () {
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*) eval "$key=$val";;
                esac
            done
        }

    is horribly insecure because $val could contain all sorts of nasty stuff. This seems like it would work:

        shopt -s extglob
        function import () {
            NORMAL_IFS="$IFS"
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*([a-zA-Z_]) )
                        IFS="$NORMAL_IFS"
                        eval $key='$val'
                        IFS="="
                        ;;
                esac
            done
        }

    but (1) it uses the funky extglob thing that I've never used before, and (2) it's complicated enough that I can't be comfortable that it's secure.

    My goal, to be specific, is to allow key=value settings to pass through the bash script into the environment of called processes. It is up to the subprocesses to deal with potentially hostile values getting set. I am modifying someone else's script, so I don't want to just convert it to Perl and be done with it. I would also rather not change it around to invoke the subprocesses differently, something like:

        #!/bin/sh
        ...start of script...
        perl -nle '($k,$v)=split(/=/,$_,2); $ENV{$k}=$v if $k =~ /^CMK_/; END { exec("subprocess") }'
        ...end of script...
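
    One eval-free sketch (my own assumption about an acceptable approach, not from the question): let export do the assignment directly after validating the key against a strict pattern, so the value is never re-parsed by the shell.

        import () {
            while IFS='=' read -r key val; do
                case "$key" in
                    CMK_*[!a-zA-Z0-9_]*) ;;         # reject keys containing unexpected characters
                    CMK_?*) export "$key=$val" ;;   # plain assignment; $val is treated as data, not code
                esac
            done
        }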


  • Python import error: Symbol not found, but the symbol is not present in the file

    - by Autopulated
    I get this error when I try to import ssrc.spread:

        ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/ssrc/_spread.so, 2): Symbol not found: __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE

    The file in question (_spread.so) includes the symbol:

        $ nm _spread.so | grep _ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE
                 U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE
                 U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE

    (twice because the file is a fat ppc/x86 binary)

    EDIT: okay, as James points out, the U means that the symbol is undefined but required by the object file. With some more digging I've noticed (where I should have looked first...) these linker errors during compilation:

        CC=g++ CXX=g++ g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -O3 -I../.. -I../.. -I/usr/local/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -O2 -I/usr/local/include -std=c++98 -pipe -fno-gnu-keywords -fvisibility-inlines-hidden -o SsrcSpread.o -c SsrcSpread.cc
        CC=g++ CXX=g++ /bin/sh ../../libtool --tag=CXX --mode=link g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -bundle -undefined dynamic_lookup -F/Library/Frameworks -framework Python \
            -pthread -D_REENTRANT -pedantic -Wall -Wno-long-long -Winline -Woverloaded-virtual -Wold-style-cast -Wsign-promo -L../../ssrc -lssrcspread -L/usr/local/lib -ltspread-core -o _spread.so SsrcSpread.o
        mkdir .libs
        g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -bundle -undefined dynamic_lookup -F/Library/Frameworks -framework Python -pthread -D_REENTRANT -pedantic -Wall -Wno-long-long -Winline -Woverloaded-virtual -Wold-style-cast -Wsign-promo -o _spread.so SsrcSpread.o -Wl,-bind_at_load -L/Dev/libssrcspread-1.0.6/ssrc /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a -L/usr/local/lib -ltspread-core
        ld: warning: in ~/Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a, file was built for unsupported file format which is not the architecture being linked (ppc)
        ld: warning: in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libtspread-core.dylib, file was built for unsupported file format which is not the architecture being linked (ppc)
        ld: warning: in /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a, file was built for unsupported file format which is not the architecture being linked (i386)
        ld: warning: in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libtspread-core.dylib, file was built for unsupported file format which is not the architecture being linked (i386)

    I'm also not entirely sure that the 10.4 SDK is the right one for compiling Python modules (but switching to 10.6 didn't seem to help).

