Search Results

Search found 4783 results on 192 pages for 'bash completion'.

Page 93/192 | < Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >

  • If a jQuery function calls itself in its completion callback, is that a recursive danger to the stack?

    - by NXT
    Hi, I'm writing a little jQuery component that animates in response to button presses and should also run automatically. I was just wondering whether this function is recursive or not; I can't quite work it out.

      function animate_next_internal() {
        $('#sc_thumbnails').animate(
          { top: '-=106' },
          500,
          function() { animate_next_internal(); }
        );
      }

    My actual function is more complicated, to allow for stops and starts; this is just a simplified example.
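
    A note on the mechanics, as a hedged sketch: jQuery's animate() invokes the completion callback asynchronously from its animation timer, so each call to animate_next_internal() returns before the next one starts and the call stack does not grow. The same shape, standalone with setTimeout:

      // Not stack recursion: tick() returns immediately; the next
      // call is scheduled on the event loop, so the stack stays flat.
      function tick() {
        setTimeout(tick, 500);
      }
      tick();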

    Read the article

  • Basic menu-driven program repeats twice after successful completion of first task.

    - by Zazu
    Hello, I'm using Plato3 to write C programs. I'm creating a menu-driven program, but first want to test out the basic concept to get it to work:

      #include <stdio.h>
      #include <ctype.h>
      int function1();
      main() {
        char s;
        do {
          puts("\n choose the following");
          puts("(P)rint\n");
          puts("(Q)uit\n");
          scanf("%c", &s);
          s = toupper(s);
          switch (s) {
            case 'P': function1(); break;
            case 'Q': return -1; break;
          }
        } while (function1() == 0);
      }
      int function1() {
        printf("Hello World");
        return 0;
      }

    The problem is that once function1() returns the value 0, the whole program is echoed... why? Example: running the program gives this:

      Hello WorldHello World
       choose the following
      (P)rint
      (Q)uit
      Hello World
       choose the following
      (P)rint
      (Q)uit

    Any idea why? Please help, thanks!
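
    The echo comes from the loop condition itself: while (function1() == 0) calls function1() a second time on every pass, and scanf("%c") then reads the newline left in the input buffer, which replays the menu. A hedged sketch of one likely fix - loop on the entered character and call function1() only from the menu:

      #include <stdio.h>
      #include <ctype.h>

      int function1(void);

      int main(void) {
          char s = '\0';
          do {
              puts("\n choose the following");
              puts("(P)rint\n");
              puts("(Q)uit\n");
              if (scanf(" %c", &s) != 1)     /* leading space skips the leftover newline */
                  break;
              s = (char)toupper((unsigned char)s);
              if (s == 'P')
                  function1();               /* called here only, not in the loop test */
          } while (s != 'Q');
          return 0;
      }

      int function1(void) {
          printf("Hello World");
          return 0;
      }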

    Read the article

  • Does multithreading in Java take much time for task completion?

    - by Geeta
    I have to search for a string in 10 large files (zip format, 70 MB) and print the lines containing the search string to 10 corresponding output files (i.e. file1's matches go to output_file1, file2's to output_file2, and so on). The same program takes 15 minutes for a single file. But if I use 10 threads to read the 10 files and write to 10 different files, it should still complete in about 15 minutes - yet it takes 40. How can I solve this? Or does multithreading simply take this much time?
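
    If all ten archives sit on the same disk, the job is likely I/O-bound, and ten threads reading at once can force the disk to seek back and forth, ending up slower than reading sequentially. A hedged sketch of the usual mitigation - cap the concurrency with a small fixed pool (file names and the search routine here are hypothetical):

      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      public class SearchFiles {
          public static void main(String[] args) {
              // Size the pool for the bottleneck (the disk), not for the file count.
              ExecutorService pool = Executors.newFixedThreadPool(2);
              for (int i = 1; i <= 10; i++) {
                  final int n = i;
                  pool.submit(() -> searchFile("file" + n + ".zip", "output_file" + n));
              }
              pool.shutdown();   // finish queued work, then exit
          }

          static void searchFile(String in, String out) {
              // hypothetical: unzip 'in', scan for the string, write matches to 'out'
          }
      }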

    Read the article

  • Looking for an issue tracker / project management software that automatically manages start/completion dates based on priority/relationships

    - by user361910
    So, a little background: we are a small company with a half-dozen developers. We have been evaluating many project management / issue tracking packages (TRAC, Redmine, FogBugz, etc.) and trying to create a decent process/workflow for managing projects, adding features, fixing bugs, and so on. I'd like to think our requirements are similar to those of most other companies our size. Essentially, what this comes down to is: 1) an easy way for the PM and developers to track projects, issues, bugs, etc.; 2) an easy way for the PM and admin/executives to get a birds-eye view of progress and easily manage timelines, schedules, and priorities.

    After trying TRAC, we moved to Redmine. We found Redmine easier than TRAC to administer, and the ability to have sub-projects and sub-tickets is great. However, the big problem we ran into is that it is very difficult to manage schedules and timelines. Managing them seems incredibly time-intensive, because you have to manually enter a start date, estimated time, and end date for each ticket, project, etc. So if you set up a month's schedule based on priorities, what are you supposed to do when a particular ticket/issue/subproject takes more time than was estimated? Right now, it appears I would have to go back in and MANUALLY change the start/end date of every single item.

    What would be ideal is to be able to set priorities/dependencies and estimated time on tickets/milestones, and have the software automatically manage the start/end dates. Does anyone know how to get Redmine to do this, or can you recommend a different package that can?

    Read the article

  • How to build a form completion/document merge application with WinForms or WPF?

    - by Mike
    I need to build an application that accepts user input data (such as name, address, amount, etc.), merges it with a pre-loaded document template (an order form), and then prints the merged document. I can use Windows Forms or WPF for this project. Any suggestions on how best to approach this? I'm experienced with WinForms development, but have no idea how to handle merging the data into the document for printing.

    Read the article

  • Why doesn't "cd" work in a bash shell script?

    - by askgelal
    I'm trying to write a small script to change the current directory to my project directory:

      #!/bin/bash
      cd /home/askgelal/projects/java

    I saved this file as proj, made it executable with chmod, and copied it to /usr/bin. When I run proj, it does nothing. What am I doing wrong?
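
    The script almost certainly runs fine - but in a child process. The shell forks to execute proj, the cd happens inside that child, and the parent shell's working directory is untouched when the child exits. The two usual fixes, sketched:

      # Option 1: execute the script in the current shell instead of a child
      . /usr/bin/proj            # 'source /usr/bin/proj' is equivalent in bash

      # Option 2: skip the script and define a function in ~/.bashrc
      proj() { cd /home/askgelal/projects/java; }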

    Read the article

  • Cannot launch an application, 'No such file or directory' but it exists

    - by pst007x
    The folder exists and the application has been made executable, but when I run it I get the following message:

      pst007x@pst007x-Aspire-5741:~$ /home/pst007x/Applications/ClipGrab/clipgrab
      bash: /home/pst007x/Applications/ClipGrab/clipgrab: No such file or directory

    Thanks.

    Note, as suggested below:

      pst007x@pst007x-Aspire-5741:~$ file /home/pst007x/Applications/ClipGrab/clipgrab /bin/bash
      /home/pst007x/Applications/ClipGrab/clipgrab: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, BuildID[sha1]=0x22c8628796d72d721cf46293fe1d83b965de6df0, stripped
      /bin/bash: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, BuildID[sha1]=0x7ea55c6b94d32a06887081649ec990fd70700455, stripped

    Note, as suggested below:

      pst007x@pst007x-Aspire-5741:~/Applications/ClipGrab$ ls -l
      total 588
      -rwxrwxrwx 1 pst007x pst007x 388096 Mar 26 14:50 clipgrab
      -rwxrwxr-x 1 pst007x pst007x 194397 Feb 11 04:07 clipgrab-3.1.3.0.bz2
      -rwxrwxr-x 1 pst007x pst007x  15981 Feb 13 00:46 Clipgrab icon.jpg

    Note, as suggested below:

      pst007x@pst007x-Aspire-5741:~$ cd /home/pst007x/Applications/ClipGrab/
      pst007x@pst007x-Aspire-5741:~/Applications/ClipGrab$ ./clipgrab
      bash: ./clipgrab: No such file or directory
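
    The file output above contains the answer: clipgrab is a 32-bit ELF binary on a 64-bit system (compare it with the 64-bit /bin/bash), and the 'No such file or directory' refers to its missing 32-bit dynamic loader, not to the binary itself. A hedged sketch of the usual check and fix on Ubuntu of that vintage (package names vary by release):

      # Confirm the missing interpreter is the culprit
      readelf -l clipgrab | grep interpreter   # expect /lib/ld-linux.so.2
      ls /lib/ld-linux.so.2                    # absent on a pure 64-bit install

      # Pull in the 32-bit runtime (older releases; newer ones use i386 multiarch)
      sudo apt-get install ia32-libs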

    Read the article

  • Ubuntu input/output error

    - by rplevy
    I'm having a problem with Ubuntu that I'm finding hard to troubleshoot, for reasons that will become clear:

      reboot
      -bash: /sbin/reboot: Input/output error
      dmesg
      -bash: /bin/dmesg: Input/output error
      ps -e
      ps: error while loading shared libraries: /lib/libproc-3.2.8.so: cannot read file data: Input/output error
      lsof
      -bash: /usr/bin/lsof: Input/output error
      fsck
      -bash: /sbin/fsck: Input/output error
      badblocks
      -bash: /sbin/badblocks: Input/output error

    So I can't see what is going on, and I can't remotely reboot. What can I do to get to the bottom of this? Interestingly:

      init 0
      Segmentation fault

    I can cat /var/syslog, but not /var/log/messages or several other important files. less and more don't work; neither do tail, head, etc.
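
    That pattern - every binary on the root filesystem failing with an I/O error - usually points to a dying disk or controller rather than anything fixable in userspace. Because external commands won't load, the shell's builtins are the remaining tools; a hedged sketch (root required, file path illustrative, and the sysrq trigger reboots immediately and uncleanly):

      # Builtins still work because they live inside the already-running shell
      echo /dev/sd*                                           # builtin globbing instead of ls
      while read -r l; do echo "$l"; done < /var/log/syslog   # builtin stand-in for cat

      # Force a reboot without /sbin/reboot (root only; no clean shutdown!)
      echo 1 > /proc/sys/kernel/sysrq
      echo b > /proc/sysrq-trigger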

    Read the article

  • I've changed my default shell but my terminal doesn't get it

    - by om-nom-nom
    Recently I changed my default shell from bash to zsh, like this:

      chsh -s /bin/zsh myname

    But when I open a new terminal (e.g. with Ctrl+Alt+T) I still get bash:

      myname@machine:~$ cat /etc/passwd | grep myname
      myname:x:1000:1000:myname,,,:/home/myname:/bin/zsh
      myname@machine:~$ echo $SHELL
      /bin/bash

    zsh is installed and can be run explicitly with the zsh command. How do I deal with that?
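
    chsh only changes the login shell recorded in /etc/passwd; $SHELL is inherited from the environment of the session you logged in with, and a terminal opened inside that session still spawns whatever the session started with. The reliable fix is to log out and back in; a sketch of the check plus a per-window shortcut:

      # What the system thinks the login shell is
      getent passwd "$USER" | cut -d: -f7    # should print /bin/zsh

      # Replace the shell in the current window only, without re-logging
      exec zsh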

    Read the article

  • Library order is important

    - by Darryl Gove
    I've written quite extensively about link ordering issues, but I've not discussed the interaction between archive libraries and shared libraries. So let's take a simple program that calls a maths library function:

      #include <math.h>
      int main() {
        for (int i=0; i<10000000; i++) { sin(i); }
      }

    We compile and run it to get the following performance:

      bash-3.2$ cc -g -O fp.c -lm
      bash-3.2$ timex ./a.out
      real        6.06
      user        6.04
      sys         0.01

    Now most people will have heard of the optimised maths library, which is added by the flag -xlibmopt. This contains optimised versions of key mathematical functions; in this instance, using the library doubles performance:

      bash-3.2$ cc -g -O -xlibmopt fp.c -lm
      bash-3.2$ timex ./a.out
      real        2.70
      user        2.69
      sys         0.00

    The optimised maths library is provided as an archive library (libmopt.a), and the driver adds it to the link line just before the maths library - this causes the linker to pick the definitions provided by the static library in preference to those provided by libm. We can see the processing by asking the compiler to print out the link line:

      bash-3.2$ cc -### -g -O -xlibmopt fp.c -lm
      /usr/ccs/bin/ld ... fp.o -lmopt -lm -o a.out ...

    The flag to the linker is -lmopt, and this is placed before the -lm flag. So what happens when the -lm flag is in the wrong place on the command line?

      bash-3.2$ cc -g -O -xlibmopt -lm fp.c
      bash-3.2$ timex ./a.out
      real        6.02
      user        6.01
      sys         0.01

    If the -lm flag is before the source file (or object file, for that matter), we get the slower performance from the system maths library. Why's that? If we look at the link line we can see the following ordering:

      /usr/ccs/bin/ld ... -lmopt -lm fp.o -o a.out

    So the optimised maths library is still placed before the system maths library, but the object file is placed afterwards. This would be ok if the optimised maths library were a shared library, but it is not - it's an archive library, and archive library processing is different, as described in the linker and library guide: "The link-editor searches an archive only to resolve undefined or tentative external references that have previously been encountered."

    An archive library can only be used to resolve symbols that are outstanding at that point in the link processing. When fp.o is placed before the libmopt.a archive library, the linker has an unresolved symbol referenced in fp.o, and it will search the archive library to resolve that symbol. If the archive library is placed before fp.o, then there are no unresolved symbols at that point, and so the linker doesn't need to use the archive library. This is why libmopt needs to be placed after the object files on the link line.

    On the other hand, once the linker has observed any shared libraries, these are checked at any point for unresolved symbols. The consequence is that once the linker "sees" libm it will resolve any symbols it can to that library, and it will not check the archive library to resolve them. This is why libmopt needs to be placed before libm on the link line. This leads to the following order for placing files on the link line:

      1. Object files
      2. Archive libraries
      3. Shared libraries

    If you use this order, then things will consistently get resolved to the archive libraries rather than to the shared libraries.

    Read the article

  • Xrandr settings don't stick; bash script fails; Nvidia; disper doesn't work

    - by bcmcfc
    I have an Nvidia graphics card. Yeah, I know - this might have to change soon. I am trying to get the resolution to set correctly on my second monitor, a VGA monitor with a native resolution of 1440x900. I have set up this bash script, which doesn't work:

      xrandr --newmode "1440" 106.50 1440 1528 1672 1904 900 903 909 934 -hsync +vsync
      xrandr --addmode -display VGA-1 1440
      xrandr --output VGA-1 --mode 1440

    It outputs: xrandr: cannot find mode 1440. It has worked previously, when running the commands individually, but after a restart it fails. I've also just installed disper as per this question, but have no idea how to use it - the help suggests the command disper -d VGA-1 -r "1440x900" should work, but it does not; it just outputs the help text, suggesting something is wrong but not saying what. Is there any way to get these horrific Nvidia drivers working properly with a resolution that the tools don't detect?
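
    One likely bug in the script as written: --addmode takes the output name directly, while -display expects an X display string (such as :0), so 'xrandr --addmode -display VGA-1 1440' probably never registers the mode - and since --newmode settings do not persist across restarts, the later --mode call then fails. A hedged corrected sketch, keeping the question's timings (the more descriptive mode name is cosmetic):

      #!/bin/bash
      # Modes created with --newmode vanish at restart, so recreate them each login.
      xrandr --newmode "1440x900_custom" 106.50 1440 1528 1672 1904 900 903 909 934 -hsync +vsync
      xrandr --addmode VGA-1 "1440x900_custom"
      xrandr --output VGA-1 --mode "1440x900_custom"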

    Read the article

  • Avoiding Nested Blocks

    - by Helium3
    If there is a view controller that wants to execute a number of block-based tasks one at a time, how can control be given back to the view controller after each completion block has executed? Say ViewControllerOne wants to execute a number of tasks, each relying on the result of the previous one - how would control be given back to the view controller after each completion block runs? I started thinking about this and found myself heading towards a deeply nested block pattern that will surely only confuse whoever else reads or tests it. A task executes, and its completion block returns the result or error needed by the next task, which has its own completion block that the task after that relies on, and so forth. How can the control be managed in one place, the view controller? Would the completion block just call the next function that handles the next task, using a pointer to the view controller?

    Read the article

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to Puppet. As such, I am trying to work out the best way to set up my manifests. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this:

      # nodes.pp
      node base_dev { $service_env = 'dev' }
      node 'service1.ownij.lan' inherits base_dev {
        include global_env_specific
      }
      class global_env_specific {
        include shell::bash
      }

      # modules/shell/bash.pp
      class shell::bash inherits shell {
        notify { "Service env: ${service_env}": }
        file { '/etc/profile.d/custom_test.sh':
          content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
          mode    => 644
        }
      }

    But every time I run puppet agent --test, Puppet complains that it can't find the shell/bash/$service_env.erb file, even though I double-checked that it exists. I know the var is accessible, because the notify statement outputs the expected value, so I suspect I am doing something that is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and doesn't change much across environments, but for more complex configs (httpd, solr, etc.) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to handle this behaviour at the template level, instead of having several closely named directories. Thanks.
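
    The likely culprit is the quoting: in Puppet, single-quoted strings are literal, so template() is looking for a file actually named $service_env.erb. Interpolation needs double quotes; a hedged sketch of the one-line fix:

      file { '/etc/profile.d/custom_test.sh':
        content => template('_global/prefix.erb',
                            'shell/bash/global.erb',
                            "shell/bash/${service_env}.erb"),   # double quotes interpolate
        mode    => 644,
      }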

    Read the article

  • Execute process conditionally in Windows PowerShell (e.g. the && and || operators in Bash)

    - by Dustin
    I'm wondering if anybody knows of a way to conditionally execute a program depending on the exit success/failure of the previous program. Is there any way for me to execute a program2 immediately after program1, if program1 exits successfully, without testing the LASTEXITCODE variable? I tried the -band and -and operators to no avail (though I had a feeling they wouldn't work anyway), and the best substitute is a combination of a semicolon and an if statement. When it comes to building a package somewhat automatically from source on Linux, the && operator can't be beaten:

      # Configure a package, compile it and install it
      ./configure && make && sudo make install

    PowerShell would require me to do the following, assuming I could actually use the same build system in PowerShell:

      # Configure a package, compile it and install it
      .\configure ; if ($LASTEXITCODE -eq 0) { make ; if ($LASTEXITCODE -eq 0) { sudo make install } }

    Sure, I could use multiple lines, save it in a file and execute the script, but the idea is for it to be concise (save keystrokes). Perhaps it's just a difference between PowerShell and Bash (and even the built-in Windows command prompt, which supports the && operator) that I'll need to adjust to, but if there's a cleaner way to do it, I'd love to know.
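
    For what it's worth, later PowerShell (7.0 and up) added the && and || pipeline chain operators natively. On older versions, a small helper can approximate the bash chain; a hedged sketch (the helper name is made up):

      function Invoke-Chain {
          param([scriptblock[]] $Steps)
          foreach ($step in $Steps) {
              & $step
              if ($LASTEXITCODE -ne 0) { return }   # stop at the first failure, like &&
          }
      }

      # Usage:
      Invoke-Chain { .\configure }, { make }, { sudo make install }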

    Read the article

  • PHP shell_exec() - Run directly, or perform a cron (bash/php) and include MySQL layer?

    - by Jimbo
    Sorry if the title is vague - I wasn't quite sure how to word it!

    What I'm doing: I'm running a Linux command to output data into a variable, parsing the data, and outputting it as an array. The array values are displayed on a page using PHP, and that PHP output is requested via AJAX every 10 seconds - so, in effect, the data is retrieved and displayed/updated every 10 seconds. There could be as many as 10,000 characters parsed on every request, although the figure is usually much lower.

    Alternative idea: I want to know if there is a better* method of retrieving this data every 10 seconds, as multiple users (<10) will have this command executed automatically for them. A cronjob running on the server could execute either bash or PHP (which is faster?) to grab the data and store it in a MySQL database. Any AJAX calls to the PHP output would then return values from the MySQL database rather than executing server code directly every 10 seconds.

    Why? I know there are security concerns with running execs directly from PHP, and (I hope this isn't micro-optimisation) I'm worried about CPU usage on the server. The server runs a Sempron processor. Yes, they do still exist. Having the command execute only when a user is on the page (idea #1) means the server isn't running code that doesn't need to run - but is that slow and insecure? Just in case the type of Linux command is of assistance in determining its efficiency:

      shell_exec("transmission-remote $host:$port --auth $username:$password -l");

    I'm hoping there are real differences in efficiency and security between the two methods outlined above, and that this isn't just micro-micro-optimisation. If there are better* alternative methods, I'd love to learn about them! :)
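
    One wrinkle with the cron idea: cron's granularity is one minute, so a 10-second refresh needs a polling loop rather than a crontab entry. A hedged sketch of such a poller in bash (paths and credential variables are hypothetical), which also keeps the transmission credentials out of web-facing code:

      #!/bin/bash
      # Refresh a cached copy every 10 seconds; PHP then just reads the cache.
      while true; do
          transmission-remote "$HOST:$PORT" --auth "$USER:$PASS" -l > /var/cache/tr.list.tmp &&
              mv /var/cache/tr.list.tmp /var/cache/tr.list    # swap the file in atomically
          sleep 10
      done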

    Read the article

  • Stored procedure performance randomly plummets; trivial ALTER fixes it. Why?

    - by gWiz
    I have a couple of stored procedures on SQL Server 2005 that I've noticed will suddenly take a significantly long time to complete when invoked from my ASP.NET MVC app running in an IIS6 web farm of four servers. Normal, expected completion time is less than a second; the anomalous completion time is 25-45 seconds. The problem doesn't seem to ever correct itself. However, if I ALTER the stored procedure (even if I don't change anything in it, except perhaps to add a space to the script created by the SSMS Modify command), the completion time reverts to the expected completion time.

    IIS and SQL Server are running on separate boxes, both running Windows Server 2003 R2 Enterprise Edition. SQL Server is Standard Edition. All machines have dual Xeon E5450 3GHz CPUs and 4GB RAM. SQL Server is accessed using its TCP/IP protocol over gigabit ethernet (not sure what physical medium). The problem is present from all web servers in the web farm. When I invoke the procedure from a query window in SSMS on my development machine, the procedure completes in normal time. This is strange, because I was under the impression that SSMS used the same SqlClient driver as .NET. When I point my development instance of the web app at the production database, I again get the anomalously long completion time. If my SqlCommand Timeout is too short, I get: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    Question: Why would performing ALTER on the stored procedure, without actually changing anything in it, restore the completion time to less than a second, as expected?

    Edit: To clarify, when the procedure is running slow for the app, it simultaneously runs fine in SSMS with the same parameters. The only difference I can discern is login credentials (next time I notice the behavior, I'll be checking from SSMS with the same creds). The ultimate goal is to get the procs to sustainably run at the expected speed without requiring occasional intervention.

    Resolution: I wanted to update this question in case others are experiencing the issue. Following the leads of the answers below, I was able to consistently reproduce this behavior. To test, I run sp_recompile against one of the susceptible sprocs, then initiate a website request that invokes the sproc with atypical parameters. Lastly, I initiate a website request to a page that invokes the sproc with typical parameters, and observe that the request does not complete, because of a SQL timeout on the sproc invocation. To resolve this on SQL Server 2005, I've added OPTIMIZE FOR hints to my SELECT. The sprocs that were vulnerable all have the "all-in-one" pattern described in this article. This pattern is certainly not ideal, but was a necessary trade-off given the timeframe of the project.
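
    The symptoms here (fast in SSMS, slow from the app, cured by a no-op ALTER) are the classic signature of parameter sniffing: the cached plan was compiled for atypical parameter values, and ALTER simply invalidates that plan. The OPTIMIZE FOR hint from the resolution pins the plan to representative values instead; a hedged sketch with hypothetical table, column, and parameter names:

      -- Compile the plan for a typical value, not whatever happened to be sniffed first
      SELECT OrderId, Total
      FROM dbo.Orders
      WHERE CustomerId = @CustomerId
      OPTION (OPTIMIZE FOR (@CustomerId = 42));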

    Read the article

  • creating a hierarchy of terminals or workspaces

    - by intuited
    <rant> This question occurred to me ('occurred' meaning 'whispered seductively in my ear for the 100th time') while using GNU screen, so I'll make that my example. However, this is a much more general question about user interfaces and what I perceive as a missing feature in every implementation I've yet seen. I'm wondering if there is some way to create a hierarchy/tree of terminals in a screen session. E.g. I'd like to have something like:

      1 bash
      1.1 bash
      1.2 bash
      2 bash
      3 bash
      3.1 bash
      3.1.1 bash
      3.1.2 bash

    It would be good if the terminals could be labelled, instead of having to be navigated to via some arrangement that I suspect doesn't exist. Then you could jump to one using e.g. ^A:goto happydays or ^A:goto dykstra.angry.

    So, to generalize that: Firefox, Chrome, Internet Explorer, gnome-terminal, roxterm, konsole, yakuake, OpenOffice, Microsoft Office, Mr. Snuffaluppagus's Funtime Carousel™, and Your Mom's Jam Browser™ all offer the ability to create a flat set of tabs containing documents of an identical nature: web pages, terminals, documents, fun rideable animals, and jams. GNU screen implements the same functionality without using tabs. Linux and OS X window managers provide the ability to organize windows into an array of workspaces, which amounts to, again, the same deal. Over the past few years this has become a more or less ubiquitous concept which has been righteously welcomed into the far reaches of the computer interface funfest. Heavy users of these systems quickly encounter a problem with it: the set of entities is flat. In the case of workspaces, an option may be available to create a 2D array. However, none of these applications furnish their users with the ability to create hierarchies, similar to filesystem directory structures, containing instances of their particular contained type. I for one am consistently bothered by this, and am wondering if the community can offer some wisdom as to why this has not happened in any of the foremost collections of computational functionality our culture has yet produced. Or if perhaps it has, and I'm just an ignorant savage.

    I'd like to be able not only to group things into a tree structure, but also to create references (aka symbolic links, aka pointers) from one part of the structure to another, as well as apply properties (e.g. default directory, colorscheme, ...) recursively downward from a given node. I see no reason why we shouldn't be able to save these structures as known sessions, and apply tags to particular instances. Then you could sort through them by tag, find them by name, or just use the arrow keys (with an appropriate modifier) to move left or right and in or out of a given level. Another key combo would serve to create a branch in the place of the current terminal/webpage/lifelike statue/spreadsheet/spreadsheet sheet/presentation/jam and move that entity into the new branch, then create a fresh one as a sibling to it: a second leaf node within the same branch node. They would get along well.

    I find it a bit astonishing that this hasn't happened yet, and the only reason I can venture as a guess is that the creators of these fine systems do not consider such functionality to be useful to a significant portion of their userbase. I posit that the probability that such an assumption would be correct is pretty low. On the other hand, given the relative ease with which such structures can be implemented using modern libraries/languages, it doesn't seem likely that difficulty of implementation would be a major roadblock. If it could be done in 1972 or whenever within the constraints of a filesystem driver, it should be relatively painless to implement in 2010 in a full-blown application. Given that all of these systems are capable of maintaining a set of equivalent entities, it seems unlikely that a major infrastructure overhaul would be necessary to enable a navigable hierarchy of them. </rant>

    Mostly I'm just looking to start up a discussion and/or brainstorming on this topic. Any ideas, examples, criticism, or analysis are quite welcome.

    * Mr. Snuffaluppagus's Funtime Carousel is a registered trademark of Children's Television Workshop Inc.
    * Your Mom's Jam Browser is a registered trademark of Your Mom Inc.

    Read the article

  • Does anyone know a way to interact with HP OV(NNM) with python, perl or bash?

    - by marc.riera
    Does anyone know if there is any API/library out there for accessing the NNM database from Perl or Python? We have NNM 7.53, which gives us access to its data via its Java-based applet over HTTP, and of course through the 'ovw' GUI interface. I've tried to use Mechanize and selenium2 (webdriver) to automate some checks. The purpose is to integrate it with our other monitoring services on our "general master console". Many thanks. Marc

    Read the article

  • Could you share your emacs dot-files for web development

    - by Gok Demir
    Hi, could you kindly share your emacs dot-files for web development - something that works with CSS, HTML, JavaScript, PHP and, if possible, Python/Django? I really need a complete setup. I looked at nXhtml, and it's good in parts (HTML code completion works), but it sucks at indentation, and CSS code completion does not work - it says the tag table is empty in most cases. I really need something where code completion works out of the box, with git integration, pretty indentation, and support for multi-mode editing of mixed HTML, CSS, JavaScript and PHP code.

    Read the article

  • Is there a way I can use $PATH as defined by my bash profile?

    - by Adam Backstrom
    I spend most of my day ssh'd into servers. I have a series of aliases/functions/scripts that allow me to type p hostname at the terminal and execute GNU screen(1) on the remote side, using the following command:

      exec ssh hostname -t 'screen -RD'

    I've only recently noticed that ssh -t does not get my custom $PATH. Here's some terminal output:

      adam@workstation:~:0$ ssh server 'echo $PATH'
      /home/adam/bin:/usr/local/bin:/bin:/usr/bin:/opt/git/bin:/opt/git/libexec/git-core
      adam@workstation:~:0$ ssh server -t 'echo $PATH'
      /usr/local/bin:/bin:/usr/bin
      Connection to uranus.plymouth.edu closed.

    My biggest problem is that my custom aliases only try to execute screen, since I can't guarantee an absolute path, and my $PATH is structured so the shell should find the correct one. If my $PATH settings aren't honored, my scripts don't work. Is there a way I can use $PATH as defined by my .bashrc/.bash_profile? I believe PermitUserEnvironment is disabled.
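
    A likely explanation: when ssh runs a remote command, bash starts as neither an interactive nor a login shell, so ~/.bash_profile is not read (whether ~/.bashrc is read depends on how the distribution built bash). A common workaround, sketched, assuming bash on the remote side - force a login shell so the profile, and the $PATH it sets, is sourced before screen is looked up:

      # 'bash -l' sources the profile; -c then runs the command with your PATH
      exec ssh hostname -t 'bash -lc "exec screen -RD"'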

    Read the article

< Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >