Search Results

Search found 6493 results on 260 pages for 'git bash'.

Page 20/260

  • Is git-annex appropriate for my scenario?

    - by Karel Bílek
    I have a git repository with source code I want to put in the open on GitHub. However, I also have gigabytes of data that I don't want in the open or in the repo - they are big, they are proprietary, they are "burdened" with copyrights, and so on. However, those files are also logically "part of the same project", and I wish to have some control over their history (basically, what git already does). Right now, I have them in the directory "data" in the repository, I have that directory ignored, and I have given up on getting them into git. However, I have read about git-annex and it seems it can do what I want. So, I have two questions. Is git-annex appropriate for me? How exactly should I use git-annex for my scenario - meaning, which commands should I use, and how? I have tried to read the official documentation, but it talks about use cases that I don't care about. I have the data on one computer only and I don't think I will be moving it soon (it's nice to have the possibility, but it's not why I want to use git-annex). Also, the documentation is pretty hard to read.
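
    A minimal sketch of how git-annex might be used for this, assuming the "data" directory is removed from .gitignore first; the description string and commit message below are placeholders:

        # run inside the existing repository
        git annex init "work laptop"     # enable git-annex in this repo
        git annex add data/              # checksum the large files and replace them with symlinks
        git commit -m "Track large data files with git-annex"

    Only the symlinks and annex metadata enter normal git history, so pushing the source tree to GitHub would not publish the data itself; the annexed content stays under .git/annex on the machine that holds it.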

    Read the article

  • What does this svn2git error mean?

    - by Hisham
    I am trying to import my repository from svn to git using svn2git, but it seems like it's failing when it hits a branch. What's the problem?

        Found possible branch point: https://s.aaa.com/repo/trunk/project => https://s.aaa.com/repo/branches/project-beta1.0, 128
        Use of uninitialized value in substitution (s///) at /opt/local/libexec/git-core/git-svn line 1728.
        Use of uninitialized value in concatenation (.) or string at /opt/local/libexec/git-core/git-svn line 1728.
        refs/remotes/trunk: 'https://s.aaa.com/repo' not found in ''
        Running command: git branch -l --no-color
        * master
        Running command: git branch -r --no-color
          trunk
        Running command: git checkout trunk
        Note: checking out 'trunk'.
        You are in 'detached HEAD' state. You can look around, make experimental changes
        and commit them, and you can discard any commits you make in this state without
        impacting any branches by performing another checkout.
        If you want to create a new branch to retain commits you create, you may do so
        (now or later) by using -b with the checkout command again. Example:
          git checkout -b new_branch_name
        HEAD is now at f4e6268... Changing svn repository in cap files
        Running command: git branch -D master
        Deleted branch master (was f4e6268).
        Running command: git checkout -f -b master
        Switched to a new branch 'master'
        Running command: git gc
        Counting objects: 450, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (368/368), done.
        Writing objects: 100% (450/450), done.
        Total 450 (delta 63), reused 450 (delta 63)
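
    If the underlying issue is svn2git not detecting the repository layout on its own, one thing that is sometimes tried is spelling the layout out explicitly. A sketch only: the URL comes from the error output above, while the --trunk/--branches/--tags values are assumptions about the actual layout:

        svn2git https://s.aaa.com/repo --trunk trunk --branches branches --tags tags --verbose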

    Read the article

  • Differences between fish and bash to pass commandline arguments to alias functions?

    - by NES
    From the answers to my other question here I learned how to pass command-line arguments to an alias function in Bash. In fish I can define an alias by editing config.fish in the ~/.config/fish directory and adding a line like

        alias lsp='ls -ah --color=always | less -R;'

    and it works perfectly. This should be the equivalent of editing ~/.bash_aliases in bash. But when I try to set up an alias function that passes arguments, like

        alias lsp='_(){ ls -ah --color=always $* | less -R; }; _'

    it doesn't work in fish. Are there any differences between fish and bash in the way aliases pass command-line arguments that prevent this second alias from working in fish?
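
    For comparison, a sketch of how the same thing is usually written natively in fish, which uses functions rather than bash-style alias tricks; the function name simply mirrors the alias above:

        # in ~/.config/fish/config.fish (or saved with "funcsave lsp")
        function lsp
            ls -ah --color=always $argv | less -R
        end

    In fish, $argv plays the role that $* / $@ play in bash, so no inner wrapper function is needed.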

    Read the article

  • Bash History not containing all history and blank after reboot, how to resolve?

    - by TryTryAgain
    I've recently upgraded from 13.04 to 13.10 and realized my terminal bash history is not surviving reboots. cat ~/.bash_history gave me a permission denied error. I, possibly unnecessarily or wrongly, issued chmod 777 ~/.bash_history to see if that would help... and although I could then cat and read some contents, it contained very little actual history. I also tried sudo rm ~/.bash_history after reading "bash history not being preserved". Strangely, after doing that, I typed a few test commands (ls, ls -lah ...) and upon pressing the up arrow to go back through history it contained those two commands, as well as odd history from some far-off time in the past - but very few results, and not the hundreds of commands I typed earlier in the day. Is there a new place bash history is stored? How can removing ~/.bash_history not get rid of the commands that are somehow lingering? I am not certain, but I believe my root bash history is acting normally; it's my user bash history that is causing me trouble. Any help and guidance in tracking down and solving this problem is appreciated.
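
    A few hedged first checks for this kind of problem - these assume the history file is the default ~/.bash_history and that the earlier chmod/rm left it owned by root or pointing somewhere unexpected:

        # who owns the history file, and where does bash think it is?
        ls -l ~/.bash_history
        echo "$HISTFILE" "$HISTSIZE" "$HISTFILESIZE"

        # if a sudo command recreated it as root, give it back to the user
        sudo chown "$USER":"$USER" ~/.bash_history
        chmod 600 ~/.bash_history

        # append this session's history now instead of waiting for a clean logout
        history -a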

    Read the article

  • Gerrit, git and reviewing whole branch

    - by liori
    I'm now learning Gerrit (which is the first code review tool I have used). Gerrit requires a reviewed change to consist of a single commit. My feature branch has about 10 commits. The Gerrit-preferred way is to squash those 10 commits into a single one. However, this way, when the commit is merged into the target branch, the internal history of that feature branch is lost. For example, I won't be able to use git-bisect to bisect into those commits. Am I right? I am a little bit worried about this state of things. What is the rationale for this choice? Is there any way of doing this in Gerrit without losing history?
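
    For reference, the squash step itself is usually done with an interactive rebase; a sketch, assuming the feature branch is checked out, was branched from master, and that the Gerrit remote and target branch are named origin and master:

        # mark all but the first commit as "squash" or "fixup" in the editor that opens
        git rebase -i master
        # then push the single resulting commit to Gerrit's review ref
        git push origin HEAD:refs/for/master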

    Read the article

  • Is there a difference between "." and "source" in bash, after all?

    - by ysap
    I was looking for the difference between the "." and "source" builtin commands, and a few sources (e.g., this discussion, and the bash manpage) suggest that they are just the same. However, following a problem with environment variables, I conducted a test. I created a file testenv.sh that contains:

        #!/bin/bash
        echo $MY_VAR

    In the command prompt, I performed the following:

        > chmod +x testenv.sh
        > MY_VAR=12345
        > ./testenv.sh

        > source testenv.sh
        12345
        > MY_VAR=12345 ./testenv.sh
        12345

    [note that the 1st form returned an empty string] So, this little experiment suggests that there is a difference after all, where for the "source" command the child environment inherits all the variables from the parent one, but for the "." it does not. Am I missing something, or is this an undocumented/deprecated feature of bash? [ GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu) ]
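
    One experiment that usually separates the two effects, sketched with the same test file as above: ./testenv.sh starts a child process, which only sees exported variables, while "." and "source" both run the file in the current shell. Marking the variable as exported makes that visible:

        > MY_VAR=12345
        > export MY_VAR
        > ./testenv.sh        # the child process now inherits the exported variable
        12345
        > . testenv.sh        # "." behaves exactly like "source" here
        12345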

    Read the article

  • Why some user functions don't get recognised by bash?

    - by strapakowsky
    I can define a function like:

        myfunction () { ls -R "$1" ; }

    And then myfunction . just works. But if I do

        echo "myfunction ." | sh
        echo "myfunction ." | bash

    the messages are:

        sh: myfunction: not found
        bash: line 1: myfunction: command not found

    Why? And how can I call a function that comes from a string, if not by piping it to sh or bash? I know there is the command source, but I am confused about when I should use source and when sh or bash. Also, I cannot pipe through source. To add to the confusion, there is the command . that seems to have nothing to do with the "." that means "current directory".
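
    A hedged sketch of why this happens and one way around it: piping to sh or bash starts a new shell process that has never seen the function definition. With bash, the function can be exported into the child's environment first (export -f is bash-specific and will not help dash/sh):

        myfunction () { ls -R "$1" ; }
        export -f myfunction                 # make the definition visible to child bash shells
        echo 'myfunction .' | bash           # the child shell now finds it
        # alternatively, run the string in the *current* shell instead of a child:
        eval "myfunction ."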

    Read the article

  • Useful versioning scheme for a git project?

    - by Oliver Weiler
    I have a small GitHub project to which I need to add an option that outputs some version number on the command line. The problem is I have no idea how to "compute" the version number. Is this just an arbitrary choice? Should I just start at 1.0 (probably creating a tag or something), and bump the number after the "." for fixes? I know this question is a bit vague... I have just never had to deal with this, and want to use some sane versioning scheme. EDIT: I'm also interested in how to update this version number automatically, maybe using something like a git hook.
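
    A common pattern (a sketch, not the only sane scheme) is to tag releases and let git derive the version string from the most recent tag, so the number never has to be maintained by hand; the tag name and sample output are illustrative:

        git tag -a v1.0 -m "First release"   # annotated tag marks the release
        git describe --tags                  # prints "v1.0" on the tagged commit
        # a few commits later it prints something like "v1.0-3-g2414721",
        # which a build script or hook can bake into the program as its version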

    Read the article

  • Managing multiple people working on a project with GIT

    - by badZoke
    I'm very new to GIT/GitHub (as new as starting yesterday). I would like to know what is the best way to manage multiple people working on the same project with Github. Currently I'm managing one project with four developers. How do I go about the workflow and making sure everything is in sync? (Note: All developers will have one universal account.) Does each developer need to be on a different branch? Will I be able to handle 2 people working on the same file? Please post a detailed answer, I'm not a shy reader. I need to understand this well.
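
    A sketch of the kind of feature-branch flow that is often used for this; the names are placeholders, and the single shared account makes it all the more important that each clone sets its own committer identity:

        git config user.name "Jane Developer"    # per-clone identity, even with a shared account
        git checkout -b feature/login-form       # each piece of work on its own branch
        # ...edit, commit...
        git push origin feature/login-form
        # merge into master once reviewed; git merges per line, so two people can
        # safely edit the same file as long as they don't change the same lines
        git checkout master && git pull && git merge feature/login-form && git push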

    Read the article

  • Git push current branch to a remote with Heroku

    - by cmaughan
    I'm trying to create a staging branch on Heroku, but there's something I don't quite get. Assuming I've already created a Heroku app and set up the remote to point to staging-remote, if I do:

        git checkout -b staging staging-remote/master

    I get a local branch called 'staging' which tracks staging-remote/master - or that's what I thought... But:

        git remote show staging-remote

    gives me this:

        remote staging
          Fetch URL: [email protected]:myappname.git
          Push URL: [email protected]:myappname.git
          HEAD branch: master
          Remote branch:
            master tracked
          Local branch configured for 'git pull':
            staging-remote merges with remote master
          Local ref configured for 'git push':
            master pushes to master (up to date)

    As you can see, the pull looks reasonable, but the default push does not. It implies that if I do:

        git push staging-remote

    I'm going to push my local master branch up to the staging branch. But that's not what I want... Basically, I want to merge updates into my staging branch, then easily push it to Heroku without having to specify the branch like so:

        git push staging-remote mybranch:master

    The above isn't hard to do, but I want to avoid accidentally doing the previous push and pushing the wrong branch... This is doubly important for the production branch I'd like to create! I've tried messing with git config, but haven't figured out how to get this right yet...
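
    One way this is often handled, sketched with the remote and branch names taken from the question: give the remote an explicit default push refspec, so a bare git push on that remote always sends the local staging branch to Heroku's master:

        git config remote.staging-remote.push refs/heads/staging:refs/heads/master
        # from now on, regardless of which branch is checked out:
        git push staging-remote        # pushes local staging -> remote master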

    Read the article

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository I kept on a server and checked out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing. Now I have a bunch of git repositories for various projects, several of which are on GitHub. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines or memory-sticks/harddrives as backup. The problem is that it's a private repository, and git doesn't allow checking out just a specific folder (which I could push to GitHub as a separate project, while having the changes appear in both the master repo and the sub-repos). I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories, and don't really contain the actual code, so they're useless for backup). Currently I have a folder of git repos (for example, ~/code_projects/proj1/.git/, ~/code_projects/proj2/.git/), and after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos), then do git push backupdrive1, git push mymemorystick, etc. So, the question: how do you organise your personal code and projects with git repositories, and keep them synced and backed up?

    Read the article

  • What are the steps to setup git-http-backend w/ Apache on Windows?

    - by Jordan
    I would like to set up a Git server using the "Smart-HTTP" approach. However, I'm having difficulties getting it to work on Windows, and I'm new to Apache. My httpd.conf, in part:

        SetEnv GIT_PROJECT_ROOT "d:/repositories"
        SetEnv GIT_HTTP_EXPORT_ALL
        ScriptAlias /git/ "C:/Program Files/Git/libexec/git-core/git-http-backend.exe"

        <VirtualHost 172.16.0.5:80>
            <LocationMatch "^/git/.*/git-receive-pack$">
                AuthType Basic
                AuthName "Git Access"
                Require group committers
            </LocationMatch>
        </VirtualHost>

    Could someone provide the steps to set up a Git server using git-http-backend on Windows?
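
    For what it's worth, a Require group directive normally also needs user and group files to authenticate against. A sketch of what the auth section often looks like once completed - the file paths and user name are assumptions, not a verified Windows setup:

        <LocationMatch "^/git/.*/git-receive-pack$">
            AuthType Basic
            AuthName "Git Access"
            AuthUserFile  "d:/repositories/conf/.htpasswd"   # created with: htpasswd -c .htpasswd jordan
            AuthGroupFile "d:/repositories/conf/.htgroup"    # contains e.g.: committers: jordan
            Require group committers
        </LocationMatch>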

    Read the article

  • Delete until previous punctuation mark in Bash

    - by hekevintran
    In Bash, Ctrl + W will erase the last word. Bash considers words to be delimited by spaces. This means that if the cursor is at the end of the string "cd /dir1/dir2/dir3" and you hit Ctrl + W you will be left with "cd ". Is there a Bash shortcut (custom defined is okay) that will leave me with "cd /dir1/dir2/"?
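
    A sketch of one readline-based way to get close to this, assuming your readline is recent enough to provide unix-filename-rubout (a stock function that kills back to the previous slash or whitespace); it is bound here to Alt+W, an arbitrary choice, to avoid fighting the terminal's ownership of Ctrl+W:

        # add to ~/.inputrc, then reload with:  bind -f ~/.inputrc
        "\ew": unix-filename-rubout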

    Read the article

  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook that runs on a central git repository set up with gitolite to trigger a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it is run. I am trying to track down the source of the error, but have not had any luck. Running the same commands manually does not produce an error. The error changes depending on what was done in the commit that is being pushed to the central repository. For instance, if 'git rm' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Removed: command not found", and if 'git add' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Merge: command not found". In either case the 'git pull' run on the staging server works correctly despite the error message. Here is the post-receive script:

        #!/bin/bash
        #
        # This script is triggered by a push to the local git repository. It will
        # ssh into a remote server and perform a git pull.
        #
        # The SSH_USER must be able to log into the remote server with a
        # passphrase-less SSH key *AND* be able to do a git pull without a passphrase.
        #
        # The command to actually perform the pull request on the remote server comes
        # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered
        # by the ssh login.

        SSH_USER="remoteuser"
        REMOTE_HOST="staging.server.com"

        `ssh $SSH_USER@$REMOTE_HOST` # This is line 16

        echo "Done!"

    The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file and is:

        command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key)

    This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo:

        ben@tamarack:~/thejibe/testing/web$ git rm ./testing
        rm 'testing'
        ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file"
        [master bb96e13] Remove testing file
         1 files changed, 0 insertions(+), 5 deletions(-)
         delete mode 100644 testing
        ben@tamarack:~/thejibe/testing/web$ git push
        Counting objects: 3, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (2/2), done.
        Writing objects: 100% (2/2), 221 bytes, done.
        Total 2 (delta 1), reused 0 (delta 0)
        remote: From [email protected]:testing
        remote:    aa72ad9..bb96e13  master -> origin/master
        remote: hooks/post-receive: line 16: Removed: command not found   # The error msg
        remote: Done!
        To [email protected]:testing
           aa72ad9..bb96e13  master -> master
        ben@tamarack:~/thejibe/testing/web$

    As you can see, the post-receive script gets to the echo "Done!" line, and when I look on the staging server the git pull has been run successfully, but there's still that nagging error message. Any suggestions on where to look for the source of the error message would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.
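
    A guess at the likely culprit, shown as a sketch: the backticks around the ssh call perform command substitution, so whatever the remote git pull prints ("Removed ..." or "Merge ...") is handed back to bash as a command name on line 16. Dropping the backticks would avoid that:

        # before (line 16): the output of the remote git pull gets re-executed locally
        `ssh $SSH_USER@$REMOTE_HOST`

        # after: just run ssh and let its output go to the hook's stdout
        ssh "$SSH_USER@$REMOTE_HOST"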

    Read the article

  • Simple bash scripting

    - by Richard Cotton
    I'm trying to get to grips with bash scripting via Cygwin. My script is about as simple as it gets. I change the directory to the root of my C drive, and print the new location:

        #!/usr/bin/bash
        cd /cygdrive/c
        pwd

    This is saved in the file chdir.sh in my home directory. I then call ./chdir.sh from the bash prompt. This results in the error:

        : No such file or directorygdrive/c
        /cygdrive/c/Documents and Settings/rcotton

    I definitely have a C drive, and the command cd /cygdrive/c works when I call it directly from the bash prompt. I realise that this problem is likely stupidly simple; please can you tell me what I'm doing wrong.
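
    The mangled error text ("directorygdrive/c") is the classic symptom of Windows CRLF line endings: the carriage return becomes part of the directory name and makes the message overwrite itself on screen. A sketch of the usual check and fix, assuming dos2unix is installed in Cygwin:

        # show line endings; CRLF files are reported "with CRLF line terminators"
        file chdir.sh
        dos2unix chdir.sh      # convert to Unix LF endings, then re-run ./chdir.sh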

    Read the article

  • Accessing the output of a Bash pipe with 'read'

    - by Karthik
    I'm trying to pipe some data from a Bash pipe into a Bash variable using the read command, like this:

        $ echo "Alexander the Grape" | read quot
        $ echo $quot

        $

    But quot is empty. Some Googling revealed that this is not a bug; it's an intended feature of Bash. (Section E5 in the FAQ.) But when I tried the same thing in zsh, it worked. (Ditto for ksh.) Is there any way to make this work in Bash? I really don't want to have to type:

        $ quot=$(echo "Alexander the Grape")

    Especially for long commands.
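
    Two ways this is commonly worked around in bash, sketched below; the lastpipe route needs bash 4.2 or later:

        # 1) feed the text in with a here-string instead of a pipe
        read quot <<< "Alexander the Grape"
        echo "$quot"

        # 2) let the last pipeline stage run in the current shell (bash >= 4.2)
        shopt -s lastpipe
        set +m                     # job control must be off for lastpipe in interactive shells
        echo "Alexander the Grape" | read quot
        echo "$quot"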

    Read the article

  • Call 'script' command and exit it from within a bash script

    - by William Jamieson
    I'm using the linux 'script' command http://www.linuxcommand.org/man_pages/script1.html to log all input and output in an interactive bash script. At the moment I have to call the script command, then run my bash script, then exit. I want to run the script and exit commands from within the actual bash script itself. How can I do this? I've tried script -a but that doesn't work for interactive scripts. Any assistance would be greatly appreciated.
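
    A sketch of a pattern that is sometimes used for this: the script re-executes itself under script(1) exactly once, guarded by an environment variable so it doesn't recurse; the log path and variable name are placeholders, and "$0 $*" loses quoting on arguments that contain spaces:

        #!/bin/bash
        # re-run this script under "script" so the whole interactive session is logged
        if [ -z "$INSIDE_SCRIPT" ]; then
            INSIDE_SCRIPT=1 exec script -q -a -c "$0 $*" /tmp/session.log
        fi

        # ...the normal interactive script body runs here, fully logged...
        echo "Running inside script(1); log goes to /tmp/session.log"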

    Read the article

  • Bash script 'while read' loop causes 'broken pipe' error when run with GNU Parallel

    - by Joe White
    According to the GNU Parallel mailing list this is not a GNU Parallel-specific problem. They suggested that I post my problem here. The error I'm getting is a "broken pipe" error, but I feel I should first explain the context of my problem and what causes this error. It happens when trying to use any bash script containing a 'while read' loop in GNU Parallel. I have a basic bash script like this:

        #!/bin/bash
        # linkcheck.sh
        while read domain
        do
            host "$domain"
        done

    Assume that I want to pipe in a large list (250 MB, say):

        cat urllist | ./linkcheck.sh

    Running the host command on 250 MB worth of URLs is rather slow. To speed things up I want to break the input into chunks before piping it, and then run multiple jobs in parallel. GNU Parallel is capable of doing this:

        cat urllist | parallel --pipe -j0 parallel ./linkcheck.sh {}

    {} is substituted by the contents of urllist line-by-line. Assume that my system's default setup is capable of running around 500 jobs per instance of parallel. To get round this limitation we can parallelize Parallel itself:

        cat urllist | parallel -j10 --pipe parallel -j0 ./linkcheck.sh {}

    This will run around 5000 jobs. It will also, sadly, cause the error "broken pipe" (bash FAQ). Yet the script starts to work if I remove the while read loop and take input directly from whatever is fed into {}, e.g.:

        #!/bin/bash
        # linkchecker.sh
        domain="$1"
        host "$1"

    Why will it not work with a while read loop? Is it safe to just turn off the SIGPIPE signal to stop the "broken pipe" message, or will that have side effects such as data corruption? Thanks for reading.

    Read the article

  • How to pass bash script arguments to a subshell

    - by Ralf Holly
    I have a wrapper script that does some work and then passes the original parameters on to another tool:

        #!/bin/bash
        # ...
        other_tool -a -b "$@"

    This works fine, unless the "other tool" is run in a subshell:

        #!/bin/bash
        # ...
        bash -c "other_tool -a -b $@"

    If I call my wrapper script like this:

        wrapper.sh -x "blah blup"

    then only the first original argument (-x) is handed to other_tool. In reality, I do not create a subshell, but pass the original arguments to a shell on an Android phone, which shouldn't make any difference:

        #!/bin/bash
        # ...
        adb sh -c "other_tool -a -b $@"
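
    A sketch of the usual fix for the plain-subshell case: pass the arguments as positional parameters of the inner bash instead of splicing them into the command string (the word after the string becomes the inner shell's $0):

        #!/bin/bash
        # wrapper.sh - forwards all original arguments to the inner shell intact
        bash -c 'other_tool -a -b "$@"' other_tool "$@"

    Whether the same trick survives the trip through adb depends on how adb quotes its arguments, so the Android case may still need additional escaping.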

    Read the article

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Question: what's the best way to handle sub-projects in eclipse when using git for SCM? Here's the situation. I have a few git projects with a directory structure layed out more or less like this: simpleproj app www admin demo lib model orm view model user blah ... storeproj app www about mobile fbapp lib model orm view model user message cart product merchant Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward eclipse + egit, because some of our windows guys not used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules. So, my solution has been a rather crude one involving symlinks... lets say my directory structure on my dev box is generally layed out like this: ~/projects/ bigproj .git app lib model (- ~/lib/model/src/) orm (- ~/lib/orm/src/) neatproj .git app lib view (- ~/lib/view/src/) oldproj .git app lib orm (- ~/lib/orm/src/) ~/lib/ model .git src README.md orm .git src COPYING view .git src ...the symlinks link to a subdirectory of the directory containing the git repo, so eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" -- but only when staged from within eclipse. After committing from eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable also, probably more so than keeping a separate history of the libs for each project... but eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing. So this brings me to my question. I'd like to know how to either: get eclipse+egit to see the symlinks as symlinks if git will somehow handle them properly*, or get the CLI git to treat them as non-symlinks. Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D Note: tried to tag this as git-submodules, but was not allowed :( * should I make them relative or absolute? Either way it's a mess. Also will symlinks will work on windows? i know there's something similar but you need a 3rd party tool to manage them AFAIK, i doubt these would translate well.

    Read the article

  • Strange difference in bash behavior across systems

    - by pinkie_d_pie_0228
    I have two systems, an Ubuntu computer and an Android tablet. I have built and configured bash for Android to be used in adb, so it's the same version as my Ubuntu bash, and they use mostly the same bashrc and configuration, and the exact same options set by shopt. However, there is a slight difference: the Android bash behaves as I expect when I try to tab-complete something containing a variable, but the Ubuntu bash doesn't.

        # Android
        ls $HOME/loc<tab>  =>  ls $HOME/local   # As expected

    Basically, the variable is taken into account when completing. But then:

        # Ubuntu
        ls $HOME/loc<tab>  =>  ls \$HOME/loc    # Undesired behavior

    The list of options is as follows, and is the same in both builds of bash:

        autocd:checkwinsize:cmdhist:expand_aliases:extglob:extquote:force_fignore:histappend:interactive_comments:progcomp:promptvars:sourcepath

    What can be making the Ubuntu version escape the $ instead of using it for completion as in the Android build? What can I do to make both work the same way? Any help will be greatly appreciated.
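
    One thing that might be worth checking, offered only as a guess: some bash releases changed how $ is quoted during completion, and newer ones expose a shopt switch for it. If the Ubuntu build has the complete_fullquote option, unsetting it may restore expansion-style behaviour; if the option doesn't exist there, the difference likely comes from the bash/readline point release rather than from configuration:

        shopt | grep complete_fullquote   # see whether this build has the option at all
        shopt -u complete_fullquote       # if present, stop quoting $ during completion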

    Read the article

  • How to interrupt stuck bash tab completion?

    - by codeape
    Case: A Windows share mounted using samba over a flaky VPN connection (sometimes very slow, sometimes it drops) When doing tab-completion on filenames, my bash shell can freeze up if the VPN is slow or dropped when I am attempting the tab completion. Example: $ cp myfile.zip /mnt/winbox-c/Progr<tab> key pressed here Is there a bash key I can press to get bash out of its hung state when something like this happens?

    Read the article

  • How to make BASH try and autocomplete on Enter

    - by swatso33
    I've noticed that for many of the commands I use in bash I have actually learned how many letters of the command I must type before I can press [TAB] to have bash successfully autocomplete the command. For example, when opening Chromium I don't usually type the whole command but instead type $ chrom[TAB][ENTER], and bash successfully autocompletes the command to chromium before I hit the [ENTER] key. Is there a way to make autocomplete work without having to hit [TAB]? My general thinking is that if I type $ chrom[ENTER], bash could check and see that chrom isn't a valid command, but it would make sense to autocomplete it to chromium since that is the only command that starts with chrom.

    Read the article

  • Sourcing a script file in bash before starting an executable

    - by abigagli
    Hi, I'm trying to write a bash script that "wraps" whatever the user wants to invoke (and its parameters), sourcing a fixed file just before actually invoking it. To clarify: I have a "ConfigureMyEnvironment.bash" script that must be sourced before starting certain executables, so I'd like to have a "LaunchInMyEnvironment.bash" script that you can use as in:

        LaunchInMyEnvironment <whatever_executable_i_want_to_wrap> arg0 arg1 arg2

    I tried the following LaunchInMyEnvironment.bash:

        #!/usr/bin/bash
        launchee="$@"
        if [ -e ConfigureMyEnvironment.bash ]; then
            source ConfigureMyEnvironment.bash;
        fi
        exec "$launchee"

    where I have to use the "launchee" variable to save the $@ var, because after executing source, $@ becomes empty. Anyway, this doesn't work and fails as follows:

        myhost $ LaunchInMyEnvironment my_executable -h
        myhost $ /home/me/LaunchInMyEnvironment.bash: line 7: /home/bin/my_executable -h: No such file or directory
        myhost $ /home/me/LaunchInMyEnvironment.bash: line 7: exec: /home/bin/my_executable -h: cannot execute: No such file or directory

    That is, it seems like the "-h" parameter is being seen as part of the executable filename and not as a parameter... But it doesn't really make sense to me. I also tried to use $* instead of $@, but with no better outcome. What am I doing wrong? Andrea.
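
    A sketch of the usual fix: storing "$@" in a plain string variable collapses all the arguments into one word, which is why exec then looks for a program literally named "my_executable -h". A bash array keeps the arguments separate:

        #!/usr/bin/bash
        launchee=("$@")                      # array: one element per original argument
        if [ -e ConfigureMyEnvironment.bash ]; then
            source ConfigureMyEnvironment.bash
        fi
        exec "${launchee[@]}"                # expands back into separate words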

    Read the article
