Search Results

Search found 8567 results on 343 pages for 'commands unix'.


  • Persistent retrying resuming downloads with curl

    - by Svish
    I'm on a Mac and have a list of files I would like to download from an FTP server. The connection is a bit buggy, so I want it to retry and resume if the connection is dropped. I know I can do this with wget, but unfortunately Mac OS X doesn't come with wget. I could install it, but to do that (unless I have missed something) I need to install Xcode and MacPorts first, which I would like to avoid. curl does seem to be available, but I don't really know how it works or how to use it. If I have a list of files in a text file (one full path per line, like ftp://user:pass@server/dir/file1), how can I use curl to download all those files? And can I get curl to never give up - retry infinitely, resume downloads where it left off, and so on?
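
    A minimal sketch of one way to do this with curl alone, assuming the list lives in a file called urls.txt (a placeholder name): -C - resumes a partial download, and the until loop keeps retrying each URL until curl succeeds.

        #!/bin/bash
        # For each URL in urls.txt (placeholder name), retry until curl exits 0.
        # -C -  : resume from where a previous partial download left off
        # -O    : save under the remote file name
        while read -r url; do
            until curl -C - -O --retry 10 --retry-delay 5 "$url"; do
                echo "retrying $url" >&2
                sleep 5
            done
        done < urls.txt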

    Read the article

  • cp -u is illegal on mac. What are the alternatives?

    - by Barnabas Szabolcs
    I have a MacBook Pro running Lion, and I have tried to archive my files - that is, to copy them, overwriting the destination only if the source is newer. I tried the command cp -u source destination, but it says -u is illegal. I also did not find --update or -u in man cp. Can you please help - what can I do in this situation? [I have had the question moved over here from SO, so feel free to answer it once more. I hope this is the right way of dealing with this.]
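
    One possible substitute, sketched on the assumption that rsync is present (it ships with OS X): its --update flag skips files that are newer at the destination, which is essentially what cp -u does elsewhere.

        # -a preserves permissions and times, -u (--update) skips files that are
        # newer on the destination; the trailing slash copies the contents of source/
        rsync -au source/ destination/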

    Read the article

  • Unable to synchronize local and remote directories ("set times: Operation not permitted")

    - by Tom Auger
    I'm running into FTP errors using software like NetBeans or WinSCP: whenever I attempt to synchronize or update files from local to server, I get errors on the client saying "set times: Operation not permitted". This is clearly an issue with the way I've configured my Fedora installation. The user I'm logging in with cannot touch -t any of these files, though he IS part of a group that has r/w access on them. I do have root / sudo access to this server. What I would like to know is: a) is it likely that this problem would be solved by allowing my FTP user to "touch -t" these files, and b) how do I enable a certain user to set timestamps on files without giving them ownership of the files (certain of these files need to be owned by Apache, for instance, so I don't want to chown them)? Thanks in advance.

    Read the article

  • how to change the existing printed line in AWK

    - by manimaran
    Hi, when I execute the following line, it prints the words on separate lines:

        awk 'BEGIN { print "line one\nline two\nline three" }'

    like:

        line one
        line two
        line three

    How can I print the info on the same line, wiping out the existing line each time? For example, while executing the loop it should print 'one', then wipe out the line and print 'two', then wipe out the line and print 'three', and so on. Can you please assist me?
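
    A sketch of the usual trick: print a carriage return instead of a newline, so each new value overwrites the previous one on the same terminal line. The word list and the sleep are only there to make the effect visible.

        # \r moves the cursor back to column 0 without advancing a line;
        # padding with %-20s makes sure a shorter word wipes out a longer one.
        awk 'BEGIN {
            n = split("one two three", words, " ")
            for (i = 1; i <= n; i++) {
                printf "\r%-20s", words[i]
                fflush()            # widely supported extension; keeps output unbuffered
                system("sleep 1")   # only so the overwriting is visible
            }
            printf "\n"
        }'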

    Read the article

  • Transfer disk image to larger/smaller disk

    - by forthrin
    I need to switch the hard drive in a 2006 iMac to a new SSD. I don't have the original installation CDs. I know I can order CDs from Apple, but this costs money. Someone told me it's possible to rip an image of the old drive and transfer it to the new drive. If so, does the size of the new drive have to be exactly the same as the old? If not, my questions are: Is it possible to "stretch" the image from a 120 MB disk to a 256 MB disk (numbers are examples)? If so, what is the command line for this? Likewise, is it possible to "shrink" an image from a larger disk (e.g. 256 MB) to a smaller disk (e.g. 120 MB), provided that the actual space used on the disk does not exceed 120 MB? How do you do this on the command line?
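
    A rough sketch of the raw-clone approach, with device names as placeholders (check diskutil list first; the partition identifier in the last step is also an assumption). Cloning copies the partition layout as-is, so on a larger target the volume is grown afterwards; going to a smaller target generally means shrinking the volume before copying, or using a file-level copy instead.

        # Identify the old and new disks (the disk1/disk2 names below are placeholders).
        diskutil list

        # Raw clone of the whole old disk onto the new one.
        # Boot from another volume first if the old disk is the startup disk.
        sudo dd if=/dev/rdisk1 of=/dev/rdisk2 bs=1m

        # On a larger target, grow the cloned HFS+ volume to use the extra space
        # ("R" asks diskutil for the maximum possible size on recent versions).
        sudo diskutil resizeVolume disk2s2 R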

    Read the article

  • How can I create a cron job that runs a task every three weeks?

    - by itj
    I have a task that needs to be performed on my project schedule (3 weeks). I'm able to set up cron to do this every week, or (for example) on the 3rd week of every month - but can't find a way to do this every three weeks. I could hack the script to create temporary files (or similar) so it could work out it was the third time it has been run - but this solution smells. Can it be done in a clean way?
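
    One reasonably clean sketch: schedule the job weekly and let a small wrapper decide whether this is the right week, using weeks-since-epoch modulo 3. Which week counts as "week 0" is arbitrary, so adjust the compared value (0, 1 or 2) to line up with your project calendar; both script paths are placeholders.

        # crontab entry: try every Monday at 02:00; the wrapper decides whether to run.
        0 2 * * 1  /path/to/every-three-weeks.sh

        # /path/to/every-three-weeks.sh (placeholder name)
        #!/bin/sh
        # 604800 = seconds per week, counted from the Unix epoch.
        week=$(( $(date +%s) / 604800 ))
        [ $(( week % 3 )) -eq 0 ] && exec /path/to/task.sh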

    Read the article

  • How to copy a file to a remote server using the command line?

    - by cool_cs
    I am trying to copy a file from my desktop to my remote server using the sudo command. I am doing this from the remote machine, since I know the password for that machine and I do not have a password for my local machine.

        sudo scp donj@localhost:/Desktop/my.cnf user@remotemachine:/app/MySQL/my.cnf

    This does not work, however. I want to overwrite the my.cnf file in the MySQL directory. I tried the su command, but I do not have the password to become a superuser.
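
    A sketch of the usual two-step pattern, run from the desktop where the file actually lives (so only the remote password is needed): copy to a location the remote user can write, then use sudo on the remote side to move the file into place. The paths mirror the ones in the question.

        # Step 1, on the desktop: push the file to a world-writable spot on the server.
        scp ~/Desktop/my.cnf user@remotemachine:/tmp/my.cnf

        # Step 2: move it into place with sudo on the server
        # (-t allocates a terminal so sudo can prompt for the password).
        ssh -t user@remotemachine 'sudo mv /tmp/my.cnf /app/MySQL/my.cnf'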

    Read the article

  • Most effective way to change Linux command prompt for all users?

    - by incredimike
    I have several machines and the hostnames are really long, e.g. companyname-ux-staging-web1.companyname.com, so my prompt looks something like [root@mycompany-ux-staging-web1 ~]#. I'd like to shorten that up for all users on all machines with the least amount of work. From what I've read I have a couple of options, but they all have their drawbacks. I could change the hostname, but that would likely affect applications - not a great choice. I could also alter $PS1 at login for all users, by editing .bashrc for every existing user and editing /etc/skel/.bashrc for potential new users, but that's a lot of work across 10 machines. What's my best option, or what have I overlooked?
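
    One lower-effort sketch: drop a single file under /etc/profile.d/ on each machine instead of editing every .bashrc (the file name below is an assumption). Bash's \h already cuts the hostname at the first dot, so the snippet only trims the remaining long short-hostname down to its last dash-separated piece.

        # /etc/profile.d/short-prompt.sh  (hypothetical file name)
        # "mycompany-ux-staging-web1.companyname.com" -> "web1"
        SHORTHOST=${HOSTNAME%%.*}      # drop the domain part
        SHORTHOST=${SHORTHOST##*-}     # keep only the token after the last dash
        PS1='[\u@'"${SHORTHOST}"' \W]\$ '

    Note that /etc/profile.d/ is read for login shells; depending on how the distro's /etc/bashrc is wired, non-login interactive shells may or may not pick it up.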

    Read the article

  • Different behaviour of script locally and over ssh

    - by neorg
    I have a script, script-A, on server-A:

        #!/bin/bash -l
        echo "script-A.sh" | change-environment.sh

    When I ssh onto server-A and execute it, it works fine. However, when I run

        ssh user@server-A ./script-A.sh

    script-A executes but throws an undefined variable error in change-environment.sh. change-environment.sh runs in the C shell (I have no control over that script, so the method I have used is about the only way I can use it), but everything else is in bash. I found a similar question, "I can run a script locally, but cannot do 'ssh HOSTNAME /path/to/script.sh'", but there was no solution to the issue and it was a year old.
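
    A sketch of the usual first thing to try: force a full login shell on the remote end, so the same startup files run as when you log in interactively. Whether this fixes it depends on where the missing variable is normally set (for example, in a .cshrc or .login that only an interactive or login csh reads).

        # Run the script under a login bash explicitly, rather than relying on
        # sshd's non-interactive invocation.
        ssh user@server-A "bash -lc './script-A.sh'"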

    Read the article

  • Solaris NFS: user permissions

    - by cjavapro
    I am very new to NFS. I would like to make sure I am clear. If the NFS server shares a directory rw, and all the files in the directory have permissions 700 with user/group root/root, then on the client you would have to log in as root to see them. Is this correct? I am aware that a non-root user on the client could make a direct connection to override this (as in: don't use the mount, just use an NFS client hack). It really seems like anyone who has access to the client machine should have access to the files, and that the client machine should be ignoring permissions - only the server should handle permissions. Am I correct in my understanding? Is it normal to have this type of layout? Is there a way to ignore the permissions on the client side?

    Read the article

  • What do these abbreviations stand for?

    - by Luc M
    Some directories' meanings are easy to understand: /usr, /bin... But for the next ones I have no idea: /etc and /opt. Does opt stand for optional? Does etc stand for electronic t...... configuration (no idea for the t)? I would like to know what these abbreviations mean.

    Read the article

  • Kill all currently running cron jobs

    - by Adelphia
    For some reason my cron job scripts aren't exiting cleanly and they're backing up my server. There are currently a couple hundred processes running for one of my users. I can use the following command to kill all processes by that user, but how can I simplify this to kill only crons? pgrep -U username | while read id ; do kill -6 $id ; done It would be dangerous to run the above command as is, correct? Wouldn't that kill mysql and other important things?
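
    It would indeed be risky as written: pgrep -U selects every process owned by that user, including any shells or database sessions running under the same account. A narrower sketch, assuming the runaway jobs share a recognizable name (the script name below is a placeholder): match on the command line with -f, check the list, then send the signal with pkill using the same selection.

        # Dry run: show which processes would be hit.
        pgrep -u username -f 'my_cron_script.sh'

        # Same selection, now actually sending SIGABRT (signal 6) as in the question.
        pkill -6 -u username -f 'my_cron_script.sh'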

    Read the article

  • sudoers entries

    - by Pochi
    Is there a way to have a sudoers entry that allows executing of only a particular command, without any extra arguments? I can't seem to find a resource that describes how command matching works with sudoers. Say I want to grant sudo for /path/to/executable arg. Does an entry like the following: user ALL=(ALL) /path/to/executable arg strictly allow sudo access to a command exactly matching that? That is, it doesn't grant user sudo privileges for /path/to/executable arg arg2?
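
    My reading of sudoers command matching, sketched as a fragment (worth confirming against the sudoers man page, and edited only via visudo): when arguments are listed, the supplied command line has to match them exactly, so extra arguments are refused; empty double quotes mean the command may only be run with no arguments at all.

        # Allows exactly "sudo /path/to/executable arg" - nothing more, nothing less:
        user ALL=(ALL) /path/to/executable arg

        # Allows the command only when run with no arguments:
        user ALL=(ALL) /path/to/executable ""

        # Allows the command with any arguments (or none):
        user ALL=(ALL) /path/to/executable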

    Read the article

  • Revoke directory access for a particular user in Solaris

    - by permissiontomars
    I need to allow directory access to a particular user on my file system, and I want this user to be unable to access any other directory (initially, anyway - it may need access to some directories later). For example, I have a directory called /opt/mydir. I want my dedicated user to be able to access only this directory and nothing else, and I want all other users to be able to access this directory as normal. I'm new to Linux and its permissions; I've read a fair bit of background material but I'm a little confused. Is there any way to revoke permissions to /opt/mydir for a single dedicated user? A possibly flawed method would be to only allow access to /opt/mydir and exclude every other user. This won't work, because I want all other users to work as normal, accessing the directory. I'm working on Solaris 10. Any suggestions are appreciated.
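
    For the literal "revoke access to one directory for one user" part, a sketch using Solaris 10 ACLs (the user name is a placeholder, and this is the UFS setfacl form - on ZFS the NFSv4-style "chmod A+user:...:deny" syntax is used instead). Confining the user so it can reach only /opt/mydir and nothing else is a bigger problem (per-directory denies everywhere, or a chroot/restricted shell).

        # Add an ACL entry that gives one specific user no permissions at all
        # on the directory; other users keep the normal mode-bit behaviour.
        setfacl -m user:denieduser:--- /opt/mydir
        # (adding the first named-user entry may also require a mask entry, e.g.
        #  setfacl -m user:denieduser:---,mask:rwx /opt/mydir)

        # Verify the result.
        getfacl /opt/mydir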

    Read the article

  • What do the "ALL"s in the line " %admin ALL=(ALL) ALL " in Ubuntu's /etc/sudoers file stand for?

    - by sri
    What does each ALL mean? I understand that the whole line gives the admin group members administrative privileges, but I would like to know more about the position of the ALLs, and whether they each refer to a different set of permissions or something like that.

        $ sudo cat /etc/sudoers
        ...
        # User privilege Information
        root    ALL=(ALL) ALL
        #...
        %sudo   ALL=(ALL) ALL
        #
        #includedir /etc/sudoers.d
        # Members of the admin group may gain root privileges
        %admin  ALL=(ALL) ALL
        #

    If it matters: OS: Ubuntu 10.4
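
    An annotated sketch of how the four fields are usually read (the sudoers man page has the authoritative grammar):

        %admin    ALL  =  (ALL)    ALL
        # |        |        |       |
        # |        |        |       +-- which commands may be run: any command
        # |        |        +---------- which users the command may run as: any target
        # |        |                    user (what "sudo -u someuser ..." is checked against)
        # |        +------------------- which hosts the rule applies to: any host
        # +---------------------------- who the rule applies to: members of group "admin"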

    Read the article

  • log rotation with ldap

    - by user1663896
    I need to set up LDAP logging with logrotate, but I have heard there are issues with LDAP and syslog concerning log rotation. Here is my logrotate config file for LDAP - please take a look to see if it's properly configured:

        /var/log/openldap.log {
            size 1k
            ifempty
            rotate 4
            compress
            sharedscripts
            missingok
            olddir /var/log/old_ldap_logs
            postrotate
                /etc/init.d/slapd restart
            endscript
        }
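
    One detail worth double-checking, sketched below as an assumption rather than a fact about this setup: OpenLDAP typically logs through syslog, in which case it is the syslog daemon, not slapd, that holds the log file open, and the postrotate step would reload syslog instead of restarting slapd. Something along these lines (the init script name depends on the distro and the syslog implementation in use):

        postrotate
            # Ask the syslog daemon to reopen its files; no slapd restart needed
            # if slapd itself never writes to this file directly.
            /etc/init.d/rsyslog reload > /dev/null 2>&1 || true
        endscript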

    Read the article

  • how to export VARs from a subshell to a parent shell?

    - by webwesen
    I have a Korn shell script:

        #!/bin/ksh
        # set the right ENV
        case $INPUT in
            abc) export BIN=${ABC_BIN} ;;
            def) export BIN=${DEF_BIN} ;;
            *)   export BIN=${BASE_BIN} ;;
        esac
        # exit 0  <- bad idea for sourcing the file

    Right now these VARs are export'ed only in a subshell, but I want them to be set in my parent shell as well, so that when I am back at the prompt those vars are still set correctly. I know about ". .myscript.sh", but is there a way to do it without 'sourcing'? My users often forget to 'source'.

    EDIT1: removed the "exit 0" part - this was just me typing without thinking first.

    EDIT2: to add more detail on why I need this: my developers write code for (for simplicity's sake) two apps, ABC and DEF. Each app is run in production by a separate user, usrabc and usrdef, and so each has its own $BIN, $CFG, $ORA_HOME and so on, specific to its app:

        ABC's $BIN = /opt/abc/bin    # $ABC_BIN in the above script
        DEF's $BIN = /opt/def/bin    # $DEF_BIN

    Now, on the dev box, developers can work on both ABC and DEF at the same time under their own user account 'justin_case', and I make them source the file (above) so that they can switch their ENV var settings back and forth ($BIN should point to $ABC_BIN at one time, and then I need to switch to $BIN=$DEF_BIN). The script should also create new sandboxes for parallel development of the same app, etc., which makes me do this interactively, asking for a sandbox name and so on:

        /home/justin_case/sandbox_abc_beta2
        /home/justin_case/sandbox_abc_r1
        /home/justin_case/sandbox_def_r1

    The other option I have considered is writing aliases and adding them to every user's profile:

        alias 'setup_env=. .myscript.sh'

    and running it with "setup_env parameter1 ... parameterX". This makes more sense to me now.
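
    Since a child process can never change its parent's environment, the alias idea (or a shell function) is about as clean as it gets. A sketch, with the script path as a placeholder, of a small wrapper function dropped into /etc/profile or each user's ~/.profile; because a function runs in the current shell, the exports stick:

        # /etc/profile or ~/.profile (works in ksh and bash)
        setup_env() {
            # source the real script in the *current* shell; "$@" forwards any
            # arguments (app name, sandbox name, ...) to the sourced script
            . /path/to/myscript.sh "$@"
        }

    Users then just type setup_env (plus whatever arguments the script expects) at the prompt, and $BIN and friends are set in their interactive shell.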

    Read the article

  • rsync filtering

    - by biomed
    I use an rsync command to sync two directories, remote and local. The command (used in a Python script) is:

        os.system('rsync --verbose --progress --stats --recursive\
         --copy-links --times --include="*/" --include="*good_name*.good_ext*"\
         --exclude-from "/myhome/mydir/src/rsync.exclude"\
         %s %s' % (remotepath, localpath))

    I want to exclude certain directories that contain the same files I also want to include: I want to include, recursively, any_dir_name/any_file_name.good, but I want to exclude any and all files that are in bad_dir_name/. I used --exclude-from, and here is my exclude-from file:

        *
        /*.bad_dir_name/

    Unfortunately it doesn't work. I suspect it may have something to do with --include="*/", but if I remove it the command doesn't sync any files at all. Thanks for the help.
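
    A sketch of one way to get the intended effect, assuming the directories to skip match *.bad_dir_name as the exclude file suggests: rsync checks filter rules in the order they appear on the command line (including the --exclude-from at its position) and stops at the first match, so the exclude for those directories has to come before the catch-all --include="*/", otherwise "*/" matches them first and rsync descends into them anyway. Shown as a plain shell command; the same option order applies inside the os.system() string.

        # Order matters: exclude the unwanted directories before including "*/".
        rsync --verbose --progress --stats --recursive --copy-links --times \
            --exclude='*.bad_dir_name/' \
            --include='*/' \
            --include='*good_name*.good_ext*' \
            --exclude='*' \
            "$remotepath" "$localpath"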

    Read the article

  • Two IP ranges on eth1 configuration for centos 6.2

    - by Trickzzz
    I have a dedicated server with Virtuozzo on it, running VPSs. I have eth0, which is configured for the internal network; that one is fine. Then I have eth1, which has two ranges routed through it: x.x.134.x (12 IPs, sequential) and x.x.132.x (5 IPs). eth1 is configured as:

        DEVICE="eth1"
        HWADDR="00:25:90:37:65:67"
        NM_CONTROLLED="yes"
        ONBOOT="yes"
        IPADDR="x.x.134.x"
        NETMASK="255.255.255.240"
        GATEWAY="x.x.134.x"

    I also tried using another file named ifcfg-eth1:1 in /etc/sysconfig/network-scripts/. Any ideas why the containers on eth1:1 would not link up to the network? Virtuozzo also thinks that eth1:1 is the primary network now, which isn't right.
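
    For reference, a sketch of what the alias file often looks like on CentOS 6, with the second range's values as placeholders (the x.x.132.x address and the netmask are assumptions - use the real mask for the 5-address block). The alias carries no HWADDR or GATEWAY of its own and is brought up with "ifup eth1:1" or a network restart; whether Virtuozzo then treats it correctly as a secondary range is a separate, Virtuozzo-side question.

        # /etc/sysconfig/network-scripts/ifcfg-eth1:1
        DEVICE="eth1:1"
        ONBOOT="yes"
        ONPARENT="yes"
        NM_CONTROLLED="no"
        IPADDR="x.x.132.x"
        NETMASK="255.255.255.248"
        # no GATEWAY here - the default route stays on eth1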

    Read the article

  • Are there any OpenGL implementations which can use a server to do the rendering?

    - by user1973386
    Assume I have two independent machines, one running Debian sid and the other running Windows 7. The Debian machine has a decent graphics card; the Windows 7 machine has no graphics card and a weak processor. The two are connected over a fast local network. Are there any OpenGL implementations where Windows 7 could use the Debian machine's graphics card to do OpenGL rendering "over the network"?

    Read the article
