Search Results

Search found 8567 results on 343 pages for 'commands unix'.


  • Calculate disk space occupied by many .png files

    - by Alexander Farber
    I have 357 .png files located in different subdirectories of the current directory:

      settings# find . -name \*.png | wc -l
      357
      settings# find . -name \*.png | head
      ./assets/authenticationIcons/audio.png
      ./assets/authenticationIcons/bbid.png
      ./assets/authenticationIcons/camera.png
      ./bin/icons/ca_video_chat.png
      ./bin/icons/ca_voice_control.png
      ./bin/icons/ca_vpn.png
      ./bin/icons/ca_wifi.png

    Is there a one-liner to calculate the total disk space they occupy (before I pngcrush them)? I've tried this, unsuccessfully, since it only prints per-file sizes:

      settings# find . -name \*.png | xargs du -s
      4 ./assets/support/wifi_locked_icon_white.png
      1 ./assets/support/wifi_vpn_icon_connected.png
      1 ./assets/support/wi_fi.png
      1 ./assets/support/wi_fi_conected.png
      8 ./bin/blackberry-tablet-icon.png
      2 ./bin/icons/ca_about.png
      2 ./bin/icons/ca_accessibility.png
      2 ./bin/icons/ca_accounts.png
      2 ./bin/icons/ca_airplane_mode.png
      2 ./bin/icons/ca_application_permissions.png
      1 ./bin/icons/ca_balance.png
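
    A minimal sketch of one way to get a single total: du's -c adds a grand-total line (the --files0-from form is GNU coreutils; the -exec variant is more portable):

      # GNU du: read the NUL-separated file list from stdin, print one total
      find . -name '*.png' -print0 | du -ch --files0-from=- | tail -n 1
      # Portable-ish alternative; note -exec may split a huge list into
      # several du invocations, each printing its own 'total' line:
      find . -name '*.png' -exec du -ch {} + | grep 'total$'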

    Read the article

  • Getting rsync to move files from source to destination?

    - by fabien-barbier
    Is rsync a good choice for my project? I have to: copy files from a source folder to a destination folder via SSH, be sure all files are copied, delete the source files after copying, and rename files on a name conflict. It looks like I can use the option --remove-source-files to delete the source files, but how does rsync manage conflicts, and can I add rules? Use case: I run scientific calculations on server A, and each calculation leaves its results in a folder like /process/calc1. I would like to transfer "calc1" to server B (giving /process/calc1 there) and delete it from server A. A later calculation produces /process/calc2 on server A; the idea is to move it into /process/ on server B as well, so that server B holds /process/calc1 and /process/calc2 while /process/ on server A is empty. How will rsync manage the conflict on server B if a new calculation recreates /process/calc1 on server A while /process/calc1 already exists on server B? Is it possible to add rules to rsync so that it renames the incoming folder to /process/calc1R2 on server B, and so on (e.g. calc1R3)? Thanks.
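
    A sketch of the basic move (the numbered renaming is not native to rsync; the closest built-in is --backup, which renames what is already at the destination rather than the incoming copy, so the R2/R3 scheme would need a wrapper script):

      # Move everything under /process to server B, deleting each source
      # file once it has transferred (empty source dirs are left behind):
      rsync -av --remove-source-files /process/ serverB:/process/
      # Preserve a pre-existing destination copy instead of overwriting it:
      rsync -av --remove-source-files --backup --suffix=.old /process/ serverB:/process/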

    Read the article

  • If I exchange the CPU, must I reinstall the OSes? (swapping CPUs from one *nix-like to another)

    - by dag729
    Hi. As the title suggests, I want to exchange CPUs between two computers: one runs Ubuntu on an AMD Athlon 64 dual-core 5200+, the other runs FreeBSD on an AMD Sempron single-core LE-1250. I would like to swap the CPUs, that is, take the dual core out of the Ubuntu PC, put it into the FreeBSD PC, and vice versa. The motherboard model is the same in both machines. Do you think I will encounter problems?

    Read the article

  • readlink: illegal option -- f

    - by Scott
    Until recently the script was working fine, but for some days now I have been receiving this message when running the readlink -f "$0" command:

      readlink: illegal option -- f
      usage: readlink [-n] [file ...]

    I ran the following code to debug:

      #!/bin/sh
      DIR=`pwd`
      RLPATH=`which readlink`
      RLOUT=`readlink -f -- "${0}"`
      DIROUT=`dirname -- ${RLOUT}`
      echo "dir: ${DIR}"
      echo "path: ${PATH}"
      echo "path to readlink: ${RLPATH}"
      echo "readlink output: ${RLOUT}"
      echo "dirname output: ${DIROUT}"

    Output:

      # ./debug.sh
      readlink: illegal option -- f
      usage: readlink [-n] [file ...]
      usage: dirname string [...]
      dir: /home/svr
      path: /sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/root/bin
      path to readlink: /usr/bin/readlink
      readlink output:
      dirname output:

    What is wrong?
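
    The usage line comes from a readlink without -f (the BSD flavor); evidently the script used to pick up a GNU readlink earlier in the PATH. A sketch of a portable replacement for resolving the script's location (assuming $0 is a path to a real file):

      # Directory of $0, resolved without readlink -f:
      DIR=$(cd "$(dirname -- "$0")" && pwd -P)
      # Full path including symlink resolution, if perl is available:
      RLOUT=$(perl -MCwd=abs_path -e 'print abs_path(shift)' "$0")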

    Read the article

  • When does /tmp get cleared?

    - by John Lawrence Aspden
    I've taken to putting various files in /tmp, and I wondered what the rules on deleting them are. I imagine it's different for different distributions, and I'm particularly interested in the Ubuntu and Fedora desktop versions, but a nice general way of finding out would be a great thing. Even better would be a nice general way of controlling it: something like "every day at 3 in the morning, delete any /tmp files older than 60 days, but don't clear the directory on reboot".
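
    For the "control it yourself" part, a sketch with cron (assumes a find that supports -mindepth/-delete, as GNU and recent BSD finds do; Fedora's tmpwatch and Debian/Ubuntu's tmpreaper do the same job with more safety checks):

      # crontab entry: at 03:00 daily, remove /tmp entries untouched for 60+ days
      0 3 * * * find /tmp -mindepth 1 -mtime +60 -delete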

    Read the article

  • Running rsync on network connect

    - by user40495
    I have one Mac which is always on and is my main computer. I also have a MacBook, and I'm trying to sync my iPhoto library between them. I can successfully use rsync to sync the files, and I'm using cron to run it once a day. In reality the MacBook isn't always on, so I'm looking for a way to run rsync whenever the two computers are connected to the same Wi-Fi network; I'm guessing the place to do that is to somehow run rsync when the AirPort interface connects. What's the best way?
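
    One low-tech sketch: keep the cron job but run it frequently, and make the script a no-op unless the other Mac is reachable (host name and paths are placeholders; the backslash keeps the space intact on the remote side):

      #!/bin/sh
      HOST=main-mac.local   # the always-on Mac (hypothetical name)
      # Bail out quietly when we're not on the same network:
      ping -c 1 -t 2 "$HOST" >/dev/null 2>&1 || exit 0
      rsync -av "$HOME/Pictures/iPhoto Library/" "$HOST:Pictures/iPhoto\ Library/"

    On OS X a launchd job is the more idiomatic trigger than cron, but the reachability check is what makes "run on network connect" safe either way.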

    Read the article

  • Recovering text files in terminal using grep on Mac OS X Snow Leopard

    - by littlejim84
    I foolishly removed some source code from my Mac OS X Snow Leopard machine with rm -rf while doing something with buildout. I want to try to recover these files. I haven't touched the system since, while seeking an answer. I found this article, and the grep method seems like the way to go, but when running it on my machine I get "Resource busy" for the disk. I'm using this command:

      sudo grep -a -B1000 -A1000 'video_output' /dev/disk0s2 > file.txt

    where /dev/disk0s2 is what came up when I ran df. I get this when running it:

      grep: /dev/disk0s2: Resource busy

    I'm not an expert with this stuff; I'm trying my best. Please can anyone help me further? I'm on the verge of losing two days of source code work! Thank you.
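
    A hedged pointer: the block device is busy because the volume is mounted. The raw device sometimes gets past that, but the safe route is to search the disk while it is not mounted (boot from another volume or use target disk mode), and in any case write the output anywhere except the disk being searched, since every write risks clobbering the deleted blocks:

      # May still refuse while disk0s2 is the mounted boot volume:
      sudo grep -a -B1000 -A1000 'video_output' /dev/rdisk0s2 > /Volumes/External/file.txt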

    Read the article

  • How can I use the `eject` command on a computer I have SSH'd into?

    - by will
    If I run eject on my own machine, it works exactly as expected; however, if I SSH into the machine next to me and do the same thing, it does not work. My computer:

      eject: using default device `cdrom'
      eject: device name is `cdrom'
      eject: expanded name is `/dev/cdrom'
      eject: `/dev/cdrom' is a link to `/dev/sr0'
      eject: `/dev/sr0' is not mounted
      eject: `/dev/sr0' is not a mount point
      eject: checking if device "/dev/sr0" has a removable or hotpluggable flag
      eject: `/dev/sr0' is not a multipartition device
      eject: trying to eject `/dev/sr0' using CD-ROM eject command
      eject: CD-ROM eject command succeeded

    The other computer:

      eject: using default device `cdrom'
      eject: device name is `cdrom'
      eject: expanded name is `/dev/cdrom'
      eject: `/dev/cdrom' is a link to `/dev/sr0'
      eject: `/dev/sr0' is not mounted
      eject: `/dev/sr0' is not a mount point
      eject: checking if device "/dev/sr0" has a removable or hotpluggable flag
      eject: `/dev/sr0' is not a multipartition device
      eject: unable to open `/dev/sr0'

    If I look in /dev, cdrom is a symlink to sr0, as the eject -v output above shows. On my machine, if the drive is open, less will close it, and then:

      $ less sr0
      sr0 is not a regular file (use -f to see it)
      $ less -f sr0
      sr0: No medium found

    but on the other computer:

      $ less -f sr0
      sr0: Permission denied

    Looking at the device node more closely, I get this on both machines:

      $ ls -la sr0
      brw-rw----+ 1 root cdrom 11, 0 Nov 12 10:13 sr0

    Does anyone know a way around this? I do not have root access.
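
    A sketch of the diagnosis: brw-rw----+ gives read/write to group cdrom, and the trailing + means an ACL grants more on top, often to whoever is logged in on the console, which over SSH you are not. Worth checking on both machines (the fix itself needs an admin):

      groups             # is 'cdrom' listed on the machine where eject works?
      getfacl /dev/sr0   # show the ACL hiding behind the '+'
      # What an admin would run on the remote machine to let you in:
      sudo usermod -aG cdrom will   # the asker's login, for illustration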

    Read the article

  • Clarification on signals (sighup), jobs, and the controlling terminal

    - by asolberg
    So I've read two different perspectives and I'm trying to figure out which one is right. 1) Some sources online say that signals sent from the controlling terminal are ONLY sent to the foreground process group. That means that if you want a process to keep running in the background after you log out, it is sufficient to suspend the job (Ctrl-Z) and resume it in the background (bg); you can then log out and it will keep running, because SIGHUP is only sent to the foreground job. See http://blog.nelhage.com/2010/01/a-brief-introduction-to-termios-signaling-and-job-control/ : "In addition, if any signal-generating character is read by a terminal, it generates the appropriate signal to the foreground process group." 2) Other sources claim you need to use the nohup command when the program is started, or failing that, issue disown while it runs to remove it from the jobs table that is sent SIGHUP. They say that otherwise your process will exit on logout even if it is running in a background process group. For example, http://docstore.mik.ua/orelly/unix3/upt/ch23_11.htm : "If I log out anyway, the shell sends my background job a HUP signal." In my own experiments on Ubuntu Linux, 1) seems to be correct: I ran "sleep 20 &", logged out, logged back in, and ran ps aux; sure enough, the sleep command was still running. So why do so many people believe 2)? And if all you have to do is place a job in the background to keep it running, why do so many people use nohup and disown?
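
    Both camps can be right, because whether the shell HUPs its background jobs on logout is a shell option, separate from the kernel rule that a terminal hangup signals only the foreground process group. A sketch for bash, where it hinges on huponexit (off by default in many distributions, which would explain the experiment above):

      shopt huponexit      # show the current setting
      shopt -s huponexit   # enable: background jobs now DO get SIGHUP on logout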

    Read the article

  • Can I nohup/screen an already-started process?

    - by ojrac
    I'm doing some test runs of long-running data migration scripts over SSH. Let's say I start running a script around 4 PM; now 6 PM rolls around, and I'm cursing myself for not doing this all in screen. Is there any way to "retroactively" nohup a process, or do I need to leave my computer online all night? If it's not possible to attach screen to, or nohup, a process I've already started, then why? Something to do with how parent/child processes interact? (I won't accept a "no" answer that doesn't at least address the question of "why"; sorry ;) )
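
    The shell's share of the problem does have a retroactive fix; a sketch for bash (it won't re-parent the process or detach its terminal output the way screen would; a tool like reptyr attempts that part):

      # In the shell that started the job: press Ctrl-Z to suspend it, then
      bg              # resume it in the background
      disown -h %1    # tell bash not to SIGHUP job 1 when the shell exits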

    Read the article

  • grep - what arguments do you usually specify?

    - by meder
    My most common grep line is just:

      grep -IRl "text" *

    However, I'm getting tired of retyping this over and over. Is there some way I can make an alias so that those arguments are always enabled? And I was wondering what arguments you usually specify for text searching; my three (R for recursion, I for skipping binary files like jpg/gif, and l for listing only the names of matching files) seem a bit minimal. Which arguments do you use?
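
    A sketch of the alias route (the names are arbitrary; put the line in ~/.bashrc or your shell's equivalent):

      alias tgrep='grep -IRl'
      # usage:
      tgrep "text" .
      # a -n variant for when you want file:line matches instead of just names:
      alias tngrep='grep -IRn'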

    Read the article

  • Mass renaming, *nix version

    - by Paolo B.
    I was looking for a way to rename a huge number of similarly named files, much like this one (a Windows-related question), except that I'm using *nix (Ubuntu and FreeBSD, separately). To sum up: using the shell (bash, csh, etc.), how do I mass-rename a number of files so that, for example, these files:

      Beethoven - Fur Elise.mp3
      Beethoven - Moonlight Sonata.mp3
      Beethoven - Ode to Joy.mp3
      Beethoven - Rage Over the Lost Penny.mp3

    end up renamed like these?

      Fur Elise.mp3
      Moonlight Sonata.mp3
      Ode to Joy.mp3
      Rage Over the Lost Penny.mp3

    The reason I want to do this is that the collection will go into a directory named "Beethoven" (i.e. the filenames' prefix), so having that information in the filenames themselves would be redundant.
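
    A sketch in plain POSIX shell, which works in bash on Ubuntu and sh on FreeBSD alike (csh would need different syntax; the perl 'rename' utility does it in one line where installed):

      for f in "Beethoven - "*.mp3; do
          mv -- "$f" "${f#Beethoven - }"   # strip the literal prefix
      done
      # Equivalent with perl rename, if available:
      # rename 's/^Beethoven - //' *.mp3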

    Read the article

  • Use Apache authentication + authorization to control access to Subversion subdirectories

    - by Stefan Lasiewski
    I have a single SVN repo at /var/svn/ with a few subdirectories. Staff must be able to access the top-level directory and all subdirectories within it, but I want to restrict access to subdirectories using alternate htpasswd files. This works for our staff:

      <Location />
        DAV svn
        SVNParentPath /var/svn
        AuthType Basic
        AuthBasicProvider ldap   # mod_authnz_ldap
        AuthzLDAPAuthoritative off
        AuthLDAPURL "ldap.example.org:636/ou=people,ou=Unit,ou=Host,o=ldapsvc,dc=example,dc=org?uid?sub?(objectClass=PosixAccount)"
        AuthLDAPGroupAttribute memberUid
        AuthLDAPGroupAttributeIsDN off
        Require ldap-group cn=staff,ou=PosixGroup,ou=Unit,ou=Host,o=ldapsvc,dc=example,dc=org
      </Location>

    Now I am trying to restrict access to a subdirectory with a separate htpasswd file, like this:

      <Location /customerA>
        DAV svn
        SVNParentPath /var/svn
        # mod_authn_file
        AuthType Basic
        AuthBasicProvider file
        AuthUserFile /usr/local/etc/apache22/htpasswd.customerA
        Require user customerA
      </Location>

    I can browse to this folder fine with Firefox and curl:

      curl https://svn.example.org/customerA/ --user customerA:password

    But I cannot check out this SVN repository:

      $ svn co https://svn.example.org/customerA/
      svn: Repository moved permanently to 'https://svn.example.org/customerA/'; please relocate

    And on the server logs, I get this strange error:

      # httpd-access.log
      192.168.19.13 - - [03/May/2010:16:40:00 -0700] "OPTIONS /customerA HTTP/1.1" 401 401
      192.168.19.13 - customerA [03/May/2010:16:40:00 -0700] "OPTIONS /customerA HTTP/1.1" 301 244
      # httpd-error.log
      [Mon May 03 16:40:00 2010] [error] [client 192.168.19.13] Could not fetch resource information.  [301, #0]
      [Mon May 03 16:40:00 2010] [error] [client 192.168.19.13] Requests for a collection must have a trailing slash on the URI.  [301, #0]

    My question: can I restrict access to Subversion subdirectories using Apache access controls? DocumentRoot is commented out, so it's not clear that the FAQ at http://subversion.apache.org/faq.html#http-301-error applies.
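
    One hedged pointer: with SVNParentPath, /customerA names a repository path as far as mod_dav_svn is concerned, and carving paths out of a repository URL with nested <Location> blocks is a classic source of this 301. Path-based restrictions inside repositories are mod_authz_svn's job (assuming that module is available); a sketch:

      # inside the single <Location />:
      AuthzSVNAccessFile /usr/local/etc/apache22/svn-access

      # /usr/local/etc/apache22/svn-access (hypothetical; with SVNParentPath,
      # sections may be qualified as [reponame:/path]):
      [/]
      @staff = rw
      [/customerA]
      customerA = rw

    Authenticating the htpasswd users as well would then mean listing both providers, e.g. "AuthBasicProvider ldap file".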

    Read the article

  • Finding custom printer-specific options on *nix

    - by enbuyukfener
    Hey all, I have a printer set up using CUPS (a Fuji Xerox Document Centre 1100), named DC1100. There are capabilities of the printer that are not shown as options in the output of:

      lpoptions -l -d DC1100

    The output is below:

      PrintoutMode/Printout Mode: Draft *Normal High Photo
      InputSlot/Media Source: Upper Lower MultiPurpose LargeCapacity Manual *Standard
      PageSize/Page Size: *Letter A4 C5 C6 COM10 DL Executive Legal Monarch Statement
      PageRegion/PageRegion: Letter A4 C5 C6 COM10 DL Executive Legal Monarch Statement
      STP_Brightness/Brightness: 0.00 0.02 0.04 [snip] 2.00
      STP_Contrast/Contrast: 0.00 0.05 0.10 0.15 [snip] 4.00
      STP_ColorCorrection/Color Correction: Accurate Bright Density [snip] Uncorrected
      STP_DitherAlgorithm/Dither Algorithm: Adaptive EvenTone Fast [snip] VeryFast
      STP_EnableDensity/Density Enable: *Disabled Enabled
      STP_Density/Density Value: 0.1 0.2 0.3 0.4 [snip] 8.0
      STP_EnableGamma/Composite Gamma Enable: *Disabled Enabled
      STP_Gamma/Composite Gamma Value: 0.10 0.15 0.20 0.25 [snip] 4.00
      STP_LinearContrast/Linear Contrast Adjustment: *False True
      STP_Duplex/Double-Sided Printing: DuplexNoTumble DuplexTumble *None
      Resolution/Rendering Resolution: *FromPrintoutMode 150x150dpi 300x300dpi 600x600dpi
      OutputType/Output Type: *FromPrintoutMode BlackAndWhite Grayscale
      STP_ImageType/Image Type: *FromPrintoutMode Photo Graphics LineArt None Text TextGraphics
      STP_Resolution/Resolution: *FromPrintoutMode 150dpi 300dpi 600dpi

    I am particularly looking for options for "secure print" (possibly by setting a mode and a username), stapling, and hole punching. Perhaps I need a vendor-specific driver/PPD file? If so, any pointers? I have no idea where to look for one, and I haven't been able to find one on the official site or on sites such as http://www.openprinting.org
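
    The STP_* names suggest the queue is on a generic Gutenprint driver, which knows nothing about Xerox finishers; secure print, stapling, and hole punching normally appear only with the vendor's own PPD. A hedged sketch for checking what you have:

      lpinfo -m | grep -i xerox   # drivers/PPDs CUPS already knows about
      # Does the queue's current PPD mention the features at all?
      grep -icE 'staple|punch|secure' /etc/cups/ppd/DC1100.ppd
      # With a vendor PPD in hand (hypothetical file name):
      lpadmin -p DC1100 -P /path/to/FX_DocumentCentre1100.ppd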

    Read the article

  • Substituting a line through PHP in SSH

    - by Asad Moeen
    I've already set up SSH usage in PHP and most things work. Now what I want to do is edit a line in a file and write it back. It works when run directly on the server, but I can't get it working from PHP. Here is what I'm trying:

      $new_line1 = 'Line $I want to add - The $I has to go into the file as it is';
      $new_line2 = 'Ending $text of the line - $text again goes into file';
      $query = "Addition to line";
      $exec1 = 'cd /root; perl -pe "s/.*/';
      $exec2 = '/ if $. == 37" Edit.sh > Edited.sh';
      $new = "$exec1$new_line1$query$new_line2$exec2";
      $edit = "cd /root/mp; cp Edited.sh Edit.sh";
      echo $ssh->exec($new);
      echo $ssh->exec($edit);

    Running the perl command directly over SSH works without any errors, but when I run it through PHP I get the error:

      Substitution replacement not terminated at -e line 1.

    Why does it work one way and not the other?
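
    The error is perl's, and it means the text interpolated into s/.../.../ contained something that terminated the substitution early, typically a / or a newline that the PHP path introduces but the hand-typed test didn't. A sketch of the failure and the fix in plain shell:

      # Reproduces the error whenever the replacement carries a slash
      # (hypothetical $REPL):
      REPL='a/b'
      perl -pe "s/.*/$REPL/ if \$. == 37" Edit.sh
      # -> Substitution replacement not terminated at -e line 1.

      # Escape / and \ in the replacement, and drop stray newlines, first:
      ESCAPED=$(printf '%s' "$REPL" | tr -d '\n' | sed -e 's![/\\]!\\&!g')
      perl -pe "s/.*/$ESCAPED/ if \$. == 37" Edit.sh > Edited.sh

    On the PHP side, the same escaping can be done with str_replace() before concatenating, or the whole perl program can be passed through escapeshellarg().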

    Read the article

  • ipfw to redirect traffic from ports 80 and 443 to 8080

    - by user1048138
    These are the iptables rules I used on Linux to redirect ports 80 and 443 to 8080:

      -A PREROUTING -s 10.0.10.0/24 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
      -A PREROUTING -s 10.0.10.0/24 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 8080
      -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE

    How can I do the same on a Mac? I have tried:

      test_machine:~ root# ipfw show
      00666   0   0 fwd 127.0.0.1,8080 tcp from any to me dst-port 80

    and it's not working! Any suggestions?
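
    A hedged sketch (assumes an older OS X where ipfw still exists; newer releases use pf instead). Two things differ from the rule above: "to me" only matches traffic addressed to the Mac itself, so routed traffic from 10.0.10.0/24 never hits it, and forwarding must be enabled at all:

      sysctl -w net.inet.ip.forwarding=1
      ipfw add 100 fwd 127.0.0.1,8080 tcp from 10.0.10.0/24 to any 80 in
      ipfw add 110 fwd 127.0.0.1,8080 tcp from 10.0.10.0/24 to any 443 in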

    Read the article

  • How to debug silent hang on shutdown of Solaris 10?

    - by jblaine
    We're experiencing a mysterious hang on shutdown of a newly imaged Oracle/Sun Solaris 10 SPARC box. It is repeatable, in the same spot from what we can tell. We let it try to work itself out for 5-10 minutes, multiple times, and it never progressed. I've never seen this happen before. The last thing displayed on the console is that syslogd was sent signal 15. Before we disabled snmpdx on the box, the last thing on the console was that snmpdx was sent signal 15 (after syslogd was). While very rare to find, in Solaris days past I'd have had a better idea from experience where the problem might be, and could then have narrowed things down further with silly (but effective) debugging echo statements in the /etc/*.d scripts. With SMF in the picture, I'm not really sure where to start. We forced a crash dump via sync at the {ok} prompt for later analysis, and then let the box come up because it's a production server and our scheduled outage window was closing. /var/adm/messages shows nothing of use. How would you debug this situation? mdb's ::ps on the savecore shows the following processes were running at hang time (afsd is the OpenAFS client, and that many instances are expected):

      > ::ps
      S    PID   PPID   PGID    SID    UID      FLAGS             ADDR NAME
      R      0      0      0      0      0 0x00000001 00000000018387c0 sched
      R    108      0      0      0      0 0x00020001 00000600110fe010 zpool-silmaril-p
      R      3      0      0      0      0 0x00020001 0000060010b29848 fsflush
      R      2      0      0      0      0 0x00020001 0000060010b2a468 pageout
      R      1      0      0      0      0 0x4a024000 0000060010b2b088 init
      R   1327      1   1327    329      0 0x4a024002 00000600176ab0c0 reboot
      R    747      1      7      7      0 0x42020001 0000060017f9d0e0 afsd
      R    749      1      7      7      0 0x42020001 00000600180104d0 afsd
      R    752      1      7      7      0 0x42020001 0000060017cb44b8 afsd
      R    754      1      7      7      0 0x42020001 0000060017fc8068 afsd
      R    756      1      7      7      0 0x42020001 0000060017fcb0e8 afsd
      R    760      1      7      7      0 0x42020001 00000600177f4048 afsd
      R    762      1      7      7      0 0x42020001 000006001800f8b0 afsd
      R    764      1      7      7      0 0x42020001 000006001800ec90 afsd
      R    378      1    378    378      0 0x42020000 0000060013aee480 inetd
      R      7      1      7      7      0 0x42020000 0000060010b28008 svc.startd
      R    329      7    329    329      0 0x4a024000 00000600110ff850 sh
      Z    317      7    317    317      0 0x4a014002 0000060013b3a490 sac
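
    A couple of hedged starting points, given that the dump is already in hand (paths assume savecore's defaults):

      svcs -xv                   # anything SMF already considers broken?
      cd /var/crash/`uname -n`
      mdb unix.0 vmcore.0
      > ::threadlist -v          # threads blocked in the shutdown path?
      > 0t1327::pid2proc | ::print proc_t   # inspect the hung 'reboot' process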

    Read the article

  • How to download all file content from a folder using wget and http

    - by user1526912
    I am trying to use wget over HTTP to download all the contents of folderAA below into /root/sstest:

      wget -r --directory-prefix="/root/sstest" -o /root/sstest2.log http://site.com/folder1/folder2/folderAA/

    When I submit the above command, nothing is downloaded. If I submit a wget request for a specific file in folderAA, that file is downloaded to /root/sstest:

      wget -r --directory-prefix="/root/sstest" -o /root/sstest2.log http://site.com/folder1/folder2/folderAA/file.txt

    Can someone tell me why I cannot download all the contents of folderAA at once using the first request?
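
    A hedged explanation: recursive wget does not list directories the way FTP does; it fetches the URL, which must return an HTML index page, then follows the links found there. If the server doesn't generate directory indexes, the first request yields a 403/404 (the log file should say which) and the recursion has nothing to follow. Where an index does exist, something like this keeps the crawl inside folderAA and tames the local paths:

      wget -r -np -nH --cut-dirs=2 --directory-prefix=/root/sstest \
           -o /root/sstest2.log http://site.com/folder1/folder2/folderAA/
      # -np: never ascend to the parent directory
      # -nH --cut-dirs=2: drop the hostname and folder1/folder2 locally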

    Read the article

  • Get sudoers information for all users on a server

    - by Jon Kruger
    I want to collect information on which sudo actions each user can perform on a server. From what I've read, you can do this with the command sudo -l -U username. However, one of my servers has a slightly older version of sudo (1.6.7p5) in which the -U option doesn't exist (I don't own the server, so I can't just upgrade to a newer sudo). Has anyone ever had to collect sudoers information for all users on a server? How would you recommend doing it?
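
    A sketch covering both cases (getent also picks up LDAP/NIS accounts; run as root, since -U requires it):

      # sudo >= 1.7: ask sudo itself, once per user
      for u in $(getent passwd | cut -d: -f1); do
          printf '== %s\n' "$u"
          sudo -l -U "$u"
      done
      # sudo 1.6.x, no -U: read the policy directly instead
      visudo -c          # confirm the file parses
      cat /etc/sudoers   # then interpret the User_Spec lines by hand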

    Read the article

  • How do I clear out the ssh-agent entries (on Mac OS X)?

    - by cwd
    I'm running Mac OS X, and it appears that after SSHing to several machines using identity files, my ssh-agent accumulates a lot of identities and then sometimes offers too many of them to a remote machine, which kicks me off before I can connect:

      Received disconnect from 10.12.10.16: 2: Too many authentication failures for cwd

    It's pretty obvious what's happening, and this page talks about it in more detail: SSH servers only allow you to attempt to authenticate a certain number of times, and each failed password attempt and each failed pubkey/identity that is offered takes up one of those attempts. If you have a lot of SSH keys in your agent, the server may kick you out before allowing you to attempt password authentication at all. Rebooting clears the agent, and then everything works again. I can also add this line to my .ssh/config file to force password authentication:

      PreferredAuthentications keyboard-interactive,password

    The page I referenced talks about deleting keys from the agent, but I'm not sure whether that applies on a Mac, since the keys appear to be cleared on reboot anyhow. Is there a simple way to clear out all keys in the ssh-agent (the same thing that happens at reboot)?
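
    ssh-add manages the agent's key list directly; a short sketch:

      ssh-add -l   # list the identities currently loaded
      ssh-add -D   # delete them all: the reboot effect, on demand
      # Or stop offering every key to every host; in ~/.ssh/config:
      #   Host 10.12.10.16
      #       IdentityFile ~/.ssh/id_for_that_host   # hypothetical key name
      #       IdentitiesOnly yes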

    Read the article

  • username/password for ftp

    - by wowrt
    Hi, I tried to download the RPM for Vim from ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/vim/vim-minimal-6.3-1.aix5.1.ppc.rpm using the "ftp" command on AIX. When I use:

      open public.dhe.ibm.com

    it prompts for a username and password, but no credentials are given on the linked page. How do I download the RPM using the "ftp" command? Thanks.
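
    Public FTP mirrors like this conventionally accept anonymous logins (a convention, not a guarantee); a sketch of the session, with your email address as the courtesy password:

      $ ftp public.dhe.ibm.com
      Name: anonymous
      Password: you@example.com
      ftp> cd /aix/freeSoftware/aixtoolbox/RPMS/ppc/vim
      ftp> binary
      ftp> get vim-minimal-6.3-1.aix5.1.ppc.rpm
      ftp> bye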

    Read the article
