Search Results

Search found 2397 results on 96 pages for 'copying'.


  • Script to gather all the files ending in .log and create a tar.gz file.

    - by Oscar Reyes
    I'm currently using this script line to find all the log files in a given directory structure and copy them to another directory where I can easily compress them:

        find . -name "*.log" -exec cp \{\} /tmp/allLogs/ \;

    The problem I have is that the directory/subdirectory information gets lost, because I'm copying only the file. For instance I have:

        ./product/install/install.log
        ./product/execution/daily.log
        ./other/conf/blah.log

    and I end up with:

        /tmp/allLogs/install.log
        /tmp/allLogs/daily.log
        /tmp/allLogs/blah.log

    when I would like to have:

        /tmp/allLogs/product/install/install.log
        /tmp/allLogs/product/execution/daily.log
        /tmp/allLogs/other/conf/blah.log
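
    One way to keep the paths (a sketch assuming GNU coreutils, whose cp supports --parents): the flag recreates each file's source path under the destination directory.

        # --parents rebuilds product/install/ etc. under /tmp/allLogs/
        find . -name "*.log" -exec cp --parents {} /tmp/allLogs/ \;

    Because find emits paths like ./product/install/install.log, the directory structure survives, and a later tar -czf of /tmp/allLogs will preserve the layout.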

    Read the article

  • Is Joerg Schilling's "sdd" a full replacement for "dd"?

    - by fishtoprecords
    I'm trying to use 'sdd' on my Debian system, and can't get one set of options to work. They do work in 'dd', so I am wondering if I am specifying them incorrectly, if sdd didn't implement them, or something else. What I want to do is:

        sdd if=/dev/hdh1 of=/bay5/imagebay1 bs=4096 conv=sync,noerror

    If I leave out the "conv=..." option, it works, or at least starts copying data:

        sdd if=/dev/hdh1 of=/bay5/imagebay1 bs=4096

    Can you shed a bit of light?

    Read the article

  • Writing to external drive runs out of space prematurely

    - by steve
    I have a USB 2.0, 500 GB HDD. I am writing a bunch of data to it that I previously recovered from the drive. I have formatted the drive as exFAT, since it will be used with both Windows and OS X. At first, I tried using Windows Explorer to move the files over to the drive (about 160 GiB worth), but after copying about 30% of the data (according to TeraCopy), Windows Explorer reported the drive as out of space and completely full. WinDirStat only showed the size of the data that had actually been copied over. Where did this extra space go? Why is there a 300+ GiB discrepancy between the usage reported by the files and what Explorer sees?

    Read the article

  • Remote file copy utility (like rsync) that will take account of data already copied (in this session)

    - by Rory McCann
    Let's say I have a directory with 2 files, both identical and quite large (e.g. 2 GB each), and I want to rsync that directory to a remote host. As I understand it (and I could be wrong), rsync calculates checksums of files. Surely if it sees 2 files with the same checksum, it could just copy the first file, then do a local copy on the remote host for the 2nd file? That would make it faster, no? On a similar note, doesn't rsync hash all the remote files before copying? If it saw a different file with the same hash as a file that was to be transferred, it could do a local copy on the remote host. Does rsync support this sort of thing? Is there some way to turn it on? Is there a tool similar to rsync that will do this sort of 'hash-based' local copying?
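
    As far as I know, rsync has no option to synthesize one destination file from another just because their contents match, but hard links offer a workaround. A sketch (assuming the jdupes utility is available; its -L/--linkhard option is an assumption to verify for your version): hard-link the local duplicates first, then let rsync's -H flag preserve the links so the shared content crosses the wire only once.

        # Hard-link identical files in place (verify jdupes -L on your system)
        jdupes -r -L /path/to/dir
        # -H preserves hard links, so linked duplicates are transferred once
        rsync -aH /path/to/dir/ remotehost:/path/to/dest/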

    Read the article

  • NTFS write speed really slow (<15MB/s)

    - by Zulakis
    I got a new Seagate 4 TB hard drive, formatted with NTFS using parted:

        parted /dev/sda
        > mklabel gpt
        > mkpart pri 1 -1
        mkfs.ntfs /dev/sda1

    When copying files or testing write speed with dd, the maximum write speed I can get is about 12 MB/s. The hard drive should be capable of at least 100 MB/s. top shows high CPU usage for the mount.ntfs process. The system has an AMD dual-core CPU. This is the output of parted /dev/sda unit s print:

        Model: ATA ST4000DM000-1F21 (scsi)
        Disk /dev/sda: 7814037168s
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start  End          Size         File system  Name  Flags
         1      2048s  7814035455s  7814033408s               pri

    The kernel in use is 3.5.0-23-generic. The ntfs-3g versions I tried are ntfs-3g 2012.1.15AR.1 (the Ubuntu 12.04 default) and the newest version, ntfs-3g 2013.1.13AR.2. When formatted with ext4 I get good write speeds of about 140 MB/s. How can I fix the write speed?
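
    The partition is aligned (it starts at sector 2048 on a 4096-byte-physical-sector drive), so one commonly suggested lever is ntfs-3g's big_writes mount option, which reduces the per-request FUSE overhead that keeps mount.ntfs CPU-bound. A sketch, with the mount point illustrative:

        # Remount with big_writes to cut FUSE round-trips
        sudo umount /dev/sda1
        sudo mount -t ntfs-3g -o big_writes /dev/sda1 /mnt/data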

    Read the article

  • Is Windows Server 2003 on 96 MB possible?

    - by Nifle
    I have an old laptop, a Pentium II with 96 MB. I have had Windows 2000 on it for ages; it was slow but usable. But now I have to upgrade, since I can't get my USB WLAN drivers to install (the old PCMCIA network card broke). I would prefer to install Windows XP, but I have no spare licence; I do have a Windows Server 2003 licence, though. Do you think it's possible (and usable) to squeeze 2003 onto this computer? Edit: Unfortunately, 2003 simply refuses to install on the laptop. It hangs with an error message (paraphrased):

        2003 has detected a problem with your computer and has halted the installation to prevent damage.

    ...followed by some error codes. This happens very early in the installation, while it's copying the installation files just after I accepted the licence. So I give up for now.

    Read the article

  • Preventing SSH RSA host key warnings for change of key vs IP address

    - by Adam M-W
    I have a network with DHCP enabled, and a computer that dual-boots operating systems, with different SSH host keys on each (and yes, I would like to keep different keys on each rather than copying the same identity/private key to both). Because the MAC address is the same, the IP address does not change between operating systems, so when connecting via ssh, even using the hostname via DNS/mDNS rather than the IP address, I get the warning:

        Warning: the RSA host key for 'hostname' differs from the key for the IP address '192.168.1.172'
        Offending key for IP in /Users/user/.ssh/known_hosts:37
        Matching host key in /Users/user/.ssh/known_hosts:38
        Are you sure you want to continue connecting (yes/no)?

    How can I suppress the warning when the hostname's key differs from the IP address's key, but retain the ability to check that host keys stay the same for each hostname? (Each OS has a unique hostname.)
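
    A sketch of one documented knob: the ssh_config option CheckHostIP controls exactly this IP-address cross-check, and disabling it per host keeps ordinary per-hostname key verification intact (the hostname patterns below are illustrative):

        # ~/.ssh/config: skip the IP-based host key cross-check for these hosts
        Host hostname-os1 hostname-os2
            CheckHostIP no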

    Read the article

  • How can I copy a SQL Azure Database to a server on a different subscription?

    - by Tragedian
    I'm trying to create a copy of a SQL Azure database. The source and destination servers are associated with two different subscriptions, but they are located in the same data centre. I've been reading Copying Databases in Windows SQL Azure Database and How to: Copy Your Databases (Windows Azure SQL Database) for instructions on this, but I'm not sure if my scenario is covered. I would like to use the command:

        CREATE DATABASE Database1B AS COPY OF Database1A;

    but I don't know what the implications are for the accounts used, or what I need to set up between the two databases before this command is possible. Has anybody achieved this type of copy and can elaborate?
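
    For reference, the documented cross-server form of the copy command qualifies the source with its server name; a sketch (server1 is illustrative, and per the linked articles the statement runs on the destination server's master database using a login that exists with the same name and password on both servers):

        -- Run on the master database of the destination server
        CREATE DATABASE Database1B AS COPY OF server1.Database1A;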

    Read the article

  • Clipboard replacements (Ditto, ClipX) stopped working in Windows 7

    - by Bassam
    Over the past few days, I've noticed that the clipboard replacement application I use (Ditto) isn't working any more. Specifically, it isn't copying items to its list. It still shows the history of items copied before, but doesn't add any new items. I am still able to paste past items, so the program is still partly functional. It will work for a while after quitting and restarting the program, but then it soon stops capturing new items again. I've tried using ClipX, another clipboard replacement app, and that doesn't capture new items either, which leads me to think this is a Windows problem. I'm on Windows 7, 64-bit. Is anyone else having this problem? Any ideas on what might be causing it? Update: I've found that if I Disconnect from the clipboard, then Connect to the clipboard again, it works for a while, but stops collecting items again after 15 minutes.

    Read the article

  • Why not install Msvcr71.dll into system32?

    - by hillu
    While looking for an authoritative source for the missing Msvcr71.dll that is needed by a few old applications, I stumbled across the MSDN article Redistribution of the shared C runtime component in Visual C++. The advice given to developers is to drop the DLL into the application's directory instead of system32 since DLLs in this directory are considered before the system paths. What can/will go wrong if I (as an administrator, not a developer) decide to take the lazy path and install Msvcr71.dll (and Msvcp71.dll while I'm at it) into the system32 directory (of 32 bit Windows XP or Windows 7 systems) instead of putting a copy in each application's directory? Is there another good solution to provide the applications with the needed DLLs that doesn't involve copying stuff to the application directories? added after first answers: I understand that incompatible API changes may have been made to the mentioned DLLs, but pretty much every mention of incompatibilities I have found using Google had to do with games or video codecs. Right now, I expect that the risk of breakage is pretty small. Am I missing something?

    Read the article

  • Copy large files to multiple machines on a LAN

    - by Jonathan Callen
    I have a few large files that I need to copy from one Linux machine to about 20 other Linux machines, all on the same LAN, as quickly as is feasible. What tools/methods would be best for copying these files, given that this is not going to be a one-time copy? These machines will never be connected to the Internet, and security is not an issue. Update: The reason I'm asking is that (as I understand it) we are currently using scp serially to copy the files to each of the machines, and I have been informed that this is "too slow" and a faster alternative is being sought. According to what I have been told, attempting to parallelize the scp calls simply slows things down further due to hard drive seeks.
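
    One direction worth sketching: multicast. The udpcast tools send the data over the wire once, so twenty receivers cost about the same as one, and the single sequential read on the sender avoids the seek thrashing of parallel scp (assumes the udpcast package is installed on every machine; file names are illustrative):

        # On each of the 20 receivers:
        udp-receiver --file=bigfile.img
        # On the sender, once the receivers are waiting:
        udp-sender --file=bigfile.img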

    Read the article

  • Can I fork a copy command on ReadyNAS SSH?

    - by DanyW
    I have a ReadyNAS 102 with a couple of USB drives attached. There are times I want to copy files between volumes, but I have also accidentally cut off a copying process by closing the SSH session. Is it possible for me to fork a cp or mv process over SSH? As it currently stands, when I close the SSH session, be it by accidentally closing the terminal window or by closing my laptop screen and putting it to sleep, the copy process stops. Can I do something like:

        cp ~/blah /some/other/path &

    and have the process keep running to completion in the background even if the SSH session is terminated?
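
    A sketch of the usual pattern: a plain & still leaves the job attached to the terminal, so it dies with the session; nohup (or disown) detaches it from the hangup signal (the log path is illustrative):

        # Start the copy immune to hangup, with output captured for later review
        nohup cp ~/blah /some/other/path > ~/copy.log 2>&1 &
        # Or detach a job that is already running in the background (bash)
        disown -h %1

    If the ReadyNAS has screen or tmux installed, running the copy inside a detachable session also works and lets you reattach to watch progress.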

    Read the article

  • Compressing dd backup on the fly

    - by Phil
    Maybe this will sound like a dumb question, but the way I'm trying to do it doesn't work. I'm on a live CD, the drive is unmounted, etc. When I do the backup this way:

        sudo dd if=/dev/sda2 of=/media/disk/sda2-backup-10august09.ext3 bs=64k

    ...it would normally work, but I don't have enough space on the external HD I'm copying to (it ALMOST fits). So I wanted to compress it like this:

        sudo dd if=/dev/sda2 | gzip > /media/disk/sda2-backup-10august09.gz

    ...but I got permission denied. I don't understand.
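
    A likely explanation, with a sketch of the fix: sudo elevates only dd; the > redirection is performed by the ordinary unprivileged shell, so the write into /media/disk fails if that mount point isn't writable by your user. Routing the final write through sudo as well avoids this:

        # tee runs as root and does the writing; its stdout is discarded
        sudo dd if=/dev/sda2 bs=64k | gzip | sudo tee /media/disk/sda2-backup-10august09.gz > /dev/null
        # Equivalent: run the entire pipeline inside a root shell
        sudo sh -c 'dd if=/dev/sda2 bs=64k | gzip > /media/disk/sda2-backup-10august09.gz'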

    Read the article

  • Excluding certain file types in wget

    - by Alan Spark
    I have been using wget for a while now to mirror files from an FTP server to a local folder. My wget command is as follows:

        wget -mirror -w 1 -p -nH -P /var/www/ ftp://my-ftp-server

    However, I just noticed that it is copying over a .listing file for every folder that it visits. So even if nothing has changed on the FTP server, a .listing file will always be copied. My understanding is that the .listing file is created when wget opens the FTP session. Is there a way to avoid this? I've tried the -R option (e.g. -R .listing), but this didn't help. See: http://www.gnu.org/software/wget/manual/wget.html#Recursive-Accept_002fReject-Options Thanks, Alan
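
    A workaround sketch (it treats the symptom rather than the cause, since the .listing files are a by-product of each FTP directory retrieval): sweep them up after every mirror run:

        wget -mirror -w 1 -p -nH -P /var/www/ ftp://my-ftp-server
        # Remove the leftover FTP directory listings
        find /var/www -name '.listing' -type f -delete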

    Read the article

  • Batch edit (not rename) file properties in Windows

    - by Jay
    I have a large directory of downloaded shareware. I keep track of what I have by individually editing the properties of each program. However, some of the programs are multipart .rar types, and I have at least a few hundred programs so far. I am looking for a utility that will let me batch-edit file properties such as Title, Author, Summary, and Comments, so I don't have to edit each file or file part individually. Windows doesn't let me do this in Explorer. PowerDesk has a proprietary system, but it isn't preserved when moving or copying files. Any suggestions?

    Read the article

  • Create samba shortcut from command line

    - by neurolysis
    I'm currently wanting to deploy a setup to around 300 Macs. I have most of it scripted, but I'm having trouble creating a working alias to a Samba share from the command line. I tried copying an alias from one Mac to another, but it loses its status as an alias, and OS X instead opens it in TextEdit. From a hexdump, it also looks like it contains machine-specific information. So, say I wanted to create an alias to 'smb://server/share' on the desktop from the command line: how would I do that? I have also tried a tell block in AppleScript, but it complained about the syntax (specifically, too many slashes, seemingly objecting to the smb:// part). Thanks.
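
    A sketch of one possible approach, with caveats: Finder aliases have to point at an existing filesystem object, so mount the share first and alias the mount point rather than the smb:// URL. The mount point, server, and AppleScript phrasing below are assumptions to test on one machine before deploying:

        # Mount the SMB share (path and server are illustrative)
        mkdir -p /Volumes/share
        mount_smbfs //server/share /Volumes/share
        # Ask Finder to create an alias to the mounted volume on the desktop
        osascript -e 'tell application "Finder" to make new alias file at desktop to (POSIX file "/Volumes/share" as alias)'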

    Read the article

  • Using ROBOCOPY to MOVE data around, not copy it

    - by Nate Bross
    I have the following PowerShell script, which executes a few robocopy commands:

        ROBOCOPY.exe $q3 $q4 /R:5 /W:15 /S /NP /MT:32 /XA:SH /XJD
        ROBOCOPY.exe $q2 $q3 /R:5 /W:15 /S /NP /MT:32 /XA:SH /XJD
        ROBOCOPY.exe $q1 $q2 /R:5 /W:15 /S /NP /MT:32 /XA:SH /XJD
        ROBOCOPY.exe $src $q1 /R:5 /W:15 /S /NP /MT:32 /XA:SH /XJD

    This works fine, but it takes a really long time. I'm wondering if there is a way to have robocopy do a "cut + paste" instead of a "copy + paste", so Windows will move the NTFS pointer to the file instead of actually copying all of the bits of each file.
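
    Two hedged notes: robocopy's documented /MOV (files) and /MOVE (files and directories) switches delete the source after a successful copy, but they still physically copy the data. A pointer-only rename happens only for a move within the same volume, which PowerShell's Move-Item (or cmd's move) performs. A sketch, assuming all the $q* paths sit on one NTFS volume:

        # Still copies bits, but cleans up the source afterwards
        ROBOCOPY.exe $q3 $q4 /MOVE /R:5 /W:15 /S /NP /MT:32 /XA:SH /XJD
        # Same-volume move: a metadata rename, no data copied
        Move-Item -Path "$q3\*" -Destination $q4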

    Read the article

  • Slow write speeds on new Gigabit home file server

    - by Ryan Holder
    So I finally got all my parts delivered this week to set up a home file/backup server. It's currently running Ubuntu Server and I'm using Samba to share files on my network. The server has a 2 TB WD Green drive in it, connected to an Asus M5A78L-M motherboard. This is then connected via Cat 6a cable to my new Gigabit switch (TP-Link TL-SG1005D). My home desktop is also connected to this switch, again with Cat 6a cable. Currently, when transferring files I get a perfect 100 MB/s reading from the server on my Windows machine. When copying from my Windows machine to the server, I get around 30-38 MB/s. I know this drive is capable of faster speeds, so would anybody have an idea of where the bottleneck is? Any help would be greatly appreciated :) EDIT: I have found that FTP's write speed is much closer to my Samba read speed, so I'm going to guess it's a software problem rather than hardware.
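
    A diagnostic sketch to split network from software: iperf measures raw TCP throughput between the two machines, taking Samba and the disks out of the picture (the hostname is illustrative):

        # On the Ubuntu server:
        iperf -s
        # On the desktop (or any client on the switch):
        iperf -c fileserver

    If iperf reports near wire speed (~940 Mbit/s) in both directions, the cabling and switch are fine and the asymmetry lies in Samba's write path, which matches the FTP observation in the edit.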

    Read the article

  • Pasting into Vista cmd.exe broke, why?

    - by Michel de Ruiter
    I run Windows Vista x64 and regularly use the command-line window CMD.EXE. I have enabled QuickEdit Mode (plus Insert Mode and AutoComplete) to be able to quickly copy and paste text. Copying (select block, Enter) works fine. Pasting (right-click) text also works, as long as it was copied inside a CMD.EXE window. When the text was copied somewhere else (in an editor, browser, or whatever), however, pasting into CMD.EXE does not work! :-( Using the menu (Edit, Paste) does not do anything either, so it's not a mouse thing. I also tried elevating CMD.EXE. I can copy/paste freely between CMD.EXE instances of all sorts: elevated/normal, x64/x86... I'm sure it worked on this machine until relatively recently. What could have happened? Some Windows update, perhaps? The problem has been reported by others, but without a solution.

    Read the article

  • Does Windows Move command delete the file only on successful completion?

    - by IronicMuffin
    This may be a stupid question, but I'm erring on the side of caution here. If I'm using the Windows command line or batch files to move a file from one server to another and we have a network failure, what will happen to the original file? I would assume it remains untouched until fully moved, and is only then deleted, but I need to be sure. My fear is that it deletes bytes as they are moved, which would be bad. If that isn't the case, is there a better way than copying the file and deleting the original after the copy completes? Thanks for your help. EDIT: I suppose Super User would have been better. This is part of a job kicked off by code, so my first thought was to come here.
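
    For the cautious route, a sketch of making the copy-then-delete explicit in batch: && runs the deletion only when copy exits successfully, so a network failure mid-transfer leaves the original in place (the paths are illustrative; /B forces a binary copy):

        copy /B \\src-server\share\file.dat \\dst-server\share\ && del \\src-server\share\file.dat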

    Read the article

  • What's a fast way to copy a lot of files from an internal hard-drive to external (USB) storage?

    - by jonathanconway
    I have a large amount of data, about 500 GB, on the internal hard drive of a desktop PC. This includes music, videos, PDFs... you name it. I want to copy everything to an external USB hard drive (1.5 TB capacity). The desktop PC runs Ubuntu. To begin with, I simply plugged in and mounted the hard drive and dragged the top-level folder onto it. It started copying, but it seems to be proceeding very slowly: about 10 minutes later, it has only done about 500 MB. I'm sure this is slower than what I could achieve with less total data, so I'm wondering if there's a quicker way of doing this. Would it be better to copy it in portions of 500 MB or so, rather than all at once?
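
    A sketch of a common alternative to the file-manager drag: rsync copies everything in one pass, shows progress, and can resume where it left off if interrupted (the paths are illustrative):

        # -a preserves attributes; --partial keeps interrupted files for resuming
        rsync -a --partial --progress /home/user/ /media/usbdrive/backup/

    Splitting the job into 500 MB portions won't help by itself; sustained USB 2.0 throughput tops out around 30-35 MB/s regardless of how the copy is batched.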

    Read the article

  • Looking for Primos "name generation" code

    - by Greg E
    Anyone remember Primos? It had a shell-level feature called "name generation" which was very useful. E.g., to rename a bunch of files from part1.suffix to part1.new.suffix2 you could say:

        rename *.suffix =.+new.suffix2

    That's a very simple example; it was quite powerful. The control characters were:

        =, ==, ^=, ^==, +

    which meant, approximately: match one filename component, match all remaining components, delete one component, delete all remaining, add a component. In conjunction with Primos wildcards you could do pretty much any useful file renaming/copying operation very conveniently. It was much better than Unix wildcards and name generation/iteration, and I'd like to find it again and use it. Anyone seen it around? There's not much reference on the interweb: search "Primos name generation" and you get a few fragmentary hits. Thanks!
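
    The closest common Unix analogue may be mmv, whose to-pattern refers back to wildcard matches by number (a sketch; whether it covers Primos's full component algebra is another matter):

        # #1 expands to whatever * matched: part1.suffix -> part1.new.suffix2
        mmv '*.suffix' '#1.new.suffix2'

    The Perl rename utility covers similar ground with regular expressions, e.g. rename 's/\.suffix$/.new.suffix2/' *.suffix.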

    Read the article

  • How to change subversion working copy UUID?

    - by Ioan
    I've recently updated Subversion repositories from an old 1.2.3 version to 1.6.0 via svnadmin dump/load. The old repositories all used the same UUID (the repositories were created by copying a template repository), so I've changed the UUID on a couple of the new repositories via svnadmin setuuid to make them unique. I can't just relocate my existing working copies of those repositories because the UUIDs now differ. I know about exporting the working copy and checking out from the new repository, but I was wondering whether there is a way to just change the UUID of the working copy in place, like what svnadmin setuuid does for repositories.
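
    A sketch of one hack reported to work for this era of Subversion (1.6 and earlier working copies store the repository UUID in every .svn/entries file; back up the working copy first, and the UUIDs below are placeholders):

        # Rewrite the old UUID to the new one in every .svn/entries file
        find /path/to/wc -path '*/.svn/entries' \
            -exec sed -i 's/OLD-UUID-HERE/NEW-UUID-HERE/g' {} +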

    Read the article

  • 2010 MBP HD speed sanity check?

    - by hvgotcodes
    I have a 2010 MBP with the 7200 rpm hard drive. I was copying a 2.1 GB file and noticed read/write speeds of around 20 MB/s. Is that reasonable? It seems slow to me. What is the proper way to benchmark a hard drive on OS X? Googling, I see Xbench, but that hasn't been updated in years. I also see some guides for using the command line. The goal is to benchmark my drive and then compare the results against some official scores that the drive should be getting.
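
    A command-line sketch for a rough sequential benchmark (the test path is illustrative, and the read test only means anything if the cache is flushed first, e.g. with the purge utility where available):

        # Write test: 1 GiB of zeros (bs=1m is the BSD dd spelling on OS X)
        dd if=/dev/zero of=/tmp/ddtest bs=1m count=1024
        # Flush the filesystem cache, then read the file back
        sudo purge
        dd if=/tmp/ddtest of=/dev/null bs=1m
        rm /tmp/ddtest

    dd prints bytes/sec when it finishes; a healthy 7200 rpm laptop drive should manage well above 20 MB/s sequentially, so a number that low during a single large-file copy suggests something else (indexing, small files, or a failing drive) is in play.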

    Read the article

  • How do you live-migrate Hyper-V to Azure?

    - by TopHat
    I have a new install of Windows Server 2012 with the Hyper-V role set up, and a couple of VMs running along fat, dumb, and happy. I want to try Azure hosting for the VMs of a couple of stand-alone boxes. Is there anything special that I need to wire up to be able to live-migrate to Azure? I have the 90-day Azure trial account right now. Any special plumbing required? I have not found a lot of documentation about this yet; everything I found points to manually copying the VHDs via the command line and the Azure 2012 SDK.
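
    For what it's worth, a sketch consistent with the documentation the poster found: transfers are per-VHD uploads rather than live migrations, e.g. via the Azure PowerShell cmdlet Add-AzureVhd (the storage account, container, and paths below are illustrative):

        # Upload a local VHD to a page blob in Azure storage
        Add-AzureVhd -LocalFilePath 'C:\VMs\web01.vhd' `
            -Destination 'https://mystorage.blob.core.windows.net/vhds/web01.vhd'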

    Read the article
