Search Results

Search found 3203 results on 129 pages for 'transfer'.

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail.

    The database server is running Windows Server 2003 R2 Enterprise, 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. Files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest and SQL Server backup files typically compress down to 10-40% of the original, so it made sense to me to compress the files first. I've tried a number of methods, including:

        gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin)
        bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin)
        WinRAR 3.90
        7-Zip 4.65 (7za.exe)

    I've attempted to use WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either - no matter the tool I've tried, it ends up butting against the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option for the situation I'm in, unfortunately.

    For example, this is how 7-Zip dies:

        H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak
        7-Zip (A) 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Scanning
        Creating archive h:\dbatmp\zip\db-20100419_1228.7z
        Compressing  db-20100419_1228.bak
        System error: Unspecified error
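    Since streaming the compression avoids any single tool having to hold the whole 100-250 GB file as one unit, one approach worth sketching (not from the question; the chunk size and file names are placeholders, and whether these particular gzip/split builds cope with input this large is exactly the open question, so test on a smaller backup first) is to pipe gzip into split on the source box:

        gzip -1 -c db-20100419_1228.bak | split -b 2000m - db.bak.gz.part-

    and, after the parts have been copied across, reassemble and decompress on the destination box:

        copy /b db.bak.gz.part-aa+db.bak.gz.part-ab+db.bak.gz.part-ac db.bak.gz
        gzip -d db.bak.gz

    (one +-joined entry per part file, in order).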

    Read the article

  • Excel Help: Data Input Help

    - by B-Ballerl
    Every day I download data from a site that will have rows each filled with individual data for clients. I'm able to input the data into Excel as a whole, but after that I'm having trouble figuring out how to put it into a chart. For example, web visits over time: say Client 1 stayed for 5 min, increasing his total time on the site to 20 min, and Client 2 stayed for 0 min, keeping his time at 10 min, and they were both registered on New Year's Eve; R1's last login was today and R2's was yesterday (R for some reason represents Client, no idea why...). Client 3 hasn't been on since he registered, keeping his total at 4 min. So my data would look something like this for today (20110104):

        R1,20101231,20110104,20
        R2,20101231,20110103,10
        R3,20101231,20101231,4

    And this for the day before (20110103):

        R1,20101231,20110102,15
        R2,20101231,20110103,10
        R3,20101231,20101231,4

    I get about 200+ client rows each day, where even the names in the client list are changing. Is it possible to import the data each day and fill it into an Excel sheet where the client number is off on the left-hand side in a table, and the amount of time (a whole number, e.g. 4) it spends on the site each day extends to the right under its specific date (see picture)? I've managed to create a manual sheet but have been unsuccessful at getting Excel to do any of it for me. Here are two pictures:

    Read the article

  • How to fix "The semaphore timeout period has expired"?

    - by Ryan
    I bought a new hard drive and I'm attempting to copy stuff off the old hard drive to the new one. I tried it using Windows Vista and Windows 7, but no go. Then I tried installing some 3rd-party updated drivers that were supposed to fix the problem, and now my Windows Vista disk won't boot. Has anyone had this problem before, and how did you fix it?

    Read the article

  • rsync to EC2: Identity file not accessible

    - by Richard
    I'm trying to rsync a file over to my EC2 instance:

        rsync -Paz --rsh "ssh -i ~/.ssh/myfile.pem" --rsync-path "sudo rsync" file.pdf [email protected]:/home/ubuntu/

    This gives the following error message:

        Warning: Identity file ~/.ssh/myfile.pem not accessible: No such file or directory.
        [email protected]'s password:

    The pem file is definitely located at the path ~/.ssh/myfile.pem, though: vi ~/.ssh/myfile.pem shows me the file. If I remove the remote path from the very end of the rsync command:

        rsync -Paz --rsh "ssh -i ~/.ssh/myfile.pem" --rsync-path "sudo rsync" file.pdf [email protected]

    Then the command appears to work...

        building file list ...
        1 file to consider
        file.pdf
        41985 100% 8.79MB/s 0:00:00 (xfer#1, to-check=0/1)
        sent 41795 bytes  received 42 bytes  83674.00 bytes/sec
        total size is 41985  speedup is 1.00

    ...but when I go to the remote server, nothing has actually been transferred. What am I doing wrong?
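    Two things look suspicious here (an inference, not something confirmed in the thread): the tilde inside the quoted --rsh string is never expanded by the shell, so ssh looks for a file literally named ~/.ssh/myfile.pem, fails, and falls back to password authentication; and without the trailing :/home/ubuntu/ the last argument is treated as a local destination, which would explain why the "working" run never puts anything on the server. A sketch of the same command with an explicit key path (the obfuscated address is kept as a placeholder):

        rsync -Paz -e "ssh -i $HOME/.ssh/myfile.pem" --rsync-path "sudo rsync" file.pdf [email protected]:/home/ubuntu/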

    Read the article

  • How to restore a Windows Easy Transfer file from a 64bit machine to a 32bit machine?

    - by Kevin Davis
    Using a 32-bit laptop, I saved my settings etc. using "Windows Easy Transfer" from the Win7 RC. I set the file destination to a Win2K R2 machine that happened to be 64-bit. When I re-installed my laptop and tried to restore my settings from the file I'd saved, I was surprised to get an error: "Windows Easy Transfer can't transfer files from a 64-bit computer to a 32-bit computer." Is there a known workaround? Ideas on how to unpack the file and get my stuff?

    Read the article

  • Linux command-prompt FTP to an FTP server running on Windows

    - by Vass
    Hi, I am running FileZilla Server on Windows Vista. I have it set up to be accessed from outside IPs, and when I connect to that IP with the FileZilla client it connects normally. On the same machine I have Ubuntu running in a virtual box, and the FileZilla client works fine from there as well. Now I want to try the command prompt. So I do ftp xxx.xxx.xx.xx, enter the name and password, and I get the ftp command prompt, but the commands are not working properly: "ls" and "cd" do not work. "cd" tells me that the current directory is "/" (root), but this does not make sense on a Windows operating system. The FileZilla client takes the user in the application window directly to the root folder of the filespace granted to that user. How can the same be done from the command prompt, if there is a way? It is as if the command prompt takes me to a root that does not exist, or that I do not have permission to move in. Is there any way to be taken to the correct directory directly, or to move there, especially when the slashes are the wrong way around? Best,
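    One difference worth checking (an assumption on my part, since the post doesn't say which command-line client is in use): graphical clients such as FileZilla default to passive mode, while the classic command-line ftp client defaults to active mode, so directory listings can hang or fail if the active-mode data connection is blocked by a firewall or NAT. The "/" shown by cd is simply the virtual root FileZilla Server presents for that user - the same place the GUI client starts in. A sketch of a session that toggles passive mode first (directory and file names are placeholders):

        ftp xxx.xxx.xx.xx
        ftp> passive
        Passive mode on.
        ftp> pwd
        ftp> ls
        ftp> cd somedir
        ftp> binary
        ftp> get somefile.zip
        ftp> quit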

    Read the article

  • Criteria strings: how many different criteria can be entered to retrieve specific data?

    - by Janet
    For our membership database we are currently using an old DOS program "Arclist". The program is old but the one feature we desperately need in a database program is to be able to enter multiple criteria at one time for more of a "one time" extraction of the data meeting all the various criteria entered in what I call a "criteria string". An example may be extracting only those records with zip codes matching (67893, 54235, 54323, 54201, 54302, 54303, 54301, 67894, 67895). Another set of criteria might be to omit records, not equal to, one type of criteria in one field and also extract records matching criteria in another field. So we would want records "not equal to" in one field, but whose information equals requested information in another field.

    Read the article

  • Windows 7 can't copy file - Error 0x800700DF: The file size exceeds the limit allowed and cannot be saved

    - by JJGroover
    Any attempt to copy files larger than about 40 MB from a network share (a SAN running Openfiler / Samba) to my local machine running Windows 7 always results in the following error, and the copy fails: "Error 0x800700DF: The file size exceeds the limit allowed and cannot be saved." I've tried copying to my C: drive and to a USB drive with the same results. Smaller files copy just fine. Clearly 40 MB is not that big of a file, so I'm assuming it is some buggy interaction between Windows 7 and Samba, perhaps. Google has so far turned up nothing. Can anyone point me in the right direction?
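    Error 0x800700DF is most often reported in connection with the Windows WebDAV client, whose FileSizeLimitInBytes setting defaults to roughly 47 MB - which lines up with the ~40 MB threshold seen here. That only matters if the share is actually being reached through the WebClient service rather than plain SMB, so treat this as an assumption to test rather than a confirmed cause. A sketch of raising the limit:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v FileSizeLimitInBytes /t REG_DWORD /d 0xffffffff /f
        net stop WebClient
        net start WebClient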

    Read the article

  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I am connecting to the remote Linux server using SSH (specifically, I am using a Windows program called SSH Secure Shell Client version 3.3.2).

    Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I am not sure, but I think that it fails because perhaps my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the internet connection goes out for a second or two during that long time. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer. In other words, sometimes I get lucky and the download finishes before the internet connection drops for a second.

    Is there any way that I can download the file in an intelligent way -- whereby the operating system or software "knows" where it left off and can resume from the last point if a break in the internet connection occurs? Perhaps it is possible to download the file in sections? Although I do not know if I can conveniently split my file into multiple files -- I think this would be very difficult, since the file is binary and is not human-readable. As it is now, if the entire ~35 GB file download doesn't finish before the break in the connection, then I have to start the download over and overwrite the ~5-20 GB chunk that was downloaded locally so far.

    Do you have any advice? Thanks.
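    The usual way to get resumable transfers over SSH is rsync's --partial option; on Windows XP that means running rsync under something like Cygwin or cwRsync, which is an assumption rather than part of the question. A sketch, with the host and path as placeholders:

        rsync --partial --progress -e ssh user@remotehost:/path/to/bigfile.dat .

    Re-running the same command after a dropped connection reuses the partially downloaded file, so only the missing portion is transferred instead of the whole 35 GB.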

    Read the article

  • Replacing a Windows File Server

    - by Keltari
    We have a Windows file server that needs to be replaced. Unfortunately, there are too many custom-built applications, shares, and random things that rely on the existence of that file server to just stand up a new one, so we need to make the transition as seamless as possible. We decided to copy the contents of the old server to the new one and then rename the new one with the same name. I have made a list of everything I need to do. Am I missing anything?

    1. The new file server is up and running.
    2. I copied the directories of the old file server to the new one.
    3. I set up all the shares/permissions for those directories in advance.
    4. Copy the contents from oldserver to newserver:
       robocopy \\vash\d$\nasshare\ \\vash2\l$\originalvash\nasshare\ /E /ZB /copyall /dcopy:T /FP /x /v /fp /np /mt:8 /eta /log:robocopy.log /tee
    5. Rename and re-IP oldserver to something else (I need it available in case something is missing).
    6. Rename and re-IP newserver to oldserver.

    Obviously, I need to test whether the shares are working properly. Are there any steps I'm missing?
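    For the two rename/re-IP steps, a sketch using built-in tools (the names, addresses and interface name are placeholders, whether netdom is appropriate depends on the domain setup, and the old server has to give up its name and reboot before the new one can take it):

        netsh interface ip set address name="Local Area Connection" static 192.168.1.25 255.255.255.0 192.168.1.1
        netdom renamecomputer newserver /newname:oldserver /userd:DOMAIN\admin /passwordd:* /force /reboot:10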

    Read the article

  • What hash should be used to ensure file integrity?

    - by Corey Ogburn
    It's no secret that large files offered up for download often are coupled with their MD5 or SHA-1 hash so that after you download you can verify the file's integrity. Are these still the best algorithms to use for this? Obviously these are very popular hashes that potential downloaders would have easy access to. Ignoring that factor, what hashes have the best properties for being used for this? For example, bcrypt would be horrible for this. It's designed to be slow. That would suck to use on your 7.4 GB dual layer OS ISO you just downloaded when a 12 letter password might take up to a second with the right parameters.
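    For catching transfer corruption (as opposed to resisting a deliberately forged file), any modern cryptographic hash is fast enough for multi-gigabyte ISOs. As an illustration of how a published checksum is normally verified - not a recommendation from this thread, and the file name is a placeholder:

        sha256sum some-dvd-image.iso
        certutil -hashfile some-dvd-image.iso SHA256

    The first is the coreutils tool on Linux; the second assumes a version of Windows whose certutil supports the -hashfile verb.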

    Read the article

  • Copy files with filter (XP)

    - by fire
    I have a huge folder (over 6 GB) with multiple sub-folders that I want to copy onto an external hard drive; however, I do not want it to copy any PDF, EXE or ZIP files across, to save space. Is there any software that will help me achieve this? I have looked at TeraCopy, but this doesn't seem to have any filter mechanism. I am using Windows XP (sigh). Edit: I found the xcopy command; will this do it? Can anyone help me with the syntax?
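    Regarding the edit: xcopy's /EXCLUDE switch is the relevant one. It takes a text file of strings, and any file whose full path contains one of those strings is skipped, so bare extensions such as .pdf work as filters. A sketch, with placeholder paths:

        rem C:\excludes.txt contains the lines .pdf, .exe and .zip (one per line)
        xcopy "C:\HugeFolder" "E:\HugeFolder" /E /I /EXCLUDE:C:\excludes.txt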

    Read the article

  • Rsync and lazy mode?

    - by fabien-barbier
    Since transferring or copying a file while it is still being written to can corrupt the transferred copy, can we define a time interval in which rsync checks each file in a given directory for changes? Files that have not changed during that interval would be transferred, while those that have changed would not. Can I do that with rsync, or with another tool? Is there a script to add this functionality to rsync? Thanks.
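    As far as I know rsync has no built-in "only if unchanged for N minutes" test, but the effect can be approximated by pre-filtering the file list and handing it to rsync. A sketch, with the paths and the 10-minute window as placeholder values:

        cd /data && find . -type f -mmin +10 > /tmp/stable-files.txt
        rsync -av --files-from=/tmp/stable-files.txt /data/ user@backuphost:/backup/

    find -mmin +10 selects files whose last modification is more than 10 minutes old, so anything still being written is left for the next run.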

    Read the article

  • Rsync over NFS with QoS: How to view real transfer speed?

    - by Ian Mackinnon
    We have a bandwidth limit between a Linux server and a NAS, created using 'tc' with an IP filter. When writing to an NFS mount of the NAS, rsync claims a very high transfer speed for each file and then waits a long time before acknowledging that everything has finished. The total time taken is consistent with the QoS limit and the time taken by the same transfer over FTP. Why does the write to the NFS mount report higher transfer speeds than are actually happening over the network? How can I monitor the actual bandwidth of the transfer?
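    The inflated per-file figure is most likely rsync measuring writes into the NFS client's page cache rather than traffic on the wire (an inference, not something stated in the question). To see the real rate, the shaped interface itself can be watched on the Linux server; for example (the interface name is a placeholder, and both tools must be installed):

        iftop -i eth0
        tc -s qdisc show dev eth0

    The first shows live per-connection bandwidth; the second dumps byte and packet counters for the qdisc doing the shaping.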

    Read the article

  • Remote file copy util (like rsync) but that will take account of data already copied (in this session)

    - by Rory McCann
    Let's say I have a directory with 2 files, both identical and quite large (e.g. 2 GB each), and I want to rsync that directory to a remote host. As I understand it (and I could be wrong), rsync calculates checksums of files. Surely if it sees 2 files with the same checksum it can just copy the first file, then do a local copy on the remote host for the 2nd file? That would make it faster, no? On a similar note, doesn't rsync hash all the remote files before copying? If it saw a different file with the same hash as a file that was about to be transferred, it could do a local copy on the remote host. Does rsync support this sort of thing? Is there some way to turn it on? Is there a tool similar to rsync that will do this sort of 'hash-based' local copying?

    Read the article

  • Method for transferring large files for newbies

    - by doug
    Hi there. One of my friends is now in China and he wants to send me his home-made video files. I have a Linux hosting account on GoDaddy and I've configured an FTP account for him. Unfortunately he has trouble using the FTP account. Can you recommend a better option? Thanks.

    Read the article

  • Painless way of consolidating files across multiple machines/OSes

    - by 5arx
    Just bought a NAS. So I thought I'd get all our photos, media files and pdfs consolidated, de-duplicated, de-junked and virus-checked and stick them all on it. We have 3 laptops, one running Windows, the others OSX. We have a file server running Windows - it was the result of an earlier attempt at a networked fileserver - and a Mac Pro that is also kind of a server (previous attempts at this job have resulted in most of our stuff being on it). Also memory cards/sticks, cd backups and so on. I would be grateful if anyone could suggest a strategy or, ideally, tool(s) I could use to solve this problem. It is probably no more than one or two terabytes of data in total, but I can imagine that going through it all manually, file-by-file may well drive me insane.

    Read the article

  • Ubuntu: Network connection seems to fail after some time

    - by chrischu
    I just bought a Shuttle XS-35 barebone mini-PC, put a 1 TB WD hard drive and 2 GB of RAM into it, and installed Ubuntu on it. The machine will act as a media server (streaming videos to my PS3) and as a web server for some small private projects. I wanted to copy my videos from my Windows 7 machine to the Ubuntu machine and therefore created a Samba share on the Ubuntu machine. I tried copying the files with the standard Windows copy function and with SyncToy, but after some time (sometimes 5 copied files, sometimes 120 copied files) the Samba share just disappears. When that happens I can't reach the internet from the Ubuntu machine, although the network connection still seems to be fine (IP still there, etc.). Between the machines sits a Linksys router. When I try to ping the router from the Ubuntu machine (after the connection stops working), only a very small subset of the packets actually get there (something around 20%). When I restart the Ubuntu machine everything seems to work normally again. I have no idea where the problem lies. Does anybody have a clue? Thanks in advance!

    Read the article

  • Moving Exchange .EDB and .STM files to another partition

    - by Jorge Fernandez
    I'm trying to move my Exchange mailbox store to a new partition, and I keep running into an error message saying: "Cannot copy: insufficient system resources exist to complete the requested service." The server is a Dell PowerEdge 2850 with dual Xeon processors @ 3.00 GHz and 4 GB of RAM, running Windows Server 2003 R2 SP2 with Exchange 2003 Standard. The store is around 55 GB. Any ideas? I want to get Exchange on its own partition, since I need to free up some space on the partition it's currently on.
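    If the copy is being attempted with Explorer, one workaround often suggested for this error on very large files is ESEUTIL's copy-file mode, which uses non-cached I/O. The store must be dismounted first, and the paths below are placeholders rather than the poster's actual layout, so treat this as a sketch:

        eseutil /y "D:\Exchsrvr\MDBDATA\priv1.edb" /d "E:\Exchsrvr\MDBDATA\priv1.edb"
        eseutil /y "D:\Exchsrvr\MDBDATA\priv1.stm" /d "E:\Exchsrvr\MDBDATA\priv1.stm"

    As far as I recall, the supported route in Exchange 2003 is to change the database path on the Database tab of the mailbox store's properties in Exchange System Manager, which dismounts the store and moves the files itself.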

    Read the article

  • What's the easiest way to duplicate a portion of a directory structure onto an external drive?

    - by Jon Cage
    I'm trying to move a large chunk of data from one of our servers onto an external drive for delivery to Amazon glacier storage. To do that, I'd like to copy a chunk of the server, preserving the directory structure. I.e. move this:

        \\MyServer\Some\Longwinded\Path\TheDataIWantToCopy
        \\MyServer\Some\Longwinded\Path\TheDataIWantToCopy\First bit of data\DataFile1.dat

    to this:

        D:\
        D:\First bit of data\DataFile1.dat
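    A sketch of one way to do this with robocopy (the switches shown are a minimal set chosen for illustration, not taken from the question):

        robocopy "\\MyServer\Some\Longwinded\Path\TheDataIWantToCopy" D:\ /E /COPY:DAT /R:2 /W:5 /LOG:copylog.txt

    /E recreates the directory tree including empty folders, so "First bit of data\DataFile1.dat" ends up at D:\First bit of data\DataFile1.dat as in the example above.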

    Read the article

  • DOS application to allow remote management of files over serial link

    - by tomlogic
    Harken back to the days of DOS. I have an embedded DOS handheld device, and I'm looking for a tool to manage the files stored on it. I picture an application I can launch on the device that opens COM1 up for commands to get a directory listing, send/receive files via x/y/zmodem, move/delete files, and create/move/delete directories. A Windows application can then download a recursive file listing and then manage those files (for example, synchronizing with a local directory). Keep in mind that this is DOS -- 8.3 filenames, 640K of RAM and a 19200bps serial link (yuk!). I'd prefer something with source in case we need to add additional features (for example, the ability to get a checksum of a file for change detection). Now that I've written this description, I realize I'm asking for something like LapLink or pcAnywhere. Norton no longer sells DOS versions of pcAnywhere and LapLink V for DOS seems pricy at $50. Are you aware of any similar apps from those good old days?

    Read the article
