Search Results

Search found 82718 results on 3309 pages for 'large file download'.


  • Bidirectional real-time sync of large file tree between two distant Linux servers

    - by dlo
    By "large file tree" I mean about 200k files, and growing all the time, though only a relatively small number of files change in any given hour. By "bidirectional" I mean that changes may occur on either server and need to be pushed to the other, so rsync alone doesn't seem appropriate. By "distant" I mean that the servers are both in data centers, but geographically remote from each other. Currently there are only 2 servers, but that may expand over time. By "real-time", it's OK for there to be a little latency between syncs, but running a cron job every 1-2 minutes doesn't seem right, since only a very small fraction of files may change in any given hour, let alone any given minute.

    EDIT: This is running on VPSes, so I might be limited in the kinds of kernel-level things I can do. Also, the VPSes are not resource-rich, so I'd shy away from solutions that require lots of RAM (like Gluster?).

    What's the best / most "accepted" approach to get this done? This seems like it would be a common need, but I haven't been able to find a generally accepted approach yet, which was surprising. (I'm seeking the safety of the masses. :)

    I've come across lsyncd, which triggers a sync at the filesystem-change level. That seems clever, though not super common, and I'm a bit confused by the various lsyncd approaches. There's just using lsyncd with rsync, but this seems fragile for bidirectionality, since rsync has no notion of memory (e.g., to know whether a deleted file on A should be deleted on B, or whether it's a new file on B that should be copied to A). lipsync appears to be just an lsyncd+rsync implementation, right?

    Then there's using lsyncd with csync2, like this: http://www.axivo.com/community/threads/lightning-fast-synchronization-with-csync2-and-lsyncd.121/ ... I'm leaning towards this approach, but csync2 is a little quirky, though I did run a successful test of it. I'm mostly concerned that I haven't been able to find much community confirmation of this method.

    People on here seem to like Unison a lot, but it no longer appears to be under active development, and it's not clear that it has an automatic trigger like lsyncd. I've seen Gluster mentioned, but it is maybe overkill for what I need?

    UPDATE: FYI, I ended up going with the original solution I mentioned: lsyncd+csync2 (sketched below). It works quite well, and I like the architectural approach of keeping the servers very loosely joined, so that each server can operate indefinitely on its own regardless of the link quality between them.
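
    The lsyncd+csync2 pairing boils down to "watch the tree for inotify events, batch them, then let csync2 reconcile the peers". A rough shell sketch of that trigger loop, using inotify-tools instead of lsyncd to show the moving parts; the watch path, the 5-second settle window, and a pre-existing csync2 group config on both hosts are all assumptions here (lsyncd does this batching more robustly):

        #!/bin/bash
        # Trigger half of the lsyncd+csync2 pattern: watch the tree, absorb
        # bursts of inotify events, then run one csync2 pass per burst.
        # Requires inotify-tools and a configured csync2 group on both ends.
        WATCH_DIR=/data

        inotifywait -m -r -q -e modify,create,delete,move \
            --format '%w%f' "$WATCH_DIR" |
        while read -r _changed; do
            # Drain further events until 5 idle seconds pass, so one
            # csync2 run covers a whole burst of changes.
            while read -r -t 5 _changed; do :; done
            csync2 -x   # reconcile pending changes with the other csync2 hosts
        done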

    Read the article

  • Will increasing RAM improve Lightroom 3 large TIFF loading times

    - by andy
    Setup:

        - mid-2009 17" unibody MacBook Pro
        - 4 GB RAM, 2.66 GHz Core 2 Duo
        - Snow Leopard 10.6.6
        - Lightroom 3

    When working with 12-megapixel RAW files from a Nikon D700, no problem; Lightroom is fine. Recently I've been scanning film, which results in large TIFF files, about 130 MB each. The TIFF files themselves are good, and I'm happy with my scanning workflow. Working with these files in Lightroom is perfectly fine, except for one step: when I choose one of these photos in the Develop module, Lightroom displays "Loading" on the image for about a minute or two, which is quite long. Once the image is loaded, everything is fine again, and applying effects is instant.

    So my only issue is reducing that "Loading" time in the Develop module (the Library module is fine too). Will increasing my RAM to 8 GB help? I'm worried about spending the money and it not making any difference.

    thanks
    andy
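
    One way to answer the "will more RAM help?" question before spending the money is to watch paging activity while the slow step happens. A quick check on OS X (the 1-second interval is just an example):

        # Run in Terminal while opening one of the 130 MB TIFFs in Develop.
        # If the "pageouts" column climbs steadily during the load, the
        # machine is swapping and 8 GB will likely help; if it stays flat,
        # the bottleneck is probably CPU or disk rather than memory.
        vm_stat 1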

    Read the article

  • Automatically save/download e-mail body to disk

    - by CatamountJack
    Is there a program that will allow me to connect to my mail server (IMAP) and automatically save certain new e-mails to disk? Multiple times a day I receive automated e-mail updates about pending jobs from a system that processes some information for us. The data in these e-mails is written as plain-text within the body of the message. I would like to download the newest message, parse it, and display it on my desktop. The last two parts I can manage ok - it's just the automatic downloading that is posing a challenge. I don't use Outlook (I do use Thunderbird), but would prefer not to have the client open to make this happen. I'm currently running Win7.
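
    If you'd rather not keep any mail client open, one lightweight option is curl, which has built-in IMAP support and runs fine on Windows; schedule it with Task Scheduler and feed the saved files to your parsing step. A minimal sketch, where the server, credentials, and filenames are placeholder assumptions:

        #!/bin/sh
        # Fetch unseen messages from an IMAP inbox with curl, one file each.
        # Note: depending on curl version, fetching by number is ";UID=" or
        # ";MAILINDEX=" -- check `curl --manual` for your build.
        URL="imaps://imap.example.com/INBOX"
        AUTH="user:secret"

        # The query part issues an IMAP SEARCH; the server replies with a
        # line like "* SEARCH 101 102 ...".
        for n in $(curl -s --user "$AUTH" "$URL?UNSEEN" | grep -o '[0-9][0-9]*'); do
            curl -s --user "$AUTH" "$URL;UID=$n" -o "mail_$n.eml"
        done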

    Read the article

  • Downloads on Vista Home Premium start off fast but slow down to 0 Kb/s and hang

    - by user66265
    I have Windows Vista Home Premium on my computer, and every time I go to download something, it starts out at about 1.5 Mb/s and stays there for about 3 seconds, then slows to 800 Kb/s and continues to drop until it reaches 0 Kb/s and hangs. I've tried just about everything I can find, such as uninstalling all firewalls/antivirus, applying the netsh tweaks (disabling RSS, TCP autotuning, and chimney offload), and updating everything, but it still keeps happening. I'd prefer not to reinstall, but if I have to then I have to...

    EDIT: Figured it out: the router needed a firmware update.
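
    For anyone who lands here before checking their router firmware, these are the Vista-era TCP tweaks the poster is referring to, run from an elevated command prompt:

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global chimney=disabled
        netsh interface tcp set global rss=disabled

        REM Revert later by setting the values back, e.g.:
        REM netsh interface tcp set global autotuninglevel=normal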

    Read the article

  • Booting large ISO through PXE

    - by Devator
    I currently have a FOG server (which works perfectly fine) and I'm trying to boot Windows 7 through it with memdisk. But since the ISO is rather large (more than 6 GB), memdisk tries to load the entire ISO into memory before booting, and it crashes with the error message "not enough memory to load specified image". The systems here don't have 6 GB of RAM, so I need another way to boot it. I am aware of WDS and SCCM, but I want to do this with FOG. Is there any way to boot the ISO and install Windows through FOG?
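
    A common way around memdisk's RAM requirement is to PXE-boot only a small WinPE image (a few hundred MB, which memdisk can hold) and pull the actual installer over the network from inside it. A sketch of the WinPE side, where the server name, share, and credentials are illustrative assumptions and the Windows 7 ISO contents have been copied to the share beforehand:

        REM From the WinPE command prompt after it boots over PXE:
        wpeinit
        net use Z: \\fogserver\win7install Passw0rd /user:installer
        Z:\setup.exe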

    Read the article

  • Adobe Volume License for InDesign Digital Download Location?

    - by elistp
    We recently purchased a volume license for Adobe InDesign from Dell. We received an e-mail for the order that contains the serial number. However, there is no information on how to obtain the InDesign installer itself. This is our first time dealing with Adobe volume licensing, so I'm a bit lost as to what we're supposed to do. I've googled around a little and found an Adobe License Portal, but I do not have access to it. Does anyone with experience with Adobe volume licensing have any idea what we're supposed to do to get a download of our purchase?

    Read the article

  • Good/Better config for MySQL on an EC2 Large Instance

    - by Tim Reynolds
    I have an EC2 Large instance dedicated to MySQL. It will be serving a Joomla/Magento combo, so it has a blend of InnoDB and MyISAM tables. I have only worked with MyISAM in the past and am therefore unfamiliar with the settings InnoDB uses. Experiments so far have been less than fruitful, as I keep causing the InnoDB engine to be disabled.

    My instance is running Ubuntu 10.04 64-bit server edition and has ~7.5 GB of RAM. MySQL is currently using ~0.6% of that, with somewhat poor performance. I would like to configure it to use as much of the system RAM as is reasonable. Testing some settings, I learned that the InnoDB logs can't collectively be larger than 4 GB. Would anyone be able to provide some base InnoDB and MyISAM settings to get me started? Thank you.

    Tim
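
    A commonly suggested starting shape for a dedicated ~7.5 GB mixed InnoDB/MyISAM box is sketched below; the numbers are judgment calls, not gospel. Two facts worth knowing: the 4 GB combined-log ceiling is real on the MySQL 5.0/5.1 that Ubuntu 10.04 ships, and if you change innodb_log_file_size you must shut MySQL down cleanly and move the old ib_logfile* files aside before restarting, or InnoDB will refuse to start (very likely how the engine kept getting disabled during testing).

        # /etc/mysql/my.cnf -- sketch for a dedicated ~7.5 GB MySQL host
        [mysqld]
        # InnoDB: give the buffer pool the bulk of RAM on a dedicated box
        innodb_buffer_pool_size        = 4G
        innodb_log_file_size           = 256M     # x2 files, well under the 4G cap
        innodb_flush_log_at_trx_commit = 2        # trade ~1s of durability for speed
        innodb_flush_method            = O_DIRECT # avoid double-buffering in the OS cache
        innodb_file_per_table          = 1

        # MyISAM: key_buffer caches indexes only; data blocks use the OS cache
        key_buffer_size                = 512M

        # Shared
        table_cache                    = 1024
        query_cache_size               = 64M
        max_connections                = 200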

    Read the article

  • Prevent Windows Live Mail from downloading all messages from IMAP

    - by m8t
    Hello,

    Recently I've been trying the Windows Live Mail client. Simple and beautiful. I have set up an IMAP account, and I'm used to a client downloading only headers. However, when you close the client, Windows Live Mail automatically creates a list of tasks to download all messages from all directories. Is it possible to avoid this? It's both a good and a bad thing: you can work offline and you have a backup, but it takes extremely long to perform. In fact I have about a hundred thousand emails, so this task can take a whole day. After looking in the settings I don't see anything special; maybe you have an idea?

    Thank you
    Mike

    Read the article

  • Where to download Emacs manuals as offline HTML files

    - by Jisang Yoo
    When you press C-h i in Emacs, it shows what's called the top of the Info tree, which links to all kinds of manuals: AUCTeX, Org Mode, Emacs, Emacs FAQ, Emacs Lisp Intro, Elisp, ... . Is there a place where I can download all of them at once as HTML files? The GNU home page has links to some of them in HTML format:

        http://www.gnu.org/software/emacs/manual/elisp.html_node.tar.gz
        http://www.gnu.org/software/emacs/manual/emacs.html_node.tar.gz

    But I cannot find a link to a single tar.gz file packing all of them.
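
    There doesn't appear to be a single all-manuals tarball, but the per-manual archives follow the same URL pattern as the two above, so a short loop gets the common ones. The manual names beyond emacs and elisp are assumptions; check the manual index page on gnu.org for the full list:

        # Fetch and unpack several Emacs manuals as offline HTML trees.
        for m in emacs elisp eintr org; do
            wget "http://www.gnu.org/software/emacs/manual/${m}.html_node.tar.gz" &&
            tar -xzf "${m}.html_node.tar.gz"
        done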

    Read the article

  • Win 7 accessing large files uses 100% RAM

    - by user181276
    Running Win 7 64-bit SP1 with 8 GB RAM. I first noticed this problem when using the GUI to copy some large (5+ GB) files from one disk to another. What happens is the physical memory in use rises quite quickly to 100% and the system comes to a crawl. If I just start to access the file in a media player (it is a movie) the memory usage climbs up slowly but eventually reaches 100%. When copying the same files via XCOPY I do not have this problem. Using RAMMAP I see most of the memory usage is under "Mapped File" and is allocated under the "Active" column. If I select "Empty System Working Set" the RAM usage drops back down but then starts to climb back up. Any ideas on what I can check/test to eliminate this issue?
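
    The XCOPY observation appears to be the clue: Explorer's copy engine uses cached, memory-mapped I/O, which is exactly what RAMMap is showing as active "Mapped File" pages, while XCOPY avoids filling the cache. On Vista/Win7, XCOPY even has a flag that makes the unbuffered behavior explicit, worth using for anything huge (paths are illustrative):

        REM /J = copy using unbuffered I/O, recommended for very large files
        REM (available in the Vista/Win7 xcopy); /Y suppresses prompts.
        xcopy /J /Y "D:\movies\bigfile.mkv" "E:\movies\"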

    Read the article

  • Classic ASP on large memory server

    - by Steve Evans
    I have a client with a large ASP app that apparently is fairly memory intensive. I'm helping them migrate to new hardware they have running Win2k8 R2. They have 4 physical servers with 32 GB of RAM each. I'm making the assumption that classic ASP apps run as 32-bit processes, so I see two options:

    1. Enable web gardens on the application pool.
    2. Use the physical servers as VM hosts and split each box into, say, 4 web servers.

    Any thoughts on which path will give us better performance? I'm just not really sure how ASP will handle a machine with lots of memory, and I'm worried it won't really be able to address the memory well. (You can ignore all the obvious stuff, like the increased maintenance of 16 web servers vs 4, or the flexibility virtualization gets us over physical servers, etc.)
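
    If the web-garden route wins, it's a single setting on the application pool. A sketch with appcmd on IIS 7.x, where the pool name and worker count are illustrative:

        REM Give the pool 4 worker processes (a "web garden"):
        %windir%\system32\inetsrv\appcmd.exe set apppool "ClassicAspPool" /processModel.maxProcesses:4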

    Read the article

  • Apache, Django with mod_wsgi, and large request buffering

    - by Mukul
    In my setup of Apache 2.2 (MPM worker) and Django 1.3 with mod_wsgi 2.8, I need to support large POST request payloads. The problem is that when there are many such simultaneous requests, Apache uses up all the memory in the system and then crashes. It seems that Apache is buffering the requests completely in memory before executing the WSGI handler and passing it the request. Is there any way to control request buffering in Apache? The log shows the following error whenever the crash happens:

        [Wed Jun 29 18:35:27 2011] [error] cgid daemon process died, restarting

    Here's my virtual host's configuration:

        <VirtualHost *:8080>
            ServerName example.com
            ErrorLog /var/log/apache2/error.log
            WSGIScriptAlias / <path to django.wsgi>
            WSGIPassAuthorization on
            WSGIDaemonProcess example.com
            WSGIProcessGroup example.com
            XSendFileAllowAbove on
            XSendFile on
        </VirtualHost>
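
    Not a fix for the buffering itself, but Apache can at least cap how large a body it will accept per request, which bounds the worst case while you investigate; oversized requests get a 413 instead of being absorbed. The 100 MB figure is just an example:

        # Inside the <VirtualHost> (or a <Location>):
        # reject request bodies over ~100 MB with a 413 response.
        LimitRequestBody 104857600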

    Read the article

  • Norton Security Suite Symantec Download Manager Error: "Error writing to disk"

    - by Stephen Pace
    My broadband provider (Comcast) decided to switch their "included with service" security suite from McAfee to Norton Security Suite. Their email directed me to a site that downloaded the Symantec Download Manager (NortonDL.exe), and that went fine. I'm running Windows 7 32-bit; launching the application pops up the standard User Account Control prompt, and the software is correctly identified as coming from Symantec. I answer "yes" to allow the software to install, and upon launch I immediately get an "Error writing to disk" error. I searched the Internet for this error, but mainly I find Comcast users complaining about the same issue with no resolution other than to call Symantec. I found no one suggesting a successful workaround, and it appeared that most of the support calls took up to three hours. I'd like to avoid that if possible. Ideas? To be honest, I'm getting close to bagging this installation and just moving to Microsoft Security Essentials.

    Read the article

  • Large User Profile - Windows 7 - Machine running slowly

    - by Richard
    The MD at one of our clients has a Windows 7 profile that is currently 14 GB, thanks to videos, music, and documents. The first thing we did was switch him from a roaming profile to a local one. What I need to know is: now that the profile is local, am I wasting my time by reducing it any further? Does a large local user profile really make a difference to performance? The only item that talks to the network frequently is the 4 GB Outlook OST.

    Thanks in advance....
    Richard

    Read the article

  • Secure synchronization of a large amount of data

    - by goncalopp
    I need to automatically mirror a large amount (terabytes) of files between two Unix machines over a slow link (1 Mbps). This needs to be done frequently, but the data doesn't change too much (delta transmission doesn't saturate the link). The usual solution would be rsync, but there's an additional requirement: it's undesirable, from a security standpoint, for either the source or destination machine to hold (passphrase-less) SSH keys for the other, or to have any kind of filesystem access to it. All communication between the two machines should thus be initiated (and mediated) through a third machine. I've asked a separate question about rsync in particular here. Are there other obvious solutions I'm missing?
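
    Given those constraints, the straightforward shape is a two-hop relay: only the third (mediator) machine holds credentials, it pulls from the source and pushes to the destination, and the endpoints never talk to each other. A sketch, where hosts and paths are placeholders:

        #!/bin/sh
        # Runs on the mediator host C. Only C holds SSH keys; A and B have
        # no keys or filesystem access to each other.
        rsync -az --delete serverA:/export/data/ /staging/data/
        rsync -az --delete /staging/data/ serverB:/import/data/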

    Read the article

  • Open source command line tools for indexing a large number of text files

    - by ergosys
    I'm looking for any open source command line tool or tools which will allow me to index and search a large number of plain text files. Approximate search would be a plus. The tool only needs to print the files that match, although some match context would be useful. A GUI tool isn't useful for my application, nor is anything that searches files one by one (grep for example). I'm basically targeting unix platforms (osx, linux, bsd). EDIT: I'm not interested in any sort of tool that is system-wide, or needs to run in the background. Basically, I want to build an index for a directory tree full of text files and then later be able to search against it. Preferably the index is one or a few files that I can specify the location of. Any ideas?

    Read the article

  • How to shrink a large Hyper-V VM

    - by autrevo
    Using the Disk2VHD utility (http://technet.microsoft.com/en-us/sysinternals/ee656415.aspx), I converted my bare-metal OS into a Hyper-V VHD and ended up with a huge 190 GB VHD file. Apart from performance issues, this VHD worked fine as a guest hosted on Windows Server 2008 R2 Hyper-V.

    Having realized I only need to keep system files and application installations on the VHD, I deleted most of the junk data from it; it now contains only 20-25 GB. But I am not able to shrink the VHD. Having done some research, I came to understand that this is a limitation of .VHD files, so I performed these two steps with the Edit Virtual Hard Disk Wizard on a Windows Server 2012 box:

    1. Convert from VHD to VHDX (took close to 3 hours).
    2. Compact (another 4 hours).

    Neither step shrank the VHDX at all. Does Hyper-V not properly support large VHDs or VHDXs in the 200 GB range?
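
    Compacting only reclaims space that is actually unallocated inside the virtual disk; deleted files still sit in a partition that spans the full 190 GB, which is why both passes changed nothing. The usual sequence is: shrink the guest partition first (Disk Management inside the VM), then compact from the host with the VM shut down. A diskpart sketch for the host side (the path is illustrative):

        diskpart
        select vdisk file="D:\VMs\bigdisk.vhdx"
        attach vdisk readonly
        compact vdisk
        detach vdisk
        exit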

    Read the article

  • How to speed up adding a column to a large table in SQL Server

    - by Chris
    I want to add a column to a SQL Server table with about 10M rows. I think this query would eventually finish adding the column I want:

        alter table T add mycol bit not null default 0

    but it's been running for several hours already. Is there any shortcut to get a "not null default 0" column added to a large table, or is this inherently really slow? This is SQL Server 2000. Later on I have to do something similar on SQL Server 2008.
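
    The single ALTER is slow because a NOT NULL column with a default forces every one of the 10M rows to be rewritten inside one transaction. The usual workaround is to add the column as NULLable (a metadata-only change), backfill in small batches, and only then tighten the constraint. A sketch, where the batch size and constraint name are illustrative:

        -- 1. Metadata-only: add the column as NULLable (instant).
        ALTER TABLE T ADD mycol bit NULL;

        -- 2. Backfill in batches so each transaction (and the log) stays small.
        SET ROWCOUNT 10000;           -- SQL 2000-era batching
        WHILE 1 = 1
        BEGIN
            UPDATE T SET mycol = 0 WHERE mycol IS NULL;
            IF @@ROWCOUNT = 0 BREAK;
        END
        SET ROWCOUNT 0;

        -- 3. Now enforce the default and NOT NULL.
        ALTER TABLE T ADD CONSTRAINT DF_T_mycol DEFAULT 0 FOR mycol;
        ALTER TABLE T ALTER COLUMN mycol bit NOT NULL;

    The same trick applies on SQL Server 2008; it's only SQL Server 2012 Enterprise and later that make the NOT NULL-with-default case a metadata-only change on their own.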

    Read the article

  • Why does a PDF file download result in varying bytes logged, all with sc-status 200?

    - by Pat James
    I have a mojoportal CMS installation on an IIS7 server where users are reporting problems downloading a pdf file. It always downloads fine for me and most others, either displaying in browser or in Adobe Reader. Using logparser to query the IIS logs, all the responses are status 200 (OK) or 304 (Not modified), but the bytes sent vary quite a bit. Sometimes zero, some 211, some about half the full file size of 27059, and lots in between. Plenty show the full size of 27059. Do these other entries for smaller byte counts represent errors of some kind, correlating with the problems reported? Is this likely to be a browser/client issue or a server side problem? If there is any other info that would be helpful let me know. This is a shared hosting server though so I am somewhat limited in what I can dig into on the server.
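
    One way to tell aborted transfers apart from real errors is to pull sc-win32-status alongside the byte counts: Windows logs 64 ("the specified network name is no longer available") when the client drops the connection mid-send, which typically accompanies a 200 with a short sc-bytes. A Log Parser sketch, with the log path and URL as placeholders:

        LogParser -i:IISW3C "SELECT sc-status, sc-win32-status, sc-bytes, COUNT(*) AS hits FROM ex*.log WHERE cs-uri-stem LIKE '%mydoc.pdf' GROUP BY sc-status, sc-win32-status, sc-bytes ORDER BY hits DESC"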

    Read the article

  • CentOS: Insufficient space in download directory /var/cache/yum/base/packages

    - by Joao Heleno
    Hello! I was trying to yum install libpcap when I got:

        Error Downloading Packages:
          14:libpcap-0.9.4-15.el5.i386: Insufficient space in download directory /var/cache/yum/base/packages
            * free   0
            * needed 108 k

    Here's the output from df -h:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1              20G   19G     0 100% /
        /dev/sda3             202G   38G  154G  20% /home
        tmpfs                 1.5G     0  1.5G   0% /dev/shm

    And fdisk -l:

        Disk /dev/sda: 250.0 GB, 250000000000 bytes
        255 heads, 63 sectors/track, 30394 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        2611    20972826   83  Linux
        /dev/sda2            2612        3251     5140800   82  Linux swap / Solaris
        /dev/sda3            3252       30394   218026147+  83  Linux

    I have already launched yum clean all, with no success in clearing up space. Please advise. Thanks.
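
    yum clean all only empties yum's own caches, and the df output shows the real problem: / is completely full while /home has 154 GB free. Something outside /home is eating the 20 GB root partition, and du can find it (sizes below are in KB, since the coreutils on CentOS 5 lacks sort -h):

        # Show the biggest directories on the root filesystem only
        # (-x stops du from crossing into /home).
        du -x --max-depth=2 / 2>/dev/null | sort -rn | head -20

        # Band-aid alternative: point yum's cache at the big partition by
        # setting cachedir=/home/yumcache in /etc/yum.conf (path is an example).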

    Read the article

  • Storing large amounts of small files into bigger files on Windows

    - by asmo
    Let's say I have 50 GiB of files that weigh around 500 KiB each. My guess is that having, for example, 5 large files of 10 GiB each with the same content archived inside them would be better for hard drive performance. Am I correct? Will there be a noticeable gain on an NTFS filesystem?

    Finally, which tool could I use to group the files together while retaining the ability to modify the contents of the archive with zero or minor performance loss? For example, I like TrueCrypt's style of archiving because after mounting an archive file, it creates a drive which I can use seamlessly as if it were a normal drive. The only thing with TrueCrypt is that I don't need encryption/compression, only archiving.
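
    One native option that behaves like the TrueCrypt workflow minus the encryption is a VHD: Windows 7 can create, attach, and format one entirely with diskpart, and once attached it's an ordinary drive letter. A sketch, with the path, size (in MB), and drive letter as illustrative choices:

        diskpart
        create vdisk file="D:\archive\store1.vhd" maximum=10240 type=expandable
        select vdisk file="D:\archive\store1.vhd"
        attach vdisk
        create partition primary
        format fs=ntfs quick label="store1"
        assign letter=S
        exit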

    Read the article

  • Design a large-scale network for an organization

    - by Essam
    I want to design a large-scale network for an organization with an HQ and two branches, using a class A subnet. If I use the network address 30.0.0.0 for the whole organization, how can it be different from another organization or company in another country that is using the same address?

    I have three locations for this organization, so I need 5 subnets: one for the HQ, two for branch A and branch B, one for connecting branch A to the HQ, and one for connecting branch B to the HQ (since I will use a central DHCP server at the HQ). Is that the right number of subnets?

    Is it advisable to use class A or class B for this organization in terms of the addresses that will be wasted (let's say it is a university with two branches in two different states)?
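
    On the overlap worry: addresses only need to be unique within your own routing domain, or globally if they are publicly routable, which is why picking 30.0.0.0 (a block allocated to someone else on the public Internet) is a bad idea. The conventional choice is private RFC 1918 space behind NAT, e.g. 10.0.0.0/8 carved up per site. One illustrative plan matching the five-subnet count above:

        10.0.0.0/16      HQ LAN
        10.1.0.0/16      Branch A LAN
        10.2.0.0/16      Branch B LAN
        10.255.0.0/30    HQ <-> Branch A WAN link
        10.255.0.4/30    HQ <-> Branch B WAN link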

    Read the article

  • How to make iTunes download all podcast episodes regardless of listened status

    - by user15660
    I am a big iPod user (especially for podcasts); I have used an iPod touch, an iPod classic, and an iPod nano. I noticed that iTunes stops downloading new episodes of podcasts you no longer listen to. Is there a way to force iTunes to download all episodes of all podcasts, whether you listen to them or not? I am not interested in just clicking an episode to make iTunes think I'm listening; I need some kind of programmatic workaround or batch script that runs and updates all podcasts/episodes automatically.

    Read the article

  • Installing Maven on Ubuntu by manual download

    - by WebDevHobo
    To install Maven, I downloaded the latest version from the website and then followed these steps: http://maven.apache.org/download.html#Installation

    The last step, the version check, does not work. It says that 'mvn' is currently not installed and that I should type:

        sudo apt-get install maven2

    If I run the mvn file directly, it does work:

        root@ubuntu:~# /usr/local/apache-maven/apache-maven-2.2.1/bin/mvn --version
        Apache Maven 2.2.1 (r801777; 2009-08-06 12:16:01-0700)
        Java version: 1.6.0_21
        Java home: /usr/java/jdk1.6.0_21/jre
        Default locale: en_US, platform encoding: UTF-8
        OS name: "linux" version: "2.6.32-25-generic" arch: "i386" Family: "unix"

    So, what am I doing wrong here? Or what would an apt-get install do extra that I might have forgotten?
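
    The version check fails because the shell can't find mvn on the PATH; the manual install unpacks under /usr/local but links nothing into it. The install page's own fix, roughly (put the exports in ~/.bashrc or /etc/profile to make them stick):

        export M2_HOME=/usr/local/apache-maven/apache-maven-2.2.1
        export PATH=$M2_HOME/bin:$PATH

        # or, alternatively, a symlink:
        # sudo ln -s /usr/local/apache-maven/apache-maven-2.2.1/bin/mvn /usr/local/bin/mvn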

    Read the article
