Search Results

Search found 18409 results on 737 pages for 'large projects'.


  • Copying a large directory tree locally? cp or rsync?

    - by Rory
    I have to copy a large directory tree, about 1.8 TB. It's all local. Out of habit I'd use rsync, but I wonder whether there's much point and whether I should just use cp. I'm worried about permissions and uid/gid, since they have to be preserved in the copy (I know rsync does this), as well as things like symlinks. The destination is empty, so I don't have to worry about conditionally updating any files. It's all local disk access, so I don't have to worry about ssh or the network. The reason I'd be tempted away from rsync is that rsync might do more than I need: rsync checksums files, I don't need that, and I'm concerned it might take longer than cp. So what do you reckon, rsync or cp?

    Read the article

  • [Ruby] How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example:

        (0..9).sort_by{rand}.map{|x| f(x)}

    where f(x) is some function that operates on each value. A Fisher-Yates shuffle could be used to increase efficiency, but this code is sufficient for many purposes. My problem is that sort_by will transform the range into an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. This is also why the following code will not work:

        tried = {} # store previous attempts
        bigint = 99**99
        bigint.times {
          x = rand(bigint)
          redo if tried[x]
          tried[x] = true
          f(x) # some function
        }

    This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do?
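
    One constant-memory approach (not from the original post) is to build a keyed pseudo-random permutation of the index range with a small Feistel network plus cycle-walking, so every value in 0..m-1 is produced exactly once without materializing an array or remembering previous picks. A minimal sketch in Python (chosen here for its built-in big integers; the same construction can be written in Ruby), with an illustrative hash-based round function:

        import hashlib

        def _feistel(x, k_bits, keys):
            """Permute a k_bits-bit integer with a balanced Feistel network."""
            half = k_bits // 2
            mask = (1 << half) - 1
            left, right = x >> half, x & mask
            for key in keys:
                # Round function: keyed hash of the right half, truncated to `half` bits.
                digest = hashlib.shake_256(f"{right}:{key}".encode()).digest((half + 7) // 8)
                f = int.from_bytes(digest, "big") & mask
                left, right = right, left ^ f
            return (left << half) | right

        def random_permutation(m, seed=0, rounds=4):
            """Yield each integer in 0..m-1 exactly once, in a pseudo-random order."""
            k_bits = max(2, m.bit_length())
            if k_bits % 2:                 # balanced Feistel wants an even bit count
                k_bits += 1
            keys = [f"{seed}:{r}" for r in range(rounds)]
            for i in range(m):
                x = _feistel(i, k_bits, keys)
                while x >= m:              # cycle-walk until the value lands in range
                    x = _feistel(x, k_bits, keys)
                yield x

        # Example: visit 0..9 in a shuffled order, O(1) memory regardless of m.
        for x in random_permutation(10, seed=42):
            print(x)

    Because each Feistel round is invertible, the mapping is a true permutation, and cycle-walking only discards outputs that fall outside the range, so no value is repeated or skipped.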

    Read the article

  • How can I persist a large Perl object for re-use between runs?

    - by Alnitak
    I've got a large XML file, which takes over 40 seconds to parse with XML::Simple. I'd like to be able to cache the resulting parsed object so that on the next run I can just retrieve the parsed object and not reparse the whole file. I've looked at using Data::Dumper, but the documentation is a bit lacking on how to store and retrieve its output from disk files. Other classes I've looked at (e.g. Cache::Cache) appear designed for storage of many small objects, not a single large one. Can anyone recommend a module designed for this?
    EDIT: The XML file is ftp://ftp.rfc-editor.org/in-notes/rfc-index.xml. On my Mac Pro, benchmark figures for reading the entire file with XML::Simple (test1) vs Storable (test2) are:

                s/iter   test1   test2
        test1     47.8      --   -100%
        test2    0.148  32185%      --

    Read the article

  • Best practice for handling memory leaks in large Java projects?

    - by knorv
    In almost all larger Java projects I've been involved with, I've noticed that the quality of service of the application degrades with the uptime of the container. This is most probably due to memory leaks in the code. The correct way to solve this problem is obviously to trace back to the root cause and fix the leaks in the code. The quick and dirty way of solving the problem is simply restarting Tomcat (or whichever servlet container you're using). These are my three questions:
    1. Assume that you choose to solve the problem by tracing the root cause (the memory leaks): how would you collect data to zoom in on the problem?
    2. Assume that you choose the quick and dirty way of speeding things up by simply restarting the container: how would you collect data to choose the optimal restart cycle?
    3. Have you been able to deploy and run projects over an extended period of time without ever restarting the servlet container to regain snappiness, or is an occasional container restart something that one simply has to accept?

    Read the article

  • How to model editing of multiple related resources on the same webpage?

    - by amikazmi
    Let's say we have a Company model that has many Employees and has many Projects. If we want to show the projects, we'll go to "/company/1/projects/index". If we want to edit one project, we'll go to "/company/1/projects/1/edit". What if we want to edit all the projects at once, on the same webpage? We can go to "/company/1/edit" and put nested forms for all the projects. But what if we need a different webpage to edit all the employees at once too? We can't use "/company/1/edit" again. Right now we do "/company/1/projects/multiedit" and "/company/1/projects/multupdate", but as you can see, that's not REST. How can we model this RESTfully?

    Read the article

  • Utility for notifying a user that their roaming profile is getting too large to copy before shutdown?

    - by leeand00
    My users are having an issue with their roaming profiles getting too large, and then their roaming profile is lost. I believe this is because they are storing too much in their roaming profiles. Is there a program that can be installed in Windows that will:
    - Listen for a logoff event
    - Check the size of their roaming profile against a size limit I set...
    - If the roaming profile is too big, notify the user that they have to decrease the size of the profile
    Does a program like this exist, or does it need to be written?
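
    A minimal sketch of the size check itself (not an existing utility; the profile path, the 100 MB limit, and the idea of running it from a logoff script are all assumptions for illustration):

        import ctypes
        import os

        PROFILE_DIR = os.path.expandvars(r"%USERPROFILE%")   # assumed profile root
        LIMIT_BYTES = 100 * 1024 * 1024                       # example limit: 100 MB

        def directory_size(path):
            """Total size in bytes of all files under `path`."""
            total = 0
            for root, _dirs, files in os.walk(path):
                for name in files:
                    try:
                        total += os.path.getsize(os.path.join(root, name))
                    except OSError:
                        pass  # file vanished or is inaccessible; skip it
            return total

        size = directory_size(PROFILE_DIR)
        if size > LIMIT_BYTES:
            ctypes.windll.user32.MessageBoxW(
                None,
                f"Your roaming profile is {size / 2**20:.0f} MB, over the "
                f"{LIMIT_BYTES / 2**20:.0f} MB limit. Please remove some files "
                "before logging off.",
                "Roaming profile too large",
                0x30,  # MB_ICONWARNING
            )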

    Read the article

  • WatiN closes browsers for all projects that are building.

    - by Scooter
    I'm having an issue running WatiN under CruiseControl.NET, where on a .forceclose, WatiN is closing all open browser instances. I have multiple projects running under CruiseControl, and it's not uncommon for some of those projects to be building and testing at the same time. There has been more than one occasion where WatiN closed the browser window for a different project, causing it to fail. In my local tests, creating my WatiN instance under a new process fixes this issue. But running under CruiseControl, when I do this, I lose my IE object: "Object reference not set to an instance of an object."
    - Running CC.Net as a service
    - CC.Net server is Windows 2003
    - IE6
    Any thoughts?

    Read the article

  • How do I upload large (30MB) files via a web interface?

    - by Dan
    Because I'm stumped... The client needs to be able to upload large images to a library, but the upload fails after 5-6 MB (over my poor connection). It seems to be timing out, as the file size at failure isn't consistent. The setup is a form which is accepted by PHP. I've googled and played with php.ini, and everything is set for big uploads and long timeouts. The platform is a dedicated Windows server at GoDaddy. What's going wrong?

    Read the article

  • How much RAM to be able to convert large (5-6 MB) JPEGs? [closed]

    - by cosmicbdog
    I've got a project where we want to be processing large JPEGs (5-6 MB) with Apache and PHP (using the GD library). My understanding is that the server converts the image into a bitmap, making it quite RAM-heavy, and currently we're unable to do it with our 1 GB of memory. Here's the error we get:

        Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 17408 bytes)

    How much RAM should we be looking at running with to process images of this size?
    Edit: As Chris S the purist highlighted below, my post is apparently vague. I am doing the most basic and common manipulation of an image, say turning it from a 4352px x 3264px JPEG of 5 MB in size to a 900px x 675px file.
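
    For a rough sense of the numbers involved, here is the back-of-the-envelope arithmetic (a rule-of-thumb estimate, not an exact GD figure; the function and constant below are illustrative):

        # A JPEG is decompressed to raw pixels before it can be resized, so the
        # 5 MB on disk matters far less than the pixel dimensions.
        BYTES_PER_PIXEL = 4          # truecolor RGBA; GD's real overhead is a bit higher

        def decoded_size_mb(width, height):
            return width * height * BYTES_PER_PIXEL / 2**20

        source = decoded_size_mb(4352, 3264)   # ~54 MB
        target = decoded_size_mb(900, 675)     # ~2 MB
        print(f"source ~{source:.0f} MB + target ~{target:.0f} MB held at once")
        # Both images exist in memory during the resample, so a 64 MB
        # memory_limit (67108864 bytes, as in the error above) is easily exhausted.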

    Read the article

  • How do I split a large MySql backup file into multiple files?

    - by Brian T Hannan
    I have a 250 MB SQL backup file, but the limit on the new hosting is only 100 MB... Is there a program that lets you split an SQL file into multiple SQL files? It seems like people are answering the wrong question, so I will clarify: I ONLY have the 250 MB file, and the new hosting only offers phpMyAdmin, which currently has no data in the database. I need to take the 250 MB file and upload it to the new host, but there is a 100 MB upload size limit for SQL backup files. I simply need to take one file that is too large and split it into multiple files, each containing only full, valid SQL statements (no statement can be split between two files).
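
    A minimal sketch of one way to do the split (a hypothetical script, not an existing tool; it assumes a plain mysqldump-style file where a ';' at the end of a line always terminates a statement, and only ever cuts at such a boundary):

        MAX_BYTES = 90 * 1024 * 1024   # stay safely under the 100 MB upload limit

        def split_dump(path, max_bytes=MAX_BYTES):
            part, size, buffer = 1, 0, []
            out = open(f"{path}.part{part:03d}.sql", "w", encoding="utf8")
            with open(path, encoding="utf8") as dump:
                for line in dump:
                    buffer.append(line)
                    size += len(line.encode("utf8"))
                    if line.rstrip().endswith(";"):      # safe cut point
                        out.writelines(buffer)           # flush the whole statement
                        buffer = []
                        if size >= max_bytes:            # rotate to the next file
                            out.close()
                            part += 1
                            size = 0
                            out = open(f"{path}.part{part:03d}.sql", "w", encoding="utf8")
            out.writelines(buffer)   # trailing lines (comments, etc.)
            out.close()

        split_dump("backup.sql")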

    Read the article

  • Why does moving large folders take a lot of time?

    - by acidzombie24
    What can I do to fix this? Drop permission properties? I have a large folder with 100k files. I moved it into my archive folder and it's taking forever to move. Why is that? I know on XP it takes <1 sec, but not on Windows 7. I am sure it's a permission thing; is there a way I can disable it and make it faster?
    Edit: I am moving the folder into another folder on the same drive/partition. In XP, AFAIK, it just moves the folder from one place to another. In Windows 7, it seems like it's touching something in every file when I move it.

    Read the article

  • Recommend a Perl module to persist a large object for re-use between runs?

    - by Alnitak
    I've got a large XML file, which takes 40+ seconds to parse with XML::Simple. I'd like to be able to cache the resulting parsed object so that on the next run I can just retrieve the parsed object and not reparse the whole file. I've looked at using Data::Dumper but the documentation is a bit lacking on how to store and retrieve its output from disk files. Other classes I've looked at (e.g. Cache::Cache) appear designed for storage of many small objects, not a single large one. Can anyone recommend a module designed for this?

    Read the article

  • How long does it take in practice to warm up large in-memory databases?

    - by Sim
    Companies such as Peak Hosting are offering 64-core machines with 512 GB of RAM for $2K/month. This is a very interesting choice for in-memory databases such as Memcached/Redis, as well as databases whose performance degrades rapidly when the data & indexes don't fit in RAM, such as MongoDB. My main concern with monster machines such as these is the time it takes to warm up an in-memory database. In my experience, theoretical metrics, e.g., that SATA can load 100 MB/sec, fall short of what happens in practice. Even at that rate, 100 MB/sec means that loading up a 512 GB RAM machine from SATA disks can take over 1 1/2 hours (!). I am looking for real-world reports of warm-up times for machines with very large memory. Please share details of the software on the machine, data size, storage configuration (e.g., SATA or SSD), network, hosting/cloud provider if relevant, etc.
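
    For reference, the arithmetic behind that estimate (assuming sustained sequential throughput, which real warm-up rarely achieves because the access pattern is not purely sequential):

        ram_bytes = 512 * 2**30          # 512 GB of RAM to fill
        throughput = 100 * 2**20         # 100 MB/s sustained from SATA
        minutes = ram_bytes / throughput / 60
        print(f"~{minutes:.0f} minutes to read 512 GB at 100 MB/s")   # ~87 minutes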

    Read the article

  • How to disambiguate subdirs with the same name in the Projects list?

    - by jlstrecker
    My Qt project has 2 subdirs/subprojects with the same name. Their directories are myproject/node and myproject/compiler/test/node. The problem (or annoyance) is that, in the Projects list in Qt, both subdirs are listed as "node". So you have to open them up to figure out which is which. myproject.pro is like this:

        TEMPLATE = subdirs
        QMAKE_CLEAN = Makefile
        SUBDIRS += \
            compiler_test_node \
            node \
            ...
        compiler_test_node.subdir = compiler/test/node
        node.depends = compiler_vuo_compile
        ...

    Without renaming the myproject/compiler/test/node directory, is there a way to make it show up with a different name in the Projects list?

    Read the article

  • How to find the remainder of large number division in C++?

    - by Beelzeboul
    Hello, I have a question regarding modulus in C++. What I was trying to do was divide a very large number, let's say for example M % 2, where M = 54,302,495,302,423. However, when I go to compile, it says that the number is too 'long' for int. Then when I switch it to a double, it repeats the same error message. Is there a way I can do this in which I will get the remainder of this very large number, or possibly an even larger number? Thanks for your help, much appreciated.
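
    For what it's worth, 54,302,495,302,423 fits comfortably in a 64-bit integer (long long in C++); for numbers too large for any built-in type, the remainder can be computed digit by digit from the decimal representation. A minimal sketch of that approach (shown in Python for brevity; in C++ the same loop runs over a std::string of digits):

        def remainder(number_as_string, divisor):
            rem = 0
            for digit in number_as_string:
                # Standard long-division step: shift in the next decimal digit.
                rem = (rem * 10 + int(digit)) % divisor
            return rem

        print(remainder("54302495302423", 2))   # 1 (the number is odd)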

    Read the article

  • Should I store my code/projects on my SSD or my secondary drive?

    - by user37467
    I just got a new box. It has an SSD for the primary drive and a 1 TB SATA drive for the secondary drive. I'm going to run Windows and my binaries on the SSD and keep all my downloads/documents/music/etc. on the secondary drive. My question is: should I also keep my Visual Studio projects and code on the SSD, or keep them on the secondary drive? The faster SSD would presumably be better for compiling and indexed searches, but would it be better to keep them on the second drive for a more parallel disk I/O situation?

    Read the article

  • Complex knowledge management system with CRM... written internally

    - by JonH
    We've all heard of Salesforce and SugarCRM and the likes of systems like this. Unfortunately, at my workplace we have been asked to write a similar system (rather than license or purchase one). Basically the database is fairly large. Think of modules such as: corporate groups, customers, programs, projects, sub-projects, and issue management. In simple terms, a corporate group has one to many customers, a program has one or more projects, a project has one or more sub-projects, and an issue can be created on many sub-projects. Of course the system is a bit more complex, but instead of listing every single module I think it's best to keep it simple.
    In any event, the system in its current state has only two resources working on it (basically we have to do it all: CSS, database, jQuery, ASP.NET and C#). We've started off well by defining the UI master and footer pages so that we can reuse those across all of our pages. Now comes the hard part. The system will have about 4k end users, with say 5-10% being concurrent users. We are wondering if it makes sense to cache our database data (for say 5-10 minutes) rather than continuously hit our database. The reason is that some of these pages may have 5-10 search filters associated with the page. Imagine how many database hits there are every time a selection is made from a search box. Also, some of these search fields cascade, so selecting, for instance, an initial drop-down may cascade to several drop-down boxes under it. Is it wrong to cache? I am not finding many articles on whether it is a good idea or not. Remember, the system is similar to a CRM system where we manage our various customers, projects, sub-projects, issues, etc.
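
    A minimal sketch of the caching pattern in question (illustrative only, and in Python rather than the poster's C#/ASP.NET; the class and key names are made up): cache each distinct filter combination's query result for a fixed time-to-live, so repeated selections don't each hit the database.

        import time

        class TtlCache:
            def __init__(self, ttl_seconds=300):          # e.g. 5 minutes
                self.ttl = ttl_seconds
                self._store = {}                          # key -> (expiry, value)

            def get_or_load(self, key, loader):
                """Return the cached value for `key`, or call `loader()` and cache it."""
                now = time.monotonic()
                entry = self._store.get(key)
                if entry and entry[0] > now:
                    return entry[1]                       # still fresh
                value = loader()                          # hit the database once
                self._store[key] = (now + self.ttl, value)
                return value

        # Usage: one cache entry per distinct filter combination.
        cache = TtlCache(ttl_seconds=600)
        projects = cache.get_or_load(
            ("projects", "customer=42", "status=open"),
            lambda: ["... rows from the database ..."],   # placeholder query
        )

    ASP.NET has its own built-in caching facilities that implement the same idea; the trade-off either way is at most one stale window per TTL in exchange for far fewer queries.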

    Read the article

  • How can I do a large file upload using Sinatra, haml, nginx, and passenger?

    - by mmr
    Hi all, I need to be able to allow a user to upload 30-60 MB files at a time. Right now, I'm solving the problem with a simple form post:

        %form{:action=>"/Upload",:method=>"post",:enctype=>"multipart/form-data"}
          - @theModelHash.each do |key,value|
            %br
            %input{:type=>"checkbox", :name=>"#{key}", :value=>1, :checked=>value}
            =key
          %br
          %input{:type=>"file",:name=>"file"}
          %input{:type=>"submit",:value=>"Upload"}

    This form allows the user to select processing options contained in theModelHash and upload a file for processing. The problem is, this method both freezes the user's UI and also requires that the entire form be reposted when the user presses the 'back' button. I've looked at SWFUpload, but have no idea how to integrate that into my relatively simple app. There's a page here about integrating it with Rails, but I'm using Sinatra, and am new enough to this whole web programming thing that I don't know how to modify those files to work with what I need to do. Is there a how-to for adding large file uploads to my form there? Something relatively simple that just adds in a progress bar and doesn't repost? I feel like I'm having to triple the size of my application just to make this feature play nice, and that's bothering me a bit.

    Read the article
