Search Results

Search found 38088 results on 1524 pages for 'large scale project'.


  • Recommended migration strategy for C++ project in Visual Studio 6

    - by jacobsee
    For a large application written in C++ using Visual Studio 6, what is the best way to move into the modern era? I'd like to take an incremental approach where we slowly move portions of the code and, for example, write new features in C# and compile them into a library or DLL that can be referenced from the legacy application. Is this possible, and what is the best way to do it?

    Read the article

  • Best practice: How to team-split a Django project while still allowing code reuse

    - by Infinity
    I know this sounds kind of vague, but please let me explain: I'm starting work on a brand new project, and it will have two main components: "ACME PRODUCT" (think Gmail, Meebo, etc.) and "THE SITE" (help, information, marketing stuff, promotional landing pages; lots of marketing-induced cruft). So basically the URL /acme/* will load stuff in the uber-cool Ajaxy application, and every other URI will load stuff in the other site. Problem: the "THE SITE" component is out of my hands and will be handled by a consulting team that will work closely with marketing, while my team and I will work solely on the ACME PRODUCT. Question: how do I set up the Django project in such a way that we can have separate releases (they can push new marketing pages and functionality without having to worry about the state of our code, maybe even in separate Subversion "projects"), minimal impact on our product from whatever flying-unicorns-hocus-pocus the other team codes into the site, and still some code reuse? My main concern is that the ACME PRODUCT needs to be rock solid, and therefore needs to be somewhat isolated from whatever mistakes/code bloopers the consultants make on their marketing side of the site. How have you handled this? Any ideas? Thanks!
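
    A minimal sketch of how the routing split could look, assuming two hypothetical Django apps named acme and marketing_site (the app names, and the old-style django.conf.urls.defaults API, are placeholders for whatever the project actually uses):

        # Root urls.py: one include per team, so each half ships its own URLconf.
        from django.conf.urls.defaults import patterns, include

        urlpatterns = patterns('',
            # Everything under /acme/ is routed to the product team's app.
            (r'^acme/', include('acme.urls')),
            # Every other URI falls through to the consultants' site app.
            (r'^', include('marketing_site.urls')),
        )

    Keeping any genuinely shared code in a third, dependency-free app (or a separate library both halves install) is one way to get the reuse without coupling the two release cycles.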

    Read the article

  • Processing large recordsets in Rails

    - by japancheese
    Hello, I'm trying to perform a daily operation on a larger than normal dataset (2m+ records). However, Rails seems to take a very long time performing operations on such a dataset. Operations like Dataset.all.each do |data| ... end take a very long time to complete (I assume this is because it can't fit all the items into memory at once, right?). Does anyone have any strategies on how I could handle this situation? I know SQL would probably speed up the process, but I'm looking to use the Rails environment as I can do many more complicated things to the data than I can with just SQL statements.

    Read the article

  • Handling very large lists of objects without paging?

    - by user246114
    Hi, I have a class which can contain many small elements in a list. It looks like:

        public class Farm {
            private ArrayList<Horse> mHorses;
        }

    I'm just wondering what will happen if the mHorses array grows to something crazy like 15,000 elements. I'm assuming that trying to write and read this from the datastore would be crazy, because I'd get killed on the serialization process. It's important that I can get the entire array in one shot without paging, and each Horse element may only have two string properties in it, so they are pretty lightweight:

        public class Horse {
            private String mId;
            private String mName;
        }

    I don't need these horses indexed at all. Does it sound reasonable to just store the mHorses array as a raw Text field and force my clients to do the deserialization? Something like:

        public class Farm {
            private Text mHorsesSerialized;
        }

    Then whenever the client receives a Farm instance, it has to take the raw string of horses and split it in order to reinstantiate the list, something like:

        // GWT client perhaps
        Farm farm = rpcCall.getMyFarm();
        String horsesSerialized = farm.getHorses();
        String[] horseBlocks = horsesSerialized.split(",");
        for (int i = 0; i < horseBlocks.length; i++) {
            // .. continue deserializing the individual objects ...
        }

    Yeah... so hopefully it would be quick to read a Farm instance from the datastore, and the serialization penalty is paid by the client. Thanks

    Read the article

  • utl_file.FCLOSE() is slow with large files

    - by Dan
    We are using utl_file in Oracle 10g to copy a blob from a table row to a file on the file system and when we call utl_file.fclose() it takes a long time. It's a 10mb file, not very big, and it takes just over a minute to complete. Anyone know why this would be so slow? Thanks

    Read the article

  • Tools for viewing logs of unlimited size

    - by jkff
    It's no secret that application logs can go well beyond the limits of naive log viewers, and the desired viewer functionality (say, filtering the log based on a condition, or highlighting particular message types, or splitting it into sublogs based on a field value, or merging several logs based on a time axis, or bookmarking, etc.) is beyond the abilities of large-file text viewers. I wonder whether decent specialized applications exist (I haven't found any), and what functionality one might expect from such an application. (I'm asking because my student is writing such an application, and the functionality above has already been implemented to a usable extent.)

    Read the article

  • Sending large data: .getJSON or proxy?

    - by numerical25
    Hey guys. I was told that the only trick to sending data to an external server (i.e. cross-domain) is to use getJSON. Well, my problem is that the data I am sending exceeds the getJSON data limit. I am tracking mouse movements on a screen for analytics. Another option is that I could send a little data at a time, probably every time the mouse moves, but that seems as if it would slow things down. I could also set up a proxy server. My question is which would be better: setting up a proxy server, or just sending bits of information via JavaScript or jQuery? What do the professionals use (Google and other companies that build mash-ups that send a lot of data to cross-domain sites)? I need to know the best practices. Thanks! Also, the data is put into JSON.

    Read the article

  • Enumerating large (20-digit) [probable] prime numbers

    - by Paul Baker
    Given A, on the order of 10^20, I'd like to quickly obtain a list of the first few prime numbers greater than A. OK, my needs aren't quite that exact - it's alright if occasionally a composite number ends up on the list. What's the fastest way to enumerate the (probable) primes greater than A? Is there a quicker way than stepping through all of the integers greater than A (other than obvious multiples of say, 2 and 3) and performing a primality test for each of them? If not, and the only method is to test each integer, what primality test should I be using?
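
    The usual approach is what the question already sketches: step through candidates above A, discard cheap composites by trial division against small primes, and run a Miller-Rabin probable-prime test on the rest. A minimal sketch in Python, with the round count chosen so that a false positive is astronomically unlikely:

        import random

        SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

        def is_probable_prime(n, rounds=20):
            if n < 2:
                return False
            for p in SMALL_PRIMES:
                if n % p == 0:
                    return n == p
            # Write n - 1 as d * 2^s with d odd.
            d, s = n - 1, 0
            while d % 2 == 0:
                d //= 2
                s += 1
            for _ in range(rounds):
                a = random.randrange(2, n - 1)
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = pow(x, 2, n)
                    if x == n - 1:
                        break
                else:
                    return False  # definitely composite
            return True  # probably prime

        def primes_above(a, count=5):
            """Return the first `count` probable primes greater than a."""
            n = a + 1 if a % 2 == 0 else a + 2  # next odd number above a
            found = []
            while len(found) < count:
                if is_probable_prime(n):
                    found.append(n)
                n += 2
            return found

        print(primes_above(10**20))

    Around 10^20 the prime gaps average roughly ln(10^20), about 46, so only a few dozen odd candidates typically need testing per prime found.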

    Read the article

  • How to manage large numbers of delegates and user callbacks in a C# async HTTP library

    - by Tyler
    I'm coding a .NET library in C# for communicating with XBMC via its JSON-RPC interface over HTTP. I coded and released a preliminary version, but everything is done synchronously. I then recoded the library to be asynchronous for my own purposes, as I was/am building an XBMC remote for WP7. I now want to release the new async library, but I want to make sure it's nice and tidy before I do. Due to the async nature, a user initiates a request, supplies a callback method that matches my delegate, and then handles the response once it's been received. The problem I have is that within the library I track a RequestState object for the lifetime of the request; it contains the HTTP request/response as well as the user callback etc. as member variables. This would be fine if only one type of object was coming back, but depending on what the user calls, they may be returned a list of songs or a list of movies, etc. My implementation at the moment uses a single delegate, ResponseDataRecieved, which has a single parameter that is a plain Object. As this has only been used by me, I know which methods return what, and when I handle the response I cast said object to the type I know it really is (List, List, etc.). A third party shouldn't have to do this, though; the delegate signature should contain the correct type of object. So then I need a delegate for every type of response data that can be returned to the third party. The specific problem is how to handle this gracefully internally. Do I have a bunch of different RequestState objects that each have a different member variable for the different delegates? That doesn't "feel" right. I just don't know how to do this gracefully and cleanly.

    Read the article

  • Problems opening a large CSV file

    - by John Tyler
    I have a CSV file that is 100 MB in size. I need to parse some data out of it into a new format. I tried PHP, but I keep running into memory issues. After around the first 150 "rows" or so, the script poops out. This is even on localhost, and with me doing everything I can to tune the PHP settings, including max_memory and script_execution_time. Now before I continue, I'd like to know if Python will poop out on me too, or if I will have to use C++. Can someone name good CSV libraries for these programming languages? The file is quoted CSV. I mean, scheiza, I can't even open this text file in OpenOffice without it dying on me. (Then again, Java sux as bad as PHP.)
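
    For what it's worth, a minimal sketch of a streaming pass over a quoted CSV in Python 3 (the file names and the transform step are hypothetical placeholders): the csv module pulls one row at a time from the file object, so memory use stays flat regardless of file size.

        import csv

        def transform(row):
            # Hypothetical reshaping of one input row into the new format.
            return [row[0], row[2].upper()]

        with open("input.csv", newline="") as src, \
             open("output.csv", "w", newline="") as dst:
            reader = csv.reader(src)        # handles quoted fields
            writer = csv.writer(dst)
            for row in reader:              # one row in memory at a time
                writer.writerow(transform(row))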

    Read the article

  • Serving large generated files using Google App Engine?

    - by John Carter
    Hiya, presently I have a GAE app that does some offline processing (it backs up a user's data) and generates a file that's somewhere in the neighbourhood of 10-100 MB. I'm not sure of the best way to serve this file to the user. The two options I'm considering are: adding some code to the offline processing that 'spoofs' it as a form upload to the blobstore, and going through the normal blobstore process to serve the file; or having the offline processing code store the file somewhere off of GAE, and serving it from there. Is there a much better approach I'm overlooking? I'm guessing this is functionality that isn't well suited to GAE. I had thought of storing it in the datastore as db.Text or db.Blob, but there I encounter the 1 MB limit. Any input would be appreciated.
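
    If the blobstore route (the first option) is taken, the serving half is simple once the bytes are in the blobstore; here is a minimal sketch using the webapp blobstore handlers, where the handler name and URL parameter are hypothetical. Getting the generated file into the blobstore in the first place is the part the question describes as spoofing a form upload, and it is not shown here.

        import urllib

        from google.appengine.ext import blobstore
        from google.appengine.ext.webapp import blobstore_handlers

        class ServeBackupHandler(blobstore_handlers.BlobstoreDownloadHandler):
            def get(self, resource):
                blob_info = blobstore.BlobInfo.get(urllib.unquote(resource))
                if blob_info is None:
                    self.error(404)
                else:
                    # send_blob streams the blob to the user without pulling it
                    # through the app, so the 1 MB entity limit never applies.
                    self.send_blob(blob_info, save_as=True)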

    Read the article

  • Browsers (IE and Firefox) freeze when copying large amount of text

    - by Matt
    I have a web application - a Java servlet - that delivers data to users in the form of a text printout in a browser (text marked up with HTML in order to display in the browser as we want it to). The text does display in different colors, though most of it is black. One typical mode of operation is this: 1. User submits a form to request data. 2. Servlet delivers HTML file to browser. 3. User does CTRL+A to select all the text. 4. User does CTRL+C to copy all the text. 5. User goes to a text editor and does CTRL+V to paste the text. In the testing where I'm having this problem, step #2 successfully loads all the data - we wait for that to complete. We can scroll down to the end of what the browser loaded and see the end of the data. However, the browser freezes on step #3 (Firefox) or on step #4 (IE). Because step #2 finishes, I think it is a browser/memory issue, and not an issue with the web application. If I run queries to deliver smaller amounts of data (but after several queries we get the same data we would have above in one query) and copy/paste this text, the file I save it into ends up being about 8 MB. If I save the browser's displayed HTML to a file on my computer via File-Save As from the browser menu, it works fine and the file is about 22 MB. We've tried this on 2 different computers at work (both running Windows XP, with at least 2 GB of RAM and many GB of free disk space), using Firefox and IE. We also tried it on a home computer from a home network outside of work (thinking it might be our IT security software causing the problem), running Windows 7 using IE, and still had the problem. When I've done this, I can see whatever browser I'm using utilizing the CPU at 50%. Firefox's memory usage grows to about 1 GB; IE's stays in the several hundred MBs. We once let this run for half an hour, and it did not complete. I'm most likely going to modify the web app to have an option of delivering a plain text file for download, and I imagine that will get the users what they need. But for the mean time, and because I'm curious - and I don't like my application freezing people's browsers, does anyone have any ideas about the browser freezing? I understand that sometimes you just reach your memory limit, but 22 MB sounds to me like an amount I should be able to copy to the clipboard.

    Read the article

  • Yeoman 'grunt test' fails on clean project with 'port already in use'

    - by XMLilley
    With: Mac OS 10.8.4 Node 0.10.12 npm 1.3.1 grunt-cli 0.1.9 yo 1.0.0-rc.1 bower 0.9.2 [email protected] I encounter the following error with a clean yo angular project, followed by grunt server then grunt test: Running "connect:test" (connect) task Fatal error: Port 9000 is already in use by another process. I'm new to Yeoman and am stumped. I've deleted my original project and created a new one in a fresh folder just to make sure I wasn't overlooking any invisible configs. I restarted the machine to make sure I wasn't running any temporary server processes I had forgotten about. After all attempts, the basic server starts fine, attaches to Chrome, and the watcher updates the browser on any changes. (Notably, the server is running on 9000, which seems odd for the test-runner to also be trying to use 9000.) But I get that same error on attempting to start the test runner. Is this something I can fix, or an issue I should report to the Yeoman team? Thanks.

    Read the article

  • Best way to load a large file into an ArrayList in Java

    - by user1730833
    I have a file whose size is about 300 MB. I want to read the contents line by line and then add them into an ArrayList. So I have made an ArrayList object a1, and I am reading the file using a BufferedReader; when I add the lines from the file into the ArrayList, it gives an error: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space. Please tell me what the solution for this should be.

    Read the article

  • How to optimize indexing of a large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    So I have this cron script that is deployed and run using cron on a host, and it indexes all the records in a database table; the index is later used both for the front end of the site and for backend operations. After the operation, the index is about 3-4 MB. The problem is that it takes a lot of resources (CPU: 30+ and a good chunk of memory) and slows the machine down. My question is about how to optimize the operation described below. First, there is a select query built using the Zend Framework API; this query is then passed to a Paginator factory that returns a paginator, which I am using to balance the current number of items being indexed and not iterate over too many items. The script iterates over the current items in the paginator object using a foreach loop until reaching the end, and then starts again after getting the items for the next page. I suspect this overhead is caused by Zend_Lucene, but I have no idea how it could be improved.

    Read the article

  • Representing a very large array of bits in little memory

    - by user614624
    Hello, I would like to represent a structure containing 250M states (1 bit each) in as little memory as possible (100 KB maximum). The operations on it are set/get. I could not say whether it's dense or sparse; it may vary. The language I want to use is C. I looked at other threads here to find something suitable. A probabilistic structure like a Bloom filter, for example, would not fit because of the possible false answers. Any suggestions, please?
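
    For reference, 250 million one-bit states stored densely come to about 31 MB (250,000,000 / 8 bytes), so squeezing them into 100 KB additionally requires compression or a sparser encoding; the set/get arithmetic for the dense case is the same either way. A minimal sketch, written in Python for brevity, whose shift/mask expressions translate directly to C over an unsigned char buffer:

        class BitArray:
            def __init__(self, nbits):
                # One byte holds 8 states; round up to cover the last bits.
                self.buf = bytearray((nbits + 7) // 8)

            def set(self, i, value):
                if value:
                    self.buf[i >> 3] |= 1 << (i & 7)
                else:
                    self.buf[i >> 3] &= ~(1 << (i & 7)) & 0xFF

            def get(self, i):
                return (self.buf[i >> 3] >> (i & 7)) & 1

        bits = BitArray(1000)
        bits.set(42, 1)
        assert bits.get(42) == 1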

    Read the article

  • Deleting a large number of rows from a table

    - by Azeem
    We have a requirement to delete rows on the order of millions from multiple tables as a batch job (note that we are not deleting all the rows; we are deleting based on a timestamp stored in an indexed column). Obviously a normal DELETE takes forever (because of logging, referential constraint checking, etc.). I know that in the LUW world we have ALTER TABLE NOT LOGGED INITIALLY, but I can't seem to find an equivalent SQL statement for DB2 v8 on z/OS. Does anyone have any ideas on how to do this really fast? Also, any ideas on how to avoid the referential checks when deleting the rows? Please let me know.

    Read the article

  • Firefox does not load large images

    - by Pradeep
    I am stuck with a kind of bug in Firefox, where it is unable to load big images (I have an 8 MB image) from the server. Loading the image works fine in IE. I am still looking for ways to get rid of this problem. I changed the server (IIS) settings to allow bigger file sizes. Also, I used the "load" event on the image using jQuery and tried all the options listed at http://api.jquery.com/load-event/, but nothing has worked so far. If anyone has come across a similar problem and a way to resolve it, it would be nice to hear from you. Please note: high-resolution images are part of the requirement. Code:

        <style>
            img {
                background-color: #FFFFFF;
                background-image: url(http://eremurus.hyd:8080/QMS/plugin/imagepanner/loader.gif);
                background-repeat: no-repeat;
                background-position: center center;
            }
        </style>
        <script src="../plugin/jquery-ui-1.8.7.custom/js/jquery-1.4.4.min.js" type="text/javascript"></script>
        <script>
            jQuery(document).ready(function($){
                ///var _url = "http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg";
                // set up the node / element
                _im = $("#main");
                //_im.bind("load",function(){ $(this).fadeIn(); });
                // set the src attribute now, after insertion to the DOM
                //_im.attr('src',_url);
                $("#main").one("load", function(){
                    alert('loaded');
                }).each(function(){
                    if (this.complete) {
                        $(this).trigger("load");
                    }
                });
            });
        </script>
        </head>
        <body>
            <div id="target"><img id='main' src="http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg"></div>
        </body>
        </html>

    Read the article

  • Large Table in iFrame crashes IE8

    - by Brian
    I have a page with an iFrame whose source is an ashx page. The handler takes in 3 arguments through the query string and generates a text/html response containing a table. When the table gets 1700 rows it crashes the IE8 browser. The browser freezes and returns a null reference error. If I take the html that is being rendered and place it inside a DIV on the page it renders fine in IE8. Any suggestions?

    Read the article

  • Looking for nice JavaScript/jQuery code for displaying large tables

    - by misha-moroshko
    I have an HTML table which may contain thousands of rows (the number of columns is not a problem here). I would like to be able to browse this table easily and be able to do the following: decide how many rows will be presented; jump to the next/previous X rows; scroll the table using the scroll bars to any desired line; and easily customize/extend this JavaScript/jQuery code. Has anyone seen something similar? Thank you very much!

    Read the article

  • How to handle large datasets like SproutCore does

    - by Nik
    Hello all, I really don't have any substantial code to show here; actually, that's kind of why I am writing. I looked at the SproutCore demo, especially the Collection demo, at http://demo.sproutcore.com/sample_controls/, and am amazed by it loading 200,000 records into the page so easily. I tried using Rails to provide 200,000 records in a completely blank HTML page with

        <% @projects.each do |p| %>
          <%= p.title %>
        <% end %>

    and that freezes the browser for seconds on my M1530 laptop with 4 GB of RAM, a T7700 CPU, and a 256 GB SSD. Yet the SproutCore demo does not freeze and takes less than 3 seconds to load. What do you think is the one technique they are using to enable this? Thanks!

    Read the article

  • Locking DB w/ Large Reads (Ruby-on-Rails/Heroku)

    - by Splashlin
    Currently I have a Web API running on Heroku that is constantly writing information we're collecting from other data sources (currently there's about half a GB of data, and it's growing very quickly). We're looking to add a reporting system on top of the current database that we can use to extract useful information from the DB. The problem is that when we're running reports we're locking the DB, and any other sites communicating with the DB are timing out. Does anyone have any solutions on how to solve this type of issue? Amazon RDS seems to have some interesting stuff with database replication, but I don't know if that will solve my problems. Any advice would be greatly appreciated. Thanks

    Read the article

  • Using a large list of terms, search through page text and replace words with links

    - by dunc
    A while ago I posted this question asking if it's possible to convert text to HTML links if it matches a list of terms from my database. I have a fairly huge list of terms: around 6,000. The accepted answer on that question was superb, but having never used XPath, I was at a loss when problems started occurring. At one point, after fiddling with code, I somehow managed to add over 40,000 random characters to our database, the majority of which required manual removal. Since then I've lost faith in that idea, and the simpler PHP solutions simply weren't efficient enough to deal with the amount of data and the quantity of terms. My next attempt at a solution is to write a JS script which, once the page has loaded, retrieves the terms and matches them against the text on the page. This answer has an idea which I'd like to attempt. I would use AJAX to retrieve the terms from the database, to build an object such as this:

        var words = [
            { word: 'Something', link: 'http://www.something.com' },
            { word: 'Something Else', link: 'http://www.something.com/else' }
        ];

    When the object has been built, I'd use this kind of code:

        //for each array element
        $.each(words, function() {
            //store it ("this" is gonna become the dom element in the next function)
            var search = this;
            $('.message').each(function() {
                //if it's exactly the same
                if ($(this).text() === search.word) {
                    //do your magic tricks
                    $(this).html('<a href="' + search.link + '">' + search.link + '</a>');
                }
            });
        });

    Now, at first sight, there is a major issue here: with 6,000 terms, will this code be in any way efficient enough to do what I'm trying to do? One option would possibly be to perform some of the overhead within the PHP script that the AJAX communicates with. For instance, I could send the ID of the post, and then the PHP script could use SQL statements to retrieve all of the information from the post and match it against all 6,000 terms; the return call to the JavaScript could then simply be the matching terms, which would significantly reduce the number of matches the above jQuery would make (around 50 at most). I have no problem with the script taking a few seconds to "load" in the user's browser, as long as it isn't impacting their CPU usage or anything like that. So, two questions in one: Can I make this work? What steps can I take to make it as efficient as possible? Thanks in advance

    Read the article
