Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • Enumerating large (20-digit) [probable] prime numbers

    - by Paul Baker
    Given A, on the order of 10^20, I'd like to quickly obtain a list of the first few prime numbers greater than A. OK, my needs aren't quite that exact - it's alright if occasionally a composite number ends up on the list. What's the fastest way to enumerate the (probable) primes greater than A? Is there a quicker way than stepping through all of the integers greater than A (other than obvious multiples of say, 2 and 3) and performing a primality test for each of them? If not, and the only method is to test each integer, what primality test should I be using?
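
    A minimal sketch of one way to do this on the JVM: java.math.BigInteger's nextProbablePrime() performs exactly this probable-prime search (Miller-Rabin plus a Lucas test, with a composite probability below 2^-100); the starting value below is illustrative.

        import java.math.BigInteger;

        public class PrimesAbove {
            public static void main(String[] args) {
                BigInteger a = new BigInteger("100000000000000000000"); // A, about 10^20
                BigInteger p = a;
                for (int i = 0; i < 5; i++) {
                    // nextProbablePrime() never skips a prime; the chance that the
                    // returned value is composite does not exceed 2^-100.
                    p = p.nextProbablePrime();
                    System.out.println(p);
                }
            }
        }

    Stepping manually over candidates (skipping even numbers and obvious small-prime multiples) and calling isProbablePrime(certainty) on each amounts to the same thing, with direct control over the speed/certainty trade-off.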

    Read the article

  • sending large data: .getJSON or a proxy?

    - by numerical25
    Hey guys. I was told that the only trick to sending data to an external server (i.e. cross-domain) is to use getJSON. My problem is that the data I am sending exceeds the URL-length limit of the JSONP GET request that getJSON uses cross-domain. I am tracking mouse movements on a screen for analytics. One option is to send a little data at a time, probably every time the mouse moves, but that seems as if it would slow things down. Alternatively, I could set up a proxy server. My question is: which would be better, setting up a proxy server, or just sending bits of information via JavaScript or jQuery? What do the professionals use (Google and other companies that build mash-ups sending a lot of data to cross-domain sites)? I need to know the best practices. Thanks! Also, the data is serialized as JSON.
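
    For the proxy route, a rough sketch of what the same-origin relay can look like on the Java side (servlet name and target endpoint are made up for illustration); the page then POSTs batched mouse data with an ordinary same-origin AJAX call instead of JSONP, so no URL-length limit applies:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical same-origin proxy: the page POSTs its JSON here, and the
        // servlet forwards the body to the external analytics host.
        public class AnalyticsProxyServlet extends HttpServlet {
            private static final String TARGET = "http://analytics.example.com/collect"; // assumed endpoint

            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(TARGET).openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/json");

                // Stream the request body through without buffering it all in memory.
                try (InputStream in = req.getInputStream(); OutputStream out = conn.getOutputStream()) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                    }
                }
                resp.setStatus(conn.getResponseCode());
            }
        }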

    Read the article

  • Serving large generated files using Google App Engine?

    - by John Carter
    Hiya, Presently I have a GAE app that does some offline processing (it backs up a user's data) and generates a file somewhere in the neighbourhood of 10-100 MB. I'm not sure of the best way to serve this file to the user. The two options I'm considering are: adding some code to the offline processing that 'spoofs' the file as a form upload to the Blobstore, then going through the normal Blobstore process to serve it; or having the offline processing code store the file somewhere off of GAE and serving it from there. Is there a much better approach I'm overlooking? I'm guessing this is functionality that isn't well suited to GAE. I had thought of storing it in the datastore as db.Text or db.Blob, but there I run into the 1 MB entity limit. Any input would be appreciated.

    Read the article

  • Browsers (IE and Firefox) freeze when copying large amounts of text

    - by Matt
    I have a web application - a Java servlet - that delivers data to users as a text printout in a browser (text marked up with HTML so it displays the way we want). The text displays in different colors, though most of it is black. One typical mode of operation is this:

    1. User submits a form to request data.
    2. Servlet delivers an HTML file to the browser.
    3. User presses CTRL+A to select all the text.
    4. User presses CTRL+C to copy all the text.
    5. User goes to a text editor and presses CTRL+V to paste the text.

    In the testing where I'm having this problem, step 2 successfully loads all the data - we wait for that to complete, and we can scroll to the end of what the browser loaded and see the end of the data. However, the browser freezes on step 3 (Firefox) or step 4 (IE). Because step 2 finishes, I think it is a browser/memory issue, not an issue with the web application. If I instead run several smaller queries that together return the same data and copy/paste that text, the file I save it into ends up being about 8 MB. If I save the browser's displayed HTML to a file via File > Save As, it works fine and the file is about 22 MB. We've tried this on two different computers at work (both running Windows XP, with at least 2 GB of RAM and many GB of free disk space), in both Firefox and IE. We also tried it on a home computer outside the work network (thinking our IT security software might be the cause), running Windows 7 with IE, and still had the problem. While this runs, the browser sits at 50% CPU; Firefox's memory usage grows to about 1 GB, while IE's stays in the several hundred MBs. We once let it run for half an hour, and it did not complete. I'm most likely going to modify the web app to offer a plain-text file for download, and I imagine that will get the users what they need. But in the meantime - and because I'm curious, and I don't like my application freezing people's browsers - does anyone have any ideas about the freeze? I understand that sometimes you just reach your memory limit, but 22 MB sounds to me like an amount I should be able to copy to the clipboard.
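
    For the planned plain-text download option, a minimal servlet sketch (names are illustrative): a Content-Disposition: attachment header makes the browser stream the data to disk instead of building - and then copying - a 22 MB DOM:

        import java.io.IOException;
        import java.io.PrintWriter;

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical sketch: serve the report as a plain-text attachment.
        public class ReportDownloadServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.setContentType("text/plain");
                resp.setCharacterEncoding("UTF-8");
                resp.setHeader("Content-Disposition", "attachment; filename=\"report.txt\"");

                PrintWriter out = resp.getWriter();
                // A stand-in for the existing data-generation code:
                out.println("...report rows go here...");
            }
        }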

    Read the article

  • How to manage large amounts of delegates and usercallbacks in C# async http library

    - by Tyler
    I'm coding a .NET library in C# for communicating with XBMC via its JSON-RPC interface over HTTP. I coded and released a preliminary version, but everything in it is synchronous. I then recoded the library to be asynchronous for my own purposes, as I was (and am) building an XBMC remote for WP7. I now want to release the new async library, but want to make sure it's nice and tidy first. Because it's asynchronous, a user initiates a request, supplies a callback method that matches my delegate, and then handles the response once it's been received. The problem I have is that within the library I track a RequestState object for the lifetime of the request; it holds the HTTP request/response as well as the user callback as member variables. This would be fine if only one type of object came back, but depending on what the user calls, they may be returned a list of songs, a list of movies, etc. My implementation at the moment uses a single delegate, ResponseDataRecieved, whose single parameter is a plain Object. As only I have used the library, I know which methods return what, and when I handle the response I cast that object to the type I know it really is - List<Song>, List<Movie>, etc. A third party shouldn't have to do this, though - the delegate signature should carry the correct type of object. So then I need a delegate for every type of response data that can be returned to the third party. The specific problem is: how do I handle this gracefully internally? Do I keep a bunch of different RequestState objects, each with a different member variable for the different delegates? That doesn't "feel" right. I just don't know how to do this gracefully and cleanly.
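
    One common way out, sketched here in Java since the question shows no C# code (the pattern carries over directly to C# generics and delegates): make the callback and the request state generic in the payload type, so a single class serves every response shape.

        public class TypedCallbacks {
            interface ResponseCallback<T> {
                void onResponse(T data);
            }

            // One request-state class for all calls; T is the response payload type.
            static class RequestState<T> {
                private final ResponseCallback<T> callback;
                RequestState(ResponseCallback<T> callback) { this.callback = callback; }

                // Invoked by the library once the HTTP response has been parsed.
                void complete(T result) { callback.onResponse(result); }
            }

            static class Song {
                final String title;
                Song(String title) { this.title = title; }
            }

            public static void main(String[] args) {
                RequestState<java.util.List<Song>> state =
                    new RequestState<>(songs -> System.out.println(songs.size() + " songs"));
                state.complete(java.util.Arrays.asList(new Song("Blue Monday")));
            }
        }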

    Read the article

  • Problems opening large csv file

    - by John Tyler
    I have a CSV file that is 100 MB in size. I need to parse some data out of it into a new format. I tried PHP, but keep running into memory issues: after around the first 150 "rows" or so, the script dies. That's even on localhost, after doing everything I can to tune the PHP settings, including memory_limit and max_execution_time. Before I continue, I'd like to know whether Python will die on me too, or whether I'll have to use C++. Can someone name good CSV libraries for these languages? The file is quoted CSV. I can't even open this text file in OpenOffice without it dying on me (then again, Java chokes on it as badly as PHP does).
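
    Whatever the language, the usual cure is streaming rather than slurping: read one row, convert it, write it out, and let it go, so memory use stays flat regardless of file size. A sketch in Java (file name illustrative); PHP's fgetcsv() in a while loop behaves the same way, which is why the 100 MB file need never fit in memory:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class CsvStream {
            public static void main(String[] args) throws IOException {
                try (BufferedReader in = Files.newBufferedReader(Paths.get("big.csv"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        // Naive split; a real quoted-CSV file needs a proper parser
                        // (e.g. Apache Commons CSV or OpenCSV, both of which stream).
                        String[] fields = line.split(",");
                        // ... transform and append to the output file here ...
                    }
                }
            }
        }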

    Read the article

  • Best way to load a large file into an ArrayList in Java

    - by user1730833
    I have a file about 300 MB in size, and I want to read its contents line by line and add them to an ArrayList. So I created an ArrayList, opened the file with a BufferedReader, and started adding lines to the list - at which point I get: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space. Please tell me what the solution for this should be.
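
    Two standard escapes, assuming the file really is plain lines: raise the heap (java -Xmx1g ...) if every line truly must be resident at once, or - usually better - process the file as a stream so no more than one line is held at a time. A minimal sketch:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class LineByLine {
            public static void main(String[] args) throws IOException {
                // If everything must live in memory at once, raise the heap:
                //   java -Xmx1g LineByLine
                // Otherwise, handle each line as it is read and let it be collected.
                try (BufferedReader reader = new BufferedReader(new FileReader("data.txt"))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        process(line); // replace with the real per-line work
                    }
                }
            }

            private static void process(String line) {
                // placeholder for whatever the lines are needed for
            }
        }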

    Read the article

  • How to optimize indexing of large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    So I have this cron script that is deployed to a host and run via cron; it indexes all the records in a database table, and the index is later used both by the front end of the site and by backend operations. After the operation, the index is about 3-4 MB. The problem is that it takes a lot of resources (CPU: 30+ and a good chunk of memory) and slows the machine down. My question is how to optimize the operation described below: first, a select query is built using the Zend Framework API; this query is passed to a Paginator factory that returns a paginator, which I use to bound the number of items being indexed at a time rather than iterating over too many at once. The script iterates over the current items in the paginator object with a foreach loop until reaching the end, then fetches the items for the next page and starts again. I suspect the overhead is caused by Zend_Lucene, but I have no idea how it could be improved.

    Read the article

  • deleting a large number of rows from a table

    - by Azeem
    We have a requirement to delete rows on the order of millions from multiple tables as a batch job (note that we are not deleting all the rows - we delete based on a timestamp stored in an indexed column). Obviously a normal DELETE takes forever (because of logging, referential constraint checking, etc.). I know the LUW world has ALTER TABLE ... NOT LOGGED INITIALLY, but I can't seem to find an equivalent SQL statement for DB2 v8 on z/OS. Does anyone have ideas on how to do this really fast? Also, any ideas on how to avoid the referential checks when deleting the rows? Please let me know.
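
    When an unlogged path isn't available, the usual fallback is to slice the delete over the indexed timestamp and commit each slice, so no single unit of work logs millions of rows (child tables can be cleared the same way before their parents, which also keeps the referential checks cheap). A hedged JDBC sketch with made-up table, column, and connection names:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.Timestamp;

        // Sketch only: ORDERS/CREATED_TS and the JDBC URL are assumptions.
        public class BatchedDelete {
            public static void main(String[] args) throws Exception {
                long start  = Timestamp.valueOf("2009-01-01 00:00:00").getTime();
                long cutoff = Timestamp.valueOf("2010-01-01 00:00:00").getTime();
                long sliceMs = 24L * 60 * 60 * 1000; // one day per delete; tune to taste

                try (Connection con = DriverManager.getConnection("jdbc:db2://host:446/MYDB")) {
                    con.setAutoCommit(false);
                    PreparedStatement del = con.prepareStatement(
                        "DELETE FROM ORDERS WHERE CREATED_TS >= ? AND CREATED_TS < ?");
                    for (long t = start; t < cutoff; t += sliceMs) {
                        del.setTimestamp(1, new Timestamp(t));
                        del.setTimestamp(2, new Timestamp(Math.min(t + sliceMs, cutoff)));
                        del.executeUpdate();
                        con.commit(); // keep each unit of work small
                    }
                }
            }
        }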

    Read the article

  • Representing a very large array of bits in little memory

    - by user614624
    Hello, I would like to represent a structure containing 250 M states (1 bit each) in as little memory as possible (100 KB maximum). The operations on it are set/get. I could not say whether it's dense or sparse; it may vary. The language I want to use is C. I looked at other threads here to find something suitable. A probabilistic structure like a Bloom filter, for example, would not fit because of the possible false answers. Note that a flat bitmap is out of the question: 250 M bits is roughly 30 MB, about 300 times the budget, so some structure in the data has to be exploited. Any suggestions?

    Read the article

  • Firefox does not load large images

    - by Pradeep
    I am stuck with what looks like a bug in FF: it's unable to load large images (mine is 8 MB) from the server. The same image loads fine in IE. I am still looking for ways to get rid of this problem. I changed the server (IIS) settings to allow bigger file sizes, and I used the "load" event on the image via jQuery, trying all the options listed at http://api.jquery.com/load-event/, but nothing has worked so far. If any of you has come across a similar problem and a way to resolve it, it would be nice to hear from you. Please note: high-resolution images are part of the requirement. Code:

        <style>
        img {
            background-color: #FFFFFF;
            background-image: url(http://eremurus.hyd:8080/QMS/plugin/imagepanner/loader.gif);
            background-repeat: no-repeat;
            background-position: center center;
        }
        </style>
        <script src="../plugin/jquery-ui-1.8.7.custom/js/jquery-1.4.4.min.js" type="text/javascript"></script>
        <script>
        jQuery(document).ready(function($){
            //var _url = "http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg";
            // set up the node / element
            _im = $("#main");
            //_im.bind("load", function(){ $(this).fadeIn(); });
            // set the src attribute now, after insertion to the DOM
            //_im.attr('src', _url);
            $("#main").one("load", function(){
                alert('loaded');
            }).each(function(){
                if (this.complete) {
                    $(this).trigger("load");
                }
            });
        });
        </script>
        </head>
        <body>
        <div id="target"><img id="main" src="http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg"></div>
        </body>
        </html>

    Read the article

  • Large Table in iFrame crashes IE8

    - by Brian
    I have a page with an iFrame whose source is an ashx page. The handler takes in 3 arguments through the query string and generates a text/html response containing a table. When the table gets 1700 rows it crashes the IE8 browser. The browser freezes and returns a null reference error. If I take the html that is being rendered and place it inside a DIV on the page it renders fine in IE8. Any suggestions?

    Read the article

  • Locking DB w/ Large Reads (Ruby-on-Rails/Heroku)

    - by Splashlin
    Currently I have a Web API running on Heroku that constantly writes information we're collecting from other data sources (there's currently about half a GB of data, and it's growing very quickly). We're looking to add a reporting system on top of the current database that we can use to extract useful information from the DB. The problem is that when we run reports we lock the DB, and any other sites communicating with the DB time out. Does anyone have a solution for this type of issue? Amazon RDS seems to have some interesting database replication features, but I don't know if that will solve my problems. Any advice would be greatly appreciated. Thanks

    Read the article

  • Looking for nice Javascript/jQuery code for displaying large tables

    - by misha-moroshko
    I have an HTML table which may contain thousands of rows (the number of columns is not a problem here). I would like to be able to browse this table easily and do the following:

    - Decide how many rows are presented
    - Jump to the next/previous X rows
    - Scroll the table to any desired line using the scroll bars
    - Easily customize/extend the JavaScript/jQuery code

    Has anyone seen something similar? Thank you very much!

    Read the article

  • how to handle large datasets like SproutCore

    - by Nik
    Hello all, I really don't have any substantial code to show here; actually, that's kind of why I am writing. I looked at the SproutCore demo, especially the Collection demo at http://demo.sproutcore.com/sample_controls/, and am amazed that it loads 200,000 records into the page so easily. I tried using Rails to provide 200,000 records to a completely blank HTML page with <% @projects.each do |p| %> <%= p.title %> <% end %>, and that freezes the browser for seconds on my M1530 laptop with 4 GB of RAM, a T7700 CPU, and a 256 GB SSD. Yet the SproutCore demo does not freeze and takes less than 3 seconds to load. What do you think is the key technique they use to make this possible? Thanks!

    Read the article

  • Using a large list of terms, search through page text and replace words with links

    - by dunc
    A while ago I posted this question asking if it's possible to convert text to HTML links when it matches a list of terms from my database. I have a fairly huge list of terms - around 6,000. The accepted answer on that question was superb, but having never used XPath, I was at a loss when problems started occurring. At one point, after fiddling with the code, I somehow managed to add over 40,000 random characters to our database, most of which required manual removal. Since then I've lost faith in that idea, and the simpler PHP solutions simply weren't efficient enough to deal with the amount of data and the number of terms. My next attempt at a solution is to write a JS script which, once the page has loaded, retrieves the terms and matches them against the text on the page. This answer has an idea I'd like to attempt. I would use AJAX to retrieve the terms from the database and build an object such as this:

        var words = [
            { word: 'Something', link: 'http://www.something.com' },
            { word: 'Something Else', link: 'http://www.something.com/else' }
        ];

    When the object has been built, I'd use this kind of code:

        // for each array element
        $.each(words, function() {
            // store it ("this" is gonna become the DOM element in the next function)
            var search = this;
            $('.message').each(function() {
                // if it's exactly the same
                if ($(this).text() === search.word) {
                    // do your magic tricks
                    $(this).html('<a href="' + search.link + '">' + search.link + '</a>');
                }
            });
        });

    Now, at first sight, there is a major issue here: with 6,000 terms, will this code be in any way efficient enough to do what I'm trying to do? One option would be to perform some of the overhead within the PHP script that the AJAX communicates with. For instance, I could send the ID of the post, and the PHP script could use SQL statements to retrieve all of the information from the post and match it against all 6,000 terms. The return call to the JavaScript would then be just the matching terms, which would significantly reduce the number of matches the above jQuery makes (around 50 at most). I have no problem with the script taking a few seconds to "load" in the user's browser, as long as it isn't hammering their CPU or anything like that. So, two questions in one: Can I make this work? What steps can I take to make it as efficient as possible? Thanks in advance.
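
    For the server-side pre-filter idea, one way to avoid 6,000 separate scans is to fold all the terms into a single alternation and make one pass over the post text - sketched here in Java with hypothetical names; the same idea works in PHP with a single preg_match_all. For really large term lists, an Aho-Corasick matcher is the sturdier option.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class TermFilter {
            // Returns only the terms that actually occur in postText, so the page
            // receives a handful of entries instead of all 6,000.
            public static List<String> matchingTerms(String postText, List<String> terms) {
                StringBuilder alternation = new StringBuilder();
                for (String t : terms) {
                    if (alternation.length() > 0) alternation.append('|');
                    alternation.append(Pattern.quote(t)); // escape regex metacharacters
                }
                Pattern p = Pattern.compile("\\b(" + alternation + ")\\b",
                        Pattern.CASE_INSENSITIVE);
                List<String> found = new ArrayList<>();
                Matcher m = p.matcher(postText);
                while (m.find()) {
                    found.add(m.group(1));
                }
                return found;
            }
        }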

    Read the article

  • Reducing lag when downloading large amount of data from webpage

    - by Mahir
    I am getting data via RSS feeds and displaying each article in a table view cell. Each cell has an image view, set to a default image; if the page has an image, the default is to be replaced with the image from the article. As of now, each cell downloads the source code of its web page, causing the app to lag when I push the view controller and when I try scrolling. Here is what I have in the cellForRowAtIndexPath: method:

        NSString *storyLink = [[stories objectAtIndex:storyIndex] objectForKey:@"link"];
        storyLink = [storyLink stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
        NSString *sourceCode = [NSString stringWithContentsOfURL:[NSURL URLWithString:storyLink]
                                                        encoding:NSUTF8StringEncoding error:&error];
        NSString *startPt = @"instant-gallery";
        NSString *startPt2 = @"<img src=\"";
        if ([sourceCode rangeOfString:startPt].length != 0) { // webpage has images
            // find the first "<img src=...>" tag starting from "instant-gallery"
            NSString *trimmedSource = [sourceCode substringFromIndex:NSMaxRange([sourceCode rangeOfString:startPt])];
            trimmedSource = [trimmedSource substringFromIndex:NSMaxRange([trimmedSource rangeOfString:startPt2])];
            trimmedSource = [trimmedSource substringToIndex:[trimmedSource rangeOfString:@"\""].location];
            NSURL *url = [NSURL URLWithString:trimmedSource];
            NSData *data = [NSData dataWithContentsOfURL:url];
            UIImage *image = [UIImage imageWithData:data];
            cell.picture.image = image;

    Someone suggested using NSOperationQueue. Would this be a good solution?

    EDIT:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *MyIdentifier = @"FeedCell";
            LMU_LAL_FeedCell *cell = [tableView dequeueReusableCellWithIdentifier:MyIdentifier];
            if (cell == nil) {
                NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"LMU_LAL_FeedCell" owner:self options:nil];
                cell = (LMU_LAL_FeedCell *)[nib objectAtIndex:0];
            }
            int storyIndex = [indexPath indexAtPosition:[indexPath length] - 1];
            NSString *untrimmedTitle = [[stories objectAtIndex:storyIndex] objectForKey:@"title"];
            cell.title.text = [untrimmedTitle stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceCharacterSet]];
            CGSize maximumLabelSize = CGSizeMake(205, 9999);
            CGSize expectedLabelSize = [cell.title.text sizeWithFont:cell.title.font constrainedToSize:maximumLabelSize];
            // adjust the label to the new height
            CGRect newFrame = cell.title.frame;
            newFrame.size.height = expectedLabelSize.height;
            cell.title.frame = newFrame;
            // position frame of date label
            CGRect dateNewFrame = cell.date.frame;
            dateNewFrame.origin.y = cell.title.frame.origin.y + cell.title.frame.size.height + 1;
            cell.date.frame = dateNewFrame;
            cell.date.text = [self formatDateAtIndex:storyIndex];
            dispatch_queue_t someQueue = dispatch_queue_create("cell background queue", NULL);
            dispatch_async(someQueue, ^(void){
                NSError *error = nil;
                NSString *storyLink = [[stories objectAtIndex:storyIndex] objectForKey:@"link"];
                storyLink = [storyLink stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
                NSString *sourceCode = [NSString stringWithContentsOfURL:[NSURL URLWithString:storyLink]
                                                                encoding:NSUTF8StringEncoding error:&error];
                NSString *startPt = @"instant-gallery";
                NSString *startPt2 = @"<img src=\"";
                if ([sourceCode rangeOfString:startPt].length != 0) { // webpage has images
                    // find the first "<img src=...>" tag starting from "instant-gallery"
                    NSString *trimmedSource = [sourceCode substringFromIndex:NSMaxRange([sourceCode rangeOfString:startPt])];
                    trimmedSource = [trimmedSource substringFromIndex:NSMaxRange([trimmedSource rangeOfString:startPt2])];
                    trimmedSource = [trimmedSource substringToIndex:[trimmedSource rangeOfString:@"\""].location];
                    NSURL *url = [NSURL URLWithString:trimmedSource];
                    NSData *data = [NSData dataWithContentsOfURL:url];
                    UIImage *image = [UIImage imageWithData:data];
                    dispatch_async(dispatch_get_main_queue(), ^(void){
                        cell.picture.image = image;
                    });
                }) //error: expected expression
            }
            return cell; //error: expected identifier
        } //error: extraneous closing brace

    Read the article

  • Ideal way/architecture to deliver large data over Web Services

    - by zengr
    We are designing six web services that will serve another client component, which requires data from the web service we are implementing. The catch is that we aren't implementing just one WS: there is one WS the client component hits, and it initiates a series of five more WSs, which gather data from their respective data stores and finally return it to the original WS, which then delivers the data back to the client component. So if the requested data becomes huge, this becomes a serious problem for our internal communication channel. What do you suggest? What can be done to avoid overloading the communication channel between the internal WSs while still delivering the data to the client component?

    Read the article

  • Java combine parents of two large inheritance chains

    - by Soylent Green
    I have two parent classes in a huge project, let's say ClassA and ClassB. Each has many subclasses, which in turn have many subclasses, and so on. My task is to "marry" these two "families" so that both inherit from a SINGLE parent: I essentially need to make ClassA and ClassB one class, the parent of their combined subclasses. ClassA and ClassB both currently implement Serializable. I am currently trying to make both inheritance chains inherit from ClassA, and then copy all functions and data members from ClassB into ClassA. This is tedious, and I think a terrible solution. What would be the CORRECT way to solve this problem?
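
    A hedged sketch of the usual refactoring: rather than folding ClassB's body into ClassA, introduce one new abstract parent, migrate the genuinely shared members up into it incrementally, and leave both existing trees untouched beneath it.

        import java.io.Serializable;

        // New single root; shared fields and methods move here one at a time.
        public abstract class CommonParent implements Serializable {
        }

        class ClassA extends CommonParent { /* existing body unchanged */ }
        class ClassB extends CommonParent { /* existing body unchanged */ }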

    Read the article

  • Need advice on cron job'ing a very large process

    - by Arms
    I have a PHP script that grabs data from an external service and saves it to my database. I need this script to run once every minute for every user in the system (of which I expect thousands). My question is: what's the most efficient way to run this per user, per minute? At first I thought I would have a function that grabs all the user IDs from my database and iterates over them, performing the task for each one, but as the number of users grows this will take longer and no longer fit within one-minute intervals. Perhaps I should queue the user IDs and perform the task individually for each one? In which case, I'm actually unsure how to proceed. Thanks in advance for any advice.

    Read the article

  • Read large amount of data from file in Java

    - by Crozin
    Hello, I've got a text file that contains 1,000,002 numbers in the following format:

        123 456
        1 2 3 4 5 6 .... 999999 100000

    Now I need to read that data and assign the very first two numbers to int variables, and all the rest (1,000,000 numbers) to an int[] array. It's not a hard task, but it's horribly slow. My first attempt was java.util.Scanner:

        Scanner stdin = new Scanner(new File("./path"));
        int n = stdin.nextInt();
        int t = stdin.nextInt();
        int array[] = new int[n];
        for (int i = 0; i < n; i++) {
            array[i] = stdin.nextInt();
        }

    It works as expected, but takes about 7500 ms to execute, and I need to fetch that data in at most a few hundred milliseconds. Then I tried java.io.BufferedReader: using BufferedReader.readLine() and String.split() I got the same result in about 1700 ms, but that's still too slow. How can I read this amount of data in less than 1 second? The final result should be equal to:

        int n = 123;
        int t = 456;
        int array[] = { 1, 2, 3, 4, ..., 999999, 100000 };
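
    A sketch of the usual fix: java.io.StreamTokenizer parses numbers straight out of a buffered stream, skipping Scanner's regex machinery and the String allocations of readLine()/split(); a million ints typically parse well under a second this way.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.io.StreamTokenizer;

        public class FastIntReader {
            public static void main(String[] args) throws IOException {
                StreamTokenizer st = new StreamTokenizer(
                        new BufferedReader(new FileReader("./path")));
                st.nextToken();
                int n = (int) st.nval; // nval holds the parsed number as a double
                st.nextToken();
                int t = (int) st.nval;
                int[] array = new int[n];
                for (int i = 0; i < n; i++) {
                    st.nextToken();
                    array[i] = (int) st.nval;
                }
            }
        }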

    Read the article

  • Calculating average (AVG) and grouping by week on large data set takes too long

    - by caioiglesias
    I'm getting average prices by week over 7 million rows, and it takes around 30 seconds. This is the query:

        SELECT AVG(price) AS price, YEARWEEK(FROM_UNIXTIME(timelog)) AS week
        FROM pricehistory
        WHERE timelog > $range AND product_id = $id
        GROUP BY week

    The only week whose data actually changes (and is worth re-averaging every time) is always the last one, so recalculating the whole period is a waste of resources. I just wanted to know if MySQL has a tool to help with this.
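
    MySQL has no built-in incremental aggregate, so the standard fix is a small summary table holding the averages of completed weeks, refreshed by a scheduled job, with only the current week averaged live. A hedged sketch (the price_week_avg table, its unique key on (product_id, week), and the JDBC wrapper are all assumptions):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class WeeklyRollup {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/shop", "user", "pass")) {
                    // Run on a schedule: fold finished weeks into the summary table.
                    PreparedStatement fold = con.prepareStatement(
                        "REPLACE INTO price_week_avg (product_id, week, price) " +
                        "SELECT product_id, YEARWEEK(FROM_UNIXTIME(timelog)) AS week, AVG(price) " +
                        "FROM pricehistory " +
                        "WHERE YEARWEEK(FROM_UNIXTIME(timelog)) < YEARWEEK(NOW()) " +
                        "GROUP BY product_id, week");
                    fold.executeUpdate();
                    // Report queries then read price_week_avg and union in a live
                    // AVG over only the current week's rows.
                }
            }
        }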

    Read the article

  • Large number of constants in Java

    - by Lars D
    I need to include about 1 MB of data in a Java application, for very fast and easy access in the rest of the source code. My main background is not Java, so my initial idea was to convert the data directly to Java source code, defining 1 MB of constant arrays, classes (instead of C++ structs), etc., something like this:

        public final/immutable/const MyClass MyList[] = {
            { 23012, 22, "Hamburger" },
            { 28375, 123, "Kieler" }
        };

    However, it seems that Java does not support such constructs. Is this correct? If yes, what is the best solution to this problem?
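
    It is correct - and even rewritten as valid Java, a megabyte of literal array data tends to blow the 64 KB bytecode limit on a single method, which includes the class's static initializer. The usual approach, sketched with made-up names: ship the data as a classpath resource and parse it once into immutable structures when the class loads.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.nio.charset.StandardCharsets;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public final class MyData {
            public static final class Entry {
                public final int id;
                public final int count;
                public final String name;
                Entry(int id, int count, String name) {
                    this.id = id; this.count = count; this.name = name;
                }
            }

            // Parsed once at class-load time; read-only thereafter.
            public static final List<Entry> LIST = load();

            private static List<Entry> load() {
                List<Entry> out = new ArrayList<>();
                // Assumes /mydata.csv (e.g. "23012;22;Hamburger") is on the classpath.
                try (BufferedReader in = new BufferedReader(new InputStreamReader(
                        MyData.class.getResourceAsStream("/mydata.csv"), StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] f = line.split(";");
                        out.add(new Entry(Integer.parseInt(f[0]), Integer.parseInt(f[1]), f[2]));
                    }
                } catch (IOException e) {
                    throw new ExceptionInInitializerError(e);
                }
                return Collections.unmodifiableList(out);
            }
        }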

    Read the article

  • Selecting from a Large Table SQL 2005

    - by Eugene
    I have a SQL table with more than 1,000,000 rows, and I need to run the query you see below:

        SELECT DISTINCT TOP (200) COUNT(1) AS COUNT, KEYWORD
        FROM QUERIES WITH(NOLOCK)
        WHERE KEYWORD LIKE '%Something%'
        GROUP BY KEYWORD
        ORDER BY 'COUNT' DESC

    Could you please tell me how I can optimize it to speed up execution? Thank you for any useful answers.

    Read the article
