Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.

Page 172/2640 | < Previous Page | 168 169 170 171 172 173 174 175 176 177 178 179  | Next Page >

  • Getting the item count of a large SharePoint list in the fastest way

    - by sooraj
    I am trying to get the count of the items in a SharePoint document library programmatically. The scale I am working with is 30-70,000 items. We have a user control in a SmartPart to display the count. Ours is a team site. This is the code to get the total count:

        SPList VoulnterrList = web.Lists[ListTitle];
        SPQuery query = new SPQuery();
        query.ViewAttributes = "Scope=\"Recursive\"";
        string queries = "<Where><Eq><FieldRef Name='ApprovalStatus' /><Value Type='Choice'>Pending</Value></Eq></Where>";
        query.Query = queries;
        SPListItemCollection lstitemcollAssoID = VoulnterrList.GetItems(query);
        lblCount.Text = "Total Proofs: " + VoulnterrList.Items.Count.ToString() + " Pending Proofs: " + lstitemcollAssoID.Count.ToString();

    The problem is that this has a serious performance issue: it takes 75 to 80 seconds to load the page. If we comment it out, page load drops to 4 seconds. Is there a better approach for this problem? Ours is SharePoint 2007.

    Read the article

  • How to store and collect data for mining such information as most viewed for the last 24 hours, last 7 days

    - by Kirzilla
    Hello. Let's imagine that we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K, and all information about the videos is stored in MySQL. The number of daily video views is about 1.5 million. As instruments we have the hard disk drive (text files), MySQL, and Redis. Views to report:
        - top viewed
        - top viewed last 24 hours
        - top viewed last 7 days
        - top viewed last 30 days
        - top rated last 365 days
    How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate views per video for the previous hour and insert this information into MySQL. Then recalculate the totals (for the last 24 hours) and update the statistics tables. At the beginning of every day we have to do the same, but recalculate for the last 7 days, last 30 days, and last 365 days. This method seems very poor to me, because we have to keep information about the last 365 days for each video to make the calculations correct. Are there any other good methods? Perhaps we should choose other instruments for this? Thank you.
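    Since Redis is already on the list of instruments, one common way to avoid keeping 365 days of per-video rows in MySQL is to keep a sorted set of view counts per time bucket and merge buckets on demand. Below is a minimal, hypothetical sketch in Python (redis-py 3.x style API); the key names, the hourly bucket size, and the 200-entry cutoff are illustrative assumptions, not part of the original question. For the 30- and 365-day rankings you would merge daily buckets instead of hourly ones to keep the merges cheap.

        # Hypothetical sketch: hourly view counters in Redis sorted sets,
        # merged on demand into "top viewed last N hours" rankings.
        from datetime import datetime, timedelta

        import redis

        r = redis.Redis()

        def record_view(video_id, when):
            """Increment the view counter for one video in this hour's bucket."""
            bucket = "views:" + when.strftime("%Y%m%d%H")   # e.g. views:2008010100
            r.zincrby(bucket, 1, video_id)
            r.expire(bucket, 366 * 24 * 3600)               # keep at most a year of buckets

        def top_viewed(hours, now, limit=200):
            """Merge the last `hours` hourly buckets and return the top videos."""
            buckets = ["views:" + (now - timedelta(hours=h)).strftime("%Y%m%d%H")
                       for h in range(hours)]
            dest = "top:last%dh" % hours
            r.zunionstore(dest, buckets)        # sums the scores across buckets
            r.expire(dest, 3600)                # cache the ranking for an hour
            return r.zrevrange(dest, 0, limit - 1, withscores=True)

        # top_viewed(24, datetime.utcnow())      -> top viewed last 24 hours
        # top_viewed(24 * 7, datetime.utcnow())  -> top viewed last 7 days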

    Read the article

  • Selecting from a Large Table SQL 2005

    - by Eugene
    I have a SQL table with more than 1,000,000 rows, and I need to select from it with the query you can see below:

        SELECT DISTINCT TOP (200) COUNT(1) AS COUNT, KEYWORD
        FROM QUERIES WITH(NOLOCK)
        WHERE KEYWORD LIKE '%Something%'
        GROUP BY KEYWORD
        ORDER BY 'COUNT' DESC

    Could you please tell me how I can optimize it to speed up execution? Thank you for any useful answers.

    Read the article

  • Breaking up large DataGridView for printing

    - by Hal
    Hey, I've got a single-row, 40-column DataGridView that I need to print. Since I can neither print it directly (because A4 sheets won't cut it ;)) nor adjust its width to the width of the page itself (because the headers look terrible), I wanted to break the DataGridView into 4 separate pieces and display 10 columns per row (imagine: columns 1 to 10 in the first line, columns 11 to 20 four or five lines below, etc.). Is there an easy way to do this? I was leaning towards a more manual approach (using for loops), but I'd love to know if there's a more elegant way. Cheers

    Read the article

  • Downloading Large JSON File to local file using Java

    - by user1279675
    I'm attempting to download a JSON file from the following URL - http://api.crunchbase.com/v/1/companies.js - to a local file. I'm using Java 1.7 and the following JSON library - http://www.json.org/java/ - to attempt to make it work. Here's my code:

        public static void download(String address, String localFileName) {
            OutputStream out = null;
            URLConnection conn = null;
            InputStream in = null;
            try {
                URL url = new URL(address);
                out = new BufferedOutputStream(new FileOutputStream(localFileName));
                conn = url.openConnection();
                in = conn.getInputStream();
                byte[] buffer = new byte[1024];
                int numRead;
                long numWritten = 0;
                while ((numRead = in.read(buffer)) != -1) {
                    out.write(buffer, 0, numRead);
                    numWritten += numRead;
                    System.out.println(buffer.length);
                    System.out.println(" " + buffer.hashCode());
                }
                System.out.println(localFileName + "\t" + numWritten);
            } catch (Exception exception) {
                exception.printStackTrace();
            } finally {
                try {
                    if (in != null) {
                        in.close();
                    }
                    if (out != null) {
                        out.close();
                    }
                } catch (IOException ioe) {
                }
            }
        }

    When I run the code, everything seems to work until, midway through the loop, the program seems to stop and not continue reading the JSON object. Does anyone know why this would stop reading? How could I fix the issue?

    Read the article

  • Linking to a large-address-aware DLL

    - by Canopus
    Suppose I have a DLL which is built with the LARGEADDRESSAWARE linker flag set. Now I have an application dynamically linking to this DLL. Does this make my application LARGEADDRESSAWARE? If not, does it make sense to have this flag set for any DLL?

    Read the article

  • Which kind of changes can't I do with lightweight migration in Core Data?

    - by dontWatchMyProfile
    I recently tried a lot of different things with lightweight migration. These all work:
        1) Renaming attributes (with a renaming identifier specified)
        2) Adding attributes
        3) Adding a new entity + a new attribute + an inverse relationship to an already existing entity
        4) Removing an existing entity + the relationships to that entity
    It almost looks like just about anything can be handled with lightweight migration. Did I miss something? In which cases will I get into trouble and need a more complex approach?

    Read the article

  • Most performant way to check how many objects are referenced by a to-many relationship in Core Data

    - by dontWatchMyProfile
    Let's say I have an employees relationship in a Company entity, and it's to-many. And they're really many: Apple in 100 years, with 1,258,500,073 employees. Could I simply do something like NSInteger numEmployees = [apple.employees count]; without firing 1,258,500,073 faults? (Well, in 100 years the iPhone will easily handle that many objects, for sure... but anyway.)

    Read the article

  • Prevent RegEx Hang on Large Matches...

    - by developerjay
    This is a great regular expression for dates... however, it hangs indefinitely on one page I tried. I wanted to try this page ( http://pleac.sourceforge.net/pleac%5Fpython/datesandtimes.html ) because it has lots of dates on it and I want to grab all of them. I don't understand why it hangs here when it doesn't on other pages. Why is my regexp hanging, and/or how could I clean it up to make it better/more efficient? Python code:

        monthnames = "(?:Jan\w*|Feb\w*|Mar\w*|Apr\w*|May|Jun\w?|Jul\w?|Aug\w*|Sep\w*|Oct\w*|Nov(?:ember)?|Dec\w*)"
        pattern1 = re.compile(r"(\d{1,4}[\/\\\-]+\d{1,2}[\/\\\-]+\d{2,4})")
        pattern4 = re.compile(r"(?:[\d]*[\,\.\ \-]+)*%s(?:[\,\.\ \-]+[\d]+[stndrh]*)+[:\d]*[\ ]?(PM)?(AM)?([\ \-\+\d]{4,7}|[UTCESTGMT\ ]{2,4})*"%monthnames, re.I)
        patterns = [pattern4, pattern1]
        for pattern in patterns:
            print re.findall(pattern, s)

    By the way, when I say I'm trying it against this site, I mean I'm trying it against the web page source.
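    The hang is the classic symptom of catastrophic backtracking: in the prefix (?:[\d]*[\,\.\ \-]+)* the optional digits and the repeated separator run can be split between repetitions in exponentially many ways, so when no month name follows, the engine grinds through all of them before giving up. Below is a minimal, hypothetical illustration of the effect and of one way to restructure the prefix so repetitions cannot overlap; it is not the full date grammar above, and the exact variants needed may differ.

        # Hypothetical sketch: why the original prefix can backtrack catastrophically,
        # and a restructured version whose repetitions cannot overlap.
        import re

        monthnames = r"(?:Jan\w*|Feb\w*|Mar\w*|Apr\w*|May|Jun\w?|Jul\w?|Aug\w*|Sep\w*|Oct\w*|Nov(?:ember)?|Dec\w*)"

        # Original style: [\d]* may match nothing, so a long run of separators can be
        # carved up between iterations of the outer * in exponentially many ways.
        bad_prefix = re.compile(r"(?:[\d]*[,\. \-]+)*" + monthnames)

        # Restructured: every iteration must consume at least one digit and the
        # separator run is bounded, so there is essentially one way to parse the
        # prefix and failure is detected quickly.
        good_prefix = re.compile(r"(?:\d+[,\. \-]{1,3})*" + monthnames)

        text = "- " * 40 + "x"                 # pathological input: long separator run, no month

        # bad_prefix.search(text)              # effectively hangs on this input
        print(good_prefix.search(text))        # fails fast and prints None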

    Read the article

  • Declaring two large 2D arrays gives a segmentation fault

    - by pfdevil
    Hello, I'm trying to declare and allocate memory for two 2D arrays. However, when trying to assign values to itemFeatureQ[39][16816] I get a segmentation fault. I can't understand it, since I have 2 GB of RAM and am only using 19 MB on the heap. Here is the code:

        double** reserveMemory(int rows, int columns)
        {
            double **array;
            int i;

            array = (double**) malloc(rows * sizeof(double *));
            if(array == NULL) {
                fprintf(stderr, "out of memory\n");
                return NULL;
            }
            for(i = 0; i < rows; i++) {
                array[i] = (double*) malloc(columns * sizeof(double *));
                if(array == NULL) {
                    fprintf(stderr, "out of memory\n");
                    return NULL;
                }
            }
            return array;
        }

        void populateUserFeatureP(double **userFeatureP)
        {
            int x,y;
            for(x = 0; x < CUSTOMERS; x++) {
                for(y = 0; y < FEATURES; y++) {
                    userFeatureP[x][y] = 0;
                }
            }
        }

        void populateItemFeatureQ(double **itemFeatureQ)
        {
            int x,y;
            for(x = 0; x < FEATURES; x++) {
                for(y = 0; y < MOVIES; y++) {
                    printf("(%d,%d)\n", x, y);
                    itemFeatureQ[x][y] = 0;
                }
            }
        }

        int main(int argc, char *argv[])
        {
            double **userFeatureP = reserveMemory(480189, 40);
            double **itemFeatureQ = reserveMemory(40, 17770);
            populateItemFeatureQ(itemFeatureQ);
            populateUserFeatureP(userFeatureP);
            return 0;
        }

    Read the article

  • Sending large XML from Silverlight to SVC (WCF)

    - by alexbf
    Hi! I want to send a big XML string to a WCF SVC service from Silverlight. It looks like anything under about 50 KB is sent correctly, but if I try to send something over that limit, my request reaches the server (BeginRequest is called) but never reaches my SVC. I get the classic "NotFound" exception. Any idea how to raise that limit? If I can't raise it, what are my other options? Thanks, Alex

    Read the article

  • How can I execute large MySQL queries fast

    - by testkhan
    I have 4 MySQL tables and a single query that JOINs multiple tables, and I am requesting it via jQuery AJAX, but it takes far too long, about 1-3 minutes, while I want it to execute in 2-5 seconds on average. Is there any special way to execute such queries fast?

    Read the article

  • Empty R environment becomes large file when saved

    - by user1052019
    I'm getting behaviour I don't understand when saving environments. The code below demonstrates the problem. I would have expected the two files (far-too-big.RData and right-size.RData) to be the same size, and also very small, because the environments they contain are empty. In fact, far-too-big.RData ends up the same size as bigfile.RData. I get the same results using 2.14.1 and 2.15.2, both on WinXP 5.1 SP3. Can anyone explain why this is happening? Thanks.

        a <- matrix(runif(1000000, 0, 1), ncol=1000)
        save(a, file="c:/temp/bigfile.RData")

        test <- function() {
            load("c:/temp/bigfile.RData")
            test <- new.env()
            save(test, file="c:/temp/far-too-big.RData")
            test1 <- new.env(parent=globalenv())
            save(test1, file="c:/temp/right-size.RData")
        }
        test()

    Read the article

  • Read a large result set in chunks from mysql

    - by ripper234
    I am trying to read a huge result set from MySQL. Reading it in a straightforward manner didn't work, as MySQL tries to return all the results together, which times out. I found the following piece of code, which tells MySQL to stream the results back one row at a time:

        stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                    java.sql.ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(Integer.MIN_VALUE);

    Can I read a chunk at a time instead of one row at a time? I've tried setting the fetch size to a different value, but it doesn't work.

    Read the article

  • Best way to support large dropdowns

    - by JustAProgrammer
    Say I have a report that can be restricted by specifying some value in a dropdown. This dropdown list references a table with 30,000 records. I don't think it is feasible to populate a dropdown with that many entries! So, what is the best way to let the user select a value in this situation? These values do not really have categories, and even if I subdivided them (with some nested-dropdown arrangement) by the first letter of the value, that could still leave a few thousand entries. What's the best way to deal with this?
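    A common replacement for a huge dropdown is an autocomplete text box backed by a server-side lookup that returns only the top few matches for what the user has typed so far. The sketch below is a hypothetical illustration of that lookup in Python with SQLite; the table name, column name, and limit of 20 are assumptions, and the original question does not say which stack is in use.

        # Hypothetical sketch: server-side lookup behind an autocomplete box,
        # returning only the first 20 matches for the typed prefix.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE report_values (name TEXT)")
        conn.executemany("INSERT INTO report_values (name) VALUES (?)",
                         [("Value %05d" % i,) for i in range(30000)])

        def suggest(prefix, limit=20):
            """Return at most `limit` values starting with `prefix`, for the UI to show."""
            rows = conn.execute(
                "SELECT name FROM report_values WHERE name LIKE ? ORDER BY name LIMIT ?",
                (prefix + "%", limit),
            )
            return [name for (name,) in rows]

        print(suggest("Value 012"))   # the UI would call this on each keystroke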

    Read the article

  • SQL select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the ids for a subset of Foo. I then want to run a more complicated set of queries, but only on those ids. Is there an efficient way to do this? The best I can think of is constructing a query such as:

        SELECT ... --complicated stuff
        WHERE ... --more stuff
        AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393)

    That is, I construct a huge IN clause. Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the ids? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the ids.
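    Two alternatives to pasting thousands of literal ids into the SQL text are sketched below. This is a hypothetical illustration that assumes the cursor comes from psycopg2 (the usual PostgreSQL driver underneath SQLObject) and uses a made-up table named foo; the real column list and extra WHERE conditions would go where marked.

        # Hypothetical sketch: avoid building a giant literal IN (...) list.
        # Assumes a psycopg2 cursor against PostgreSQL and a table named foo.

        def fetch_for_ids(cursor, ids):
            # Variant 1: pass the ids as one array parameter. psycopg2 adapts a
            # Python list to a PostgreSQL array, so the SQL text stays short no
            # matter how many ids there are.
            cursor.execute("SELECT * FROM foo WHERE id = ANY(%s)", (ids,))
            return cursor.fetchall()

        def fetch_for_ids_via_temp_table(cursor, ids):
            # Variant 2: for very large id sets, stage the ids in a temporary
            # table and join; the planner can then pick an index or hash join
            # instead of scanning a huge IN list.
            cursor.execute("CREATE TEMP TABLE wanted_ids (id integer PRIMARY KEY)")
            cursor.executemany("INSERT INTO wanted_ids (id) VALUES (%s)",
                               [(i,) for i in ids])
            cursor.execute("SELECT f.* FROM foo f JOIN wanted_ids w ON w.id = f.id")
            return cursor.fetchall()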

    Read the article

  • Servlet ArrayList and HashMap problem with result

    - by nonameplum
    Hi, I have this code:

        List<Map<String, Object>> data = new ArrayList<Map<String, Object>>();
        Map<String, Object> item = new HashMap<String, Object>();
        data.clear();
        item.clear();
        int i = 0;
        while (i < 5) {
            item.put("id", i);
            i++;
            out.println("id: " + item.get("id"));
            out.println("--------------------------");
            data.add(item);
        }
        for (i = 0; i < 5; i++) {
            out.println("print data[" + i + "]" + data.get(i));
        }

    The result of that is:

        id: 0
        --------------------------
        id: 1
        --------------------------
        id: 2
        --------------------------
        id: 3
        --------------------------
        id: 4
        --------------------------
        print data[0]{id=4}
        print data[1]{id=4}
        print data[2]{id=4}
        print data[3]{id=4}
        print data[4]{id=4}

    Why is only the last element stored?
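    What the list actually stores is five references to the single item map, which keeps being mutated, so by the end every slot shows the last value. The same aliasing is easy to see in a short Python sketch (illustrative only; in the Java above the equivalent fix is to create a new HashMap inside the loop instead of reusing item):

        # Illustration of the aliasing: appending one mutable object repeatedly
        # leaves N references to the same (final) state.
        data, item = [], {}
        for i in range(5):
            item["id"] = i        # mutates the one shared dict
            data.append(item)     # appends a reference, not a copy
        print(data)               # [{'id': 4}, {'id': 4}, {'id': 4}, {'id': 4}, {'id': 4}]

        # Fix: build a fresh object on every iteration.
        data = [{"id": i} for i in range(5)]
        print(data)               # [{'id': 0}, {'id': 1}, {'id': 2}, {'id': 3}, {'id': 4}]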

    Read the article

  • VS2008 is very slow on a specific large C++ solution

    - by VioletRose
    I have a solution with 21 C++ projects and 1 VB.NET project. The IDE responds very slowly when I simply move the caret in a file or try to open a menu; the process seems to take 50% of the CPU for each movement. It only happens with this solution and only on my machine. The solution has a total of 2380 source and header files, of which 1280 are header files. I tried removing all connections to source control (Perforce), but it didn't help. Also, I have Visual Assist installed, but even after removing it (uninstalling), the same behavior continued. Any ideas?

    Read the article

  • Rendering CALayer in context uses large amounts of memory

    - by Otium
    I am taking a snapshot of a UIWebView's layer, but when I render the web view's layer in the current context my app uses 10 MB more memory, and I don't think that should be right. Here is my current code:

        CGSize imageSize = self.bounds.size;
        UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [self.layer renderInContext:context];
        _snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

    Read the article

  • NSPredicate (Core Data fetch) to filter on an attribute value being present in a supplied set (list)

    - by starbaseweb
    I'm trying to create a fetch predicate that is the analog of the SQL "IN" statement, and the syntax for doing so with NSPredicate escapes me. Here's what I have so far (the relevant excerpt from my fetching routine):

        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"BodyPartCategory"
                                                  inManagedObjectContext:_context];
        [request setEntity:entity];
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(name IN %@)",
                                  [RPBodyPartCategory defaultBodyPartCategoryNames]];
        [request setPredicate:predicate];

    The entity "BodyPartCategory" has a string attribute, "name". I have a list of names (just NSString objects) in an NSArray, as returned by [RPBodyPartCategory defaultBodyPartCategoryNames]. So let's say that array has strings such as {@"Liver", @"Kidney", @"Thyroid"}, etc. I want to fetch all 'BodyPartCategory' instances whose name attribute matches one of the strings in the set provided (technically an NSArray, but I can make it an NSSet). In SQL, this would be something like:

        SELECT * FROM BodyPartCategories WHERE name IN ('Liver', 'Kidney', 'Thyroid')

    I've gone through various portions of the Predicate Programming Guide, but I don't see this simple use case covered. Pointers/help much appreciated!

    Read the article
