Search Results

Search found 29949 results on 1198 pages for 'large scale website'.


  • Reducing lag when downloading large amount of data from webpage

    - by Mahir
    I am getting data via RSS feeds and displaying each article in a table view cell. Each cell has an image view, set to a default image. If the page has an image, the image is to be replaced with the image from the article. As of now, each cell downloads the source code from the web page, causing the app to lag when I push the view controller and when I try scrolling. Here is what I have in the cellForRowAtIndexPath: method. NSString * storyLink = [[stories objectAtIndex: storyIndex] objectForKey: @"link"]; storyLink = [storyLink stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; NSString *sourceCode = [NSString stringWithContentsOfURL:[NSURL URLWithString:storyLink] encoding:NSUTF8StringEncoding error:&error]; NSString *startPt = @"instant-gallery"; NSString *startPt2 = @"<img src=\""; if ([sourceCode rangeOfString:startPt].length != 0) { //webpage has images // find the first "<img src=...>" tag starting from "instant-gallery" NSString *trimmedSource = [sourceCode substringFromIndex:NSMaxRange([sourceCode rangeOfString:startPt])]; trimmedSource = [trimmedSource substringFromIndex:NSMaxRange([trimmedSource rangeOfString:startPt2])]; trimmedSource = [trimmedSource substringToIndex:[trimmedSource rangeOfString:@"\""].location]; NSURL *url = [NSURL URLWithString:trimmedSource]; NSData *data = [NSData dataWithContentsOfURL:url]; UIImage *image = [UIImage imageWithData:data]; cell.picture.image = image; Someone suggested using NSOperationQueue. Would this way be a good solution? EDIT: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *MyIdentifier = @"FeedCell"; LMU_LAL_FeedCell *cell = [tableView dequeueReusableCellWithIdentifier:MyIdentifier]; if (cell == nil) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"LMU_LAL_FeedCell" owner:self options:nil]; cell = (LMU_LAL_FeedCell*) [nib objectAtIndex:0]; } int storyIndex = [indexPath indexAtPosition: [indexPath length] - 1]; NSString *untrimmedTitle = [[stories objectAtIndex: storyIndex] objectForKey: @"title"]; cell.title.text = [untrimmedTitle stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceCharacterSet]]; CGSize maximumLabelSize = CGSizeMake(205,9999); CGSize expectedLabelSize = [cell.title.text sizeWithFont:cell.title.font constrainedToSize:maximumLabelSize]; //adjust the label to the the new height. 
CGRect newFrame = cell.title.frame; newFrame.size.height = expectedLabelSize.height; cell.title.frame = newFrame; //position frame of date label CGRect dateNewFrame = cell.date.frame; dateNewFrame.origin.y = cell.title.frame.origin.y + cell.title.frame.size.height + 1; cell.date.frame = dateNewFrame; cell.date.text = [self formatDateAtIndex:storyIndex]; dispatch_queue_t someQueue = dispatch_queue_create("cell background queue", NULL); dispatch_async(someQueue, ^(void){ NSError *error = nil; NSString * storyLink = [[stories objectAtIndex: storyIndex] objectForKey: @"link"]; storyLink = [storyLink stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; NSString *sourceCode = [NSString stringWithContentsOfURL:[NSURL URLWithString:storyLink] encoding:NSUTF8StringEncoding error:&error]; NSString *startPt = @"instant-gallery"; NSString *startPt2 = @"<img src=\""; if ([sourceCode rangeOfString:startPt].length != 0) { //webpage has images // find the first "<img src=...>" tag starting from "instant-gallery" NSString *trimmedSource = [sourceCode substringFromIndex:NSMaxRange([sourceCode rangeOfString:startPt])]; trimmedSource = [trimmedSource substringFromIndex:NSMaxRange([trimmedSource rangeOfString:startPt2])]; trimmedSource = [trimmedSource substringToIndex:[trimmedSource rangeOfString:@"\""].location]; NSURL *url = [NSURL URLWithString:trimmedSource]; NSData *data = [NSData dataWithContentsOfURL:url]; UIImage *image = [UIImage imageWithData:data]; dispatch_async(dispatch_get_main_queue(), ^(void){ cell.picture.image = image; }); }) //error: expected expression } return cell; //error: expected identifier } //error extraneous closing brace
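
    For reference, here is a minimal sketch of the background-loading pattern the question is reaching for, written against the same names used above (stories, LMU_LAL_FeedCell, picture); treat it as an illustration rather than the poster's final code. The compiler errors noted in the EDIT come from the missing semicolon after the dispatch_async block and the brace imbalance it causes.

    ```objective-c
    // Sketch only: goes inside cellForRowAtIndexPath:, after the labels are configured.
    dispatch_queue_t someQueue = dispatch_queue_create("cell background queue", NULL);
    dispatch_async(someQueue, ^{
        NSError *error = nil;
        NSString *storyLink = [[stories objectAtIndex:storyIndex] objectForKey:@"link"];
        storyLink = [storyLink stringByTrimmingCharactersInSet:
                         [NSCharacterSet whitespaceAndNewlineCharacterSet]];
        NSString *sourceCode = [NSString stringWithContentsOfURL:[NSURL URLWithString:storyLink]
                                                        encoding:NSUTF8StringEncoding
                                                           error:&error];
        UIImage *image = nil;
        NSRange marker = [sourceCode rangeOfString:@"instant-gallery"];
        if (sourceCode != nil && marker.length != 0) {
            // Pull the first <img src="..."> after the gallery marker, as in the question.
            NSString *trimmed = [sourceCode substringFromIndex:NSMaxRange(marker)];
            trimmed = [trimmed substringFromIndex:NSMaxRange([trimmed rangeOfString:@"<img src=\""])];
            trimmed = [trimmed substringToIndex:[trimmed rangeOfString:@"\""].location];
            NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:trimmed]];
            image = [UIImage imageWithData:data];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            // The cell may have been reused while the download ran, so re-fetch it.
            LMU_LAL_FeedCell *visibleCell =
                (LMU_LAL_FeedCell *)[tableView cellForRowAtIndexPath:indexPath];
            if (image != nil && visibleCell != nil) {
                visibleCell.picture.image = image;
            }
        });
    });   // <- this trailing semicolon is what the EDIT above is missing
    ```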

    Read the article

  • Greasemonkey script, that creates Kineticjs drag and drop canvas over every website

    - by Michael Moeller
    I'd like to put a drag and drop canvas over every website I visit in Firefox. My Greasemonkey script puts a drag and drop canvas under every page: kinetic.user.js: // ==UserScript== // @name kineticjs_example // @description Canvas Drag and Drop // @include * // @require http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js // @require http://d3lp1msu2r81bx.cloudfront.net/kjs/js/lib/kinetic-v4.7.2.min.js // ==/UserScript== var div = document.createElement( 'div' ); with( div ) { setAttribute( 'id', 'container' ); } // append at end document.getElementsByTagName( 'body' )[ 0 ].appendChild( div ); var stage = new Kinetic.Stage({ container: 'container', width: 1000, height: 1000 }); var layer = new Kinetic.Layer(); var rectX = stage.getWidth() / 2 - 50; var rectY = stage.getHeight() / 2 - 25; var box = new Kinetic.Rect({ x: rectX, y: rectY, width: 100, height: 50, fill: '#00D2FF', stroke: 'black', strokeWidth: 4, draggable: true }); // add cursor styling box.on('mouseover', function() { document.body.style.cursor = 'pointer'; }); box.on('mouseout', function() { document.body.style.cursor = 'default'; }); layer.add(box); stage.add(layer); How can I drag and drop this shape over the entire website?
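
    A minimal sketch of one way to get the overlay effect, assuming the rest of the script stays as above: give the container fixed positioning and a very high z-index so the stage is drawn over the page instead of appended below it. Note that whatever area the canvas covers will swallow clicks meant for the page underneath.

    ```javascript
    // Sketch: replace the div setup in the script above with a fixed, topmost overlay.
    var div = document.createElement('div');
    div.setAttribute('id', 'container');
    div.style.position = 'fixed';      // stays in place while the page scrolls
    div.style.top = '0';
    div.style.left = '0';
    div.style.zIndex = '2147483647';   // draw above the site's own content
    document.body.appendChild(div);

    var stage = new Kinetic.Stage({
        container: 'container',
        width: window.innerWidth,      // cover the viewport rather than a fixed 1000x1000
        height: window.innerHeight
    });
    // ...then build the layer and the draggable Kinetic.Rect exactly as in the script above.
    ```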

    Read the article

  • Need advice on cron job'ing a very large process

    - by Arms
    I have a PHP script that grabs data from an external service and saves data to my database. I need this script to run once every minute for every user in the system (of which I expect there to be thousands). My question is: what's the most efficient way to run this per user, per minute? At first I thought I would have a function that grabs all the user IDs from my database, iterates over the IDs and performs the task for each one, but I think that as the number of users grows this will take longer and no longer fit within one-minute intervals. Perhaps I should queue the user IDs and perform the task individually for each one? In that case, I'm actually unsure of how to proceed. Thanks in advance for any advice.
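
    One common pattern, sketched below with hypothetical table and column names: keep a last_polled_at timestamp per user and let several cron-launched workers each claim a batch, so the load spreads across processes instead of one loop over every user per minute.

    ```php
    <?php
    // worker.php - start several copies from cron each minute; each claims one batch.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');
    $batchSize = 200; // tune so one batch finishes comfortably inside a minute

    // Claim the users whose data is stalest (requires InnoDB for FOR UPDATE).
    $pdo->beginTransaction();
    $stmt = $pdo->prepare('SELECT id FROM users ORDER BY last_polled_at ASC LIMIT ? FOR UPDATE');
    $stmt->bindValue(1, $batchSize, PDO::PARAM_INT);
    $stmt->execute();
    $ids = $stmt->fetchAll(PDO::FETCH_COLUMN);

    if ($ids) {
        $in = implode(',', array_map('intval', $ids));
        $pdo->exec("UPDATE users SET last_polled_at = NOW() WHERE id IN ($in)");
    }
    $pdo->commit();

    foreach ($ids as $id) {
        fetchAndSaveExternalData($id); // the existing per-user import routine
    }
    ```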

    Read the article

  • Make backup of large site with 100,000+ files/images

    - by niggles
    I tried backing up our site today using the Unix 'cp' command and ended up getting our office blocked out by Plesk - it added my IP to /etc/hosts.deny because it thought I was flooding the server. After tech support fixed the issue, they suggested I go folder by folder to back it up, but there are about 10,000 folders on the site totaling half a terabyte, each with multiple sub-folders, so this isn't viable. Basically I want to be able to mirror the domain onto another domain we've got set up on the same dedicated server so I can test with live images (the bulk of our content). Any suggestions? E.g. adding some rules to open_basedir and getting PHP to recursively copy the folders to the other domain (remember it's on the same dedicated box, so it just needs to traverse the directory, not FTP things).
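
    Since both document roots live on the same box, something like the rsync sketch below avoids PHP entirely and can be throttled and resumed; the vhost paths are guesses at a typical Plesk layout, and --bwlimit (or running it under nice/ionice) keeps the I/O gentle enough not to trip flood protection.

    ```bash
    # Sketch only - adjust the vhost paths to your Plesk layout.
    rsync -a --delete --bwlimit=10000 \
        /var/www/vhosts/livedomain.com/httpdocs/ \
        /var/www/vhosts/testdomain.com/httpdocs/
    ```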

    Read the article

  • My website keeps crashing IE, can't debug

    - by Ninja rhino
    I have a website that suddenly started to crash Internet Explorer. The website loads and starts executing JavaScript, but somewhere in there the machinery explodes. I don't even get a script error; it just crashes. I've tried to manually step through every single line of JS with the built-in debugger, but then of course the problem doesn't occur. If I choose to debug the application when it crashes, I see the following message: Unhandled exception at 0x6c5dedf5 in iexplore.exe: 0xC0000005: Access violation reading location 0x00000090. The top items in the call stack look like this: VGX.dll!6c5dedf5() [Frames below may be incorrect and/or missing, no symbols loaded for VGX.dll] VGX.dll!6c594d70() VGX.dll!6c594f63() VGX.dll!6c595350() VGX.dll!6c58f5e3() mshtml.dll!6f88dd17() VGX.dll seems to be part of the VML renderer, and I am in fact using VML. I'm not surprised, because I've had so many problems with VML: attributes have to be set in a specific order, and sometimes you can't set attributes while elements are attached to the DOM, or vice versa (everything undocumented, by the way) - but those problems could usually be reproduced when debugging, unlike now. The problem also occurs in no-add-ons mode. Is there a better approach than trial and error to solve this? Edit: Adding a console that outputs every suspect modification to the DOM made the problem occur only sometimes (the console is also implemented in JavaScript on the same page; I'm able to see the output even after a crash, as the window is still visible). It appears to be some kind of race condition. I managed to track it down even further, and it seems to occur when you remove an object from the DOM too quickly after it has just been added (most likely only for VML elements with some special attribute; I didn't investigate further). And it can't be fixed by adding a dead loop in front of removeChild (a pretty bad solution anyway); the page has to be rendered by the browser once after the appendChild before you can call removeChild.
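
    If the crash really is an add-then-remove race inside the VML renderer, one workaround worth trying (an assumption, not a verified fix) is to defer the removal with a zero-delay timer, so IE gets a chance to render between the two DOM changes instead of being blocked by a busy loop:

    ```javascript
    // Sketch: defer removal so the browser can paint between appendChild and removeChild.
    function removeAfterRender(node) {
        window.setTimeout(function () {
            if (node.parentNode) {
                node.parentNode.removeChild(node);
            }
        }, 0);
    }
    ```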

    Read the article

  • Calculating average (AVG) and grouping by week on large data set takes too long

    - by caioiglesias
    I'm getting average prices by week on 7 million rows, and it's taking around 30 seconds to get the job done. This is the simple query: SELECT AVG(price) as price, YEARWEEK(FROM_UNIXTIME(timelog)) as week FROM pricehistory WHERE timelog > $range AND product_id = $id GROUP BY week The only week whose data actually changes, and is therefore worth averaging every time, is the last one, so recalculating the whole period is a waste of resources. I just wanted to know if MySQL has a tool to help out with this.
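
    Two things usually help here. First, make sure there is a composite index the WHERE clause can use; second, pre-aggregate the finished weeks into a summary table and only compute AVG live for the current week. A sketch (column names follow the query above; the summary table itself is hypothetical):

    ```sql
    -- 1) Let the filter use an index on (product_id, timelog).
    ALTER TABLE pricehistory ADD INDEX idx_product_time (product_id, timelog);

    -- 2) Hypothetical summary table holding the per-week averages.
    CREATE TABLE weekly_price (
        product_id INT NOT NULL,
        week       INT NOT NULL,
        price      DECIMAL(12,4) NOT NULL,
        PRIMARY KEY (product_id, week)
    );

    INSERT INTO weekly_price (product_id, week, price)
    SELECT product_id, YEARWEEK(FROM_UNIXTIME(timelog)) AS week, AVG(price)
    FROM pricehistory
    GROUP BY product_id, week
    ON DUPLICATE KEY UPDATE price = VALUES(price);

    -- 3) Afterwards the page only has to average the current (still changing) week
    --    and read the rest from weekly_price, instead of scanning the whole history.
    ```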

    Read the article

  • Large amount of constants in Java

    - by Lars D
    I need to include about 1 MByte of data in a Java application, for very fast and easy access in the rest of the source code. My main background is not Java, so my initial idea was to convert the data directly to Java source code, defining 1MByte of constant arrays, classes (instead of C++ struct) etc., something like this: public final/immutable/const MyClass MyList[] = { { 23012, 22, "Hamburger"} , { 28375, 123, "Kieler"} }; However, it seems that Java does not support such constructs. Is this correct? If yes, what is the best solution to this problem?
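
    Correct: a megabyte of literals cannot go into one class, because the JVM limits any single method (including the generated static initializer) to 64 KB of bytecode. The usual alternative is to ship the data as a resource and parse it once at class-load time into an immutable structure; a sketch under those assumptions (the file name and record layout are made up):

    ```java
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public final class MyData {
        public static final class MyRecord {
            public final int id;
            public final int count;
            public final String name;
            MyRecord(int id, int count, String name) { this.id = id; this.count = count; this.name = name; }
        }

        /** Parsed once when the class is first used; read-only afterwards. */
        public static final List<MyRecord> LIST = load();

        private static List<MyRecord> load() {
            List<MyRecord> out = new ArrayList<MyRecord>();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(
                    MyData.class.getResourceAsStream("/mydata.csv"), StandardCharsets.UTF_8))) {
                String line;
                while ((line = r.readLine()) != null) {
                    String[] f = line.split(",", 3);   // e.g. "23012,22,Hamburger"
                    out.add(new MyRecord(Integer.parseInt(f[0]), Integer.parseInt(f[1]), f[2]));
                }
            } catch (IOException e) {
                throw new ExceptionInInitializerError(e);
            }
            return Collections.unmodifiableList(out);
        }

        private MyData() { }
    }
    ```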

    Read the article

  • Selecting from a Large Table SQL 2005

    - by Eugene
    I have a SQL table with more than 1,000,000 rows, and I need to select from it with the query you can see below: SELECT DISTINCT TOP (200) COUNT(1) AS COUNT, KEYWORD FROM QUERIES WITH(NOLOCK) WHERE KEYWORD LIKE '%Something%' GROUP BY KEYWORD ORDER BY 'COUNT' DESC Could you please tell me how I can optimize it to speed up execution? Thank you for any useful answers.
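
    The leading wildcard in LIKE '%Something%' forces a scan of every row no matter what else changes, so the usual route is a full-text index and CONTAINS (note it does prefix matching, not arbitrary substrings). The DISTINCT is redundant once you GROUP BY, and ORDER BY 'COUNT' refers to a string literal rather than the computed alias. A sketch, with the catalog and key-index names as placeholders:

    ```sql
    -- One-off setup: PK_QUERIES stands for a unique index that already exists on the table.
    CREATE FULLTEXT CATALOG QueriesCatalog;
    CREATE FULLTEXT INDEX ON QUERIES (KEYWORD)
        KEY INDEX PK_QUERIES ON QueriesCatalog;

    -- The query itself: full-text prefix search instead of a leading-wildcard LIKE.
    SELECT TOP (200) COUNT(1) AS [COUNT], KEYWORD
    FROM QUERIES WITH (NOLOCK)
    WHERE CONTAINS(KEYWORD, '"Something*"')
    GROUP BY KEYWORD
    ORDER BY [COUNT] DESC;
    ```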

    Read the article

  • Breaking up large DataGridView for printing

    - by Hal
    Hey, I've got a single-row, 40-column DataGridView that I need to print. Since I can neither print it directly (because A4 sheets won't cut it) nor adjust its width to the width of the page itself (because the headers look terrible), I wanted to break the DataGridView into 4 separate pieces and display 10 columns per row (imagine: columns 1 to 10 in the first line, columns 11 to 20 four or five lines below, etc.). Is there an easy way to do this? I was leaning towards a more manual approach (using for loops), but I'd love to know if there's a more elegant way. Cheers
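
    There is no built-in column-wrapping for printing a DataGridView, so the manual approach is the usual one. The sketch below (grid reference and band size are placeholders) only shows the bookkeeping of splitting the 40 columns into groups of 10, which a PrintPage handler can then draw one band at a time.

    ```csharp
    // Sketch: partition the columns into bands of 10 for a banded printout.
    const int columnsPerBand = 10;
    var bands = new List<List<DataGridViewColumn>>();

    for (int start = 0; start < grid.Columns.Count; start += columnsPerBand)
    {
        var band = new List<DataGridViewColumn>();
        for (int i = start; i < Math.Min(start + columnsPerBand, grid.Columns.Count); i++)
        {
            band.Add(grid.Columns[i]);
        }
        bands.Add(band);
    }
    // In the PrintDocument.PrintPage handler, draw each band as its own
    // header line plus value line, moving the y offset down between bands.
    ```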

    Read the article

  • Downloading Large JSON File to local file using Java

    - by user1279675
    I'm attempting to download a JSON from the following URL - http://api.crunchbase.com/v/1/companies.js - to a local file. I'm using Java 1.7 and the following JSON Libraries - http://www.json.org/java/ - to attempt to make it work. Here's my code: public static void download(String address, String localFileName) { OutputStream out = null; URLConnection conn = null; InputStream in = null; try { URL url = new URL(address); out = new BufferedOutputStream( new FileOutputStream(localFileName)); conn = url.openConnection(); in = conn.getInputStream(); byte[] buffer = new byte[1024]; int numRead; long numWritten = 0; while ((numRead = in.read(buffer)) != -1) { out.write(buffer, 0, numRead); numWritten += numRead; System.out.println(buffer.length); System.out.println(" " + buffer.hashCode()); } System.out.println(localFileName + "\t" + numWritten); } catch (Exception exception) { exception.printStackTrace(); } finally { try { if (in != null) { in.close(); } if (out != null) { out.close(); } } catch (IOException ioe) { } } } When I run the code everything seems to work until midway through the loop the program seems to stop and not continue reading the JSON Object. Does anyone know why this would stop reading? How could I fix the issue?
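
    A stall partway through a large download is usually the connection rather than the copy loop; the loop above is fine. A hedged tweak worth trying: set explicit timeouts so a dead stream throws instead of hanging, and accept gzip, which shrinks a JSON file of that size considerably.

    ```java
    import java.io.InputStream;
    import java.net.URL;
    import java.net.URLConnection;
    import java.util.zip.GZIPInputStream;

    // ...inside download(), replacing the plain openConnection()/getInputStream():
    URLConnection conn = new URL(address).openConnection();
    conn.setConnectTimeout(15000);                       // fail fast if unreachable
    conn.setReadTimeout(60000);                          // throw instead of hanging mid-stream
    conn.setRequestProperty("Accept-Encoding", "gzip");
    InputStream raw = conn.getInputStream();
    InputStream in = "gzip".equals(conn.getContentEncoding())
            ? new GZIPInputStream(raw)
            : raw;
    // ...then read from 'in' with the same buffer loop as above.
    ```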

    Read the article

  • hadoop - large database query

    - by Mastergeek
    Situation: I have a Postgres DB that contains a table with several million rows and I'm trying to query all of those rows for a MapReduce job. From the research I've done on DBInputFormat, Hadoop might try and use the same query again for a new mapper and since these queries take a considerable amount of time I'd like to prevent this in one of two ways that I've thought up: 1) Limit the job to only run 1 mapper that queries the whole table and call it good. or 2) Somehow incorporate an offset in the query so that if Hadoop does try to use a new mapper it won't grab the same stuff. I feel like option (1) seems more promising, but I don't know if such a configuration is possible. Option(2) sounds nice in theory but I have no idea how I would keep track of the mappers being made and if it is at all possible to detect that and reconfigure. Help is appreciated and I'm namely looking for a way to pull all of the DB table data and not have several of the same query running because that would be a waste of time.
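
    Option (1) is straightforward with the old mapred API that DBInputFormat ships with: configure the input and pin the job to a single map task, so the table is queried exactly once. A sketch with placeholder connection details and a hypothetical DBWritable record class:

    ```java
    JobConf conf = new JobConf(MyJob.class);

    // Placeholder driver/URL/credentials; MyRecord is your own DBWritable implementation.
    DBConfiguration.configureDB(conf, "org.postgresql.Driver",
            "jdbc:postgresql://dbhost/mydb", "dbuser", "dbpass");

    DBInputFormat.setInput(conf, MyRecord.class,
            "mytable",        // table name
            null,             // optional WHERE conditions
            "id",             // ORDER BY column (DBInputFormat needs a stable order)
            "id", "payload"); // columns to read

    conf.setInputFormat(DBInputFormat.class);
    conf.setNumMapTasks(1);   // one split -> one query against the table

    // Note: with more map tasks, DBInputFormat itself adds LIMIT/OFFSET per split,
    // which is essentially option (2) - but each split still runs its own query.
    ```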

    Read the article

  • Linking to a Large address aware DLL.

    - by Canopus
    Suppose I have a DLL which is built with LARGEADDRESSAWARE linker flag set. Now I have an application dynamically linking to this DLL. Does this make my application LARGEADDRESSAWARE? If not then, does it make sense to have this flag set for any DLL?

    Read the article

  • Multilingual website using ASP and a database

    - by Noam Smadja
    I want to ask about something I'm doing. My website has 3 languages: Hebrew (main), English and Russian. I am using a database with a table that has the fields ID, fieldName, 1, 2, 3, where 1, 2 and 3 are the languages. Upon entering the website, language 1 (Hebrew) is chosen automatically until you choose another, and the choice is saved in Session("currentLanguage"). I wrote a function langstring which receives a field name and prints the value according to the language in Session("currentLanguage"): Dim languageStrings Set languageStrings = Server.CreateObject("ADODB.Recordset") languageStrings.ActiveConnection = MM_KerenDB_STRING languageStrings.Source = "SELECT fieldName,"&current_Language&"FROM Multilangual" languageStrings.CursorType = 0 languageStrings.CursorLocation = 2 languageStrings.LockType = 1 languageStrings.Open() sub langstring(fieldName) do while NOT(languageStrings.EOF) if (languageStrings.fields.item("fieldName").value = fieldName) then exit do else languageStrings.movenext end if loop if (languageStrings.EOF) then response.Write("***"&fieldName&"***") else response.Write(languageStrings.fields.item(currentLanguage+1).value) end if languageStrings.movefirst end sub and I use it like so: <div>langstring("header")</div>. I find it wasteful that I keep sending the query to the server on every single page. Since the Multilangual table does not change that often, I want to somehow cache the recordset for the current visit. I am looking for help with this, please.
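
    A hedged sketch of one way to stop re-querying on every page: read the table once per language into a plain 2-D array via GetRows and park it in Application scope (arrays are safe there, unlike recordsets or apartment-threaded dictionaries). Names follow the code above; note also that the Source string above appears to be missing spaces around FROM, and a numeric column name usually needs quoting ([2], or backticks on MySQL).

    ```asp
    <%
    ' Sketch only: cache the Multilangual table per language in Application scope.
    Function LanguageRows()
        Dim cacheKey, rs
        cacheKey = "langRows_" & current_Language
        If IsEmpty(Application(cacheKey)) Then
            Set rs = Server.CreateObject("ADODB.Recordset")
            rs.Open "SELECT fieldName, [" & current_Language & "] FROM Multilangual", _
                    MM_KerenDB_STRING
            Application.Lock
            Application(cacheKey) = rs.GetRows()   ' 2-D array: (column, row)
            Application.Unlock
            rs.Close
            Set rs = Nothing
        End If
        LanguageRows = Application(cacheKey)
    End Function

    Sub langstring(fieldName)
        Dim rows, i
        rows = LanguageRows()
        For i = 0 To UBound(rows, 2)
            If rows(0, i) = fieldName Then
                Response.Write rows(1, i)
                Exit Sub
            End If
        Next
        Response.Write "***" & fieldName & "***"
    End Sub
    %>
    ```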

    Read the article

  • Sending large XML from Silverlight to SVC (WCF)

    - by alexbf
    Hi! I want to send a big XML string to a WCF SVC service from Silverlight. It looks like anything under about 50 KB is sent correctly, but if I try to send something over that limit, my request reaches the server (BeginRequest is called) but never reaches my SVC. I get the classic "NotFound" exception. Any idea how to raise that limit? And if I can't raise it, what are my other options? Thanks, Alex
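
    That ~50 KB ceiling is usually WCF's default message and string-size quotas rather than anything in Silverlight itself. Raising them is a config change on both ends; a sketch of the service-side web.config binding is below (service and contract names are placeholders, and the Silverlight ServiceReferences.ClientConfig needs matching maxReceivedMessageSize/maxBufferSize values).

    ```xml
    <system.serviceModel>
      <bindings>
        <basicHttpBinding>
          <binding name="LargeMessages"
                   maxReceivedMessageSize="10485760"
                   maxBufferSize="10485760">
            <readerQuotas maxStringContentLength="10485760"
                          maxArrayLength="10485760" />
          </binding>
        </basicHttpBinding>
      </bindings>
      <services>
        <service name="MyApp.MyService">
          <endpoint address=""
                    binding="basicHttpBinding"
                    bindingConfiguration="LargeMessages"
                    contract="MyApp.IMyService" />
        </service>
      </services>
    </system.serviceModel>
    ```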

    Read the article

  • Declaring two large 2d arrays gives segmentation fault.

    - by pfdevil
    Hello, I'm trying to declare and allocate memory for two 2D arrays. However, when trying to assign values to itemFeatureQ[39][16816] I get a segmentation fault. I can't understand it, since I have 2 GB of RAM and am only using 19 MB on the heap. Here is the code: double** reserveMemory(int rows, int columns) { double **array; int i; array = (double**) malloc(rows * sizeof(double *)); if(array == NULL) { fprintf(stderr, "out of memory\n"); return NULL; } for(i = 0; i < rows; i++) { array[i] = (double*) malloc(columns * sizeof(double *)); if(array == NULL) { fprintf(stderr, "out of memory\n"); return NULL; } } return array; } void populateUserFeatureP(double **userFeatureP) { int x,y; for(x = 0; x < CUSTOMERS; x++) { for(y = 0; y < FEATURES; y++) { userFeatureP[x][y] = 0; } } } void populateItemFeatureQ(double **itemFeatureQ) { int x,y; for(x = 0; x < FEATURES; x++) { for(y = 0; y < MOVIES; y++) { printf("(%d,%d)\n", x, y); itemFeatureQ[x][y] = 0; } } } int main(int argc, char *argv[]){ double **userFeatureP = reserveMemory(480189, 40); double **itemFeatureQ = reserveMemory(40, 17770); populateItemFeatureQ(itemFeatureQ); populateUserFeatureP(userFeatureP); return 0; }
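
    Two things in reserveMemory look like the culprit, sketched fixed below: each row needs room for `columns` doubles (sizeof(double), not sizeof(double *) - on a 32-bit build that is 8 vs 4 bytes, so every row was only half as large as the code assumed), and the per-row NULL check should test array[i] rather than array.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    double **reserveMemory(int rows, int columns)
    {
        double **array = malloc(rows * sizeof(double *));
        int i;
        if (array == NULL) {
            fprintf(stderr, "out of memory\n");
            return NULL;
        }
        for (i = 0; i < rows; i++) {
            array[i] = malloc(columns * sizeof(double));  /* sizeof(double), not sizeof(double *) */
            if (array[i] == NULL) {                       /* check the row, not the outer pointer */
                fprintf(stderr, "out of memory\n");
                return NULL;
            }
        }
        return array;
    }
    ```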

    Read the article

  • Prevent RegEx Hang on Large Matches...

    - by developerjay
    This is a great regular expression for dates... However it hangs indefinitely on this one page I tried... I wanted to try this page ( http://pleac.sourceforge.net/pleac%5Fpython/datesandtimes.html ) for the fact that it does have lots of dates on it and I want to grab all of them. I don't understand why it is hanging when it doesn't on other pages... Why is my regexp hanging and/or how could I clean it up to make it better/efficient ? Python Code: monthnames = "(?:Jan\w*|Feb\w*|Mar\w*|Apr\w*|May|Jun\w?|Jul\w?|Aug\w*|Sep\w*|Oct\w*|Nov(?:ember)?|Dec\w*)" pattern1 = re.compile(r"(\d{1,4}[\/\\\-]+\d{1,2}[\/\\\-]+\d{2,4})") pattern4 = re.compile(r"(?:[\d]*[\,\.\ \-]+)*%s(?:[\,\.\ \-]+[\d]+[stndrh]*)+[:\d]*[\ ]?(PM)?(AM)?([\ \-\+\d]{4,7}|[UTCESTGMT\ ]{2,4})*"%monthnames, re.I) patterns = [pattern4, pattern1] for pattern in patterns: print re.findall(pattern, s) btw... when i say im trying it against this site.. I'm trying it against the webpage source.
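
    The hang is classic catastrophic backtracking rather than anything special about that page: in (?:[\d]*[\,\.\ \-]+)* the inner [\d]* can match the empty string, so a long run of digits, commas, spaces and dashes that ultimately does not match can be split across repetitions in exponentially many ways before the engine gives up. Requiring at least one digit per repetition removes the ambiguity; a hedged rewrite of pattern4:

    ```python
    import re

    monthnames = (r"(?:Jan\w*|Feb\w*|Mar\w*|Apr\w*|May|Jun\w?|Jul\w?|Aug\w*|"
                  r"Sep\w*|Oct\w*|Nov(?:ember)?|Dec\w*)")

    # Each repeated group now has to consume at least one digit, so the engine
    # can no longer backtrack over empty matches. The trailing timezone/offset
    # group can still backtrack on long runs of spaces; tighten it the same way
    # if profiling shows it matters.
    pattern4 = re.compile(
        r"(?:\d+[,. \-]+)*%s(?:[,. \-]+\d+(?:st|nd|rd|th)?)+[:\d]* ?(PM)?(AM)?"
        r"([ \-+\d]{4,7}|[UTCESTGMT ]{2,4})*" % monthnames,
        re.I)
    pattern1 = re.compile(r"(\d{1,4}[/\\\-]+\d{1,2}[/\\\-]+\d{2,4})")
    ```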

    Read the article

  • how can i execute large mysql queries fast

    - by testkhan
    I have 4 MySQL tables and a single query that JOINs across them, and I am requesting it via jQuery AJAX, but it takes far too long - about 1-3 minutes - while I want it to execute in 2-5 seconds on average. Is there any special way to execute such queries quickly?
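
    A four-table join should not take minutes on modest data, so the first step is EXPLAIN plus indexes on every column used in the ON and WHERE clauses; missing join indexes force MySQL into row-by-row scans. A sketch with placeholder table and column names:

    ```sql
    -- See which table is being scanned without an index.
    EXPLAIN
    SELECT o.id, c.name, p.title, s.status
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    JOIN products  p ON p.id = o.product_id
    JOIN shipments s ON s.order_id = o.id
    WHERE c.country = 'DE';

    -- Typical fixes: index the join and filter columns.
    ALTER TABLE orders    ADD INDEX idx_orders_customer (customer_id);
    ALTER TABLE orders    ADD INDEX idx_orders_product  (product_id);
    ALTER TABLE shipments ADD INDEX idx_shipments_order (order_id);
    ALTER TABLE customers ADD INDEX idx_customers_country (country);
    ```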

    Read the article

  • Permissions for Large Variables to Be Sent Via Stored Procedures (SQL Server)

    - by Joe Majewski
    I can't figure out a way to allow more than 4000 bytes to be received at once via a call to a stored procedure. I am storing images in the table that are around 15 - 20 kilobytes each, but upon getting them and displaying them to the page, they are always exactly 3.91 KB in size (or 4000 bytes). Do stored procedures have a limit on how much data can be sent at once? I double-checked my data, and I am indeed only receiving the first 4000 characters from the varbinary(MAX) field. Is there a permission setting to allow more than 4k bytes at once?
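
    It is not a permission: SQL Server has no rights setting that truncates results at 4000 bytes. That exact number usually means the value passes through something declared without MAX on the way out. A hedged, checklist-style sketch with placeholder object names:

    ```sql
    -- Values get clipped to 4000/8000 bytes when they pass through something
    -- declared without MAX, e.g.:
    --     DECLARE @img varbinary(8000);     -- should be varbinary(MAX)
    --     CAST(... AS nvarchar(4000))       -- should not touch binary data at all
    -- A procedure that simply returns the column needs no size hints:
    CREATE PROCEDURE dbo.GetImage
        @ImageId int
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT ImageData              -- the table column is varbinary(MAX)
        FROM dbo.Images
        WHERE Id = @ImageId;
    END
    GO
    -- If the image comes back through an OUTPUT parameter or a client-side parameter,
    -- that parameter must also be varbinary(MAX) (size = -1 in ADO.NET), or it is
    -- truncated to its declared length.
    ```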

    Read the article

  • Empty R environment becomes large file when saved

    - by user1052019
    I'm getting behaviour I don't understand when saving environments. The code below demonstrates the problem. I would have expected the two files (far-too-big.RData and right-size.RData) to be the same size, and also very small, because the environments they contain are empty. In fact, far-too-big.RData ends up the same size as bigfile.RData. I get the same results using 2.14.1 and 2.15.2, both on WinXP 5.1 SP3. Can anyone explain why this is happening? Thanks. a <- matrix(runif(1000000, 0, 1), ncol=1000) save(a, file="c:/temp/bigfile.RData") test <- function() { load("c:/temp/bigfile.RData") test <- new.env() save(test, file="c:/temp/far-too-big.RData") test1 <- new.env(parent=globalenv()) save(test1, file="c:/temp/right-size.RData") } test()
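
    The size difference comes from how save() serializes environments: an environment is written out together with its enclosing environment, and inside test() the default parent of new.env() is the function's evaluation frame, which at that point holds the big matrix that load() just put there. globalenv(), by contrast, is stored by reference, which is why right-size.RData stays small. A small sketch to confirm the parents (an illustration, not a change to the original code):

    ```r
    test_parents <- function() {
      load("c:/temp/bigfile.RData")                 # 'a' lands in this function's frame
      e_big   <- new.env()                          # parent = the frame containing 'a'
      e_small <- new.env(parent = globalenv())      # parent = global env (saved by reference)
      list(
        big_parent_has_a = exists("a", envir = parent.env(e_big)),
        small_parent     = environmentName(parent.env(e_small))
      )
    }
    test_parents()
    ```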

    Read the article

  • Best way to support large dropdowns

    - by JustAProgrammer
    Say I have a report that can be restricted by specifying some value in a dropdown. This dropdown list references a table with 30,000 records. I don't think this would be feasible to populate a dropdown with! So, what is the best way to provide the user the ability to select a value given this situation? These values do not really have categories and even if I subdivided (by having some nesting dropdown situation) by the first letter of the value, that may still leave a few thousand entries. What's the best way to deal with this?
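
    With 30,000 values the usual answer is to stop rendering them all and switch to a type-ahead/autocomplete field backed by a server-side search, so only the handful of matches for what the user typed ever reaches the browser. A sketch using jQuery UI's autocomplete (the endpoint URL and field ids are assumptions):

    ```javascript
    // The endpoint receives ?term=<typed text> and returns JSON like
    // [{ "label": "Visible text", "value": "123" }, ...] limited to e.g. 20 rows.
    $('#reportFilter').autocomplete({
        minLength: 2,
        source: '/report/filter-values',
        select: function (event, ui) {
            $('#reportFilterId').val(ui.item.value);  // hidden field posted with the report
            $('#reportFilter').val(ui.item.label);
            return false;                             // keep the label, not the raw value
        }
    });
    ```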

    Read the article

  • Intersection() and Except() is too slow with large collections of custom objects

    - by Theo
    I am importing data from another database. My process is importing data from a remote DB into a List<DataModel> named remoteData and also importing data from the local DB into a List<DataModel> named localData. I am then using LINQ to create a list of records that are different so that I can update the local DB to match the data pulled from remote DB. Like this: var outdatedData = this.localData.Intersect(this.remoteData, new OutdatedDataComparer()).ToList(); I am then using LINQ to create a list of records that no longer exist in remoteData, but do exist in localData, so that I delete them from local database. Like this: var oldData = this.localData.Except(this.remoteData, new MatchingDataComparer()).ToList(); I am then using LINQ to do the opposite of the above to add the new data to the local database. Like this: var newData = this.remoteData.Except(this.localData, new MatchingDataComparer()).ToList(); Each collection imports about 70k records, and each of the 3 LINQ operation take between 5 - 10 minutes to complete. How can I make this faster? Here is the object the collections are using: internal class DataModel { public string Key1{ get; set; } public string Key2{ get; set; } public string Value1{ get; set; } public string Value2{ get; set; } public byte? Value3{ get; set; } } The comparer used to check for outdated records: class OutdatedDataComparer : IEqualityComparer<DataModel> { public bool Equals(DataModel x, DataModel y) { var e = string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2) && ( !string.Equals(x.Value1, y.Value1) || !string.Equals(x.Value2, y.Value2) || x.Value3 != y.Value3 ); return e; } public int GetHashCode(DataModel obj) { return 0; } } The comparer used to find old and new records: internal class MatchingDataComparer : IEqualityComparer<DataModel> { public bool Equals(DataModel x, DataModel y) { return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2); } public int GetHashCode(DataModel obj) { return 0; } }
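
    The slowdown is almost certainly the GetHashCode implementations: returning 0 for every record puts all ~70k objects in one hash bucket, so Intersect and Except degrade from roughly linear to ~70,000 x 70,000 Equals calls. Hashing the same key fields the comparers compare restores the fast path; a sketch (the same hash works for both comparers, since any objects they treat as equal share Key1 and Key2):

    ```csharp
    internal class MatchingDataComparer : IEqualityComparer<DataModel>
    {
        public bool Equals(DataModel x, DataModel y)
        {
            return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2);
        }

        public int GetHashCode(DataModel obj)
        {
            unchecked
            {
                int hash = 17;
                hash = hash * 31 + (obj.Key1 != null ? obj.Key1.GetHashCode() : 0);
                hash = hash * 31 + (obj.Key2 != null ? obj.Key2.GetHashCode() : 0);
                return hash;   // equal keys -> equal hash, as IEqualityComparer requires
            }
        }
    }
    ```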

    Read the article

  • sql select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the IDs of a subset of Foo. I then want to run a more complicated set of queries, but only on those IDs. Is there an efficient way to do this? The best I can think of is creating a query such as: SELECT ... --complicated stuff WHERE ... --more stuff AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393) That is, I construct a huge "IN" clause. Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the IDs? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the IDs.
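
    A long IN (...) list works, but past a few thousand IDs it is usually cleaner and faster in PostgreSQL to give the planner something to join against - either keep the first query as a subquery, or stage the IDs in a temporary table. A sketch with placeholder table and column names:

    ```sql
    -- Option A: let the database do both stages in one round trip.
    SELECT b.*          -- the complicated stuff
    FROM bar b
    JOIN (SELECT id FROM foo WHERE some_flag = true) f ON f.id = b.foo_id
    WHERE b.other_condition = 42;

    -- Option B: if the IDs really are computed client-side, stage them in a temp table.
    CREATE TEMP TABLE foo_ids (id integer PRIMARY KEY);
    -- bulk-insert the ids here (COPY or executemany), then:
    ANALYZE foo_ids;
    SELECT b.*
    FROM bar b
    JOIN foo_ids f ON f.id = b.foo_id
    WHERE b.other_condition = 42;
    ```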

    Read the article

  • Strange mod_rewrite problem; Website works partially

    - by Camran
    I have Ubuntu 9.10 Server... I need to get mod_rewrite working... the mod_rewrite module IS LOADED. On my server, httpd.conf is empty; instead, almost everything is in a file called apache2.conf. I have also read that I have to change AllowOverride None to AllowOverride All in some file... My httpd.conf is empty, as you know, but I have a folder called sites-enabled which contains a 000-default file. This is where I have set: AllowOverride All Now my goal, as I stated in the last question, is to turn this link: http://mydomain.com/ad.php?ad_id=Bmw_nice_M3_497379462 into this: http://mydomain.com/Bmw_nice_M3_497379462 So, following the answer I got in the last question, I inserted this into the .htaccess file: Options +FollowSymLinks Options +Indexes RewriteEngine On RewriteCond %{REQUEST_URI} !^/ad\.php RewriteRule ^(.*)$ ad.php?ad_id=$1 [L] Now, this works (not fully) when entering the URL manually in the address bar, but my website isn't working now for some reason. It is like the website is locked down or something, and unless I change AllowOverride back to None it will act like that. Any ideas why? Also, another note: the links inside the rewritten URL don't work properly (some images are not shown, while some are shown)...
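
    The broken pages and missing images point at the rule itself: with no file checks it rewrites every request - stylesheets, images, existing scripts - into ad.php. A hedged variant that leaves anything that exists on disk alone usually restores the rest of the site; relative image paths inside the rewritten "pretty" URLs additionally need absolute paths or a <base href> tag.

    ```apache
    Options +FollowSymLinks
    RewriteEngine On

    # Only rewrite requests that do not match a real file or directory.
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_URI} !^/ad\.php
    RewriteRule ^(.*)$ ad.php?ad_id=$1 [L,QSA]
    ```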

    Read the article
