Search Results

Search found 13151 results on 527 pages for 'performance counters'.


  • High CPU Usage with WebGL?

    - by shoosh
    I'm checking out the nightly builds of Firefox and Chromium with WebGL support, trying a few demos and tutorials, and I can't help but wonder about the extremely high CPU load they cause. A simple demo like this one runs at a sustained 60% of my dual core. The large version of this one maxes out the CPU at 100% and has some visible frame loss. Chromium seems to be slightly better than Firefox, but not by much. I'm pretty sure that if these were desktop applications the CPU load would be negligible. So what's going on here? What is it doing? Running such simple scripts can't be that demanding. Is it the extra layer of security or something?

    Read the article

  • When should I open and close a website's cached WCF proxy?

    - by Brandon Linton
    I've browsed around the other articles on StackOverflow that relate to caching WCF proxies for reuse, and I've read this article explaining why I should explicitly open the proxy before calling anything on it. I'm still a little hazy on the best implementation details. My question is: when should I open and close proxies for service calls on a website, and what should their lifetime be (per call, per request, or per web app)? We aren't planning on leveraging cached security contexts at the moment (but it's not unforeseeable). Thanks!

    Read the article

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem, how I tried to solve it, and my question about how to improve it at the end. I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this were a fairly straightforward load, it could be loaded trivially by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous model. For example: Model C --> Foreign Key --> Model B --> Foreign Key --> Model A. So the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:
    - instantiate a multiprocessing.Process which contains a thread pool of 50 persister threads that have a thread-local connection to the database
    - read a line from the file using the csv module's DictReader
    - enqueue the dictionary to the process, where each thread creates the appropriate models by querying for the right values and persists them in the appropriate order
    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it as raw SQL statements; it took 5 minutes to run, although writing the SQL statements took me a couple of hours. So my question is: could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. That leads to my second question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just bulk-load that into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
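
    One commonly suggested speed-up is to skip the ORM layer and drive a single prepared INSERT with many parameter sets inside one transaction. The sketch below is illustrative only; the engine URL, table, and column names are hypothetical placeholders for the real schema:

      from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

      # Hypothetical schema; substitute the real pre-existing tables.
      engine = create_engine("sqlite:///batch.db")
      metadata = MetaData()
      model_a = Table("model_a", metadata,
                      Column("id", Integer, primary_key=True),
                      Column("value", String(64)))
      metadata.create_all(engine)

      rows = [{"value": "row-%d" % i} for i in range(100000)]

      # Core-level executemany: the INSERT is compiled once and executed
      # with all parameter sets inside a single transaction.
      with engine.begin() as conn:
          conn.execute(model_a.insert(), rows)

    Newer SQLAlchemy releases can also render a statement with its literal values (for example via compile(compile_kwargs={"literal_binds": True})), which is one way to dump plain SQL to a file for the database's native bulk loader; whether that is available depends on the SQLAlchemy version in use.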

    Read the article

  • fast algorithm implementation to sort very small set

    - by aaa
    Hello. This is a problem I ran into a long time ago, and I thought I would ask for your ideas. Assume I have a very small set of numbers (integers), 4 or 8 elements, that need to be sorted, fast. What would be the best approach/algorithm? My approach was to use the max/min functions. I guess my question pertains more to the implementation than to the type of algorithm. At this point it becomes somewhat hardware dependent, so let us assume an Intel 64-bit processor with SSE3. Thanks
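
    A fixed-size sorting network is the usual answer here: a constant sequence of compare/exchange steps that can be written entirely with min/max, so a compiler (or hand-written SSE intrinsics) can keep it branch-free. The sketch below is in Python purely to show the structure, using the standard 5-comparator network for 4 keys; the real speed-up would come from expressing the same pattern with branchless min/max or SSE in C.

      def sort4(a, b, c, d):
          # Optimal sorting network for 4 keys: 5 fixed compare/exchange
          # steps, each a min/max pair, with no data-dependent branches.
          a, b = min(a, b), max(a, b)
          c, d = min(c, d), max(c, d)
          a, c = min(a, c), max(a, c)
          b, d = min(b, d), max(b, d)
          b, c = min(b, c), max(b, c)
          return a, b, c, d

      assert sort4(3, 1, 4, 2) == (1, 2, 3, 4)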

    Read the article

  • Good way to make animations with cocos2d?

    - by Johnny Oin
    Hi there, I'm making a little iPhone game and would like some clues. Let's imagine: two background sprites moving pretty fast from right to left, and moving up and down with the accelerometer. I guess I can't use animations here, because the movement of the background is recalculated at each frame. So I use a schedule with an interval of 0.025s and move my sprites at each tick with: sprite.position = ccp(x, y); So here is my problem: the result is laggy, with only these two sprites. I tried both declaring the sprites in the header and getting them as CCNodes by tag; it's much the same either way. If someone could give me a hint about the best way to do this, that would be great. I wonder whether the problem is that the sprites are moving very fast, but I'm not sure. Anyway, thanks for your time. J.

    Read the article

  • Slow SelectSingleNode

    - by Simon
    I have a simply structured XML file like this: <ttest ID="ttest00001" NickName="map00001"/> <ttest ID="ttest00002" NickName="map00002"/> <ttest ID="ttest00003" NickName="map00003"/> <ttest ID="ttest00004" NickName="map00004"/> ..... This XML file can be around 2.5MB. In my source code I have a loop to get nicknames, and in each iteration I have something like this: nickNameLoopNum = MyXmlDoc.SelectSingleNode("//ttest[@ID='" + testloopNum + "']").Attributes["NickName"].Value This single line costs me 30 to 40 milliseconds. I found some old articles (dating back to 2002) saying that using some sort of compiled XPath could help, but that was 5 years ago. I wonder, is there a more modern practice to make it faster? (I'm using .NET 3.5)

    Read the article

  • Efficient alternative to merge() when building dataframe from json files with R?

    - by Bryan
    I have written the following code, which works but is painfully slow once I start executing it over thousands of records: require("RJSONIO") people_data <- data.frame(person_id=numeric(0)) json_data <- fromJSON(json_file) n_people <- length(json_data) for(person in 1:n_people) { person_dataframe <- as.data.frame(t(unlist(json_data[[person]]))) people_data <- merge(people_data, person_dataframe, all=TRUE) } output_file <- paste("people_data",".csv") write.csv(people_data, file=output_file) I am attempting to build a unified data table from a series of JSON-formatted files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example: [[1]] person_id name gender hair_color [[2]] person_id name location gender height [[...]] structure(list(person_id = "Amy123", name = "Amy", gender = "F", hair_color = "brown"), .Names = c("person_id", "name", "gender", "hair_color")) structure(list(person_id = "matt53", name = "Matt", location = structure(c(47231, "IN"), .Names = c("zip_code", "state")), gender = "M", height = 172), .Names = c("person_id", "name", "location", "gender", "height")) The end result of the code above is a matrix where the columns are every person-attribute that appears in the structure above, and the rows are the relevant values for each person. As you can see though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components, state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data. The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file. Thanks! Bryan

    Read the article

  • Is there a fast way to jump to element using XMLReader?

    - by Derk
    I am using XMLReader to read a large XML file with about 1 million elements at the level I am reading from. However, I've calculated that it will take over 10 seconds to jump to, for instance, element 500,000 using XMLReader::next([string $localname]) or XMLReader::read(void). This is not very usable. Is there a faster way to do this?

    Read the article

  • Javascript scope chain

    - by Geromey
    Hi, I am trying to optimize my program. I think I understand the basics of closures, but I am confused about the scope chain. I know that in general you want a shallow scope chain (to access variables quickly). Say I have the following object: var my_object = (function(){ //private variables var a_private = 0; return { //public variables a_public : 1, //public methods some_public : function(){ debugger; alert(this.a_public); alert(a_private); } }; })(); My understanding is that if I am in the some_public method, I can access the private variables faster than the public ones. Is this correct? My confusion comes with the scope level of this. When the code is stopped at debugger, Firebug shows the public variable inside the this keyword, and this is not inside a scope level. How fast is accessing this? Right now I am storing this.properties in local variables to avoid looking them up multiple times. Thanks very much!

    Read the article

  • Lucene search taking TOOO long.

    - by Josh Handel
    I'm using Lucene.net (2.9.2.2) on a (currently) 70 GB index. I can do a fairly complicated search and get all the document IDs back in 1-2 seconds, but actually loading all the hits (about 700 thousand in my test queries) takes 5+ minutes. We aren't using Lucene for a UI; this is a datastore between processes where we have hundreds of millions of pre-cached data elements, and the part I am working on exports a few specific fields from each found document (ergo, pagination doesn't make sense, as this is an export between processes). My question is: what is the best way to get all of the documents in a search result? Currently I am using a custom collector that does a get on the document (with a MapFieldSelector) as it's collecting. I've also tried iterating through the list after the collector has finished, but that was even worse. I'm open to ideas :-). Thanks in advance.

    Read the article

  • WPF Dragging causes renderer to stop

    - by Cameron MacFarland
    I'm having a problem with my WPF app where any sort of drag operation stops the UI from updating. The issue seems periodic: the item drags, stops, drags again, stops, etc., at 2-second intervals. It's affecting all controls, including scroll bars. I've checked this question as well as this one, and it doesn't seem to be caused by window transparencies. I'm running Win7 x64 with .NET 3.5 SP1. Does anyone know what might be causing this, or a way of figuring out what is?

    Read the article

  • Perf4J Logging Config Help

    - by manyxcxi
    I currently have a long-running process that I am trying to analyze with Perf4J. I have it writing results in CSV format to its own log file using the AsyncCoalescingStatisticsAppender and a StatisticsCsvLayout on the file appender. My question is: when I try to use the --graph option from the command line (using the perf4j jar), it isn't populating the data points; it isn't populating anything. Are my appenders set up incorrectly? The log file contains hundreds (sometimes thousands) of data points for about 10 different tag names. <appender name="perfAppender" class="org.apache.log4j.FileAppender"> <param name="File" value="perfStats.log"/> <layout class="org.perf4j.log4j.StatisticsCsvLayout"> </layout> </appender> <appender name="CoalescingStatistics" class="org.perf4j.log4j.AsyncCoalescingStatisticsAppender"> <!-- The TimeSlice option is used to determine the time window for which all received StopWatch logs are aggregated to create a single GroupedTimingStatistics log. Here we set it to 10 seconds, overriding the default of 30000 ms --> <param name="TimeSlice" value="10000"/> <appender-ref ref="ConsoleAppender"/> <appender-ref ref="CompositeRollingFileAppender"/> <appender-ref ref="perfAppender"/> </appender>

    Read the article

  • 2 approaches for tracking online users with Redis. Which one is faster?

    - by Stanislav
    Recently I found a nice blog post presenting 2 approaches for tracking online users of a web site with the help of Redis. 1) Smart keys and setting their expiration: http://techno-weenie.net/2010/2/3/where-s-waldo-track-user-locations-with-node-js-and-redis 2) Sets and intersections: http://www.lukemelia.com/blog/archives/2010/01/17/redis-in-practice-whos-online/ Can you judge which one should be faster, and why?
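
    For concreteness, here is a rough sketch of both ideas using the redis-py client; the key names, the 60-second window, and the 5-minute bucket retention are arbitrary assumptions, and only the shape of the commands matters:

      import time
      import redis

      r = redis.Redis()  # assumes a local Redis instance

      # Approach 1: one key per user with a TTL; "online" = the key still exists.
      def mark_online_ttl(user_id, window=60):
          r.set("online:%s" % user_id, int(time.time()), ex=window)

      def is_online_ttl(user_id):
          return bool(r.exists("online:%s" % user_id))

      # Approach 2: one set per minute bucket; "online" = member of any of the
      # last few buckets (the second blog post intersects buckets instead,
      # to find users present for the whole window).
      def mark_online_set(user_id):
          minute = int(time.time()) // 60
          key = "online_minute:%d" % minute
          r.sadd(key, user_id)
          r.expire(key, 300)  # let old buckets age out

      def online_last_n_minutes(n=5):
          minute = int(time.time()) // 60
          keys = ["online_minute:%d" % (minute - i) for i in range(n)]
          return r.sunion(keys)

    The first approach answers "is user X online?" with a single key lookup; the second answers "who is online?" with one set operation, and that difference in access pattern usually matters more than raw command speed.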

    Read the article

  • yuicompressor error, not sure what is wrong?

    - by mrblah
    Hi, Very confused here, trying out the yuicompressor on a simple javascript file. My js file looks like: function splitText(text) { return text.split('-')[1]; } The error is: [INFO] Using charset Cp1252 [Error] 1:20:illegal character [Error] 1:20:syntax error [Error] 1:40:illegal character [Error] 1:49:missing ; before statement [Error] 1:50:illegal character .. .. [Error] 7:3:missing | in compound statement [error] 1:0:compilation produced 38 syntax errors ... Can someone please explain to me what is wrong?

    Read the article

  • Loading animation Memory leak

    - by Ayaz Alavi
    Hi, I have written a network class that manages all network calls for my application. There are two methods, showLoadingAnimationView and hideLoadingAnimationView, that show a UIActivityIndicatorView in a view over my current view controller with a faded background. I am getting memory leaks somewhere in these two methods. Here is the code: -(void)showLoadingAnimationView { textmeAppDelegate *textme = (textmeAppDelegate *)[[UIApplication sharedApplication] delegate]; [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:YES]; if(wrapperLoading != nil) { [wrapperLoading release]; } wrapperLoading = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)]; wrapperLoading.backgroundColor = [UIColor clearColor]; wrapperLoading.alpha = 0.8; UIView *_loadingBG = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)]; _loadingBG.backgroundColor = [UIColor blackColor]; _loadingBG.alpha = 0.4; circlingWheel = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleWhiteLarge]; CGRect wheelFrame = circlingWheel.frame; circlingWheel.frame = CGRectMake(((320.0 - wheelFrame.size.width) / 2.0), ((480.0 - wheelFrame.size.height) / 2.0), wheelFrame.size.width, wheelFrame.size.height); [wrapperLoading addSubview:_loadingBG]; [wrapperLoading addSubview:circlingWheel]; [circlingWheel startAnimating]; [textme.window addSubview:wrapperLoading]; [_loadingBG release]; [circlingWheel release]; } -(void)hideLoadingAnimationView { [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:NO]; wrapperLoading.alpha = 0.0; [self.wrapperLoading removeFromSuperview]; //[NSTimer scheduledTimerWithTimeInterval:0.8 target:wrapperLoading selector:@selector(removeFromSuperview) userInfo:nil repeats:NO]; } Here is how I am calling these two methods: [NSThread detachNewThreadSelector:@selector(showLoadingAnimationView) toTarget:self withObject:nil]; and then somewhere later in the code I am using the following call to hide the animation: [self hideLoadingAnimationView]; I am getting memory leaks when I call the showLoadingAnimationView function. Is anything wrong in the code, or is there a better technique for showing a loading animation during network calls?

    Read the article

  • Why is numpy's einsum faster than numpy's built in functions?

    - by Ophion
    Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc without MKL was also used to verify the timings. Please note the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; that difference would show up in microseconds, not milliseconds: arr_1D=np.arange(500,dtype=np.double) large_arr_1D=np.arange(100000,dtype=np.double) arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500) arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500) First let's look at the np.sum function: np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D)) True %timeit np.sum(arr_3D) 10 loops, best of 3: 142 ms per loop %timeit np.einsum('ijk->', arr_3D) 10 loops, best of 3: 70.2 ms per loop Powers: np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D)) True %timeit arr_3D*arr_3D*arr_3D 1 loops, best of 3: 1.32 s per loop %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D) 1 loops, best of 3: 694 ms per loop Outer product: np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D)) True %timeit np.outer(arr_1D, arr_1D) 1000 loops, best of 3: 411 us per loop %timeit np.einsum('i,k->ik', arr_1D, arr_1D) 1000 loops, best of 3: 245 us per loop All of the above are twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect the speed up in an operation like this: np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D)) True %timeit np.sum(arr_2D*arr_3D) 1 loops, best of 3: 813 ms per loop %timeit np.einsum('ij,oij->', arr_2D, arr_3D) 10 loops, best of 3: 85.1 ms per loop Einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent? The DGEMM case for completeness: np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D)) True %timeit np.einsum('ij,jk',arr_2D,arr_2D) 10 loops, best of 3: 56.1 ms per loop %timeit np.dot(arr_2D,arr_2D) 100 loops, best of 3: 5.17 ms per loop The leading theory is from @seberg's comment that np.einsum can make use of SSE2, but numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited proof can be found by changing the dtype of the input arrays and observing the speed difference, and from the fact that not everyone observes the same trends in timings.

    Read the article

  • Image 8-connectivity without excessive branching?

    - by shoosh
    I'm writing a low-level image processing algorithm which needs to do a lot of 8-connectivity checks for pixels. For every pixel I often need to check the pixels above it, below it, and on its sides and diagonals. On the edges of the image there are special cases where a pixel has only 5 or 3 neighbors instead of 8. The naive way is, for every access, to check whether the coordinates are in range and, if not, return some default value. I'm looking for a way to avoid all these checks, since they introduce a large overhead to the algorithm. Are there any tricks to avoid them altogether?
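
    One standard trick is to pad the image with a one-pixel border filled with the default value, so every pixel of the original image has all 8 neighbours and the per-pixel bounds checks disappear. A small numpy sketch of the idea (the zero padding value is an assumption; use whatever default the algorithm expects):

      import numpy as np

      def eight_neighbours(img, default=0):
          # Pad with a 1-pixel border of the default value so every interior
          # pixel of the padded array has a full set of 8 neighbours.
          p = np.pad(img, 1, mode="constant", constant_values=default)
          h, w = img.shape
          offsets = [(-1, -1), (-1, 0), (-1, 1),
                     ( 0, -1),          ( 0, 1),
                     ( 1, -1), ( 1, 0), ( 1, 1)]
          # Each slice of the padded array is the image shifted by one offset;
          # no data-dependent branching or range checks per pixel.
          return [p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w] for dy, dx in offsets]

    The same border/sentinel idea applies in plain C: allocate the buffer with an extra row and column on each side, run the kernel only over the interior, and the edge cases vanish from the inner loop.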

    Read the article

  • has anyone used simile timeline with large amounts of data

    - by oo
    I am using the Simile timeline with large amounts of data and I keep getting a Firefox popup saying "a script appears to no longer be running, do you want to kill it?". Is there a limit to the amount of JSON you can send back to it? I have about 1000 different timeline points with dates, descriptions, etc.

    Read the article

  • Tuning MySQL to take advantage of a 4GB VPS

    - by alistair.mp
    Hello, We're running a large site at the moment which has a dedicated VPS for its database server, which runs MySQL and nothing else. At the moment all four CPU cores are running at close to 100% all of the time, but memory usage sticks at around 268MB out of an available 4096MB. What can we do to better utilise the memory and reduce the CPU load by tweaking MySQL's settings? Here is what we currently have in my.cnf: http://pastie.org/private/hxeji9o8n3u9up9mvtinbq Thanks
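
    As a hedged illustration only (the right values depend on the storage engine, the working set, and the existing my.cnf in the pastie, which isn't reproduced here), the handful of settings that usually matter most on a dedicated 4GB MySQL box look something like this:

      [mysqld]
      # If the tables are InnoDB, the buffer pool is the biggest single win;
      # 50-70% of RAM is a common starting point on a dedicated database VPS.
      innodb_buffer_pool_size        = 2560M
      innodb_log_file_size           = 256M
      innodb_flush_log_at_trx_commit = 2    # trades a little durability for much less disk I/O
      # For MyISAM-heavy schemas, key_buffer_size matters instead.
      key_buffer_size     = 256M
      tmp_table_size      = 64M
      max_heap_table_size = 64M
      query_cache_size    = 64M             # can help read-heavy workloads on MySQL 5.x

    That said, sustained 100% CPU with almost no memory in use usually points at missing indexes or unindexed joins rather than at memory settings, so the slow query log and EXPLAIN are worth checking alongside any my.cnf changes.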

    Read the article

  • High CPU usage when running several "java -version" in parallel

    - by Prateesh
    This is just out of curiosity, to understand what is going on. I have a small shell script: for ((i = 0; i < 50; i++)) do java -version & done When I run this, the CPU usage reported by sar is as below: 07:51:25 PM CPU %user %nice %system %iowait %steal %idle 07:51:30 PM all 6.98 0.00 1.75 1.00 0.00 90.27 07:51:31 PM all 43.00 0.00 12.00 0.00 0.00 45.00 07:51:32 PM all 86.28 0.00 13.72 0.00 0.00 0.00 07:51:33 PM all 5.25 0.00 1.75 0.50 0.00 92.50 As you can see, on the third data line the CPU is at 100%. My Java version is 1.5.0_22-b03.

    Read the article

  • Practical approach to concurrency control

    - by Industrial
    Hi everyone, I read this article recently and am very interested in how to take a practical approach to concurrency control on a web server. The server will run CentOS + PHP + MySQL with Memcached. How would you set it up to work? http://saasinterrupted.com/2010/02/05/high-availability-principle-concurrency-control/ Thanks!

    Read the article

  • Make compiler copy characters using movsd

    - by Suma
    I would like to copy a relatively short sequence of memory (less than 1 KB, typically 2-200 bytes) in a time-critical function. The best code for this on the CPU side seems to be rep movsd. However, I somehow cannot make my compiler generate this code. I hoped (and I vaguely remember seeing so) that using memcpy would do this via a compiler built-in intrinsic, but based on disassembly and debugging it seems the compiler is emitting a call to the memcpy/memmove library implementation instead. I also hoped the compiler might be smart enough to recognize the following loop and use rep movsd on its own, but it seems it does not. char *dst; const char *src; // ... for (int r=size; --r>=0; ) *dst++ = *src++; Is there some way to make the Visual Studio compiler generate a rep movsd sequence other than by using inline assembly?

    Read the article

  • Bulk inserts into sqlite db on the iphone...

    - by akaii
    I'm inserting a batch of 100 records, each containing a dictionary with arbitrarily long HTML strings, and by god, it's slow. On the iPhone, the run loop blocks for several seconds during this transaction. Is my only recourse to use another thread? I'm already using several for acquiring data from HTTP servers, and the sqlite documentation explicitly discourages threading with the database, even though it's supposed to be thread-safe... Is there something I'm doing extremely wrong that, if fixed, would drastically reduce the time it takes to complete the whole operation? NSString* statement; statement = @"BEGIN EXCLUSIVE TRANSACTION"; sqlite3_stmt *beginStatement; if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &beginStatement, NULL) != SQLITE_OK) { printf("db error: %s\n", sqlite3_errmsg(database)); return; } if (sqlite3_step(beginStatement) != SQLITE_DONE) { sqlite3_finalize(beginStatement); printf("db error: %s\n", sqlite3_errmsg(database)); return; } NSTimeInterval timestampB = [[NSDate date] timeIntervalSince1970]; statement = @"INSERT OR REPLACE INTO item (hash, tag, owner, timestamp, dictionary) VALUES (?, ?, ?, ?, ?)"; sqlite3_stmt *compiledStatement; if(sqlite3_prepare_v2(database, [statement UTF8String], -1, &compiledStatement, NULL) == SQLITE_OK) { for(int i = 0; i < [items count]; i++){ NSMutableDictionary* item = [items objectAtIndex:i]; NSString* tag = [item objectForKey:@"id"]; NSInteger hash = [[NSString stringWithFormat:@"%@%@", tag, ownerID] hash]; NSInteger timestamp = [[item objectForKey:@"updated"] intValue]; NSData *dictionary = [NSKeyedArchiver archivedDataWithRootObject:item]; sqlite3_bind_int( compiledStatement, 1, hash); sqlite3_bind_text( compiledStatement, 2, [tag UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text( compiledStatement, 3, [ownerID UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_int( compiledStatement, 4, timestamp); sqlite3_bind_blob( compiledStatement, 5, [dictionary bytes], [dictionary length], SQLITE_TRANSIENT); while(YES){ NSInteger result = sqlite3_step(compiledStatement); if(result == SQLITE_DONE){ break; } else if(result != SQLITE_BUSY){ printf("db error: %s\n", sqlite3_errmsg(database)); break; } } sqlite3_reset(compiledStatement); } timestampB = [[NSDate date] timeIntervalSince1970] - timestampB; NSLog(@"Insert Time Taken: %f",timestampB); // COMMIT statement = @"COMMIT TRANSACTION"; sqlite3_stmt *commitStatement; if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &commitStatement, NULL) != SQLITE_OK) { printf("db error: %s\n", sqlite3_errmsg(database)); } if (sqlite3_step(commitStatement) != SQLITE_DONE) { printf("db error: %s\n", sqlite3_errmsg(database)); } sqlite3_finalize(beginStatement); sqlite3_finalize(compiledStatement); sqlite3_finalize(commitStatement);

    Read the article
