Search Results

Search found 14282 results on 572 pages for 'performance counter'.

  • Performance issue when querying a large XML file through PHP/Ajax on an Apache server

    - by Niall
    Hey, I have a simple "live search" (results displayed while typing) web site: Ajax calls a PHP script, which queries a fairly large XML document (10,000+ lines). This is all hosted on a local Apache server (XAMPP). The size of the XML document seems to be causing a huge performance problem, with results taking around 10 seconds to come back. I'm very new to PHP (this is actually my first play about), so below is a snippet of the code in case there is something obvious:

        for ($i = 0; $i < $foodListXML->length; $i++) {
            $type     = $foodListXML->item($i)->getElementsByTagName('type');
            $foodnote = $foodListXML->item($i)->getElementsByTagName('foodnote');
            $style    = $foodListXML->item($i)->getElementsByTagName('style');
            if ($type->item(0)->nodeType == 1) {
                // find a link matching the search text
                if (stristr($type->item(0)->childNodes->item(0)->nodeValue, $q)) {
                    $currentFoodName  = $type->item(0)->childNodes->item(0)->nodeValue;
                    $currentFoodStyle = $style->item(0)->childNodes->item(0)->nodeValue;
                    $currentFoodNote  = $foodnote->item(0)->childNodes->item(0)->nodeValue;
                    if ($hint == "") {
                        $hint = $currentFoodName . " , " . $currentFoodNote
                              . " , <b>" . $currentFoodStyle . "</b>" . "<br>";
                    } else {
                        $hint = $hint . $currentFoodName . " , " . $currentFoodNote
                              . " , <b>" . $currentFoodStyle . "</b>" . "<br>";
                    }
                }
            }
        }

    Also, if having the data in a DB and accessing that is faster, then I'm open to that. All ideas really! Thanks.
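
    In case the closing question helps anyone frame an answer: a minimal sketch of the database route in Python, using the standard-library sqlite3 module (the schema and sample rows are invented; the real columns would come from the XML's type/foodnote/style elements). The idea is to import the XML once and let an indexed table answer each keystroke:

        import sqlite3

        conn = sqlite3.connect(":memory:")  # use a file path for a persistent DB
        conn.execute("CREATE TABLE foods (type TEXT, foodnote TEXT, style TEXT)")
        # One-time import; in PHP this loop would run over the parsed XML.
        conn.executemany(
            "INSERT INTO foods VALUES (?, ?, ?)",
            [("pasta", "serve hot", "italian"), ("sushi", "serve cold", "japanese")],
        )
        conn.execute("CREATE INDEX idx_foods_type ON foods (type)")

        def live_search(q, limit=10):
            # LIKE '%q%' still scans the table, but a warm in-memory table
            # beats re-walking a 10,000-line DOM on every keystroke.
            return conn.execute(
                "SELECT type, foodnote, style FROM foods WHERE type LIKE ? LIMIT ?",
                ("%" + q + "%", limit),
            ).fetchall()

        print(live_search("pas"))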

  • Is there a way to generate a short random id, avoiding collisions, without hitting persistent storage?

    - by bshacklett
    If you've used GoToMeeting, that's the type of ID I want. I'd like it to be random so that it obfuscates the number of items being tracked and short, so that it's easy to reference manually; UUIDs are way too long. I'd like to avoid hitting persistent storage merely for performance reasons, but I can't think of any other way to avoid collisions. Is 9 digits enough to do something time-based?
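
    For scale, a sketch of one time-based scheme in Python (the 5/4 split between time digits and random digits is an assumption, not a recommendation). Uniformly random 9-digit IDs hit a 50% collision chance at roughly sqrt(1e9) ≈ 32,000 concurrently-live IDs; the time-based variant below instead starts colliding when roughly a hundred IDs are minted within the same second:

        import secrets
        import time

        def short_id():
            # Seconds within the current day (0..86399) fill the top digits;
            # the random tail disambiguates IDs minted in the same second.
            seconds_today = int(time.time()) % 86400
            tail = secrets.randbelow(10_000)           # 4 random digits
            return "%05d%04d" % (seconds_today, tail)  # always 9 digits

        print(short_id())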

  • Core Animation performance on iPhone

    - by nico
    I'm trying to do some animations using Core Animation on the iPhone: CABasicAnimation on a CALayer, a straightforward animation from a random place at the top of the screen to the bottom of the screen at a random speed. I have 30 elements doing the same animation continuously until another action happens, but performance on the iPhone 3G is very sluggish when the animations start. The image is only 8k. Is this the right approach? How should I change it so it performs better?

        // image cached somewhere else.
        CGImageRef imageRef = [[UIImage imageWithContentsOfFile:
            [[NSBundle mainBundle] pathForResource:name ofType:@"png"]] CGImage];

        - (void)animate:(NSTimer *)timer {
            int startX = round(random() % 320);
            float speed = 1 / round(random() % 100 + 2);

            CALayer *layer = [CALayer layer];
            layer.name = @"layer";
            layer.contents = imageRef; // cached image
            layer.frame = CGRectMake(0, 0, CGImageGetWidth(imageRef),
                                     CGImageGetHeight(imageRef));
            int width = layer.frame.size.width;
            int height = layer.frame.size.height;
            layer.frame = CGRectMake(startX, self.view.frame.origin.y, width, height);
            [effectLayer addSublayer:layer];

            CGPoint start = CGPointMake(startX, 0);
            CGPoint end = CGPointMake(startX, self.view.frame.size.height);
            float repeatCount = 1e100;

            CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"position"];
            animation.delegate = self;
            animation.fromValue = [NSValue valueWithCGPoint:start];
            animation.toValue = [NSValue valueWithCGPoint:end];
            animation.duration = speed;
            animation.repeatCount = repeatCount;
            animation.autoreverses = NO;
            animation.removedOnCompletion = YES;
            animation.fillMode = kCAFillModeForwards;
            [layer addAnimation:animation forKey:@"position"];
        }

    The animations are fired off using an NSTimer:

        animationTimer = [NSTimer timerWithTimeInterval:0.2
                                                 target:self
                                               selector:@selector(animate:)
                                               userInfo:nil
                                                repeats:YES];
        [[NSRunLoop currentRunLoop] addTimer:animationTimer
                                     forMode:NSDefaultRunLoopMode];

  • How can I speed up the rendering of my WPF ListBox?

    - by Justin Bozonier
    I have a WPF ListBox control (view code) and I am keeping maybe 100-200 items in it. Every time the ObservableCollection it is bound to changes, though, it takes a split second to update and it freezes the whole UI. Is there a way to add elements incrementally, or something else I can do to improve the performance of this control?

  • Ruby on Rails: Accessing production database data for testing

    - by williamjones
    With Ruby on Rails, is there a way for me to dump my production database into a form that the test part of Rails can access? I'm thinking either a way to turn the production database into fixtures, or else a way to migrate data from the production database into the test database that will not get routinely cleared out by Rails. I'd like to use this data for a variety of tests, but foremost in my mind is using real data with the performance tests, so that I can get a realistic understanding of load times.

  • Ruby on Rails ActiveRecord-generated SQL on Postgres

    - by jpartogi
    Dear all, why does Ruby on Rails generate more queries in the background on Postgres than on MySQL? I haven't tried deploying Rails to production with Postgres yet, but I am just afraid these generated queries will affect performance. Do you find Rails with Postgres slower than with MySQL, knowing that it produces more queries in the background? Or is it relatively the same?

  • ASP.NET Caching: Good As Well As Bad! Page shows old content

    - by Shyju
    I have an ASP.NET website where I have implemented page-level caching using the OutputCache directive, and this boosted page performance. My pages have a few parts (some buttons, links and labels) that are specific to the logged-in user; if the user is not logged in, they see different links. Since I implemented page-level caching, even after the user logs in, the site shows the old page content (the links and buttons meant for the non-logged-in user). Caching is obviously good, but how do I get rid of this problem? Do I need to completely remove caching?

  • Switch statements: do you need the last break? (JavaScript mainly)

    - by Jon Raasch
    When using a switch() statement, you add break; between separate case: labels. But what about the last one? Normally I just leave it off, but I'm wondering if this has some performance implication I'm not thinking about. I've been wondering about this for a while and don't see it asked elsewhere on Stack Overflow, so sorry if I missed it. I'm mainly asking this question regarding JavaScript, although I'm guessing the answer will apply to all switch() statements.

  • Efficient paging with large tables in SQL Server 2008

    - by Kumar
    This is for tables with 1,000,000 rows, and possibly many, many more! I haven't done any benchmarking myself, so I wanted to get the experts' opinion. I've looked at some articles on ROW_NUMBER(), but it seems to have performance implications. What are the other choices/alternatives?
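
    One commonly suggested alternative to ROW_NUMBER() is keyset ("seek") pagination: remember the last key you served and start the next page from there, so cost stays flat however deep you page. A sketch in Python with sqlite3 so it is self-contained; on SQL Server 2008 the same shape would be SELECT TOP (@page_size) ... WHERE id > @last_id ORDER BY id:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
        conn.executemany(
            "INSERT INTO items (payload) VALUES (?)",
            [("row %d" % i,) for i in range(1000)],
        )

        def page_after(last_id, page_size=50):
            # The primary-key index satisfies both the WHERE and the ORDER BY,
            # so page 20,000 costs the same as page 1.
            return conn.execute(
                "SELECT id, payload FROM items WHERE id > ? ORDER BY id LIMIT ?",
                (last_id, page_size),
            ).fetchall()

        first = page_after(0)
        second = page_after(first[-1][0])  # pass the last id of the previous page

    The trade-off is that keyset paging steps page-by-page; it cannot jump straight to an arbitrary page number.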

  • Fastest possible way to render 480 x 320 background as iPhone OpenGL ES textures

    - by unknownthreat
    I need to display a 480 x 320 background image in OpenGL ES. The thing is, I experienced a bit of a slowdown on the iPhone when I used a 512 x 512 texture, so I am looking for the optimum way to render an iPhone-resolution background in OpenGL ES. How should I slice the background in this case to obtain the best possible performance? My main concern is speed. Should I go for 256 x 256 or other texture sizes here?
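
    To make the slicing arithmetic concrete, here is a small Python sketch that greedily tiles 480 x 320 with power-of-two tiles; this is one plausible slicing, not the only one, and whether 8 tiles beat one padded 512 x 512 texture is exactly the kind of thing to measure on the device:

        def pow2_split(n, largest=256):
            """Greedily split n into descending powers of two."""
            parts, p = [], largest
            while n > 0:
                if p <= n:
                    parts.append(p)
                    n -= p
                else:
                    p //= 2
            return parts

        cols, rows = pow2_split(480), pow2_split(320)  # [256, 128, 64, 32], [256, 64]
        tiles = [(w, h) for h in rows for w in cols]   # 8 tiles
        used = sum(w * h for w, h in tiles)
        print(used, "texels vs", 512 * 512, "when padded")  # 153600 vs 262144, ~41% waste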

  • Better way to do SELECT with GROUP BY

    - by Luca Romagnoli
    Hi, I've written a query that works:

        SELECT `comments`.*
        FROM `comments`
        RIGHT JOIN (
            SELECT MAX(id) AS id, core_id, topic_id
            FROM comments
            GROUP BY core_id, topic_id
            ORDER BY id DESC
        ) comm ON comm.id = comments.id
        LIMIT 10

    I want to know if it is possible (and how) to rewrite it to get better performance. Thanks

  • Any faster alternative?

    - by kaushik
        import math

        cost = 0
        for i in range(12):
            cost = cost + math.pow(float(q[i]) - float(w[i]), 2)
        cost = math.sqrt(cost)

    Is there any faster alternative to this? I need to improve my entire code, so I am trying to improve each statement's performance. Thanking you.
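
    Two equivalent but typically faster formulations, as a sketch assuming q and w are 12-element numeric sequences (the sample values below are made up):

        import math

        q = [1.0] * 12
        w = [2.5] * 12

        # ** with a generator expression avoids the math.pow call and the
        # redundant float() casts on every iteration:
        cost = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, w)))

        # On Python 3.8+, math.dist computes the same Euclidean distance in C:
        cost = math.dist(q, w)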

  • Is there any way to know if your supposedly fully dedicated server is really a virtually resource-shared one?

    - by siran
    Hi, sometimes my server does not respond as smoothly as I would expect (it has an Intel(R) Xeon(TM) 2.80GHz quad-core CPU), even though, for example, the top command reports a low load (< 0.5) and the CPUs are almost completely idle. I may have internet connectivity issues, so I don't really know if it's me or the server itself. Is there any kind of benchmarking script (or something analogous) I could run to see the actual performance of the server?
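
    One low-tech check is a fixed CPU-bound workload timed repeatedly; a minimal sketch in Python. On genuinely dedicated hardware the timings cluster tightly, while large run-to-run variance on an otherwise idle box suggests a noisy neighbour. (On Linux, a nonzero %st "steal" column in top, or a "hypervisor" flag in /proc/cpuinfo, is a more direct tell that you are in a VM.)

        import time

        def workload():
            # A fixed amount of pure-CPU work.
            total = 0
            for i in range(5_000_000):
                total += i * i
            return total

        timings = []
        for _ in range(5):
            start = time.perf_counter()
            workload()
            timings.append(time.perf_counter() - start)

        print(["%.3fs" % t for t in timings])
        print("spread: %.1f%%" % (100 * (max(timings) - min(timings)) / min(timings)))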

  • Compiling .xsl files into .class files

    - by Alex Ciminian
    I'm currently working on a Java web project (Spring) that involves heavy use of XSL transformations. The stylesheets seldom change, so they are currently cached. I was thinking of improving performance by compiling the XSLs into class files so they wouldn't have to be interpreted on each request. I'm new to Java, so I don't really know the ecosystem that well. What's the best way of doing this (libraries, methods, etc.)? Thanks, Alex

  • Hadoop: Processing large serialized objects

    - by restrictedinfinity
    I am working on an application that processes (and merges) several large serialized Java objects (on the order of GBs) using the Hadoop framework. Hadoop stores the blocks of a file distributed across different hosts, but deserialization requires all the blocks to be present on a single host, which is going to hurt performance drastically. How can I deal with this situation, where the different blocks cannot be processed individually, unlike text files?

  • calendar.getInstance() or calendar.clone()

    - by Pangea
    I need to make a copy of a given date hundreds of times (I cannot pass by reference). I am wondering which of the two options below is better:

        newTime = Calendar.getInstance().setTime(originalDate);

    or

        newTime = originalDate.clone();

    Performance is the main concern here. Thx.

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures for implementing a simple open-source temporal object database, and currently I'm very fond of using persistent red-black trees to do it. My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing a temporal database almost free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends).

    All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it. Here's my line of thought:

    - Since all writes are done asynchronously, and using a persistent data structure means the previous (and currently valid) structure is never invalidated, write time isn't really a bottleneck.
    - There is some literature on structures like this designed exactly for disk usage, but it seems to me that those techniques add read overhead to achieve faster writes, and I think exactly the opposite trade-off is preferable. Also, many of them end up with multi-versioned trees that aren't strictly immutable, which is crucial for justifying the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and there should be good garbage-collection logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta-compression system could also be considered.
    - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they require the fewest rotations.

    But there are some possible pitfalls along the way:

    - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with web applications most of the time. When real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could lead to commit conflicts, though I fail to think of a good example of when that could happen. Commit conflicts can also occur in a normal RDBMS if two threads are working with the same data, right?
    - The overhead of having an immutable interface like this will grow exponentially and everything is doomed to fail soon, so this is all a bad idea.

    Any thoughts? Thanks!

    edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
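
    For readers tripped up by the term (see the edit above), a minimal path-copying sketch in Python: insert builds a new root and copies only the O(log n) nodes on the search path, sharing everything else with the previous version, so old roots remain valid read-only snapshots. A plain unbalanced BST is used for brevity; red-black recoloring and rotations would layer on top of the same copy-on-write pattern:

        from typing import NamedTuple, Optional

        class Node(NamedTuple):
            key: int
            left: Optional["Node"]
            right: Optional["Node"]

        def insert(node, key):
            # Copy-on-write: rebuild only the nodes along the search path.
            if node is None:
                return Node(key, None, None)
            if key < node.key:
                return node._replace(left=insert(node.left, key))
            if key > node.key:
                return node._replace(right=insert(node.right, key))
            return node  # key already present; version unchanged

        v1 = None
        for k in (5, 2, 8, 1):
            v1 = insert(v1, k)
        v2 = insert(v1, 3)           # a new version...
        assert v2.right is v1.right  # ...sharing the untouched right subtree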

  • Is there a good .Net CSS aggregator that combines style sheets and minifies them?

    - by vfilby
    I am looking to see if there is an open-source/free project that provides a CSS manager. I want this mainly for performance tweaking, and I'm hoping there is a ready-made project rather than something built from scratch. Features I am looking for include:

    - Combines multiple .css files into a single .css file
    - Optionally minifies the resulting .css file
    - Works well with .NET (a user control, custom handler, etc.)

    Is there a project out there that handles this?

  • If-else-if versus map

    - by perezvon
    Hi, suppose I have an if/else-if chain like this:

        if (x.GetId() == 1) {
        }
        else if (x.GetId() == 2) {
        }
        // ... 50 more else-if statements

    What I wonder is: if I keep a map instead, will it be any better in terms of performance? (Assume the keys are integers.)
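
    A sketch of the map idea in Python, just to keep this listing's examples in one language (a dict stands in for the C++ std::map, and the handler names are hypothetical). A dict costs one hash lookup and a std::map one O(log n) tree descent; either beats up to ~50 sequential comparisons:

        def handle_one(x):
            return "one"

        def handle_two(x):
            return "two"

        HANDLERS = {
            1: handle_one,
            2: handle_two,
            # ... ~50 more entries
        }

        def dispatch(x_id, x):
            try:
                return HANDLERS[x_id](x)
            except KeyError:
                raise ValueError("no handler for id %d" % x_id)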
