Search Results

Search found 14924 results on 597 pages for 'selector performance'.

  • PHP: Which DB/DB Engine supports search well?

    - by KeyStroke
    Hi, I'm starting a site which relies heavily on search. While it's probably going to search basic meta data in the beginning, it might grow to something bigger in the future. So which DB/DB Engine is best in your opinion when it comes to search performance and future scalability? Appreciate your help

  • Complex nib is slow to load

    - by Dan Ray
    I'm looking for advice about a nib that's very slow to load. It's big and complex, with lots of subviews and doodads. When I fire my UINavController to push it, it's noticeably laggy (maybe almost a second) on my 3G. It sits there with the table cell selected and nothing else happening for long enough to make you wonder if it's broken. I wonder about pre-loading it in another thread while the user is on the previous view. I could probably fire the selector in the background with a delay in the previous view's viewDidAppear, and then keep it in a property until push time comes. Thoughts?

  • Changing image domain / path in css for production?

    - by Neil
    Currently, for things like background images, our CSS files have no domain specified. This works in both our development and production environments.

        background-image: url(/images/bg.png);

    For performance reasons (a cookie-less domain), we'd like to switch to this:

        background-image: url(http://staticimagedomain.com/images/bg.png);

    Ideally we don't hard-code the domain, so our development environments can still pull images locally. Any thoughts on how best to achieve this?
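    One hedged approach is to keep the root-relative URLs in source control and rewrite them at build/deploy time, so development keeps pulling locally. A minimal Python sketch (the domain is the one from the question; the regex assumes url(/images/...) paths and is illustrative, not a drop-in tool):

        # rewrite_css.py -- deploy-time sketch: prefix root-relative image URLs
        # with a cookie-less static domain. Adjust the pattern to your layout.
        import re
        import sys

        STATIC_HOST = "http://staticimagedomain.com"

        def rewrite(css_text):
            # Only touch root-relative urls like url(/images/bg.png);
            # absolute URLs and data URIs are left alone.
            pattern = r"url\((['\"]?)(/images/[^)'\"]+)\1\)"
            repl = lambda m: "url(%s%s%s%s)" % (m.group(1), STATIC_HOST, m.group(2), m.group(1))
            return re.sub(pattern, repl, css_text)

        if __name__ == "__main__":
            with open(sys.argv[1]) as fh:
                sys.stdout.write(rewrite(fh.read()))

    Run it as a step of the production deploy (e.g. rewrite_css.py site.css > site.prod.css); an equivalent alternative is an asset-helper in the templating layer that prepends the static host only in production.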

  • Do I have to duplicate this function? - jQuery

    - by Josh
    I'm using this function to create a transparent overlay of information over the current div for a web-based mobile app. Background: I'm using jQTouch, so I have separate divs rather than separate pages loading fresh.

        $(document).ready(function() {
            $('.infoBtn').click(function() {
                $('#overlay').toggleFade(400);
                return false;
            });
        });

    When I click the button on the first div, the function works fine. When I'm on the next div and click the same button, nothing appears to happen on that div, but if I go back to the first div I see the overlay has actually been triggered there instead. If I duplicate the function and change the CSS selector names it works for both, but do I have to do this for each use? Is there a way to use the same selectors but load the different content in each variation?

  • How to choose between UUIDs, autoincrement/sequence keys and sequence tables for database primary keys?

    - by Tim
    I'm looking at the pros and cons of these three primary methods of coming up with primary keys for database rows. Assuming I am using a database that supports more than one of these methods, is there a simple heuristic to determine the best option for me? What effect do considerations such as distributed/multiple masters, performance requirements, ORM use, security, and testing have on the choice? Are there any unexpected drawbacks one might run into?
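    For illustration, the multi-master consideration mostly comes down to whether a key can be generated in application code without asking the database. A tiny hedged sketch (the table and column names are hypothetical):

        # UUID keys can be minted on any node with no coordination, which is why
        # they suit distributed / multi-master setups; autoincrement and sequence
        # keys are only known after the database assigns them.
        import uuid

        def new_order_row(customer_id):
            return {"id": str(uuid.uuid4()), "customer_id": customer_id}

        print(new_order_row(42)["id"])   # e.g. '0f8fad5b-d9cb-469f-a165-70867728950e'

    The trade-off is that UUIDs are wider (16 bytes vs. 4-8) and random ones arrive in non-sequential order, which matters for clustered indexes.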

  • jquery: Toggle elements based on result from a function

    - by Svish
    I have a number of table rows whose visibility I would like to toggle. They should be visible if a data item I set on them earlier equals the value currently selected in a form. This is what I have so far:

        $('#category-selector').change(function(event) {
            var category_id = $(this).val();
            if (!category_id) {
                $('tr', '#table tbody').show();
            } else {
                $('tr', '#table tbody').toggle();
            }
        });

    Of course this just toggles them all on and off. I thought I could give toggle() a function that decides whether each row should be shown, but it turns out I can only give it a boolean condition, which makes it an all-or-nothing deal. So I have this function:

        function() {
            return $(this).data('category_id') == category_id;
        }

    How can I use that to go through all the rows and toggle each one on or off? Or is there a better approach? What should I do?

  • Will First() perform the OrderBy()?

    - by Martin
    Is there any difference in (asymptotic) performance between

        Orders.OrderBy(order => order.Date).First()

    and

        Orders.Where(order => order.Date == Orders.Max(x => x.Date));

    i.e. will First() perform the OrderBy()? I'm guessing no. MSDN says that enumerating the collection via foreach or GetEnumerator does, but the phrasing does not exclude other extension methods.

  • New to Drupal -- how should I create a main page with a mix of dynamic and static content?

    - by Erode
    I apologize for the terribly basic question, but I'm not even a particularly adept web dev. I've read that Drupal is great if you know exactly what you want to do (then the API is handy), but I don't even know what I need yet; that is what I am hoping to gain from this discussion. I want a main content page with a fancy content slider (using jQuery or something) that acts as a selector for showing some basic information on two or three subjects. I'm stuck on where I should be writing this mix of markup. In the template? As "content" created through a content type? Since a fair amount of CSS and markup is required to do this, I don't know whether I can do it through the built-in "basic page" content type. I'm looking for pointers that will help me learn what Drupal can and cannot do. Thanks for reading; let me know if I need to clarify anything.

  • database design: a table with a large number of columns (50+) or many sub-tables with a small number of columns?

    - by Guillaume
    In our project we already have a lot of tables (100+). Some of them contain a lot of columns (50-100), and we face the need to add more columns from time to time. What do you think is best, from a maintenance and performance point of view: splitting these huge tables into smaller entities, or keeping the tables the way they are? We are using an ORM tool, so we don't need to write custom queries.

  • Pure python implementation of greenlet API

    - by Tristan
    The greenlet package is used by gevent and eventlet for asynchronous IO. It is written as a C extension and therefore doesn't work with Jython or IronPython. If performance is of no concern, what is the easiest approach to implementing the greenlet API in pure Python? A simple example:

        def test1():
            print 12
            gr2.switch()
            print 34

        def test2():
            print 56
            gr1.switch()
            print 78

        gr1 = greenlet(test1)
        gr2 = greenlet(test2)
        gr1.switch()

    This should print 12, 56, 34 (and not 78).
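    For illustration only, here is a rough pure-Python stand-in built on threads: only one "greenlet" is allowed to run at a time, and switch() hands control over. It ignores greenlet's parent/throw/return-value semantics and uses real threads under the hood, so it only mimics the switching order; FakeGreenlet and _main are hypothetical names, not part of any real API.

        import threading

        class FakeGreenlet(object):
            _current = None                      # the FakeGreenlet currently allowed to run

            def __init__(self, run=None):
                self._run = run
                self._resume = threading.Event()  # set when this greenlet may continue
                self._thread = None

            def _bootstrap(self):
                self._run()
                _main.switch()                   # a real greenlet returns to its parent;
                                                 # here we simply hand control back to main

            def switch(self):
                previous = FakeGreenlet._current
                FakeGreenlet._current = self
                if previous is not None:
                    previous._resume.clear()     # prepare to sleep *before* waking the target
                if self._run is not None and self._thread is None:
                    self._thread = threading.Thread(target=self._bootstrap)
                    self._thread.daemon = True   # don't keep the process alive for stuck greenlets
                    self._thread.start()
                else:
                    self._resume.set()
                if previous is not None:
                    previous._resume.wait()      # block until someone switches back to us

        _main = FakeGreenlet()                   # stands in for the main thread
        FakeGreenlet._current = _main

        def test1():
            print(12)
            gr2.switch()
            print(34)

        def test2():
            print(56)
            gr1.switch()
            print(78)

        gr1 = FakeGreenlet(test1)
        gr2 = FakeGreenlet(test2)
        gr1.switch()                             # prints 12, 56, 34 -- 78 is never reached

    A generator-based version is also possible for purely sequential switching, but plain generators cannot switch from arbitrary call depth, which is the part of the greenlet API that usually forces either threads or a C extension.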

  • Is there any advantage for using a library other than Hibernate for JPA?

    - by Jeduan Cornejo
    Hi, I've been using JPA for some time now and have been on projects where we used both Hibernate Annotations and Toplink Essentials. AFAIK the project leader chose Toplink because Netbeans had it integrated and it seemed the easy thing to do. However, when looking for help, most of the literature seems to assume that you are using Hibernate as the JPA provider. So the question is: have you found any advantage, performance or otherwise, in not using Hibernate, the de facto standard for JPA?

  • solr schema for article->paragraph structure

    - by Ke
    Hi guys, I want to index some articles and show the paragraph number in the search results. So I guess the Solr schema should look like this:

        article_id, paragraph_number, paragraph_content

    Therefore I need to parse each article first, extract the paragraphs, and index them one by one. I'm worried about performance, since one article can contain 100 paragraphs. Any suggestions?
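    For what it's worth, the splitting step itself is cheap; 100 paragraphs per article just means 100 small Solr documents. A hedged sketch of building those documents (field names follow the schema proposed above; how you then send them to Solr -- pysolr, the update handler, etc. -- is left open):

        # Turn one article into per-paragraph documents for the schema above.
        def paragraph_docs(article_id, text):
            # Split on blank lines; adjust to however your source marks paragraphs.
            paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
            return [
                {
                    "id": "%s-%d" % (article_id, n),   # unique key per paragraph
                    "article_id": article_id,
                    "paragraph_number": n,
                    "paragraph_content": p,
                }
                for n, p in enumerate(paragraphs, start=1)
            ]

        sample = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
        docs = paragraph_docs("article-42", sample)
        print(len(docs), docs[0]["paragraph_number"])   # 3 1

    Indexing cost mostly scales with the total amount of text rather than the number of documents, so splitting per paragraph adds little beyond the extra stored fields.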

  • Recommended book for Sql Server query optimisation

    - by Patrick Honorez
    Even though I have passed a certification exam on SQL Server design and implementation, I have no clue how to trace, debug, or optimise performance in SQL Server. The database I built is now really business critical and getting big, so it is time for me to dig into optimisation, especially regarding when and where to add indexes. Can you recommend a good book on this subject? (Smaller is better :) Just in case: I am using SQL Server 2008. Thanks

  • J2ME - Arrays vs. Vector?

    - by Galaxy
    If we have two implementations of string split for J2ME, one returning a Vector and the other returning an array, which is the better choice in terms of performance on handheld devices?

  • Consistent hashing with memcache

    - by Industrial
    Hi everyone, I am setting up a new web app that will, on the client side, run against multiple memcached servers for reliability and performance. Would it be wise for us to use something like Flexihash to handle how the data is spread across the memcached servers? Reference: http://github.com/pda/flexihash Thanks!
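    For context, what a library like Flexihash provides is consistent hashing: each server owns many points on a hash ring, a key goes to the nearest point, and adding or removing a server only remaps the keys that belonged to it (it distributes keys; it does not by itself replicate them). A rough illustration of the mechanism, sketched in Python rather than PHP just to show the idea:

        # Minimal consistent-hash ring with virtual nodes.
        import bisect
        import hashlib

        class HashRing(object):
            def __init__(self, servers, replicas=100):
                self._ring = []                          # sorted list of (hash, server)
                for server in servers:
                    for i in range(replicas):
                        point = self._hash("%s#%d" % (server, i))
                        bisect.insort(self._ring, (point, server))

            @staticmethod
            def _hash(value):
                return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)

            def server_for(self, key):
                idx = bisect.bisect(self._ring, (self._hash(key), ""))
                if idx == len(self._ring):
                    idx = 0                              # wrap around the ring
                return self._ring[idx][1]

        ring = HashRing(["memcache1:11211", "memcache2:11211", "memcache3:11211"])
        print(ring.server_for("user:1234"))

    With naive hash(key) % N, dropping one of N servers remaps almost every key and empties most of the cache; with the ring, only roughly 1/N of the keys move.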

  • nonatomic property in model class when using NSOperationQueue (iPhone)?

    - by Andrew B.
    I have a custom model class with an NSMutableData ivar that will be accessed by custom NSOperation subclasses (using an NSOperationQueue). I think I can guarantee thread-safe access to the ivar from multiple NSOperations by using dependencies, and I can guarantee that I don't access the ivar from other code (say my main app thread) by waiting until the Q has finished all operations. Should I use a nonatomic property specification, or leave it atomic? Is there a significant impact on performance?

  • Python Logging across multiple classes and files; how to configure so as to be easily disabled?

    - by mellort
    Currently, I have something like this in all of my classes:

        # Import logging to log information
        import logging

        # Set up the logger
        LOG_FILENAME = 'log.txt'
        logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)

    This works well, and I get the output I want, but I would really like to have all of this configuration in one place, be able to just do something like import myLogger and then start logging, and then be able to go into that one file and turn logging off when I need an extra performance boost. Thanks in advance
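    A common pattern, sketched below with assumed names (mylogger, ENABLED): keep the configuration in one module, hand out named loggers from it, and disable everything from that single file when needed.

        # mylogger.py -- the one place logging is configured (or silenced).
        import logging

        LOG_FILENAME = "log.txt"
        ENABLED = True                          # flip to False for an extra performance boost

        logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
        if not ENABLED:
            logging.disable(logging.CRITICAL)   # suppresses every message at CRITICAL and below

        def get_logger(name):
            return logging.getLogger(name)

        # in any other module:
        #   from mylogger import get_logger
        #   log = get_logger(__name__)
        #   log.debug("doing something expensive")

    Because logging.getLogger() returns per-name loggers that all inherit the root configuration, no class needs its own basicConfig call.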

  • speed of map() vs. list comprehension vs. numpy vectorized function in python

    - by mcstrother
    I have a function foo(i) that takes an integer and takes a significant amount of time to execute. Will there be a significant performance difference between any of the following ways of initializing 'a'?

        a = [foo(i) for i in xrange(100)]

        a = map(foo, range(100))

        vfoo = numpy.vectorize(foo)
        vfoo(range(100))

    (I don't care whether the output is a list or a numpy array.) Is there some other better way of doing this? Thanks.
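    A hedged note on what to expect: if foo() itself is expensive, the per-call cost dominates and the three forms differ little; numpy.vectorize in particular is a convenience wrapper around a Python-level loop, not compiled code. An easy way to measure on the real foo, with a placeholder used here:

        # Time the three approaches; swap the placeholder foo() for the real one.
        import timeit
        import numpy

        def foo(i):
            return i * i                        # placeholder for the expensive function

        def with_listcomp():
            return [foo(i) for i in range(100)]

        def with_map():
            return list(map(foo, range(100)))   # list(...) so Python 2 and 3 behave alike

        vfoo = numpy.vectorize(foo)
        def with_vectorize():
            return vfoo(range(100))

        for fn in (with_listcomp, with_map, with_vectorize):
            print(fn.__name__, timeit.timeit(fn, number=1000))

    If foo can be rewritten to operate on whole numpy arrays at once (true vectorization), that is usually where the real speedup comes from.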

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users.

    For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id] [bigint] IDENTITY(1,1) NOT NULL,
            [userid] [int] NULL,
            [variable] [varchar](8) NULL,
            [value] [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page (something like this):

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
                       GROUP BY variable)

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846.

    We have followed the advice of the database tuning tools in indexing the tables, and against our data this is the best-performing version of the select query we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides.

    So the question is: how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance.

    NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) in using the NOLOCK directive here.
