Search Results

Search found 5021 results on 201 pages for 'limit'.

Page 48/201

  • Paginating data, has to be a better way

    - by John Tyler
    I've read ten or so "tutorials", and they all involve the same two steps: pull a count of the data set, then pull the relevant page of it with LIMIT and OFFSET. That is:

        SELECT COUNT(*) FROM table WHERE something = ?
        SELECT * FROM table WHERE something = ? LIMIT ? OFFSET ?

    Two very similar queries, no? There has to be a better way to do this. My dataset is 600,000+ rows and already sluggish (results are determined by over 30 WHERE clauses, which vary from user to user, but are properly indexed of course).
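    A minimal sketch of two common alternatives, assuming MySQL and the placeholder table/column names from the question (the page size and the id column are hypothetical): SQL_CALC_FOUND_ROWS lets a single scan serve both the page and the total count, and keyset pagination avoids OFFSET entirely by remembering where the previous page ended.

        -- One scan serves both the page and the total count (MySQL-specific)
        SELECT SQL_CALC_FOUND_ROWS * FROM `table` WHERE something = ? LIMIT 20 OFFSET 40;
        SELECT FOUND_ROWS();

        -- Keyset pagination: seek past the last id of the previous page instead of using OFFSET
        SELECT * FROM `table`
        WHERE something = ? AND id > ?   -- ? = last id seen on the previous page
        ORDER BY id
        LIMIT 20;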

    Read the article

  • MongoDB query against geospatial index with maxDistance fails from node.js client

    - by user1735497
    I want to query against a geospatial index in MongoDB (designed after this tutorial: http://www.mongodb.org/display/DOCS/Geospatial+Indexing). When I execute this from the shell, everything works fine:

        db.sellingpoints.find({ location: { $near: [48.190120, 16.270895], $maxDistance: 7 / 111.2 } });

    But the same query from my Node.js application (using mongoskin or mongoose) won't return any results until I set the distance value to a very high number (5690):

        db.collection('sellingpoints')
            .find({ location: { $near: [lat, lng], $maxDistance: distance / 111.2 } })
            .limit(limit)
            .toArray(callback);

    Does anyone have any idea how to fix that?

    Read the article

  • Improve long mysql query

    - by John Adawan
    I have a PHP MySQL query like this:

        $query = "SELECT * FROM articles FORCE INDEX (articleindex)
                  WHERE category='$thiscat' and did>'$thisdid' and mid!='$thismid'
                  and status='1' and group='$thisgroup' and pid>'$thispid' LIMIT 10";

    As an optimization, I've indexed all of these columns in articleindex, and I use FORCE INDEX to make MySQL use that index, supposedly for faster processing. But it seems that this query is still quite slow, and it's causing a jam and maxing out the MySQL connection limit. Let's discuss how we can improve such a long query.
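    A hedged sketch of one direction to investigate, assuming MySQL and the column names from the query (the index name and the literal values are hypothetical): a composite index that starts with the equality columns usually helps more than forcing a single index, and EXPLAIN shows whether the optimizer actually uses it.

        -- Equality columns first, then one range column (did); note `group` is a reserved word.
        -- Predicates on mid and pid are filtered after the index narrows the rows.
        CREATE INDEX idx_articles_lookup ON articles (category, status, `group`, did);

        -- Check which index is chosen and how many rows MySQL expects to examine
        EXPLAIN SELECT * FROM articles
        WHERE category = 'news' AND did > 100 AND mid != 5
          AND status = '1' AND `group` = 'a' AND pid > 0
        LIMIT 10;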

    Read the article

  • Advantage Database Server: in-memory queries.

    - by ie
    As far as I know, ADS v.10 tries to keep the result of a query in memory until it gets quite large. The same should be true for the __output table and for temporary tables. When the result becomes large, swapping starts. The question is: what memory limit is set per query, per worker, or otherwise? Can this limit be configured? Thanks.

    Read the article

  • MySQL ORDER BY rand(), name ASC

    - by Josh K
    I would like to take a database of, say, 1000 users, select 20 random ones (ORDER BY rand(), LIMIT 20), and then order the resulting set by name. I came up with the following query, which does not work like I hoped:

        SELECT * FROM users WHERE 1 ORDER BY rand(), name ASC LIMIT 20
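    A minimal sketch of the usual fix, assuming MySQL: pick the 20 random rows in a derived table first, then sort just that small result by name in the outer query.

        -- Inner query picks 20 random users; outer query orders only those 20 by name
        SELECT t.*
        FROM ( SELECT * FROM users ORDER BY rand() LIMIT 20 ) AS t
        ORDER BY t.name ASC;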

    Read the article

  • Limiting JMS Destination Instances

    - by jumponadoughnut
    Is it possible to limit the number of JMS receiver instances to a single instance, i.e. only process a single message from a queue at any one time? The reason I ask is that I have a fairly intensive render-type process to run for each message (potentially many thousands of them). I'd like to limit the execution of this code to a single instance at a time. My application server is JBoss AS 6.0. Any help much appreciated.

    Read the article

  • Make Python display help screen if no action is given

    - by luckytaxi
    Let's say a user runs the script without giving any parameters. How can I make it so that it defaults to ./myscript.py -h, so that it shows them the help info?

        parser = optparse.OptionParser()
        parser.add_option("-d", "--directory", metavar="DIR", help="Directory to scan for big files")
        parser.add_option("-e", "--email", metavar='EMAIL', help='email to send the list to')
        parser.add_option("-l", "--limit", metavar='LIMIT', help='return number of files')

    Read the article

  • MySQL multiple dependent subqueries, painfully slow

    - by matt80
    I have a working query that retrieves the data that I need, but unfortunately it is painfully slow (runs over 3 minutes). I have indexes in place, but I think the problem is the multiple dependent subqueries. I've been trying to rewrite the query using joins, but I can't seem to get it to work. Any help would be greatly appreciated.

    The tables: basically, I have 2 tables. The first (prices) holds the prices of items in a store. Each row is the price of an item that day, and new rows are added every day with an updated price. The second table (watches_US) holds the item information (name, description, etc).

        CREATE TABLE `prices` (
          `prices_id` int(11) NOT NULL auto_increment,
          `prices_locale` enum('CA','DE','FR','JP','UK','US') NOT NULL default 'US',
          `prices_watches_ID` char(10) NOT NULL,
          `prices_date` datetime NOT NULL,
          `prices_am` varchar(10) default NULL,
          `prices_new` varchar(10) default NULL,
          `prices_used` varchar(10) default NULL,
          PRIMARY KEY (`prices_id`),
          KEY `prices_am` (`prices_am`),
          KEY `prices_locale` (`prices_locale`),
          KEY `prices_watches_ID` (`prices_watches_ID`),
          KEY `prices_date` (`prices_date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=61764 ;

        CREATE TABLE `watches_US` (
          `watches_ID` char(10) NOT NULL,
          `watches_date_added` datetime NOT NULL,
          `watches_last_update` datetime default NULL,
          `watches_title` varchar(255) default NULL,
          `watches_small_image_height` int(11) default NULL,
          `watches_small_image_width` int(11) default NULL,
          `watches_description` text,
          PRIMARY KEY (`watches_ID`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    The query retrieves the last 10 price changes over a period of 30 hours, ordered by the size of the price change. So I have subqueries to get the newest price, the oldest price within 30 hours, and then to calculate the price change. Here's the query:

        SELECT watches_US.*, prices.*, watches_US.watches_ID as current_ID,
          ( SELECT prices_am FROM prices
            WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
            ORDER BY prices_date DESC LIMIT 1 ) as new_price,
          ( SELECT prices_date FROM prices
            WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
            ORDER BY prices_date DESC LIMIT 1 ) as new_price_date,
          ( SELECT prices_am FROM prices
            WHERE ( prices_watches_ID = current_ID AND prices_locale = 'US' )
              AND ( prices_date >= DATE_SUB(new_price_date, INTERVAL 30 HOUR) )
            ORDER BY prices_date ASC LIMIT 1 ) as old_price,
          ( SELECT ROUND(((new_price - old_price)/old_price)*100,2) ) as percent_change,
          ( SELECT (new_price - old_price) ) as absolute_change
        FROM watches_US
        LEFT OUTER JOIN prices ON prices.prices_watches_ID = watches_US.watches_ID
        WHERE ( prices_locale = 'US' ) AND ( prices_am IS NOT NULL ) AND ( prices_am != '' )
        HAVING ( old_price IS NOT NULL ) AND ( old_price != 0 ) AND ( old_price != '' )
          AND ( absolute_change < 0 ) AND ( prices.prices_date = new_price_date )
        ORDER BY absolute_change ASC
        LIMIT 10

    How would I rewrite this to use joins instead, or otherwise optimize it so it doesn't take over 3 minutes to get a result? Any help would be greatly appreciated! Thank you kindly.
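    A hedged sketch of the join pattern that can replace the dependent "newest price" subqueries, assuming the schema above (the aliases are hypothetical): join against a derived table holding each item's MAX(prices_date); the same pattern can then be repeated for the oldest price inside the 30-hour window before the change is computed in the outer query.

        SELECT w.*, new_p.prices_am AS new_price, new_p.prices_date AS new_price_date
        FROM watches_US w
        JOIN ( -- newest US price date per item
               SELECT prices_watches_ID, MAX(prices_date) AS max_date
               FROM prices
               WHERE prices_locale = 'US' AND prices_am IS NOT NULL AND prices_am != ''
               GROUP BY prices_watches_ID ) latest
          ON latest.prices_watches_ID = w.watches_ID
        JOIN prices new_p
          ON new_p.prices_watches_ID = w.watches_ID
         AND new_p.prices_locale = 'US'
         AND new_p.prices_date = latest.max_date;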

    Read the article

  • Select top 50 records from SQL

    - by air
    I have the following database table named tbl_rec:

        recno  uid  uname  points
        =========================
        1      a    abc    10
        2      b    bac    8
        3      c    cvb    12
        4      d    aty    13
        5      f    cyu    9
        ...

    I have about 5000 records in this table. I want to select the 50 records with the highest points. I can't use a LIMIT statement for this, as I am already using LIMIT for paging. Thanks
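    A minimal sketch of one approach, assuming MySQL (the page size of 10 is hypothetical): take the 50 highest-scoring rows in a derived table, then apply the paging LIMIT to that result in the outer query.

        -- Inner query: the 50 rows with the highest points; outer query: paging over just those 50
        SELECT t.*
        FROM ( SELECT * FROM tbl_rec ORDER BY points DESC LIMIT 50 ) AS t
        LIMIT 10 OFFSET 0;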

    Read the article

  • SQL query, select from 2 tables random

    - by klaus
    Hello all, I have a problem that I just CAN'T get to work like I want it to. I want to show news and reviews (2 tables), and I want random output, not the same output every time. Here is my query; I really hope someone can explain what I'm doing wrong:

        SELECT anmeldelser.billed_sti, anmeldelser.overskrift, anmeldelser.indhold,
               anmeldelser.id, anmeldelser.godkendt
        FROM anmeldelser
        LIMIT 0,6
        UNION ALL
        SELECT nyheder.id, nyheder.billed_sti, nyheder.overskrift, nyheder.indhold,
               nyheder.godkendt
        FROM nyheder
        ORDER BY rand()
        LIMIT 0,6
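    A hedged sketch of the usual fix, assuming MySQL: without parentheses, the ORDER BY and final LIMIT apply to the whole UNION rather than to each half, and the two SELECTs also list their columns in different orders, so values land in the wrong columns. Something like this keeps each half randomized and the column order consistent:

        ( SELECT billed_sti, overskrift, indhold, id, godkendt
          FROM anmeldelser ORDER BY rand() LIMIT 6 )
        UNION ALL
        ( SELECT billed_sti, overskrift, indhold, id, godkendt
          FROM nyheder ORDER BY rand() LIMIT 6 )
        ORDER BY rand();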

    Read the article

  • Amazon S3 permissions

    - by Joe
    Trying to understand S3... How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like query string authentication requires an expiration date, and that won't work for me. Is there another way to do this?

    Read the article

  • Lib to generate SiteMap in C#

    - by acidzombie24
    Is there a library I can use in C# to generate a sitemap for my ASP.NET website? I am looking for something that I can insert data into, and that will tell me if the generated sitemap has reached its entry limit or its file-size limit, and then allows me to save it as a file.

    Read the article

  • ASP.NET cache maximum size

    - by user352868
    Hi guys, what's the maximum size of ASP.NET cache (either deployed on a single server or out-of-process on a web farm) you can have? If there is a limit on how big the ASP.NET cache can be, is there a workaround to increase that limit? Thanks, James

    Read the article

  • RETS data fetching problem.

    - by Jimit
    Hello everyone, I am working on a real estate website which uses the RETS service to get data onto my local server, but I have a small problem here. I can fetch data from RETS, which has about 300,000 (3 lakh) records in its database, but I can't find a way to fetch all of those records in batches of 50k at a time. I didn't find any 'LIMIT' keyword in RETS, so how can I fetch 50k records at a time without 'LIMIT'? Please help me.

    Read the article

  • Update all table rows but top N in MySQL

    - by arthurprs
    I was trying to run the following query:

        UPDATE blog_post
        SET `thumbnail_present`=0, `thumbnail_size`=0, `thumbnail_data`=''
        WHERE `blog_post` NOT IN (
            SELECT `blog_post` FROM blog_post ORDER BY `blog_post` DESC LIMIT 10)

    But MySQL doesn't allow 'LIMIT' in an 'IN' subquery. I think I could run a SELECT to count the table rows and then make an ordered update limited by 'COUNT - 10', but I was wondering if there is a better way. Thanks in advance.
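    A minimal sketch of the common workaround, assuming MySQL (the derived-table alias is hypothetical): wrapping the LIMIT subquery in an extra derived table forces MySQL to materialize it, which sidesteps both the "LIMIT in IN subquery" restriction and the rule against reading the update's target table directly in a subquery.

        UPDATE blog_post
        SET `thumbnail_present` = 0, `thumbnail_size` = 0, `thumbnail_data` = ''
        WHERE `blog_post` NOT IN (
            -- extra derived table works around the LIMIT-in-IN restriction
            SELECT `blog_post` FROM (
                SELECT `blog_post` FROM blog_post ORDER BY `blog_post` DESC LIMIT 10
            ) AS keep_newest
        );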

    Read the article

  • Fetching Cassandra row keys

    - by knorv
    Assume a Cassandra datastore with 20 rows, with row keys named "r1" .. "r20". Questions:

    1. How do I fetch the row keys of the first ten rows (r1 to r10)?
    2. How do I fetch the row keys of the next ten rows (r11 to r20)?

    I'm looking for the Cassandra analogue of:

        SELECT row_key FROM table LIMIT 0, 10;
        SELECT row_key FROM table LIMIT 10, 10;

    Read the article

  • HttpClient not returning entire response

    - by whakojacko
    Using HttpClient 4.0, I'm having an issue where the response I get from the ResponseHandler is only about half of what the actual page content should be (~61k bytes in the string vs. ~125k in the page returned to a browser). I can't seem to find any place where there might be some sort of limit that would cause this. Any ideas?

    Read the article

  • Interrupt mechanism in C/C++

    - by mawia
    Hey, I was writing a UDP client and server in which the client waits for packets from the server, but I want to limit this wait to a certain time. If the client doesn't get a response within that time, it should raise an alarm - basically it bails out and starts taking remedial steps. So what are the possible solutions for this? I think writing a wrapper around recv will work, but how exactly should this be done? I mean, how do you make recv raise an alarm after that time limit? Any help in this regard will be appreciated. Thanks!

    Read the article

  • What is LogLevel debug?

    - by Webnet
    My error.log file for my site says:

        Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://URL/TO/REFERER

    My question is, what is LogLevel? I've googled it, but it seems like I'm just getting results about Java. Our site is in PHP.

    Read the article

  • Since upgrading to Solaris 11, my ARC size has consistently targeted 119MB, despite having 30GB RAM. What? Why?

    - by growse
    I ran a NAS/SAN box on Solaris 11 Express before Solaris 11 was released. The box is an HP X1600 with an attached D2700. In all, 12x 1TB 7200 SATA disks and 12x 300GB 10k SAS disks in separate zpools. Total RAM is 30GB. Services provided are CIFS, NFS and iSCSI.

    All was well, and my ZFS memory usage graph showed a fairly healthy ARC size of around 23GB - making use of the available memory for caching. However, I then upgraded to Solaris 11 when that came out, and the graph now looks very different. Partial output of arc_summary.pl is:

        System Memory:
            Physical RAM:  30701 MB
            Free Memory :  26719 MB
            LotsFree:      479 MB

        ZFS Tunables (/etc/system):

        ARC Size:
            Current Size:             915 MB (arcsize)
            Target Size (Adaptive):   119 MB (c)
            Min Size (Hard Limit):    64 MB (zfs_arc_min)
            Max Size (Hard Limit):    29677 MB (zfs_arc_max)

    It's targeting 119MB while sitting at 915MB. It's got 30GB to play with. Why? Did they change something?

    Edit: To clarify, arc_summary.pl is Ben Rockwood's, and the relevant lines generating the above stats are:

        my $mru_size     = ${Kstat}->{zfs}->{0}->{arcstats}->{p};
        my $target_size  = ${Kstat}->{zfs}->{0}->{arcstats}->{c};
        my $arc_min_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_min};
        my $arc_max_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_max};
        my $arc_size     = ${Kstat}->{zfs}->{0}->{arcstats}->{size};

    The Kstat entries are there, I'm just getting odd values out of them.

    Edit 2: I've just re-measured the ARC size with arc_summary.pl and verified these numbers with kstat:

        System Memory:
            Physical RAM:  30701 MB
            Free Memory :  26697 MB
            LotsFree:      479 MB

        ZFS Tunables (/etc/system):

        ARC Size:
            Current Size:             744 MB (arcsize)
            Target Size (Adaptive):   119 MB (c)
            Min Size (Hard Limit):    64 MB (zfs_arc_min)
            Max Size (Hard Limit):    29677 MB (zfs_arc_max)

    The thing that strikes me is that the Target Size is 119MB. Looking at the graph, it has targeted the exact same value (124.91M according to Cacti, 119M according to arc_summary.pl - I think the difference is just 1024/1000 rounding) ever since Solaris 11 was installed. It looks like the kernel is making zero effort to shift the target size to anything different. The current size fluctuates as the (large) needs of the system fight with the target size, and equilibrium appears to be between 700 and 1000MB.

    So the question is now a little more pointed: why is Solaris 11 hard-setting my ARC target size to 119MB, and how do I change it? Should I raise the min size to see what happens? I've stuck the output of kstat -n arcstats over at http://pastebin.com/WHPimhfg

    Edit 3: OK, weirdness now. I know flibflob mentioned that there was a patch to fix this. I haven't applied this patch yet (still sorting out internal support issues) and I've not applied any other software updates. Last Thursday, the box crashed - as in, completely stopped responding to everything. When I rebooted it, it came back up fine, but my graph now looks like the problem has fixed itself. This is proper la-la-land stuff now. I've literally no idea what's going on. :(

    Read the article
