Search Results

Search found 32223 results on 1289 pages for 'sql 2012'.

  • Any way to make this PostgreSQL count query any faster?

    - by Ben Dauphinee
    I'm running a case-insensitive search on a table with 7.2 million rows, and I was wondering if there was any way to make this query any faster? Currently, it takes approx 11.6 seconds to execute, with just one search parameter, and I'm worried that as soon as I add more than one, this query will become massively slow.

        SELECT count(*) FROM "exif_parse" WHERE (description ~* 'canon')
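
    A sequential scan over 7.2 million rows is the likely bottleneck here. On PostgreSQL, a trigram index (the pg_trgm extension) is a common way to speed up case-insensitive substring matching; a minimal sketch, assuming pg_trgm is available (LIKE/ILIKE acceleration needs 9.1+, regex acceleration a newer version still):

        -- Enable trigram support (once per database)
        CREATE EXTENSION IF NOT EXISTS pg_trgm;

        -- GIN trigram index on the searched column
        CREATE INDEX exif_parse_description_trgm_idx
            ON exif_parse USING gin (description gin_trgm_ops);

        -- An equivalent substring match that can use the index
        SELECT count(*) FROM exif_parse WHERE description ILIKE '%canon%';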

  • Left outer joins that don't return all the rows from T1

    - by Summer
    Left outer joins should return at least one row from the T1 table if it matches the conditions. But what if the left outer join performs a join successfully, then finds that another criterion is not satisfied? Is there a way to get the query to return a row with T1 values and T2 values set to NULL? Here's the specific query, in which I'm trying to return a list of candidates, and the user's support for those candidates IF such support exists.

        SELECT c.id, c.name, s.support
        FROM candidates c
        LEFT JOIN support s ON s.candidate_id = c.id
        WHERE c.office_id = 5059
          AND c.election_id = 92
          AND (s.user_id = 2 OR s.user_id IS NULL) --This line seems like the problem
        ORDER BY c.last_name, c.name

    The query joins the candidates and support table, but finds that it's a different user who supported this candidate (user_id=3, say). Then the candidate disappears entirely from the result set.
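
    The usual explanation: the WHERE clause filters after the join, so a candidate whose only support row belongs to another user matches the join but fails the filter, and neither branch of the OR saves it. Moving the user condition into the ON clause keeps unmatched candidates with NULL support; a sketch of that fix:

        SELECT c.id, c.name, s.support
        FROM candidates c
        LEFT JOIN support s
               ON s.candidate_id = c.id
              AND s.user_id = 2    -- filter the joined rows, not the final result
        WHERE c.office_id = 5059
          AND c.election_id = 92
        ORDER BY c.last_name, c.name;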

  • Data not entering the table

    - by Luke
        //loop through usernames to add to league table
        for ($i = 0; $i < count($user); $i++) {
            //set some new variables in an array
            $username = $user[$i];
            $squad = $team[$i];
            //add details to league table
            if ($username != "Ghost") {
                $database->addUsersToLeagueTable($username, $squad);
            }
        }

    I use this code to add to the league table, the following is more code:

        function addUsersToLeagueTable($username, $squad) {
            $q = "INSERT INTO `$_SESSION[comp_name]` ( `user` , `team` , `home_games_played` , `home_wins` , `home_draws` , `home_losses` ,`home_points, `home_goals_for` , `home_goals_against` , `away_games_played` , `away_wins` , `away_draws` , `away_losses` , `away_points` , `away_goals_for` , `away_goals_against` ) VALUES ( '$username', '$squad', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0')";
            return mysql_query($q, $this->connection);
        }

    Can you see any obvious reason why this isn't happening? Thanks
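
    The likely culprit is visible in the column list: `home_points` is written as `home_points, with no closing backtick, so MySQL sees a malformed identifier and rejects the INSERT. Echoing mysql_error() after the failed call would confirm this. A corrected column-list fragment, assuming that typo is the only problem (the table name is a placeholder):

        INSERT INTO `placeholder_table` (
            `user`, `team`,
            `home_games_played`, `home_wins`, `home_draws`, `home_losses`,
            `home_points`,   -- closing backtick restored here
            `home_goals_for`, `home_goals_against`,
            `away_games_played`, `away_wins`, `away_draws`, `away_losses`,
            `away_points`, `away_goals_for`, `away_goals_against`
        ) VALUES (...);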

  • Using LEFT JOIN to only select one joined row

    - by Alex
    I'm trying to LEFT JOIN two tables, to get a list of all rows from TABLE_1 and ONE related row from TABLE_2. I have tried LEFT JOIN and GROUP BY c_id, however I want the related row from TABLE_2 to be sorted by isHeadOffice DESC. Here are some sample tables:

        TABLE_1
        c_id   Name
        ----------------
        1      USA
        2      Canada
        3      England
        4      France
        5      Spain

        TABLE_2
        o_id   c_id   Office       isHeadOffice
        ------------------------------------------------
        1      1      New York     1
        2      1      Washington   0
        3      1      Boston       0
        4      2      Toronto      0
        5      3      London       0
        6      3      Manchester   1
        7      4      Paris        1
        8      4      Lyon         0

    So what I am trying to get from this would be something like:

        RESULTS
        c_id   Name      Office
        ----------------------------
        1      USA       New York
        2      Canada    Toronto
        3      England   Manchester
        4      France    Paris
        5      Spain     NULL

    I'm using PHP & MySQL. Any ideas?
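
    This is the classic greatest-n-per-group problem: GROUP BY alone gives no control over which row survives for each group. One common MySQL approach is a correlated subquery that picks the preferred office per country; a sketch against the sample tables above:

        SELECT t1.c_id, t1.Name, t2.Office
        FROM TABLE_1 t1
        LEFT JOIN TABLE_2 t2
               ON t2.o_id = (SELECT o_id
                             FROM TABLE_2
                             WHERE c_id = t1.c_id
                             ORDER BY isHeadOffice DESC, o_id
                             LIMIT 1)
        ORDER BY t1.c_id;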

  • Speeding up a group by date query on a big table in Postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101'
          AND DATE(timestamp) < '20110101'
        GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method: external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried to index on the date aggregate:

        CREATE INDEX ON actions (DATE(timestamp));

    Which cuts the runtime by about 50%:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
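
    One thing worth noting: both plans show the scan returning about 17 of the 20 million rows, and an index rarely beats a sequential scan when most of the table qualifies. Still, indexing the raw column and keeping the range predicate sargable avoids per-row DATE() calls and helps narrower ranges; a sketch:

        -- Plain index on the raw column; pays off when the range is selective
        CREATE INDEX actions_timestamp_idx ON actions (timestamp);

        SELECT date_trunc('day', timestamp)::date AS day, COUNT(*)
        FROM actions
        WHERE timestamp >= '2010-01-01'
          AND timestamp <  '2011-01-01'
        GROUP BY day
        ORDER BY day;

    If the query really must aggregate most of the table every time, maintaining a pre-aggregated daily counts table (updated by trigger or batch job) is often the bigger win.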

  • Question on different ways to link tables

    - by dotnetdev
    What is the difference between linking two tables so that the PK of one table is an FK in the other, but the FK does not itself have the primary key option (so it does not get the gold key icon), and having the PK of one table also be a PK in another table? Am I right to think that the second option is for a many-to-many relationship? Thanks
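
    For what it's worth, a plain FK column on the child side models one-to-many, while a table whose composite primary key is built from two FKs is the usual shape for many-to-many. A generic-SQL sketch with hypothetical tables:

        -- One-to-many: the FK is not part of any key on the child side
        CREATE TABLE orders (
            order_id    INT PRIMARY KEY,
            customer_id INT NOT NULL,
            FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
        );

        -- Many-to-many: a junction table whose PK is composed of two FKs
        CREATE TABLE student_course (
            student_id INT NOT NULL,
            course_id  INT NOT NULL,
            PRIMARY KEY (student_id, course_id),
            FOREIGN KEY (student_id) REFERENCES students (student_id),
            FOREIGN KEY (course_id)  REFERENCES courses (course_id)
        );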

  • How to know on which column the sequence is applied?

    - by Vineet
    I have to fetch all sequences with their table name, along with the column name on which each sequence is applied. Somehow I managed to fetch the table name corresponding to each sequence, because in my database each sequence's name starts with its table name (using the data dictionary views all_sequences and all_tables). Please let me know how to fetch the corresponding column name as well, if possible!
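
    Assuming this is Oracle (given all_sequences/all_tables): the data dictionary records no link between a sequence and a column, because sequences are standalone objects, usually wired to a column by a trigger or by application code. A common workaround is to search trigger and stored-code source for NEXTVAL references; a hedged sketch, with MY_SEQ and MY_TABLE as placeholder names:

        -- Source lines that reference a given sequence
        SELECT owner, name, type, line, text
        FROM all_source
        WHERE UPPER(text) LIKE '%MY_SEQ.NEXTVAL%';

        -- Triggers on a table often reveal which column the sequence feeds
        SELECT trigger_name, table_name, trigger_body
        FROM all_triggers
        WHERE table_name = 'MY_TABLE';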

  • Is it possible to combine will_paginate with find_by_sql?

    - by Tam
    In my Rails application I want to use the will_paginate plugin to paginate my query. Is that possible? I tried doing something like this but it didn't work:

        @users = User.find_by_sql("
            SELECT u.id, u.first_name, u.last_name,
                   CASE WHEN r.user_accepted = 1
                        AND (r.friend_accepted = 0 || r.friend_accepted IS NULL)
                   .........").paginate(
                       :page => @page,
                       :per_page => @per_page,
                       :conditions => conditions_hash,
                       :order => 'first_name ASC')

    If not, can you recommend a way around this or a way that might work, as I don't want to write my own pagination.

  • pgSQL query error

    - by running4surival
    I tried using this query:

        "SELECT * FROM guests WHERE event_id=".$id." GROUP BY member_id;"

    and I'm getting this error:

        ERROR: column "guests.id" must appear in the GROUP BY clause or be used in an aggregate function

    Can anyone explain how I can work around this?
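
    PostgreSQL requires every selected column to be grouped or aggregated, so SELECT * with GROUP BY member_id leaves columns like guests.id ungrouped. If the goal is one row per member, DISTINCT ON is the idiomatic PostgreSQL fix; a sketch, with 123 as a placeholder for the interpolated $id:

        -- One deterministic row per member_id
        SELECT DISTINCT ON (member_id) *
        FROM guests
        WHERE event_id = 123
        ORDER BY member_id, id;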

  • Results from two queries at once in SQLite?

    - by SF.
    I'm currently trying to optimize the sluggish process of retrieving a page of log entries from the SQLite database. I noticed I almost always retrieve the next entries along with a count of available entries:

        SELECT time, level, type, text FROM Logs
        WHERE level IN (%s)
        ORDER BY time DESC, id DESC
        LIMIT LOG_REQ_LINES OFFSET %d * LOG_REQ_LINES;

    together with the total count of records that can match the current query:

        SELECT count(*) FROM Logs WHERE level IN (%s);

    (for a display "page n of m") I wonder if I could concatenate the two queries, and ask them both in one sqlite3_exec() by simply concatenating the query string. How should my callback function look then? Can I distinguish between the different types of data by argc? What other optimizations would you suggest?
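
    sqlite3_exec() does execute multiple semicolon-separated statements and calls the callback for each row, so distinguishing by argc (four columns vs one) should work, as should checking the column names passed to the callback. Alternatively, on SQLite 3.25+ a window function can carry the total along with each page row in a single query; a sketch with placeholder values:

        SELECT time, level, type, text,
               COUNT(*) OVER () AS total_matching  -- rows matching WHERE, before LIMIT
        FROM Logs
        WHERE level IN (2, 3)                      -- placeholder level list
        ORDER BY time DESC, id DESC
        LIMIT 50 OFFSET 0;                         -- placeholder paging values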

  • XML import: how would you do it?

    - by Rico
    XML is used as one of our main integration points. It comes over from many clients at a time, but too many clients importing at the same time can slow our database to a crawl. Someone has to have solved a problem like this. I am basically using VB to parse through the data and import what I want, skipping what I don't. Is there a better way?

  • View Lambdas in Visual Studio Debugger

    - by Vaccano
    I have a simple LINQ to SQL statement that is not working. Something like this:

        List<MyClass> myList = _ctx.DBList
            .Where(x => x.AGuidID == paramID)
            .Where(x => x.BBoolVal == false)
            .ToList();

    I look at _ctx.DBList in the debugger and the second item fits both parameters. Is there a way I can dig into this more to see what is going wrong?

  • Multiple FK columns all pointing to the same parent table - a good idea?

    - by Randy Minder
    For those of you who live and breathe database design, have you ever found compelling reasons to have multiple FKs in a table that all point to the same parent table? We recently had to deal with a situation where we had a table that contained six columns which were all FK columns to the same parent table. We're debating whether this indicates a poor design on our part or whether this is more common than we think. Thanks very much.
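
    Multiple FKs to one parent are legitimate whenever the child plays several distinct roles against the same entity; a hypothetical sketch where three role columns all reference the same users table:

        CREATE TABLE support_tickets (
            ticket_id   INT PRIMARY KEY,
            reporter_id INT NOT NULL,  -- who opened the ticket
            assignee_id INT,           -- who is working on it
            closer_id   INT,           -- who resolved it
            FOREIGN KEY (reporter_id) REFERENCES users (user_id),
            FOREIGN KEY (assignee_id) REFERENCES users (user_id),
            FOREIGN KEY (closer_id)   REFERENCES users (user_id)
        );

    Six such columns can still be sound if each represents a genuinely different role; if they are interchangeable (e.g. participant1..participant6), a separate child table with one row per participant is usually the cleaner design.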

  • Update record only works when there is no auto_increment

    - by every_answer_gets_a_point
    I am accessing a MySQL table through an ODBC connection in Excel. Here is how I am updating the table:

        With rs
            .AddNew ' create a new record
            ' add values to each field in the record
            .Fields("datapath") = dpath
            .Fields("analysistime") = atime
            .Fields("reporttime") = rtime
            .Fields("lastcalib") = lcalib
            .Fields("analystname") = aname
            .Fields("reportname") = rname
            .Fields("batchstate") = "bstate"
            .Fields("instrument") = "NA"
            .Update ' stores the new record
        End With

    When the schema of the table is this, updating it works:

        create table batchinfo(datapath text, analysistime text, reporttime text, lastcalib text, analystname text, reportname text, batchstate text, instrument text);

    but when I have auto_increment in there it does not work:

        CREATE TABLE batchinfo (
            rowid int(11) NOT NULL AUTO_INCREMENT,
            datapath text,
            analysistime text,
            reporttime text,
            lastcalib text,
            analystname text,
            reportname text,
            batchstate text,
            instrument text,
            PRIMARY KEY (rowid)
        ) ENGINE=InnoDB AUTO_INCREMENT=67 DEFAULT CHARSET=latin1

    Has anyone experienced a problem like this, where updating does not work when there is an auto_increment field involved? Connection string:

        Private Sub ConnectDB()
            Set oConn = New ADODB.Connection
            oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _
                "SERVER=localhost;" & _
                "DATABASE=employees;" & _
                "USER=root;" & _
                "PASSWORD=pas;" & _
                "Option=3"
        End Sub

    Also here's the rs.Open:

        rs.Open "batchinfo", oConn, adOpenKeyset, adLockOptimistic, adCmdTable
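
    One workaround that sidesteps keyset-cursor quirks with AUTO_INCREMENT columns under MySQL Connector/ODBC is to skip rs.AddNew entirely and execute a plain INSERT (for example via oConn.Execute), letting rowid assign itself. The statement would look like this sketch (sample values shown; bind the real variables when building it):

        INSERT INTO batchinfo
            (datapath, analysistime, reporttime, lastcalib,
             analystname, reportname, batchstate, instrument)
        VALUES
            ('sample/path', '10:00', '10:05', '09:00',
             'aname', 'rname', 'bstate', 'NA');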

  • MySQL - accessing a table sum and comparing it to another table?

    - by assignment_operator
    This is for a homework assignment. I just plain don't understand how to do it. The instructions for this particular question are: List the branch name for all branches that have at least one book that has at least 4 copies on hand. The tables in question are:

        Branch:
        BranchName        | BranchId
        Henry Downtown    | 1
        16 Riverview      | 2
        Henry On The Hill | 3

        Inventory:
        BookId | BranchId | OnHand
        1      | 1        | 2
        2      | 3        | 4
        3      | 1        | 8
        4      | 3        | 1
        5      | 1        | 2
        6      | 2        | 3

    From what I understand, I can get the number OnHand per branch name with:

        SELECT BranchName, SUM(OnHand)
        FROM Branch B, Inventory I
        WHERE B.BranchId = I.BranchId
        GROUP BY BranchName;

    but I don't get how I'd do the comparison between the sum of OnHand per branch and 4. Any help would be appreciated, guys!
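
    Note that the requirement is per book, not per branch total, so a SUM may not be needed at all: any Inventory row with OnHand >= 4 qualifies its branch. Two equivalent sketches:

        -- Row-level filter plus DISTINCT to de-duplicate branches
        SELECT DISTINCT B.BranchName
        FROM Branch B
        JOIN Inventory I ON I.BranchId = B.BranchId
        WHERE I.OnHand >= 4;

        -- Or, keeping the GROUP BY shape, compare an aggregate in HAVING
        SELECT B.BranchName
        FROM Branch B
        JOIN Inventory I ON I.BranchId = B.BranchId
        GROUP BY B.BranchName
        HAVING MAX(I.OnHand) >= 4;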

  • How to hide folder in SSRS Report Builder?

    - by tnafoo
    When I click File > Open in Report Builder, I can see a list of folders under the Report Server Home root folder. But I don't want end users to see any of the folders under root unless I grant them access. I tried hiding the folders and removing permissions on them, but they are still visible in the root folder.

  • Get top 'n' records by report_id

    - by Skudd
    I have a simple view in my MSSQL database. It consists of the following fields:

        report_id INT
        ym VARCHAR -- YYYY-MM
        keyword VARCHAR(MAX)
        visits INT

    I can easily get the top 10 keyword hits with the following query:

        SELECT TOP 10 *
        FROM top_keywords
        WHERE ym BETWEEN '2010-05' AND '2010-05'
        ORDER BY visits DESC

    Now where it gets tricky is that I have to get the top 10 records for each report_id in the given date range (ym BETWEEN @start_date AND @end_date). How would I go about getting the top 10 for each report_id? I've stumbled across suggestions involving the use of ROW_NUMBER() and RANK(), but have been vastly unsuccessful in their implementation.
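
    ROW_NUMBER() partitioned by report_id is indeed the usual tool here; a sketch against the view described above:

        SELECT report_id, ym, keyword, visits
        FROM (
            SELECT tk.*,
                   ROW_NUMBER() OVER (PARTITION BY report_id
                                      ORDER BY visits DESC) AS rn
            FROM top_keywords tk
            WHERE ym BETWEEN @start_date AND @end_date
        ) ranked
        WHERE rn <= 10
        ORDER BY report_id, rn;

    Swapping ROW_NUMBER() for RANK() would keep ties instead of cutting off at exactly ten rows.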

  • Coalesce and Case-When with To_Date not working as expected (Postgres bug?)

    - by ADTC
    I'm using Postgres 9.1. The following query does not work as expected. Coalesce should return the first non-null value. However, this query returns null (1?) instead of the date (2).

        select COALESCE(
            TO_DATE('','yyyymmdd'),        --(1)
            TO_DATE('20130201','yyyymmdd') --(2)
        );

        --(1) this evaluates independently to null
        --(2) this evaluates independently to the date, and therefore is the first non-null value

    What am I doing wrong? Any workaround? Edit: This may have nothing to do with Coalesce at all. I tried some experiments with Case-When constructs; it turns out, Postgres has this big ugly bug where it treats TO_DATE('','yyyymmdd') as not null, even though selecting it returns null.
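
    For what it's worth, this is arguably not a COALESCE problem: on Postgres 9.x, TO_DATE('','yyyymmdd') does not return NULL but a sentinel date (which some clients display as blank), so COALESCE correctly returns it as the first non-null argument. Guarding the empty string with NULLIF is a common workaround; a sketch:

        -- NULLIF('', '') yields NULL, TO_DATE(NULL, ...) yields NULL,
        -- so COALESCE falls through to the second argument
        SELECT COALESCE(
            TO_DATE(NULLIF('', ''), 'yyyymmdd'),
            TO_DATE('20130201', 'yyyymmdd')
        );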

  • SQL queries to determine all values that would satisfy an arbitrary query

    - by jasterm007
    I'm trying to figure out how to efficiently run a set of queries that will provide a new table of all values that would return results for an arbitrary query. Say my table has a schema like:

        id
        name
        age
        city

    What is an efficient way to list all values that would return results for an arbitrary query, say "NOT city=X AND age BETWEEN Y AND Z"? My naive approach for this would be to use a script and recurse through all possible combinations of {city, age, age} and see which SELECTs return more than 0 results, but that seems incredibly inefficient. I've also tried building large joins on {city, age, age} as well, and basically using that table as an argument list to the query, but that quickly becomes an impossibility for queries on many columns. For simple conjunctive equality queries, i.e. "name=X AND age=Y", this is much simpler, as I can do something like:

        SELECT name, age, count(*) AS count
        FROM main
        GROUP BY name, age
        HAVING count > 0

    But I'm having difficulty coming up with a general approach for anything more complicated than that. Any pointers in the right direction would be most helpful, thanks.
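
    One way to generalize the GROUP BY trick: put the arbitrary predicate in a WHERE clause and group over the columns it touches; every surviving group is a stored value combination for which the query returns rows, with no enumeration loop. This lists satisfying value combinations present in the data rather than every possible parameter assignment. A sketch for the NOT/BETWEEN example, with the literals as hypothetical placeholders:

        -- Each (city, age) pair listed satisfies the predicate,
        -- i.e. would make the original query return rows
        SELECT city, age, COUNT(*) AS matches
        FROM main
        WHERE NOT city = 'X'
          AND age BETWEEN 10 AND 20
        GROUP BY city, age;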
