Search Results

Search found 3721 results on 149 pages for 'postgresql newbie'.

Page 47 of 149

  • How to call Postgres function returning SETOF record?

    - by Peter
    I have written the following function:

        -- Gets stats for all markets
        CREATE OR REPLACE FUNCTION GetMarketStats()
        RETURNS SETOF record AS $$
        BEGIN
            SELECT 'R approved offer' AS Metric,
                   SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 24 THEN LO.Amount ELSE 0 END) AS MarketAPlus24,
                   SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 36 THEN LO.Amount ELSE 0 END) AS MarketAPlus36,
                   SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 24 THEN LO.Amount ELSE 0 END) AS MarketA24,
                   SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 36 THEN LO.Amount ELSE 0 END) AS MarketA36,
                   SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 24 THEN LO.Amount ELSE 0 END) AS MarketB24,
                   SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 36 THEN LO.Amount ELSE 0 END) AS MarketB36
            FROM "Market" M
            INNER JOIN "Listing" L ON L.MarketID = M.MarketID
            INNER JOIN "ListingOffer" LO ON L.ListingID = LO.ListingID;
        END
        $$ LANGUAGE plpgsql;

    And when trying to call it like this...

        SELECT * FROM GetMarketStats() AS (Metric VARCHAR(50), MarketAPlus24 INT, MarketAPlus36 INT, MarketA24 INT, MarketA36 INT, MarketB24 INT, MarketB36 INT);

    ...I get an error:

        ERROR: query has no destination for result data
        HINT: If you want to discard the results of a SELECT, use PERFORM instead.
        CONTEXT: PL/pgSQL function "getmarketstats" line 2 at SQL statement

    I don't understand this output. I've tried using PERFORM too, but I thought one only had to use that if the function doesn't return anything.
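
    The error is PL/pgSQL complaining that the SELECT's rows have nowhere to go: inside a plpgsql body, a bare SELECT must either go INTO variables or be handed back with RETURN NEXT / RETURN QUERY. A minimal sketch of that fix, keeping the question's table and column names (the ::int casts are an assumption, made so the output matches the INT columns declared at the call site):

        CREATE OR REPLACE FUNCTION GetMarketStats()
        RETURNS SETOF record AS $$
        BEGIN
            -- RETURN QUERY streams the SELECT's rows out as the function's result set.
            RETURN QUERY
            SELECT 'R approved offer'::varchar(50) AS Metric,
                   SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 24 THEN LO.Amount ELSE 0 END)::int,
                   SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 36 THEN LO.Amount ELSE 0 END)::int,
                   SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 24 THEN LO.Amount ELSE 0 END)::int,
                   SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 36 THEN LO.Amount ELSE 0 END)::int,
                   SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 24 THEN LO.Amount ELSE 0 END)::int,
                   SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 36 THEN LO.Amount ELSE 0 END)::int
            FROM "Market" M
            INNER JOIN "Listing" L ON L.MarketID = M.MarketID
            INNER JOIN "ListingOffer" LO ON L.ListingID = LO.ListingID;
        END
        $$ LANGUAGE plpgsql;

    Since the row shape is fixed, declaring it once with RETURNS TABLE (...) (PostgreSQL 8.4+) would also remove the need for the column definition list in every call.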

    Read the article

  • Why would I get a duplicate key error when updating a row?

    - by hdx
    I'm using Postgres and I'm getting a duplicate key error when updating a row:

        cursor.execute("UPDATE jiveuser SET userenabled = 0 WHERE userid = %s" % str(userId))

        psycopg2.IntegrityError: duplicate key value violates unique constraint "jiveuser_pk"

    I don't understand how updating a row can cause this error... any help will be much appreciated.
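
    An UPDATE that touches only userenabled should never collide with the table's own primary key, so the usual suspect is a trigger or rewrite rule on the table doing an INSERT behind the scenes. A quick way to look for one (a sketch; run against the same database):

        -- List the triggers attached to jiveuser
        SELECT tgname
        FROM pg_trigger
        WHERE tgrelid = 'jiveuser'::regclass;

    In psql, \d jiveuser also lists any triggers and rules in its footer.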

    Read the article

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to test one thing: does ZTable use fetch techniques or not? Maybe in the future we will migrate our smaller system to PGSQL; it currently uses "Table" components (as with the BDE, but against an SQL-like server). Those tables use real cursors, a "window" of N records, so lookup is very fast: the Locate/Lookup is started on the server, and only those N records are refreshed, no matter how many records are in the lookup table. PGSQL uses fetch techniques as far as I know, and I tested it with a table (id int, name varchar(100)) and 1 million records. (I also tried this with MySQL.) The adapter is Zeos. The columns below are the ID to find, the seconds needed to find it, and the allocated memory in bytes on the client:

        MySQL
        ID        sec    client memory (bytes)
        500000    2.761  113,196,344
        1000000   3.214  225,471,232
        313800    0.437  225,471,232
        328066    0.468  225,471,232
        276374    0.390  225,471,232
        905984    1.264  225,471,232
        260253    0.359  225,471,232

        PGSQL
        ID        sec    client memory (bytes)
        500000    3.042  113,188,184
        1000000   3.744  225,463,064
        313800    0.436  225,463,064
        328066    0.452  225,463,064
        276374    0.375  225,463,064
        905984    1.295  225,463,064
        260253    0.359  225,463,064
        142023    0.203  225,463,064

    As you can see, the records are fetched locally, which causes the 225 MB usage, and searches are a little slow, depending on where the record we must find is. I want to ask a few more things:

    a.) Does PgDAC have some technique that lets us use lookups without paying for the fetch in memory and seconds?
    b.) Or can the PG ODBC driver help with this problem via ADO? (As I know, ADO can use server-side cursors.)
    c.) Does anybody have experience with lookup tables and performance? Is this a critical question or not? (Including client memory usage.)
    d.) If there is no way to avoid fetch hell with lookups, what can we do? Server-side joins, and custom code for lookup-field changes instead of a real lookup?

    Thanks for your help: dd
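
    At the SQL level, PostgreSQL does support the windowed access pattern those "Table" components need: a server-side scrollable cursor keeps the result on the server, and the client fetches only the N visible rows. Whether PgDAC or Zeos exposes this is a component question, but this is what it looks like in plain SQL (a sketch; big_table is a stand-in for the real lookup table):

        BEGIN;
        -- The cursor lives on the server; no rows travel until fetched.
        DECLARE lookup_cur SCROLL CURSOR FOR
            SELECT id, name FROM big_table ORDER BY name;
        FETCH FORWARD 50 FROM lookup_cur;    -- one "window" of records
        MOVE ABSOLUTE 500000 IN lookup_cur;  -- reposition without transferring rows
        FETCH FORWARD 50 FROM lookup_cur;
        COMMIT;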

    Read the article

  • Date arithmetic using integer values

    - by Dave Jarvis
    Problem

    String concatenation is slowing down a query:

        date(extract(YEAR FROM m.taken)||'-1-1') d1,
        date(extract(YEAR FROM m.taken)||'-1-31') d2

    This is realized in code as part of a string, which follows (where the p_ variables are integers):

        date(extract(YEAR FROM m.taken)||''-'||p_month1||'-'||p_day1||''') d1,
        date(extract(YEAR FROM m.taken)||''-'||p_month2||'-'||p_day2||''') d2

    This part of the query runs in 3.2 seconds with the dates, and 1.5 seconds without, leading me to believe there is ample room for improvement.

    Question

    What is a better way to create the date (presumably without concatenation)? Many thanks!
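
    One concatenation-free way is to anchor on date_trunc and add the month and day offsets as intervals; everything stays in date/interval arithmetic, with no text to parse. A sketch, assuming p_month1 and p_day1 are the integer parameters described above:

        -- January 1st of the year m.taken falls in:
        date_trunc('year', m.taken)::date AS d1,
        -- The year start shifted to (p_month1, p_day1):
        (date_trunc('year', m.taken)
           + (p_month1 - 1) * interval '1 month'
           + (p_day1   - 1) * interval '1 day')::date AS d2

    On PostgreSQL 9.4 and later, make_date(extract(YEAR FROM m.taken)::int, p_month1, p_day1) expresses the same thing directly.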

    Read the article

  • Inserting null fields with dbi:Pg

    - by User1
    I have a Perl script inserting data into Postgres according to a pipe-delimited text file. Sometimes a field is null (as expected). However, Perl turns this field into an empty string, and the Postgres insert statement fails. Here's a snippet of code:

        use DBI;

        # Connect to the database.
        $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'mydb', 'mydb',
                            {AutoCommit => 1, RaiseError => 1, PrintError => 1});

        # Prepare an insert.
        $sth = $dbh->prepare("INSERT INTO mytable (field0,field1) SELECT ?,?");

        while (<>) {
            # Remove the trailing newline.
            chomp;
            # Parse the fields.
            @field = split(/\|/, $_);
            print "$_\n";
            # Do the insert.
            $sth->execute($field[0], $field[1]);
        }

    And if the input is:

        a|1
        b|
        c|3

    EDIT: Use this input instead.

        a|1|x
        b||x
        c|3|x

    It will fail at b|:

        DBD::Pg::st execute failed: ERROR: invalid input syntax for integer: ""

    I just want it to insert a NULL into field1 instead. Any ideas?

    EDIT: I simplified the input at the last minute. The old input actually made it work for some reason. So now I changed the input to something that will make the program fail. Also note that field1 is a nullable integer datatype.
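
    One fix is on the Perl side: map empty strings to undef before execute, since undef binds as NULL (this also explains why the old input "worked": split drops trailing empty fields, so $field[1] was undef rather than ''). Another is to let the statement itself do the conversion. A SQL-side sketch, assuming field1 is the nullable integer column described above:

        INSERT INTO mytable (field0, field1)
        SELECT ?, NULLIF(?, '')::integer;
        -- NULLIF yields NULL when the bound value is the empty string,
        -- so the cast to integer never sees "".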

    Read the article

  • Rails: three most recent comments with unique users

    - by Dennis Collective
    What would I put in the named scope :by_unique_users so that I can do Comment.recent.by_unique_users.limit(3) and only get one comment per user?

        class User
          has_many :comments
        end

        class Comment
          belongs_to :user
          named_scope :recent, :order => 'comments.created_at DESC'
          named_scope :limit, lambda { |limit| {:limit => limit}}
          named_scope :by_unique_users
        end

    On SQLite, named_scope :by_unique_user, :group => "user_id" works, but it makes Postgres (which is deployed in production) freak out:

        PGError: ERROR: column "comments.id" must appear in the GROUP BY clause or be used in an aggregate function
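
    Postgres enforces the SQL rule that every selected column must be grouped or aggregated; SQLite is simply lax about it. The Postgres-native way to get one row per user is DISTINCT ON, which a scope could wrap via :from or find_by_sql. The underlying SQL, as a sketch against the comments table above:

        SELECT * FROM (
            SELECT DISTINCT ON (user_id) *
            FROM comments
            ORDER BY user_id, created_at DESC  -- keep each user's newest comment
        ) AS latest
        ORDER BY created_at DESC               -- then order those by recency
        LIMIT 3;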

    Read the article

  • Debugging SQL in PGAdmin3 when sql contains variables

    - by Mr Shoubs
    In SQL Server I could copy SQL code out of an application, paste it into SSMS, declare and assign the vars that exist in the SQL, and run it... yay, great debugging scenario. E.g. (please note I am rusty and the syntax may be incorrect):

        DECLARE @x AS varchar(10)
        SET @x = 'abc'
        SELECT * FROM sometable WHERE somefield = @x

    I want to do something similar with Postgres in pgAdmin III (or another Postgres tool, any recommendations?). I realise you can create pgScript, but it doesn't appear to be very good. For example, if I do the equivalent of the above, it doesn't put the single quotes around the value in @x, nor does it let me do so by doubling them up, and you don't get a table out afterwards, only text... Currently I have a piece of SQL someone has written that has 3 unique variables in it, each used around 6 times... So the question is: how do other people debug SQL like this EFFICIENTLY, preferably in a fashion similar to my SQL Server days?
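
    One tool-agnostic equivalent is a server-side prepared statement: declare the parameters once, reference them positionally as many times as needed, and get a normal result grid back from any client. A sketch using the example query above:

        PREPARE debug_stmt(varchar) AS
            SELECT * FROM sometable WHERE somefield = $1;

        EXECUTE debug_stmt('abc');   -- run with a concrete value
        DEALLOCATE debug_stmt;       -- clean up when done

    In psql specifically, \set variables cover the same workflow (with quoted :'x' interpolation in newer versions).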

    Read the article

  • Inserting Newline from XML to Database

    - by blackmage
    I am trying to parse an XML document in which a newline is required for certain fields, and the value must be inserted into the database with the newline intact. But I've been running into problems.

    1) First problem: the \n character. The first problem I had was using \n like below:

        <javascript>jquery_ui.js\nshadowbox_modal.js\nuser_profile.js\ntablesorter.js</javascript>

    The problem was that the database field came out as jquery_ui.js\nshadowbox_modal.js\n..., and when output into HTML it was jquery_ui.jsnshadowbox_modal.jsn...

    2) Then I tried actually having newlines in the XML:

        <javascript>jquery_ui.js
        shadowbox_modal.js
        user_profile.js
        tablesorter.js</javascript>

    The problem was that the output became %20%20%20%20%20%20%20%20%20%20shadowbox_modal.js, and so forth. So how can I get a newline to hold from the XML when entered into a database, and then output with the newline still there?
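
    The first symptom says the two characters backslash-n were stored literally; no layer between XML and HTML expands "\n" unless told to. If rows like that are already in a Postgres table, one repair is to substitute real newlines after the fact. A sketch with hypothetical table and column names (pages.javascript), using an escape string so E'\n' is an actual newline:

        UPDATE pages
        SET javascript = replace(javascript, '\n', E'\n')
        WHERE strpos(javascript, '\n') > 0;
        -- '\n' here is the two-character literal; E'\n' is the real newline.
        -- Assumes standard_conforming_strings is on; otherwise double the backslash.

    The %20 run in the second attempt suggests the newlines did survive, along with the XML indentation; trimming each line before storing would deal with that part.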

    Read the article

  • "PGError: no connection to the server" after idle

    - by user573484
    After my application sits idle overnight, when I try to access it in the morning I get a 500 Internal Server Error and the logs indicate "PGError: no connection to the server". After this first request, if I refresh the page, everything is fine again. I'm running Ubuntu 10.04 with Apache 2, Passenger 3.0.2, Rails 2.3.8, and Postgres 8.4 on a remote server. Any ideas how to fix this? Here is the log:

        Processing ApplicationController#index (for 192.168.1.33 at 2011-01-06 17:28:14) [GET]
          Parameters: {"action"=>"index", "controller"=>"da"}

        ActiveRecord::StatementInvalid (PGError: no connection to the server
        : SELECT * FROM "users" WHERE ("users"."id" = 1) LIMIT 1):
          app/controllers/application_controller.rb:47:in `current_user'
          app/controllers/application_controller.rb:51:in `set_current_user'
          app/controllers/application_controller.rb:123:in `render_optional_error_file'
          passenger (3.0.2) lib/phusion_passenger/rack/request_handler.rb:96:in `process_request'
          passenger (3.0.2) lib/phusion_passenger/abstract_request_handler.rb:513:in `accept_and_process_next_request'
          passenger (3.0.2) lib/phusion_passenger/abstract_request_handler.rb:274:in `main_loop'
          passenger (3.0.2) lib/phusion_passenger/classic_rails/application_spawner.rb:321:in `start_request_handler'
          passenger (3.0.2) lib/phusion_passenger/classic_rails/application_spawner.rb:275:in `send'
          passenger (3.0.2) lib/phusion_passenger/classic_rails/application_spawner.rb:275:in `handle_spawn_application'
          passenger (3.0.2) lib/phusion_passenger/utils.rb:479:in `safe_fork'
          passenger (3.0.2) lib/phusion_passenger/classic_rails/application_spawner.rb:270:in `handle_spawn_application'
          passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:357:in `__send__'
          passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:357:in `server_main_loop'
          passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:206:in `start_synchronously'
          passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:180:in `start'
          passenger (3.0.2) lib/phusion_passenger/classic_rails/application_spawner.rb:149:in `start'
          passenger (3.0.2) lib/phusion_passenger/spawn_manager.rb:219:in `spawn_rails_application'
          passenger (3.0.2) lib/phusion_passenger/abstract_server_collection.rb:132:in `lookup_or_add'
          passenger (3.0.2) lib/phusion_passenger/spawn_manager.rb:214:in `spawn_rails_application'
          passenger (3.0.2) lib/phusion_passenger/abstract_server_collection.rb:82:in `synchronize'
          passenger (3.0.2) lib/phusion_passenger/abstract_server_collection.rb:79:in `synchronize'
          passenger (3.0.2) lib/phusion_passenger/spawn_manager.rb:213:in `spawn_rails_application'
          passenger (3.0.2) lib/phusion_passenger/spawn_manager.rb:132:in `spawn_application'
          passenger (3.0.2) lib/phusion_passenger/spawn_manager.rb:275:in `handle_spawn_application'
          passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:357:in `__send__'
          passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:357:in `server_main_loop'
          passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:206:in `start_synchronously'
          passenger (3.0.2) helper-scripts/passenger-spawn-server:99

        /!\ FAILSAFE /!\  Thu Jan 06 17:28:14 -0700 2011
          Status: 500 Internal Server Error
          PGError: no connection to the server
        : SELECT * FROM "users" WHERE ("users"."id" = 1) LIMIT 1
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract_adapter.rb:221:in `log'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/postgresql_adapter.rb:520:in `execute'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1002:in `select_raw'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/postgresql_adapter.rb:989:in `select'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/database_statements.rb:7:in `select_all_without_query_cache'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/query_cache.rb:81:in `cache_sql'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:664:in `find_by_sql'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1578:in `find_every'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1535:in `find_initial'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:616:in `find'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1910:in `find_by_id'
          /home/user/application/releases/20110106230903/app/controllers/application_controller.rb:47:in `current_user'
          /home/user/application/releases/20110106230903/app/controllers/application_controller.rb:51:in `set_current_user'
          /home/user/application/releases/20110106230903/app/controllers/application_controller.rb:123:in `render_optional_error_file'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/rescue.rb:97:in `rescue_action_in_public'
          /home/user/application/releases/20110106230903/vendor/plugins/exception_notification/lib/exception_notification/notifiable.rb:48:in `rescue_action_in_public'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/rescue.rb:154:in `rescue_action_without_handler'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/rescue.rb:74:in `rescue_action'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/base.rb:532:in `send'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/base.rb:532:in `process_without_filters'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/filters.rb:606:in `process'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/rescue.rb:65:in `call_with_exception'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:90:in `dispatch'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:121:in `_call'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:130
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/query_cache.rb:29:in `call'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/query_cache.rb:29:in `call'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/query_cache.rb:9:in `cache'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/query_cache.rb:28:in `call'
          /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/string_coercion.rb:25:in `call'
          /var/lib/gems/1.8/gems/rack-1.1.0/lib/rack/head.rb:9:in `call'
          /var/lib/gems/1.8/gems/rack-1.1.0/lib/rack/methodoverride.rb:24:in `call'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/params_parser.rb:15:in `call'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/session/cookie_store.rb:99:in `call'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/failsafe.rb:26:in `call'
          /var/lib/gems/1.8/gems/rack-1.1.0/lib/rack/lock.rb:11:in `call'
          /var/lib/gems/1.8/gems/rack-1.1.0/lib/rack/lock.rb:11:in `synchronize'
          /var/lib/gems/1.8/gems/rack-1.1.0/lib/rack/lock.rb:11:in `call'
          /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:106:in `call'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/rack/request_handler.rb:96:in `process_request'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_request_handler.rb:513:in `accept_and_process_next_request'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_request_handler.rb:274:in `main_loop'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/classic_rails/application_spawner.rb:321:in `start_request_handler'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/classic_rails/application_spawner.rb:275:in `send'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/classic_rails/application_spawner.rb:275:in `handle_spawn_application'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/utils.rb:479:in `safe_fork'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/classic_rails/application_spawner.rb:270:in `handle_spawn_application'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:357:in `__send__'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:357:in `server_main_loop'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:206:in `start_synchronously'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:180:in `start'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/classic_rails/application_spawner.rb:149:in `start'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb:219:in `spawn_rails_application'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server_collection.rb:132:in `lookup_or_add'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb:214:in `spawn_rails_application'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server_collection.rb:82:in `synchronize'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server_collection.rb:79:in `synchronize'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb:213:in `spawn_rails_application'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb:132:in `spawn_application'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb:275:in `handle_spawn_application'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:357:in `__send__'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:357:in `server_main_loop'
          /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:206:in `start_synchronously'
          /var/lib/gems/1.8/gems/passenger-3.0.2/helper-scripts/passenger-spawn-server:99
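
    The pattern (the first request after a long idle period fails, the retry succeeds) usually means a stateful firewall or NAT between the app and the remote Postgres silently dropped the idle TCP connection overnight; the pooled connection only discovers this on its next query. One server-side mitigation is TCP keepalives, which Postgres exposes as settings (a sketch; the values are illustrative and can also go in postgresql.conf):

        SET tcp_keepalives_idle = 300;     -- start probing after 5 idle minutes
        SET tcp_keepalives_interval = 60;  -- probe every 60 seconds thereafter
        SET tcp_keepalives_count = 3;      -- declare the link dead after 3 lost probes

    On the Rails 2.3 side, verifying connections before use (e.g. ActiveRecord::Base.verify_active_connections! in a before_filter) is the usual application-level workaround.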

    Read the article

  • postgres - regex_replace in distinct clause?

    - by n00b0101
    Ok... changing the question here... I'm getting an error when I try this:

        SELECT COUNT ( DISTINCT mid,
                       regexp_replace(na_fname, '\\s*', '', 'g'),
                       regexp_replace(na_lname, '\\s*', '', 'g'))
        FROM masterfile;

    Is it possible to use a regexp in a DISTINCT clause like this? The error is this:

        WARNING: nonstandard use of \\ in a string literal
        LINE 1: ...CT COUNT ( DISTINCT mid, regexp_replace(na_fname, '\\s*', ''...
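
    The message shown is only a warning about backslashes in a plain string literal, not a failure of the DISTINCT itself; prefixing the pattern with E silences it. To make DISTINCT apply to the whole triple, a row constructor groups the expressions into one value. A sketch:

        SELECT COUNT(DISTINCT (mid,
                               regexp_replace(na_fname, E'\\s*', '', 'g'),
                               regexp_replace(na_lname, E'\\s*', '', 'g')))
        FROM masterfile;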

    Read the article

  • Strange: planner takes the decision with lower cost, but (very) long query runtime

    - by S38
    Facts:

    - PGSQL 8.4.2, Linux
    - I make use of table inheritance
    - Each table contains 3 million rows
    - Indexes on the joining columns are set
    - Table statistics (ANALYZE, VACUUM ANALYZE) are up to date
    - The only table used is "node", with various partitioned sub-tables
    - Recursive query (pg >= 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        QUERY PLAN
        -----------------------------------------------------------------------------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to use a hash join (good) but no indexes (bad). Now, after doing the following:

        SET enable_hashjoin TO false;

    the explained query looks like this:

        QUERY PLAN
        -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    ...incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help with this confusing situation? thx, R.
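
    A plausible reading of the first plan: the recursive work table is estimated at 10 rows (visible as rows=10 on the WorkTable Scan) and each Append child at 3 million rows, so one sequential pass feeding a hash looks cheaper on paper than repeated index probes, even though each iteration actually yields a single row. Column statistics cannot fix a work-table estimate, but the cost model can be nudged per session. A sketch, with illustrative values only:

        -- Make index probes look cheaper relative to sequential reads:
        SET random_page_cost = 2.0;        -- default is 4.0
        -- Tell the planner how much data is likely already cached:
        SET effective_cache_size = '2GB';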

    Read the article

  • SQL for selecting only the last item from a versioned table

    - by Jeremy
    I have a table with the following structure:

        CREATE TABLE items (
            id serial NOT NULL,
            title character varying(255),
            version_id integer DEFAULT 1,
            parent_id integer,
            CONSTRAINT items_pkey PRIMARY KEY (id)
        )

    So the table contains rows that look like this:

        id: 1, title: "Version 1", version_id: 1, parent_id: 1
        id: 2, title: "Version 2", version_id: 2, parent_id: 1
        id: 3, title: "Completely different record", version_id: 1, parent_id: 3

    This is the SQL code I've got for selecting out all of the records with the most recent version ids:

        SELECT *
        FROM items
        INNER JOIN (
            SELECT parent_id, max(version_id) AS version_id
            FROM items
            GROUP BY parent_id
        ) AS vi
        ON items.version_id = vi.version_id AND items.parent_id = vi.parent_id

    Is there a way to write that SQL statement so that it doesn't use a subselect?
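
    PostgreSQL's DISTINCT ON does this without a subselect: it keeps the first row of each parent_id group under the given ordering. A sketch against the table above:

        SELECT DISTINCT ON (parent_id) *
        FROM items
        ORDER BY parent_id, version_id DESC;  -- the highest version wins per parent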

    Read the article

  • Can't install egenix-mx-base on Django production VPS

    - by Shane
    I have been following these instructions for setting up a Django production server with Postgres, Apache, nginx, and memcached. My problem is that I cannot get egenix-mx-base to install, and without this I cannot get psycopg2 to work, and therefore I have no database access :(. I am attempting this on a VPS running a clean install of Ubuntu Hardy (8.04) and have followed all instructions on the site to a T. The error message is as follows:

        $ easy_install egenix-mx-base
        Searching for egenix-mx-base
        Reading http://pypi.python.org/simple/egenix-mx-base/
        Reading http://www.egenix.com/products/python/mxBase/
        Reading http://www.lemburg.com/python/mxExtensions.html
        Reading http://www.egenix.com/
        Best match: egenix-mx-base 3.1.3
        Downloading http://downloads.egenix.com/python/egenix-mx-base-3.1.3.tar.gz
        Processing egenix-mx-base-3.1.3.tar.gz
        Running egenix-mx-base-3.1.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-iF7qzl/egenix-mx-base-3.1.3/egg-dist-tmp-laxvcS
        Warning: Can't read registry to find the necessary compiler setting
        Make sure that Python modules _winreg, win32api or win32con are installed.
        In file included from mx/TextTools/mxTextTools/mxte.c:42:
        mx/TextTools/mxTextTools/mxte_impl.h: In function ‘mxTextTools_TaggingEngine’:
        mx/TextTools/mxTextTools/mxte_impl.h:345: warning: pointer targets in initialization differ in signedness
        mx/TextTools/mxTextTools/mxte_impl.h:364: warning: pointer targets in initialization differ in signedness
        mx/URL/mxURL/mxURL.c: In function ‘mxURL_SetFromString’:
        mx/URL/mxURL/mxURL.c:676: warning: pointer targets in initialization differ in signedness
        mx/UID/mxUID/mxUID.c: In function ‘mxUID_Verify’:
        mx/UID/mxUID/mxUID.c:333: warning: pointer targets in passing argument 1 of ‘sscanf’ differ in signedness
        mx/UID/mxUID/mxUID.c: In function ‘mxUID_New’:
        mx/UID/mxUID/mxUID.c:462: warning: pointer targets in passing argument 1 of ‘mxUID_CRC16’ differ in signedness
        error: Setup script exited with error: build/bdist.linux-x86_64-py2.5_ucs4/dumb/egenix_mx_base-3.1.3-py2.5.egg-info: Is a directory

    Thank you to anyone who takes the time to try to help me.

    Read the article

  • phppgadmin : How does it kick users out of postgres, so it can db_drop?

    - by egarcia
    I've got one PostgreSQL database (I'm the owner) and I'd like to drop it and re-create it from a dump. Problem is, there are a couple of applications (two websites, Rails and Perl) that access the db regularly. So I get a "database is being accessed by other users" error. I've read that one possibility is getting the pids of the processes involved and killing them individually. I'd like to do something cleaner, if possible. phpPgAdmin seems to do what I want: I am able to drop schemas using its web interface, even while the websites are up, without getting errors. So I'm investigating how its code works. However, I'm no PHP expert. I'm trying to understand the phpPgAdmin code in order to see how it does it. I found a line (257 in Schemas.php) where it says:

        $data->dropSchema(...)

    $data is a global variable and I could not find where it is defined. Any pointers would be greatly appreciated.
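
    One likely reason phpPgAdmin succeeds is that DROP SCHEMA only needs locks on the schema's objects, not an exclusive claim on the whole database the way DROP DATABASE does. On 8.4 and later you can evict the other sessions yourself first, without hunting pids by hand (a sketch; the column is named procpid before 9.2, pid afterwards):

        SELECT pg_terminate_backend(procpid)
        FROM pg_stat_activity
        WHERE datname = 'mydb'               -- the database to be dropped
          AND procpid <> pg_backend_pid();   -- don't kill your own session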

    Read the article

  • Generic Method to find the tuples used for computation in Postgres?

    - by Rahul
    If I have a table

         col1 | name             | pay
        ------+------------------+------
            1 | Steve Jobs       | 1006
            2 | Mike Markkula    | 1007
            3 | Mike Scott       | 1978
            4 | John Sculley     | 1983
            5 | Michael Spindler | 1653

    and the user executes a sum query which sums the pay of the people getting paid more than $1500, is there a way to also implicitly know which tuples were used, i.e. which rows satisfied the condition for the sum? I know you can separately write another query to return just the primary key ids which satisfy the condition. But is there any other way to do it in the same query? Perhaps by rewriting the query in some way? Or... any suggestion?
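
    Since PostgreSQL 8.4, an aggregate query can collect the contributing keys in the same pass with array_agg. A sketch against the table above (people is a stand-in, since the question doesn't name the table):

        SELECT sum(pay)        AS total_pay,
               array_agg(col1) AS used_ids  -- primary keys of the rows that fed the sum
        FROM people
        WHERE pay > 1500;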

    Read the article

  • I don't find the sql request

    - by user301089
    Hi everybody, here is my problem. I have a list of the following measures:

        src1  dst2  24 December 2009
        src1  dst3  22 December 2009
        src1  dst2  18 December 2009

    I would like to get just the latest measure for each (src, dst) pair with one SQL query; the first two lines in my case, because the pairs (src and dst) aren't the same. I tried to use DISTINCT, but I got just the first 2 columns and I want all the columns. I also tried GROUP BY, but I had no success. Can anyone help me? Thx, Narglix
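
    GROUP BY alone cannot return the non-grouped columns, which is why it failed here; one portable pattern is to join the table back against its own per-pair maxima. A sketch with hypothetical names (measures, with a taken date column):

        SELECT m.*
        FROM measures m
        JOIN (SELECT src, dst, max(taken) AS taken
              FROM measures
              GROUP BY src, dst) latest
          USING (src, dst, taken);

    On PostgreSQL specifically, SELECT DISTINCT ON (src, dst) * FROM measures ORDER BY src, dst, taken DESC does the same in one step.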

    Read the article

  • Porting join from Oracle to Postgres

    - by Grasper
        INSERT INTO MISSION_OBJECTIVE(
            MSN_INT_ID, MO_INT_ID, MO_MSN_CLASS_NM, MO_MSN_CLASS_CD, MO_MSN_TYPE,
            MO_PRIORITY, MO_COMMENT, MO_START_DT, MO_END_DT, ASP_AIRSPACE_NM,
            MO_OBJ_LOCATION, MO_ALO_LEG_ID, MO_ALO_ARRIVE_LOC)
        SELECT '1025', '1', 'AIRDROP', 'ADP', 'LAPES', NULL,
               COALESCE(NULL, ' '),
               TO_TIMESTAMP('1002260900', 'YYMMDDHH24MI'),
               TO_TIMESTAMP('1002260915', 'YYMMDDHH24MI'),
               'TRANSIT ALPHA', 'TRANSIT ALPHA', '1', 'TRANSIT ALPHA'
        FROM AIRSPACE ASP, apsmain.MISSION_CLASS MC
        WHERE ASP.ASP_AIRSPACE_NM(+) = 'TRANSIT ALPHA'
          AND MC.MCS_MISSION_CLASS_NAME = 'AIRDROP'
          AND 'TRANSIT ALPHA' IS NOT NULL

    Is that exactly the same as:

        INSERT INTO MISSION_OBJECTIVE(
            MSN_INT_ID, MO_INT_ID, MO_MSN_CLASS_NM, MO_MSN_CLASS_CD, MO_MSN_TYPE,
            MO_PRIORITY, MO_COMMENT, MO_START_DT, MO_END_DT, ASP_AIRSPACE_NM,
            MO_OBJ_LOCATION, MO_ALO_LEG_ID, MO_ALO_ARRIVE_LOC)
        SELECT '1025', '1', 'AIRDROP', 'ADP', 'LAPES', NULL,
               COALESCE(NULL, ' '),
               TO_TIMESTAMP('1002260900', 'YYMMDDHH24MI'),
               TO_TIMESTAMP('1002260915', 'YYMMDDHH24MI'),
               'TRANSIT ALPHA', 'TRANSIT ALPHA', '1', 'TRANSIT ALPHA'
        FROM AIRSPACE ASP, apsmain.MISSION_CLASS MC
        WHERE ASP.ASP_AIRSPACE_NM = 'TRANSIT ALPHA'
          AND MC.MCS_MISSION_CLASS_NAME = 'AIRDROP'
          AND 'TRANSIT ALPHA' IS NOT NULL

    I just deleted the (+). The part that is confusing me is that ASP.ASP_AIRSPACE_NM is being right joined to a constant.
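
    They are not the same: in Oracle, the (+) marks AIRSPACE as the optional, null-extended side, so the MISSION_CLASS row survives even when no airspace named 'TRANSIT ALPHA' exists; deleting the marker turns it into a plain inner-join filter that can make the whole SELECT return nothing. A hedged sketch of the ANSI equivalent that Postgres accepts, reduced to the join itself:

        SELECT MC.MCS_MISSION_CLASS_NAME, ASP.ASP_AIRSPACE_NM
        FROM apsmain.MISSION_CLASS MC
        LEFT JOIN AIRSPACE ASP
               ON ASP.ASP_AIRSPACE_NM = 'TRANSIT ALPHA'  -- the (+) condition moves into ON
        WHERE MC.MCS_MISSION_CLASS_NAME = 'AIRDROP';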

    Read the article

  • PL/PGSQL function, having trouble accessing a returned result set from psycopg2...

    - by Paul
    I have this PL/pgSQL function:

        CREATE OR REPLACE FUNCTION get_result(id integer)
        RETURNS SETOF my_table AS $$
        DECLARE
            result_set my_table%ROWTYPE;
        BEGIN
            IF id = 0 THEN
                SELECT INTO result_set my_table_id, my_table_value FROM my_table;
            ELSE
                SELECT INTO result_set my_table_id, my_table_value FROM my_table WHERE my_table_id = id;
            END IF;
            RETURN;
        END;
        $$ LANGUAGE plpgsql;

    I am trying to use this with Python's psycopg2 library. Here is the Python code:

        import psycopg2 as pg

        conn = pg.connect(host='myhost', database='mydatabase', user='user', password='passwd')
        cur = conn.cursor()
        return cur.execute("SELECT * FROM get_result(0);")  # returns NoneType

    However, if I just do the regular query, I get the correct set of rows back:

        ...
        return cur.execute("SELECT my_table_id, my_table_value FROM mytable;")  # returns iterable result set

    There's obviously something wrong with my PL/pgSQL function, but I can't seem to get it right. I also tried using RETURN result_set; instead of just RETURN in the 10th line of my plpgsql function, but got an error from Postgres.
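
    Two separate things are going on here. First, cursor.execute() always returns None in psycopg2; the rows are retrieved afterwards with cur.fetchall() (or by iterating the cursor). Second, the function never emits its rows: SELECT INTO only fills the variable with the first row, and a set-returning PL/pgSQL function has to hand rows back with RETURN NEXT or RETURN QUERY (8.3+). A sketch of the function with that fix:

        CREATE OR REPLACE FUNCTION get_result(id integer)
        RETURNS SETOF my_table AS $$
        BEGIN
            IF id = 0 THEN
                RETURN QUERY SELECT * FROM my_table;
            ELSE
                RETURN QUERY SELECT * FROM my_table WHERE my_table_id = id;
            END IF;
            RETURN;
        END;
        $$ LANGUAGE plpgsql;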

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem

    72 child tables, each having a year index and a station index, are defined as follows:

        CREATE TABLE climate.measurement_12_013
        (
          -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass),
          -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL,
          -- Inherited from table climate.measurement_12_013: taken date NOT NULL,
          -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL,
          -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL,
          -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying,
          CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7),
          CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12)
        )
        INHERITS (climate.measurement);

        CREATE INDEX measurement_12_013_s_idx
          ON climate.measurement_12_013
          USING btree (station_id);

        CREATE INDEX measurement_12_013_y_idx
          ON climate.measurement_12_013
          USING btree (date_part('year'::text, taken));

    (Foreign key constraints to be added later.) The following query runs abysmally slowly due to a full table scan:

        SELECT
          count(1) AS measurements,
          avg(m.amount) AS amount
        FROM climate.measurement m
        WHERE m.station_id IN (
          SELECT s.id
          FROM climate.station s,
               climate.city c
          WHERE
            -- For one city ...
            c.id = 5182 AND
            -- Where stations are within an elevation range ...
            s.elevation BETWEEN 0 AND 3000 AND
            6371.009 * SQRT(
              POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
              (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
               POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
            ) <= 50
        ) AND
        -- Begin extracting the data from the database.
        -- The data before 1900 is shaky; insufficient after 2009.
        extract(YEAR FROM m.taken) BETWEEN 1900 AND 2009 AND
        -- Whittled down by category ...
        m.category_id = 1 AND
        m.taken BETWEEN
          -- Start date.
          (extract(YEAR FROM m.taken)||'-01-01')::date AND
          -- End date. Calculated by checking to see if the end date wraps
          -- into the next year. If it does, then add 1 to the current year.
          (cast(extract(YEAR FROM m.taken) + greatest(-1 * sign(
            (extract(YEAR FROM m.taken)||'-12-31')::date -
            (extract(YEAR FROM m.taken)||'-01-01')::date), 0
          ) AS text)||'-12-31')::date
        GROUP BY extract(YEAR FROM m.taken)

    The sluggishness comes from this part of the query:

        m.taken BETWEEN
          /* Start date. */
          (extract(YEAR FROM m.taken)||'-01-01')::date AND
          /* End date. Calculated by checking to see if the end date wraps
             into the next year. If it does, then add 1 to the current year. */
          (cast(extract(YEAR FROM m.taken) + greatest(-1 * sign(
            (extract(YEAR FROM m.taken)||'-12-31')::date -
            (extract(YEAR FROM m.taken)||'-01-01')::date), 0
          ) AS text)||'-12-31')::date

    The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables.

    Question

    What is the proper way to index the dates to avoid full table scans? Options I have considered:

    - GIN
    - GiST
    - Rewrite the WHERE clause
    - Separate year_taken, month_taken, and day_taken columns to the tables

    What are your thoughts? Thank you!
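
    Neither GIN nor GiST is aimed at this: they shine for arrays, full text, and geometric types, while a scalar date column is b-tree territory. The scan happens because both BETWEEN bounds are computed from extract(YEAR FROM m.taken), i.e. from the column itself, so no index can bracket the comparison; the date math has to move to the parameter side. A sketch of the idea with the month/day parameters folded into constants (a plain b-tree index on taken per child table is assumed):

        CREATE INDEX measurement_12_013_taken_idx
          ON climate.measurement_12_013 (taken);

        -- The bounds are now constants, so each child's taken index can bracket the scan:
        SELECT count(1) AS measurements, avg(m.amount) AS amount
        FROM climate.measurement m
        WHERE m.category_id = 1
          AND m.taken BETWEEN date '1900-01-01' AND date '2009-12-31'
        GROUP BY extract(YEAR FROM m.taken);

    Generating one BETWEEN range per year on the client (or joining against a generated series of year boundaries) preserves the per-year window logic while leaving the column side of the comparison bare.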

    Read the article
