Search Results

Search found 1922 results on 77 pages for 'postgresql contrib'.

Page 12/77

  • Side effects of reordering columns in PostgreSQL

    - by Summer
    I sometimes re-order the columns in my Postgres DB. Since Postgres can only add columns at the end of tables, I end up re-ordering by adding new columns at the end of the table, setting them equal to existing columns, and then dropping the original columns. My question is: what does PostgreSQL do with the memory that's freed by dropped columns? Does it automatically re-use the memory, so a single record consumes the same amount of space as it did beforehand? But that would require a re-write of the whole table, so to avoid that, does it just keep a bunch of blank space around in each record? Thanks! ~S
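    A minimal sketch of the reorder-by-copy workflow described above, with hypothetical names (table t, column b). On the space question: PostgreSQL only marks the column as dropped in the catalog; existing rows keep the old bytes until they are next rewritten (by later UPDATEs or a full table rewrite), and new rows simply don't store it.

        -- hypothetical table t with columns a, b, c; move b to the end of the row
        ALTER TABLE t ADD COLUMN b_new integer;
        UPDATE t SET b_new = b;                  -- copy the existing values
        ALTER TABLE t DROP COLUMN b;             -- column is only marked dropped in the catalog
        ALTER TABLE t RENAME COLUMN b_new TO b;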


  • PostgreSQL - select only when specific multiple appearance in column

    - by Horse SMith
    I'm using PostgreSQL. I have a table with 3 fields: person, recipe and ingredient. person = creator of the recipe, recipe = the recipe, ingredient = one of the ingredients in the recipe. I want to write a query that returns every person who, whenever they added carrot to a recipe, also added salt to that same recipe. More than one person can have created the recipe, in which case the person who added the ingredient is credited for adding it. Sometimes an ingredient is used more than once, even by the same person. If this is the table:

        person1, rec1, carrot
        person1, rec1, salt
        person1, rec1, salt
        person1, rec2, salt
        person1, rec2, pepper
        person2, rec1, carrot
        person2, rec1, salt
        person2, rec2, carrot
        person2, rec2, pepper
        person3, rec1, sugar
        person3, rec1, carrot

    then I want this result: person1, because that person is the only one who, whenever they added carrot, also added salt.
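    One hedged way to express "every carrot row has a matching salt row in the same recipe" is a double NOT EXISTS; the table name recipes and its columns are assumed from the description above:

        SELECT DISTINCT c.person
        FROM recipes c
        WHERE c.ingredient = 'carrot'
          AND NOT EXISTS (                      -- no carrot entry by this person ...
              SELECT 1
              FROM recipes c2
              WHERE c2.person = c.person
                AND c2.ingredient = 'carrot'
                AND NOT EXISTS (                -- ... that lacks salt in the same recipe
                    SELECT 1
                    FROM recipes s
                    WHERE s.person = c2.person
                      AND s.recipe = c2.recipe
                      AND s.ingredient = 'salt'));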


  • PostgreSQL: How to index all foreign keys?

    - by biggusjimmus
    I am working with a large PostgreSQL database, and we are trying to tune it to get more performance. Our queries and updates seem to be doing a lot of lookups using foreign keys. What I would like is a relatively simple way to add indexes to all of our foreign keys without having to go through every table (~140) and do it manually. In researching this, I've found that there is no way to have Postgres do this for you automatically (like MySQL does), but I'd be happy to hear otherwise, too.
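    A rough sketch of generating the index statements from the system catalogs (on reasonably recent versions that have format()); it only handles single-column foreign keys, does not check whether a suitable index already exists, and the output should be reviewed before being run:

        SELECT format('CREATE INDEX ON %s (%I);', c.conrelid::regclass, a.attname)
        FROM pg_constraint c
        JOIN pg_attribute a ON a.attrelid = c.conrelid
                           AND a.attnum = c.conkey[1]
        WHERE c.contype = 'f'                    -- foreign-key constraints only
          AND array_length(c.conkey, 1) = 1;     -- single-column keys only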


  • Having difficulty catching postgresql exception in sequel/ruby

    - by mhd
    I have a PostgreSQL table that has some kind of unique constraint. I have a Ruby script that will update this table. The script should switch to UPDATE instead of INSERT when this kind of error occurs:

        PGError: ERROR: duplicate key value violates unique constraint

    Correct me if I'm wrong, but Sequel seems unable to catch this exception? Anyway, the sample code below is something that I would like to see work, but apparently it does not:

        @myarray.each do |x|
          fresh_ds = DB["INSERT INTO mytable (id, col_foo) values ('#{x}', '#{myhash[x]}')"]
          result = fresh_ds.insert
          catch Sequel::Error do
            fresh_ds = DB["UPDATE mytable set col_foo = '#{myhash[x]}' where id = #{x}"]
            result = fresh_ds.update
          end
        end

    Maybe my Ruby code is wrong or I missed something, I don't know. Any solution? Thanks.
    UPDATE: the code below works; the error is caught when the unique constraint is violated.

        @myarray.each do |x|
          begin
            # INSERT CODE
          rescue Sequel::Error
            # UPDATE CODE
          end
        end
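    As a side note, on PostgreSQL 9.5 and later the insert-or-update can be pushed into a single SQL statement, so nothing needs to be rescued in Ruby at all; this sketch assumes id is the column the unique constraint is on:

        INSERT INTO mytable (id, col_foo)
        VALUES (1, 'some value')
        ON CONFLICT (id) DO UPDATE SET col_foo = EXCLUDED.col_foo;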


  • Rollback to a specific moment with PostgreSQL

    - by mada54
    Hi, is there a way to roll back to a specific starting point? I'm looking for something like this:

        Start specific_point;

    After this, another application connected with the SAME login will insert and delete data (web services with CRUD operations) for about 2 minutes, running tests. Each web-service call is declared as a transaction with Spring WS. After that I want to roll back to specific_point to have a clean database in a known previous state. I was thinking that ROLLBACK TO SAVEPOINT foo; was the solution, but unfortunately it is not? Any idea? Configuration: PostgreSQL 8.4 / Windows XP. Regards
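    For context, a savepoint only lives inside a single open transaction in one session, which is why it cannot undo work that the other application's calls have already committed; a minimal sketch of where ROLLBACK TO SAVEPOINT does apply:

        BEGIN;
        SAVEPOINT specific_point;
        -- ... work done in this same session and transaction ...
        ROLLBACK TO SAVEPOINT specific_point;   -- undoes only the work since the savepoint
        COMMIT;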


  • postgresql count

    - by dars
    Can this be done in PGSQL? I have a view I created where hostname, ip, and datacenter come from one table, and ifdesc and the interface status columns come from another table. The view output looks like this:

        hostname | ip      | datacenter | ifdesc             | ifadminstat | ifoperstat
        ---------+---------+------------+--------------------+-------------+-----------
        r1       | 1.1.1.1 | dc         | GigabitEthernet1/1 | 2           | 1
        r1       | 1.1.1.1 | dc         | GigabitEthernet1/2 | 2           | 2
        r1       | 1.1.1.1 | dc         | GigabitEthernet1/3 | 2           | 2
        r1       | 1.1.1.1 | dc         | GigabitEthernet1/4 | 2           | 1
        r1       | 1.1.1.1 | dc         | GigabitEthernet2/1 | 2           | 2
        r1       | 1.1.1.1 | dc         | GigabitEthernet2/2 | 2           | 2
        r2       | 2.2.2.2 | dc         | GigabitEthernet1/1 | 2           | 2
        r2       | 2.2.2.2 | dc         | GigabitEthernet1/2 | 2           | 2

    I need a count of "ifadminstat = 2" and "ifoperstat = 2" for all interfaces on each blade, for each router (for example, for r1: how many interfaces on blade 1 (GigabitEthernet1/1-48) have ifadminstat = 2 and ifoperstat = 2). I am trying to do the counting in PostgreSQL and then present the results on a website using PHP.
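    A hedged sketch of one way to do the counting, parsing the blade number out of ifdesc; the view name ifstatus is a placeholder for whatever the view above is actually called:

        SELECT hostname,
               substring(ifdesc FROM '[0-9]+') AS blade,
               sum(CASE WHEN ifadminstat = 2 THEN 1 ELSE 0 END)                    AS admin_2,
               sum(CASE WHEN ifadminstat = 2 AND ifoperstat = 2 THEN 1 ELSE 0 END) AS admin_and_oper_2
        FROM ifstatus
        GROUP BY hostname, substring(ifdesc FROM '[0-9]+')
        ORDER BY hostname, blade;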


  • Postgresql sequences

    - by Dylan
    When I delete all records from a PostgreSQL table and then try to reset the sequence so that a newly inserted record gets number 1, I get different results:

        SELECT setval('tblname_id_seq', (SELECT COALESCE(MAX(id),1) FROM tblname));

    This sets the current value of the sequence to 1, but the NEXT record (actually the first, because there are no records yet) gets number 2! And I can't set it to 0, because the minimum value of the sequence is 1. When I use:

        ALTER SEQUENCE tblname_id_seq RESTART WITH 1;

    the first record that is inserted actually gets number 1! But that statement doesn't accept a SELECT as the value instead of 1. I wish to reset the sequence to 1 when there are no records, so that the first record starts with 1; but when there ARE already records in the table, I want to reset the sequence so that the next inserted record gets {highest}+1. Does anyone have a clean solution for this?
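    One hedged way to cover both cases in a single statement is the three-argument form of setval(); its third argument, is_called, controls whether the next nextval() returns the value itself or the value plus one (names follow the question):

        SELECT setval('tblname_id_seq',
                      COALESCE((SELECT max(id) FROM tblname), 1),
                      EXISTS (SELECT 1 FROM tblname));
        -- empty table  -> setval(seq, 1, false): the next nextval() returns 1
        -- rows present -> setval(seq, max, true): the next nextval() returns max + 1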


  • move data from one table to another, postgresql edition

    - by IggShaman
    Hi all, I'd like to move some data from one table to another (with a possibly different schema). The straightforward solution that comes to mind is:

        -- start a transaction with serializable isolation level
        INSERT INTO dest_table
            SELECT data FROM orig_table, other-tables WHERE <condition>;
        DELETE FROM orig_table USING other-tables WHERE <condition>;
        COMMIT;

    Now what if the amount of data is rather big, and the <condition> is expensive to compute? In PostgreSQL, a RULE or a stored procedure can be used to delete the data on the fly, evaluating the condition only once. Which solution is better? Are there other options?
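    On PostgreSQL 9.1 or later, a data-modifying CTE is another option: the condition is evaluated once and the delete and insert happen in one statement (all names below are placeholders standing in for the pseudo-code above):

        WITH moved AS (
            DELETE FROM orig_table o
            USING other_tables t
            WHERE o.key = t.key          -- the expensive <condition>
            RETURNING o.*
        )
        INSERT INTO dest_table
        SELECT * FROM moved;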


  • Insert data into table efficiently, postgresql

    - by Rowan_Gaffney
    I am new to PostgreSQL (and databases in general) and was hoping to get some pointers on improving the efficiency of the following statement. I am inserting data from one table into another and do not want to insert duplicate values. I have a rid (a unique identifier in each table) that is indexed and is the primary key. I am currently using the following statement:

        INSERT INTO table1
        SELECT * FROM table2
        WHERE rid NOT IN (SELECT rid FROM table1);

    As of now table1 has 200,000 records and table2 has 20,000 records. table1 is going to keep growing (probably to around 2,000,000) while table2 will stay at around 20,000 records. The statement currently takes about 15 minutes to run, and I am concerned that as table1 grows it is going to take far too long. Any suggestions?
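    A hedged rewrite of the same insert as an anti-join, which the planner usually handles much better than NOT IN on a large table (and which also behaves sanely if rid can ever be NULL); names are taken from the question:

        INSERT INTO table1
        SELECT t2.*
        FROM table2 t2
        WHERE NOT EXISTS (
            SELECT 1 FROM table1 t1 WHERE t1.rid = t2.rid
        );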


  • How to apply an update after an insert or update with a PostgreSQL trigger

    - by user3718906
    How do I apply an update after an insert or update in PostgreSQL? I have a table with a field lastupdate; I want that field to be set whenever the row is updated or inserted. I tried this trigger function, but it is not working! HELP!!

        CREATE OR REPLACE FUNCTION fn_update_profile() RETURNS TRIGGER AS $update_profile$
        BEGIN
            IF (TG_OP = 'INSERT' OR TG_OP = 'UPDATE') THEN
                UPDATE profile SET lastupdate = now() WHERE oid = OLD.oid;
                RETURN NULL;
            ELSEIF (TG_OP = 'DELETE') THEN
                RETURN NULL;
            END IF;
            RETURN NULL; -- result is ignored since this is an AFTER trigger
        END;
        $update_profile$ LANGUAGE plpgsql;
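    A hedged sketch of the more usual pattern: a BEFORE trigger that sets the column on NEW directly, so no separate UPDATE (which cannot see the row being inserted via OLD, and risks re-firing the trigger) is needed; the function and trigger names are illustrative:

        CREATE OR REPLACE FUNCTION fn_set_lastupdate() RETURNS trigger AS $$
        BEGIN
            NEW.lastupdate := now();
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER trg_set_lastupdate
            BEFORE INSERT OR UPDATE ON profile
            FOR EACH ROW EXECUTE PROCEDURE fn_set_lastupdate();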


  • Windows Server 2008 can't start postgresql-x64-9.0 service: could not create any TCP/IP sockets

    - by Rob
    After rebooting a Windows Server 2008 machine to apply system updates, we recently began having issues running PostgreSQL 9.0. When we noticed the problem we reverted the Windows updates, but the issue persists. From services.msc, attempting to start the postgresql-x64-9.0 service fails: halfway through, the progress bar becomes very slow and eventually responds with error 1053, "the service did not respond in a timely fashion." Interestingly enough, the task manager shows that multiple instances of postgres.exe have been started, and the log file shows:

        2011-02-10 14:44:02 ESTLOG: database system is ready to accept connections

    I then tried killing the processes and starting via the command line (as the user postgres), but I receive a different error:

        C:/Program Files/PostgreSQL/9.0/bin/pg_ctl.exe start -N "postgresql-x64-9.0" -D "F:/SHARE/postgres" -w
        waiting for server to start...............................................................
        pg_ctl: could not start server
        ESTWARNING: could not create listen socket for "192.168.0.101"
        ESTFATAL: could not create any TCP/IP sockets

    The log file again indicates that the database is ready to accept connections. Also, netstat indicates that no other process is using port 5432; I can't think of any other obvious reason that opening the listen socket might fail. Any help would be greatly appreciated.
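    One hedged guess, going only by the failed bind on 192.168.0.101: postgresql.conf may pin listen_addresses to an address that is no longer assigned to any interface after the reboot, in which case relaxing the setting (or restoring the address) lets the listen socket be created:

        # postgresql.conf (illustrative)
        listen_addresses = '*'        # was presumably '192.168.0.101'
        port = 5432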


  • Using clojure.contrib functions in slime REPL

    - by Tyler
    I want to use the functions in the clojure.contrib.trace namespace in SLIME at the REPL. How can I get SLIME to load them automatically? A related question: how can I add a specific namespace into a running REPL? The clojure.contrib API describes usage like this:

        (ns my-namespace
          (:require clojure.contrib.trace))

    But adding this to my code results in the file being unable to load, with an "Unable to resolve symbol" error for any function from the trace package. I use Leiningen ('lein swank') to start the ServerSocket, and the project.clj file looks like this:

        (defproject test-project "0.1.0"
          :description "Connect 4 Agent written in Clojure"
          :dependencies [[org.clojure/clojure "1.2.0-master-SNAPSHOT"]
                         [org.clojure/clojure-contrib "1.2.0-SNAPSHOT"]]
          :dev-dependencies [[leiningen/lein-swank "1.2.0-SNAPSHOT"]
                             [swank-clojure "1.2.0"]])

    Everything seems up to date, i.e. 'lein deps' doesn't produce any changes. So what's up?


  • consistency of Trigger Procedure (before row trigger) Postgresql

    - by elgcom
    Using PostgreSQL, I am trying to use a TRIGGER procedure to do a consistency check on INSERT. The question is whether a BEFORE INSERT FOR EACH ROW trigger ensures that each row to be inserted is checked and inserted one after another (check new row 1, insert row 1, check new row 2, insert row 2, ...), or whether I need an extra lock on the table to survive concurrent inserts.

        -- an unexpired product name must be unique
        CREATE TABLE product (
            "name"    VARCHAR(100) NOT NULL,
            "expired" BOOLEAN NOT NULL
        );

        CREATE OR REPLACE FUNCTION check_consistency() RETURNS TRIGGER AS $$
        BEGIN
            IF EXISTS (SELECT * FROM product WHERE name = NEW.name AND expired = 'false') THEN
                RAISE EXCEPTION 'duplicated!!!';
            END IF;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER trigger_check_consistency
            BEFORE INSERT ON product
            FOR EACH ROW EXECUTE PROCEDURE check_consistency();

        INSERT INTO product VALUES ('prod1', true);
        INSERT INTO product VALUES ('prod1', false);
        INSERT INTO product VALUES ('prod1', false);  -- exception!

    This is OK:

        name | expired
        -----+--------
        p1   | true
        p1   | true
        p1   | false

    This is not OK:

        name | expired
        -----+--------
        p1   | true
        p1   | false
        p1   | false

    Or maybe I should ask: how can I use a trigger to implement a "primary key"- or "unique"-like constraint in SQL?
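    For the constraint itself there is a declarative alternative that needs no trigger or explicit locking: a partial unique index, which enforces "at most one unexpired row per name" even under concurrent inserts (the index name is illustrative):

        CREATE UNIQUE INDEX product_name_unexpired_uniq
            ON product (name)
            WHERE expired = false;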


  • Adding Table Columns to a Group by clause - Ruby on Rails - Postgresql

    - by bgadoci
    I am trying to use Heroku, and apparently PostgreSQL is a lot stricter about aggregate functions than what I was using before. When I push to Heroku I get an error stating the below. On another question I asked, I received some guidance saying I should just add the columns to my GROUP BY clause, and I am not sure how to do that. See the full error and the PostsController#index below.

        SELECT posts.*, count(*) as vote_total FROM "posts"
        INNER JOIN "votes" ON votes.post_id = posts.id
        GROUP BY votes.post_id
        ORDER BY created_at DESC LIMIT 5 OFFSET 0

    PostsController:

        def index
          @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 20)
          conditions, joins = {}, :votes
          @ugtag_counts = Ugtag.count(:group => :ugctag_name, :order => 'count_all DESC', :limit => 20)
          conditions, joins = {}, :votes
          @vote_counts = Vote.count(:group => :post_title, :order => 'count_all DESC', :limit => 20)
          conditions, joins = {}, :votes
          unless (params[:tag_name] || "").empty?
            conditions = ["tags.tag_name = ? ", params[:tag_name]]
            joins = [:tags, :votes]
          end
          @posts = Post.paginate(
            :select => "posts.*, count(*) as vote_total",
            :joins => joins,
            :conditions => conditions,
            :group => "votes.post_id",
            :order => "created_at DESC",
            :page => params[:page],
            :per_page => 5)
          @popular_posts = Post.paginate(
            :select => "posts.*, count(*) as vote_total",
            :joins => joins,
            :conditions => conditions,
            :group => "votes.post_id",
            :order => "vote_total DESC",
            :page => params[:page],
            :per_page => 3)
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end
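    As a hedged illustration of the fix in plain SQL: grouping by the posts primary key satisfies PostgreSQL on 9.1 and later (where the other posts columns are functionally dependent on it), while older versions require every selected posts column to be listed in GROUP BY; in the Rails code the :group option would point at posts.id rather than votes.post_id:

        SELECT posts.*, count(*) AS vote_total
        FROM posts
        INNER JOIN votes ON votes.post_id = posts.id
        GROUP BY posts.id
        ORDER BY posts.created_at DESC
        LIMIT 5 OFFSET 0;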


  • Porting Oracle Procedure to PostgreSQL

    - by Grasper
    I am porting an Oracle function into Postgres PL/pgSQL. I have been using this guide: http://www.postgresql.org/docs/8.1/static/plpgsql.html

        CREATE OR REPLACE PROCEDURE DATA_UPDATE (mission NUMBER, task NUMBER)
        AS
        BEGIN
          IF mission IS NOT NULL THEN
            UPDATE MISSION_OBJECTIVE MO
            SET (MO.MO_TKR_TOTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) =
                (SELECT NVL(SUM(RR.TRQ_FUEL_OFFLOAD),0), NVL(SUM(RR.TRQ_NUMBER_RECEIVERS),0)
                 FROM REFUELING_REQUEST RR, MISSION_REQUEST_PAIRING MRP
                 WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                   AND MO.MO_INT_ID = MRP.MO_INT_ID
                   AND MRP.REQ_INT_ID = RR.REQ_INT_ID)
            WHERE MO.MSN_INT_ID = mission
              AND MO.MO_INT_ID = task;
          END IF;
          COMMIT;
        END;

    I've got it this far:

        CREATE OR REPLACE FUNCTION DATA_UPDATE (NUMERIC, NUMERIC) RETURNS integer AS '
        DECLARE
          mission ALIAS FOR $1;
          task    ALIAS FOR $2;
        BEGIN
          IF mission IS NOT NULL THEN
            UPDATE MISSION_OBJECTIVE MO
            SET (MO.MO_TKR_TOTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) =
                (SELECT COALESCE(SUM(RR.TRQ_FUEL_OFFLOAD),0), COALESCE(SUM(RR.TRQ_NUMBER_RECEIVERS),0)
                 FROM REFUELING_REQUEST RR, MISSION_REQUEST_PAIRING MRP
                 WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                   AND MO.MO_INT_ID = MRP.MO_INT_ID
                   AND MRP.REQ_INT_ID = RR.REQ_INT_ID)
            WHERE MO.MSN_INT_ID = mission
              AND MO.MO_INT_ID = task;
          END IF;
          COMMIT;
        END;
        ' LANGUAGE plpgsql;

    This is the error I get:

        ERROR: syntax error at or near "SELECT"
        LINE 1: ...OTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) = (SELECT COA...

    I do not know why this isn't working... any ideas?
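    Two hedged observations: PostgreSQL versions of that era do not accept Oracle's multi-column SET (a, b) = (SELECT ...) form (each column needs its own assignment, or the row-valued form on 9.5+), and COMMIT cannot appear inside a plpgsql function. A sketch of the UPDATE rewritten with per-column subqueries, keeping the original behavior of zeroing the columns when no requests match:

        UPDATE MISSION_OBJECTIVE MO
        SET MO_TKR_TOTAL_OFF_SCHEDULED =
              COALESCE((SELECT SUM(RR.TRQ_FUEL_OFFLOAD)
                        FROM REFUELING_REQUEST RR
                        JOIN MISSION_REQUEST_PAIRING MRP ON MRP.REQ_INT_ID = RR.REQ_INT_ID
                        WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                          AND MO.MO_INT_ID = MRP.MO_INT_ID), 0),
            MO_TKR_TOTAL_RECEIVERS =
              COALESCE((SELECT SUM(RR.TRQ_NUMBER_RECEIVERS)
                        FROM REFUELING_REQUEST RR
                        JOIN MISSION_REQUEST_PAIRING MRP ON MRP.REQ_INT_ID = RR.REQ_INT_ID
                        WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                          AND MO.MO_INT_ID = MRP.MO_INT_ID), 0)
        WHERE MO.MSN_INT_ID = mission
          AND MO.MO_INT_ID = task;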


  • PostgreSQL - Why are some queries on large datasets so incredibly slow

    - by Brad Mathews
    Hello, I have two types of queries I run often on two large datasets. They run much slower than I would expect them to. The first type is a sequential scan updating all records:

        UPDATE rcra_sites SET street = regexp_replace(street, '/', '', 'i');

    rcra_sites has 700,000 records. It takes 22 minutes from pgAdmin! I wrote a VB.NET function that loops through each record and sends an update query for each record (yes, 700,000 update queries!) and it runs in less than half the time. Hmmm.... The second type is a simple update with a join and then a sequential scan:

        UPDATE rcra_sites AS sites
        SET violations = 'No'
        FROM narcra_monitoring AS v
        WHERE sites.agencyid = v.agencyid
          AND v.found_violation_flag = 'N';

    narcra_monitoring has 1,700,000 records. This takes 8 minutes. The query planner refuses to use my indexes. The query runs much faster if I start with set enable_seqscan = false;. I would prefer the query planner to do its job. I have appropriate indexes, I have vacuumed and analyzed. I optimized shared_buffers and effective_cache_size as best I know how, to use more memory, since I have 4GB. My hardware is pretty darn good. I am running v8.4 on Windows 7. Is PostgreSQL just this slow? Or am I still missing something? Thanks! Brad
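    As a hedged diagnostic step, comparing the planner's row estimates with what actually happens usually shows whether this is a statistics or cost-settings problem rather than raw speed; note that EXPLAIN ANALYZE really executes the statement, so it should be wrapped in a transaction and rolled back:

        BEGIN;
        EXPLAIN ANALYZE
        UPDATE rcra_sites AS sites
        SET violations = 'No'
        FROM narcra_monitoring AS v
        WHERE sites.agencyid = v.agencyid
          AND v.found_violation_flag = 'N';
        ROLLBACK;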


  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite that runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for each of which we will store the following information:

        - test ID (a GUID)
        - test name
        - test description
        - status (running, done, waiting to be run)
        - progress (%)
        - start time of test
        - end time of test
        - test result
        - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand), and each machine (say, 50 of them) has a service which checks the database and figures out when it's time to start a new automated test on that machine. How should I organize my SQL tables to store all this information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and recreate the table with more columns), how should I proceed? Should the new attributes just go in a different table? I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!
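    A minimal sketch of one possible layout, with all names illustrative; the screenshot is split into its own table precisely so it can be excluded from (or handled differently by) replication:

        CREATE TABLE test_run (
            test_id     uuid PRIMARY KEY,
            name        text NOT NULL,
            description text,
            status      text NOT NULL CHECK (status IN ('waiting', 'running', 'done')),
            progress    integer CHECK (progress BETWEEN 0 AND 100),
            started_at  timestamptz,
            ended_at    timestamptz,
            result      text
        );

        CREATE TABLE test_run_screenshot (
            test_id    uuid PRIMARY KEY REFERENCES test_run (test_id),
            screenshot bytea,
            updated_at timestamptz NOT NULL DEFAULT now()
        );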


  • correct way to create a pivot table in postgresql using CASE WHEN

    - by mojones
    I am trying to create a pivot-table-type view in PostgreSQL and am nearly there! Here is the basic query:

        SELECT acc2tax_node.acc, tax_node.name, tax_node.rank
        FROM tax_node, acc2tax_node
        WHERE tax_node.taxid = acc2tax_node.taxid
          AND acc2tax_node.acc = 'AJ012531';

    And the data:

        acc      | name                    | rank
        ---------+-------------------------+--------------
        AJ012531 | Paromalostomum fusculum | species
        AJ012531 | Paromalostomum          | genus
        AJ012531 | Macrostomidae           | family
        AJ012531 | Macrostomida            | order
        AJ012531 | Macrostomorpha          | no rank
        AJ012531 | Turbellaria             | class
        AJ012531 | Platyhelminthes         | phylum
        AJ012531 | Acoelomata              | no rank
        AJ012531 | Bilateria               | no rank
        AJ012531 | Eumetazoa               | no rank
        AJ012531 | Metazoa                 | kingdom
        AJ012531 | Fungi/Metazoa group     | no rank
        AJ012531 | Eukaryota               | superkingdom
        AJ012531 | cellular organisms      | no rank

    What I am trying to get is the following:

        acc      | species                 | phylum
        ---------+-------------------------+----------------
        AJ012531 | Paromalostomum fusculum | Platyhelminthes

    I am trying to do this with CASE WHEN, so I've got as far as the following:

        SELECT acc2tax_node.acc,
               CASE tax_node.rank WHEN 'species' THEN tax_node.name ELSE NULL END AS species,
               CASE tax_node.rank WHEN 'phylum'  THEN tax_node.name ELSE NULL END AS phylum
        FROM tax_node, acc2tax_node
        WHERE tax_node.taxid = acc2tax_node.taxid
          AND acc2tax_node.acc = 'AJ012531';

    Which gives me the output:

        acc      | species                 | phylum
        ---------+-------------------------+-----------------
        AJ012531 | Paromalostomum fusculum |
        AJ012531 |                         |
        AJ012531 |                         |
        AJ012531 |                         |
        AJ012531 |                         |
        AJ012531 |                         |
        AJ012531 |                         | Platyhelminthes
        AJ012531 |                         |
        AJ012531 |                         |
        AJ012531 |                         |
        AJ012531 |                         |
        AJ012531 |                         |
        AJ012531 |                         |
        AJ012531 |                         |

    Now I know that I have to group by acc at some point, so I try:

        SELECT acc2tax_node.acc,
               CASE tax_node.rank WHEN 'species' THEN tax_node.name ELSE NULL END AS sp,
               CASE tax_node.rank WHEN 'phylum'  THEN tax_node.name ELSE NULL END AS ph
        FROM tax_node, acc2tax_node
        WHERE tax_node.taxid = acc2tax_node.taxid
          AND acc2tax_node.acc = 'AJ012531'
        GROUP BY acc2tax_node.acc;

    But I get the dreaded:

        ERROR: column "tax_node.rank" must appear in the GROUP BY clause or be used in an aggregate function

    All the previous examples I've been able to find use something like SUM() around the CASE statements, so I guess that is the aggregate function. I have tried using FIRST():

        SELECT acc2tax_node.acc,
               FIRST(CASE tax_node.rank WHEN 'species' THEN tax_node.name ELSE NULL END) AS sp,
               FIRST(CASE tax_node.rank WHEN 'phylum'  THEN tax_node.name ELSE NULL END) AS ph
        FROM tax_node, acc2tax_node
        WHERE tax_node.taxid = acc2tax_node.taxid
          AND acc2tax_node.acc = 'AJ012531'
        GROUP BY acc2tax_node.acc;

    but get the error:

        ERROR: function first(character varying) does not exist

    Can anyone offer any hints?
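    One hedged fix: use max() (or min()) as the aggregate around each CASE expression; the non-matching branches are NULL, which the aggregate ignores, so each group keeps the single non-NULL name per rank:

        SELECT acc2tax_node.acc,
               max(CASE tax_node.rank WHEN 'species' THEN tax_node.name END) AS species,
               max(CASE tax_node.rank WHEN 'phylum'  THEN tax_node.name END) AS phylum
        FROM tax_node, acc2tax_node
        WHERE tax_node.taxid = acc2tax_node.taxid
          AND acc2tax_node.acc = 'AJ012531'
        GROUP BY acc2tax_node.acc;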


  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example). When you set a value, I'd like it to insert if the key is not there, otherwise update. Sadly Postgres does not have an insert-or-update statement, so I have to emulate it myself. I've been working with the idea of basically SELECTing whether the key exists, and then running the appropriate INSERT or UPDATE. Clearly this needs to be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to - I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation:

        a, b: => set transaction isolation level serializable;
        a:    => select count(1) from table where id=1;   --> 0
        b:    => select count(1) from table where id=1;   --> 0
        a:    => insert into table values(1);             --> 1
        b:    => insert into table values(1);             --> ERROR: duplicate key value violates unique constraint "serial_test_pkey"

    Now I would expect it to throw the usual "couldn't commit due to concurrent update", but I'm guessing that since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
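    A hedged side note: on PostgreSQL 9.5 and later this collapses into one statement that is safe under concurrent sessions; the table name serial_test is inferred from the constraint name in the error, and the value column is a hypothetical stand-in for the payload:

        INSERT INTO serial_test (id, value)        -- "value" is a hypothetical payload column
        VALUES (1, 'something')
        ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value;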


  • PostgreSQL - Error: SQL state: XX000.

    - by rob
    I have a table in Postgres that looks like this:

        CREATE TABLE "Population" (
            "Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass),
            "Name" character varying(255) NOT NULL,
            "Description" character varying(1024),
            "IsVisible" boolean NOT NULL,
            CONSTRAINT "pk_Population" PRIMARY KEY ("Id")
        ) WITH (OIDS=FALSE);

    And a select function that looks like this:

        CREATE OR REPLACE FUNCTION "Population_SelectAll"()
        RETURNS SETOF "Population" AS
        $BODY$
            SELECT "Id", "Name", "Description", "IsVisible" FROM "Population";
        $BODY$
        LANGUAGE 'sql' STABLE COST 100;

    Calling the select function returns all the rows in the table as expected. I need to add a couple of columns to the table (both of which are foreign keys to other tables in the database). This gives me a new table definition as follows:

        CREATE TABLE "Population" (
            "Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass),
            "Name" character varying(255) NOT NULL,
            "Description" character varying(1024),
            "IsVisible" boolean NOT NULL,
            "DefaultSpeciesId" bigint NOT NULL,
            "DefaultEcotypeId" bigint NOT NULL,
            CONSTRAINT "pk_Population" PRIMARY KEY ("Id"),
            CONSTRAINT "fk_Population_DefaultEcotypeId" FOREIGN KEY ("DefaultEcotypeId")
                REFERENCES "Ecotype" ("Id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION,
            CONSTRAINT "fk_Population_DefaultSpeciesId" FOREIGN KEY ("DefaultSpeciesId")
                REFERENCES "Species" ("Id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION
        ) WITH (OIDS=FALSE);

    and function:

        CREATE OR REPLACE FUNCTION "Population_SelectAll"()
        RETURNS SETOF "Population" AS
        $BODY$
            SELECT "Id", "Name", "Description", "IsVisible", "DefaultSpeciesId", "DefaultEcotypeId"
            FROM "Population";
        $BODY$
        LANGUAGE 'sql' STABLE COST 100 ROWS 1000;

    Calling the function after these changes results in the following error message:

        ERROR: could not find attribute 11 in subquery targetlist
        SQL state: XX000

    What is causing this error and how do I fix it? I have tried to drop and recreate the columns and the function, but the same error occurs. The platform is PostgreSQL 8.4 running on Windows Server. Thanks.
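    A hedged diagnostic: the reference to attribute 11 in a six-column table suggests dropped-column slots left behind by the repeated column drops and re-adds; listing pg_attribute for the table shows those hidden slots (attisdropped = true), which can help narrow down what the function's cached row type is still pointing at:

        SELECT attnum, attname, attisdropped
        FROM pg_attribute
        WHERE attrelid = '"Population"'::regclass
          AND attnum > 0
        ORDER BY attnum;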


  • Extract data from PostgreSQL DB without using pg_dump

    - by John Horton
    There is a PostgreSQL database on which I only have limited access (e.g., I can't use pg_dump). I am trying to create a local "mirror" by exporting certain tables from the database. I do not have the permissions needed to just dump a table as SQL from within psql. Right now, I just have a Python script that iterates through my table_names, selects all fields and then exports them as a CSV:

        for table_name, file_name in zip(table_names, file_names):
            cmd = """echo "\\\copy (select * from %s)" to stdout WITH CSV HEADER | psql -d remote_db | gzip > ./%s/%s.gz""" % (table_name, dir_name, file_name)
            os.system(cmd)

    I would like to avoid CSV if possible, as I lose the field types and the encoding can get messed up. First best would probably be some way of getting the SQL that generates the table, using \copy. Next best would be XML, ideally with some way of preserving the field types. If that doesn't work, I think the final option might be two queries - one to get the field data types, the other to get the actual data. Any thoughts or advice would be greatly appreciated - thanks!
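    For the two-query fallback, the column names and types are available to any user who can read the table, via an ordinary SELECT against information_schema (the table name below is a placeholder):

        SELECT column_name, data_type, is_nullable
        FROM information_schema.columns
        WHERE table_name = 'some_table'        -- placeholder
        ORDER BY ordinal_position;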


  • Creating a function in Postgresql that does not return composite values

    - by celenius
    I'm learning how to write functions in PostgreSQL. I've defined a function called _tmp_myfunction() which takes in an id and returns a table (I also define a table object type called _tmp_mytable):

        -- create object type to be returned
        CREATE TYPE _tmp_mytable AS (
            id   integer,
            cost double precision
        );

        -- create function which returns query
        CREATE OR REPLACE FUNCTION _tmp_myfunction(id integer)
        RETURNS SETOF _tmp_mytable AS $$
        BEGIN
            RETURN QUERY
            SELECT id, cost
            FROM sales
            WHERE id = sales.id;
        END;
        $$ LANGUAGE plpgsql;

    This works fine when I use one id and call it using the following approach:

        SELECT * FROM _tmp_myfunction(402);

    What I would like to be able to do is call it using a column of values instead of just one value. However, if I use the following approach, I end up with all values in one column, separated by commas:

        -- call function using all values in a column
        SELECT _tmp_myfunction(t.id) FROM transactions AS t;

    I understand that I get the same result if I use SELECT _tmp_myfunction(402); instead of SELECT * FROM _tmp_myfunction(402); but I don't know how to construct my query in such a way that I can separate out the results.
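    Two hedged ways to expand the composite result back into separate columns; the first works on older releases, the second (LATERAL) needs PostgreSQL 9.3 or later:

        -- older releases: expand the composite with (...).*
        SELECT (_tmp_myfunction(t.id)).*
        FROM transactions AS t;

        -- 9.3+: a LATERAL call keeps the function in the FROM clause
        SELECT f.*
        FROM transactions AS t,
             LATERAL _tmp_myfunction(t.id) AS f;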


  • PostgreSQL storing paths for reference in scripts

    - by Brian D.
    I'm trying to find the appropriate place to store a system path in PostgreSQL. What I'm trying to do is load values into a table using the COPY command. However, since I will be referring to the same file path regularly, I want to store that path in one place. I've tried creating a function to return the appropriate path, but I get a syntax error when I call the function in the COPY command. I'm not sure if this is the right way to go about it, but I'll post my code anyway.

    COPY command:

        COPY employee_scheduler.countries (code, name)
        FROM get_csv_path('countries.csv')
        WITH CSV;

    Function definition:

        CREATE OR REPLACE FUNCTION employee_scheduler.get_csv_path(IN file_name VARCHAR(50))
        RETURNS VARCHAR(250) AS $$
        DECLARE
            path      VARCHAR(200) := E'C:\\Brian\\Work\\employee_scheduler\\database\\csv\\';
            file_path VARCHAR(250) := '';
        BEGIN
            file_path := path || file_name;
            RETURN file_path;
        END;
        $$ LANGUAGE plpgsql;

    If anyone has a different idea on how to accomplish this I'm open to suggestions. Thanks for any help!
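    The likely sticking point is that COPY only accepts a literal file name, not an expression. One hedged workaround is to build the statement as a string and run it with EXECUTE inside a helper function (load_csv below is a hypothetical name):

        CREATE OR REPLACE FUNCTION employee_scheduler.load_csv(target text, file_name text)
        RETURNS void AS $$
        BEGIN
            EXECUTE 'COPY ' || target
                 || ' FROM ' || quote_literal(employee_scheduler.get_csv_path(file_name))
                 || ' WITH CSV';
        END;
        $$ LANGUAGE plpgsql;

        -- e.g. SELECT employee_scheduler.load_csv('employee_scheduler.countries (code, name)', 'countries.csv');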


  • Postgresql Output column from another table

    - by muffin
    I'm using PostgreSQL. My question is specifically about querying from a table that is referenced by another table, and I'm really having trouble with this one; in fact, I'm absolutely mentally blocked. I'll try to define the relations of the tables as well as I can. I have an entry table. Each entry has a group_id; when entries are 'advanced' to the next stage, the old entries are marked is_active = false and a new assignment is made, so C & D are advanced stages of A & B. I have another table, storage_log, which acts as a record keeper and which the entry's storage_log_id refers to. But then I have another table, storage, which is where I find out where the entries are actually stored and what the storage label is. To define my problem properly: each entry has a storage_log_id (but some don't have one yet), and a storage_log row has a storage_id that refers to the actual storage table and its label. The SQL query I'm trying to write should show the actual storage label instead of the log id. This is what I've done so far:

        SELECT e.id, e.group_id, e.name, e.stage, s.label
        FROM operational.entry e, operational.storage_log sl, operational.storage s
        WHERE e.storage_log_id = sl.id
          AND sl.storage_id = s.id

    But this just returns 3 rows, showing only the entries that have storage_log_id set; I should be able to see even those without logs, and especially the active ones. Adding e.is_active = true to the condition makes the result empty. So, yeah, I'm stuck. Need help, thanks guys!
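    A hedged sketch using outer joins, so entries without a storage log (and therefore without a storage row) still appear, with the label shown as NULL; the is_active filter stays on the entry table:

        SELECT e.id, e.group_id, e.name, e.stage, s.label
        FROM operational.entry e
        LEFT JOIN operational.storage_log sl ON e.storage_log_id = sl.id
        LEFT JOIN operational.storage s      ON sl.storage_id = s.id
        WHERE e.is_active = true;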


  • How do I set a postgresql password in pgpass.conf for the Administrator account on Windows Server 2008?

    - by brad
    I have a pgpass.conf file that works well for my default user. It is in C:/Users/myuser/AppData/Roaming/postgresql/pgpass.conf and reads like so:

        localhost:5432:*:postgres:password1

    I have a process that runs under the Administrator account. When I run whoami under this process I get nt authority/system. I want to be able to access the database from this process, but it gets stuck because it needs a password. I have tried putting the above pgpass.conf into C:/Users/Administrator/AppData/postgresql/pgpass.conf and C:/Users/Administrator/AppData/Roaming/postgresql/pgpass.conf, but it does not work. Is this the correct place for the file? Am I even able to do this as the Administrator? Unfortunately I cannot change the user that this process runs under.

