Search Results

Search found 1505 results on 61 pages for 'postgresql 9 2'.

Page 15/61 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • PostgreSQL triggers and passing parameters

    - by iandouglas
    This is a multi-part question. I have a table similar to this:

        CREATE TABLE sales_data (
            Company character(50),
            Contract character(50),
            top_revenue_sum integer,
            top_revenue_sales integer,
            last_sale timestamp
        );

    I'd like to create a trigger for new inserts into this table, something like this:

        CREATE OR REPLACE FUNCTION add_contract() RETURNS VOID
        DECLARE
            myCompany character(50),
            myContract character(50),
        BEGIN
            myCompany = TG_ARGV[0];
            myContract = TG_ARGV[1];
            IF (TG_OP = 'INSERT') THEN
                EXECUTE 'CREATE TABLE salesdata_' || $myCompany || '_' || $myContract || ' (
                    sale_amount integer,
                    updated TIMESTAMP not null,
                    some_data varchar(32),
                    country varchar(2)
                );'
                EXECUTE 'CREATE TRIGGER update_sales_data
                    BEFORE INSERT OR DELETE ON salesdata_' || $myCompany || '_' || $myContract || '
                    FOR EACH ROW EXECUTE update_sales_data(' || $myCompany || ',' || $myContract || ', revenue);';
            END IF;
        END;
        $add_contract$ LANGUAGE plpgsql;

        CREATE TRIGGER add_contract AFTER INSERT ON sales_data
            FOR EACH ROW EXECUTE add_contract();

    Basically, every time I insert a new row into sales_data, I want to generate a new table whose name is something like "salesdata_Company_Contract". So my first question is: how can I pass the Company and Contract data to the trigger so it can be passed on to the add_contract() stored procedure?

    From my stored procedure, you'll see that I also want to update the original sales_data table whenever new data is inserted into a salesdata_Company_Contract table. That trigger will do something like this:

        CREATE OR REPLACE FUNCTION update_sales_data() RETURNS trigger as $update_sales_data$
        DECLARE
            myCompany character(50) NOT NULL,
            myContract character(50) NOT NULL,
            myRevenue integer NOT NULL
        BEGIN
            myCompany = TG_ARGV[0];
            myContract = TG_ARGV[1];
            myRevenue = TG_ARGV[2];
            IF (TG_OP = 'INSERT') THEN
                UPDATE sales_data
                SET top_revenue_sales = top_revenue_sales + 1,
                    top_revenue_sum = top_revenue_sum + $myRevenue,
                    updated = now()
                WHERE Company = $myCompany AND Contract = $myContract;
            ELSIF (TG_OP = 'DELETE') THEN
                UPDATE sales_data
                SET top_revenue_sales = top_revenue_sales - 1,
                    top_revenue_sum = top_revenue_sum - $myRevenue,
                    updated = now()
                WHERE Company = $myCompany AND Contract = $myContract;
            END IF;
        END;
        $update_sales_data$ LANGUAGE plpgsql;

    This will, of course, require that I pass several parameters around within these stored procedures and triggers, and I'm not sure (a) if this is even possible, (b) practical, or (c) best practice, or whether we should just put this logic into our other software instead of asking the database to do this work for us.

    To keep our table sizes down (we'll have hundreds of thousands of transactions per day), we've decided to partition our data using the Company and Contract strings as part of the table names themselves, so each table stays very small; file IO is faster for us and we felt we'd get better performance. Thanks for any thoughts or direction.

    Now that I've written all of this out, my thinking is that maybe we need to write stored procedures where we pass our insert data as parameters and call them from our other software: have one stored procedure do the insert into sales_data and then create the other table, and have a second stored procedure insert new data into the salesdata_Company_Contract tables (with the table name passed to it as a parameter), doing the insert and then updating the main sales_data table afterward. What approach would you take?
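
    A note on the first question: a PL/pgSQL trigger function takes no declared parameters and must be declared RETURNS trigger; the freshly inserted row arrives in the NEW record, while TG_ARGV only ever holds literal constants written into the CREATE TRIGGER statement itself. So the Company and Contract values don't need to be "passed" at all. A minimal sketch of the first trigger along those lines (reusing the question's table and column names; the rest of the logic is left out):

        CREATE OR REPLACE FUNCTION add_contract() RETURNS trigger AS $add_contract$
        BEGIN
            -- NEW.Company / NEW.Contract come from the row being inserted;
            -- quote_ident() makes the generated name safe to splice into DDL.
            EXECUTE 'CREATE TABLE '
                || quote_ident('salesdata_' || trim(NEW.Company) || '_' || trim(NEW.Contract))
                || ' (sale_amount integer,
                      updated timestamp NOT NULL,
                      some_data varchar(32),
                      country varchar(2))';
            RETURN NEW;
        END;
        $add_contract$ LANGUAGE plpgsql;

        CREATE TRIGGER add_contract AFTER INSERT ON sales_data
            FOR EACH ROW EXECUTE PROCEDURE add_contract();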

  • PostGreSQL - bind variables and date addition

    - by azp74
    I need to update some timestamp columns in a table in a Postgres (8.3) database. My query (simplified) looks like this:

        update table1 set dateA = dateA + interval '10 hours' where id = 1234;

    This is part of a script and there's a lot to update, so my preference is to use bind variables rather than build the query string each time. This means my query becomes:

        update table1 set dateA = dateA + interval '? hours' where id = ?;

    When I do this, the complaint is that I've supplied 2 bind variables when only one is required. If I try to put the ? outside the quote marks:

        update table1 set dateA = dateA + interval ? ' hours' where id = ?;

    I get:

        ... syntax error at or near "' hours'"

    It looks as though the query has been interpreted as

        ... dateA = dateA + interval '10' ' hours' ...

    I can't find anything in the documentation to help ... any suggestions? Thanks
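
    For context, one common workaround: keep the interval literal fixed and bind a plain number, letting the server do the interval arithmetic. Something like:

        update table1
        set dateA = dateA + ? * interval '1 hour'
        where id = ?;

    Binding 10 for the first placeholder then adds 10 hours, and the statement keeps exactly two parameters.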

  • How large is a "buffer" in PostgreSQL

    - by Konrad Garus
    I am using the pg_buffercache module for finding hogs eating up my RAM cache. For example, when I run this query:

        SELECT c.relname, count(*) AS buffers
        FROM pg_buffercache b
        INNER JOIN pg_class c
            ON b.relfilenode = c.relfilenode
            AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                                      WHERE datname = current_database()))
        GROUP BY c.relname
        ORDER BY 2 DESC
        LIMIT 10;

    I discover that sample_table is using 120 buffers. How much is 120 buffers in bytes?
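
    For reference: each buffer in pg_buffercache is one disk block, and the block size is 8 kB unless the server was compiled with a non-default value, so 120 buffers is 120 * 8192 = 983,040 bytes (960 kB). A variant of the query that reports bytes directly, using the server's own block_size setting:

        SELECT c.relname,
               count(*) AS buffers,
               count(*) * (SELECT current_setting('block_size')::int) AS bytes
        FROM pg_buffercache b
        INNER JOIN pg_class c ON b.relfilenode = c.relfilenode
        GROUP BY c.relname
        ORDER BY 2 DESC
        LIMIT 10;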

  • PostgreSQL pgdb driver raises "can't rollback" exception

    - by David Parunakian
    Hello, for some reason I'm experiencing an OperationalError with a "can't rollback" message when I attempt to roll back my transaction in the following context:

        try:
            cursors[instance].execute("lock revision, app, timeout IN SHARE MODE")
            cursors[instance].execute("insert into app (type, active, active_revision, contents, z) values ('session', true, %s, %s, 0) returning id", (cRevision, sessionId))
            sAppId = cursors[instance].fetchone()[0]
            cursors[instance].execute("insert into revision (app_id, type) values (%s, 'active')", (sAppId,))
            cursors[instance].execute("insert into timeout (app_id, last_seen) values (%s, now())", (sAppId,))
            connections[instance].commit()
        except pgdb.DatabaseError, e:
            connections[instance].rollback()
            return "{status: 'error', errno:4, errmsg: \"%s\"}" % (str(e).replace('\"', '\\"').replace('\n', '\\n').replace('\r', '\\r'))

    The driver in use is PGDB. What is fundamentally wrong here?

  • PostgreSQL function question

    - by maxxtack
    &nbsp;

        CREATE FUNCTION foo() RETURNS text LANGUAGE plperl AS $$
            return 'foo';
        $$;

        CREATE FUNCTION foobar() RETURNS text LANGUAGE plperl AS $$
            return foo() . 'bar';
        $$;

    I'm trying to compose results using multiple functions, but when I call foobar() I get an empty result.
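
    The likely cause, for context: each PL/Perl function body is compiled as its own Perl subroutine, so foo() is not automatically callable as a Perl subroutine inside foobar(). One way to compose them is to call back into SQL through the SPI interface:

        CREATE FUNCTION foobar() RETURNS text LANGUAGE plperl AS $$
            my $rv = spi_exec_query('SELECT foo() AS f');
            return $rv->{rows}[0]{f} . 'bar';
        $$;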

  • Deadlock between select and truncate (postgresql)

    - by valodzka
    Table output_values_center1 (along with some others) inherits from output_values. Periodically I truncate table output_values_center1 and load new data (in one transaction). During that time a user can request some data and he gets an error message. Why does this even happen (the select query requests only one record), and how can I avoid the problem?

        2010-05-19 14:43:17 UTC ERROR:  deadlock detected
        2010-05-19 14:43:17 UTC DETAIL:  Process 25972 waits for AccessShareLock on relation 2495092 of database 16385; blocked by process 26102.
            Process 26102 waits for AccessExclusiveLock on relation 2494865 of database 16385; blocked by process 25972.
            Process 25972: SELECT * FROM "output_values" WHERE ("output_values".id = 122312) LIMIT 1
            Process 26102: TRUNCATE TABLE "output_values_center1"
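
    One common way out, for context: take the locks in the same order in every transaction. The TRUNCATE transaction can grab an explicit lock on the parent before touching the child, so it can never end up waiting behind a reader that already holds the parent:

        BEGIN;
        -- lock the parent first, matching the order readers acquire locks
        LOCK TABLE output_values IN ACCESS EXCLUSIVE MODE;
        TRUNCATE TABLE output_values_center1;
        -- ... reload the new data here ...
        COMMIT;

    The trade-off is that readers block for the duration of the reload; a DELETE instead of TRUNCATE avoids the AccessExclusiveLock entirely, at the cost of slower cleanup.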

  • Index an array expression directly in PostgreSQL

    - by wich
    I'm trying to insert data into a table from a template table. I need to rewrite one of the columns, for which I wanted to use a directly indexed array expression, but I can't seem to find how to do this, if it is even possible. The scenario:

        create table template (
            id integer,
            index integer,
            foo integer);

        insert into template values
            (0, 1, 23), (0, 2, 18), (0, 3, 16), (0, 4, 7),
            (1, 1, 17), (1, 2, 26), (1, 3, 11), (1, 4, 3);

        create table data (
            data_id integer,
            foo integer);

    Now what I'd like to do is the following:

        insert into data
        select (array[3,7,5,2])[index], foo
        from template
        where id = 1;

    But this doesn't work; the (array[3,7,5,2])[index] syntax isn't valid. I tried a few variants but was unable to get anything working, and I couldn't find the correct syntax in the docs, nor even whether this is possible at all. As a current workaround I've devised the following, but it is less than ideal, from an elegance perspective at least; it may also be a performance hit, though I haven't looked into that yet.

        insert into data
        select arr[index], foo
        from template, (select array[3,7,5,2] as arr) as q
        where id = 1;

    If anyone could suggest a (better) alternative to accomplish this, I'd like to hear that as well.
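
    If the parenthesized-array form keeps misbehaving, a CASE expression is a verbose but dependable stand-in that avoids the extra FROM item:

        insert into data
        select case index when 1 then 3
                          when 2 then 7
                          when 3 then 5
                          when 4 then 2
               end,
               foo
        from template
        where id = 1;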

  • Django paging object has issues with Postgresql QuerySets

    - by pivotal
    I have some Django code that runs fine against a SQLite database or a MySQL database, but it runs into problems with Postgres, and it's making me crazy that no one has had this issue before. I think it may also be related to the way querysets are evaluated by the pager. In a view I have:

        def index(request, page=1):
            latest_posts = Post.objects.all().order_by('-pub_date')
            paginator = Paginator(latest_posts, 5)
            try:
                posts = paginator.page(page)
            except (EmptyPage, InvalidPage):
                posts = paginator.page(paginator.num_pages)
            return render_to_response('blog/index.html', {'posts': posts})

    And inside the template:

        {% for post in posts.object_list %}
            {# some rendering jazz #}
        {% endfor %}

    This works fine with SQLite, but Postgres gives me:

        Caught TypeError while rendering: 'NoneType' object is not callable

    To further complicate things, when I switch the queryset call to:

        latest_posts = Post.objects.all()

    everything works great. I've tried re-reading the documentation but found nothing, although I admit I'm a bit clouded by frustration at this point. What am I missing? Thanks in advance.

  • Query a column and a calculation of columns at the same time PostgreSQL

    - by pablo
    Hi, I have two tables, Products and BundleProducts, each with a one-to-one relation to BaseProducts. A BundleProduct is a collection of Products, using a many-to-many relation to the Products table. Products has a price column, and the price of a BundleProduct is calculated as the sum of the prices of its Products. BaseProducts has columns like name and description, so I can query it to get both Products and BundleProducts. Is it possible to query and sort by price across both, using the price column of the Products and the calculated price of the BundleProducts? Thanks
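
    In outline, one way to do it (a sketch only; every table and column name below is a guess at the schema described): compute each bundle's price with an aggregate, UNION it with the plain product prices, and sort the combined result.

        SELECT p.base_product_id AS base_id, p.price
        FROM products p
        UNION ALL
        SELECT b.base_product_id AS base_id, sum(p.price) AS price
        FROM bundle_products b
        JOIN bundle_items bi ON bi.bundle_id = b.id
        JOIN products p      ON p.id = bi.product_id
        GROUP BY b.base_product_id
        ORDER BY price;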

  • Calculating and saving space in Postgresql

    - by punkish
    I have a table in Pg like so:

        CREATE TABLE t (
            a BIGSERIAL NOT NULL,  -- 8 b
            b SMALLINT,            -- 2 b
            c SMALLINT,            -- 2 b
            d REAL,                -- 4 b
            e REAL,                -- 4 b
            f REAL,                -- 4 b
            g INTEGER,             -- 4 b
            h REAL,                -- 4 b
            i REAL,                -- 4 b
            j SMALLINT,            -- 2 b
            k INTEGER,             -- 4 b
            l INTEGER,             -- 4 b
            m REAL,                -- 4 b
            CONSTRAINT a_pkey PRIMARY KEY (a)
        );

    The above adds up to 50 bytes per row. My experience is that I need another 40% to 50% for system overhead, without even any user-created indexes. So, about 75 bytes per row. I will have many, many rows in the table, potentially upward of 145 billion rows, so the table is going to be pushing 13-14 terabytes. What tricks, if any, could I use to compact this table? My possible ideas:

    Convert the REAL values to INTEGERs. If they can be stored as SMALLINT, that is a saving of 2 bytes per field.

    Convert the columns b .. m into an array. I don't need to search on those columns, but I do need to be able to return one column's value at a time. So, if I need column g, I could do something like

        SELECT a, arr[5] FROM t;

    Would I save space with the array option? Would there be a speed penalty? Any other ideas?
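
    On the array idea, a sketch of what that could look like (t_packed is a made-up name; note that an array value carries its own header of roughly 20-plus bytes per row, so measure before committing to it):

        CREATE TABLE t_packed (
            a    BIGSERIAL PRIMARY KEY,
            vals SMALLINT[]   -- columns b..m packed in a fixed order
        );

        -- pull a single logical column back out (g is the 6th of b..m):
        SELECT a, vals[6] AS g FROM t_packed;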

  • PostgreSQL: return select count(*) from old_ids;

    - by Alexander Farber
    Hello, please help me with one more PL/pgSQL question. I have a PHP script run as a daily cronjob, deleting old records from one main table and a few further tables that reference its "id" column:

        create or replace function quincytrack_clean()
        returns integer as $BODY$
        begin
            create temp table old_ids (id varchar(20)) on commit drop;

            insert into old_ids select id from quincytrack
                where age(QDATETIME) > interval '30 days';

            delete from hide_id where id in (select id from old_ids);
            delete from related_mks where id in (select id from old_ids);
            delete from related_cl where id in (select id from old_ids);
            delete from related_comment where id in (select id from old_ids);
            delete from quincytrack where id in (select id from old_ids);

            return select count(*) from old_ids;
        end;
        $BODY$ language plpgsql;

    And here is how I call it from the PHP script:

        $sth = $pg->prepare('select quincytrack_clean()');
        $sth->execute();
        if ($row = $sth->fetch(PDO::FETCH_ASSOC))
            printf("removed %u old rows\n", $row['count']);

    Why do I get the following error?

        SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "select"
        at character 9
        QUERY: SELECT select count(*) from old_ids
        CONTEXT: SQL statement in PL/PgSQL function "quincytrack_clean" near line 23

    Thank you! Alex
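
    The error comes from how PL/pgSQL evaluates RETURN: it takes an expression and prepends SELECT to it, so "return select count(*) ..." becomes the doubled "SELECT select count(*) ..." shown in the message. Two working variants: wrap the query in parentheses so it becomes a scalar subquery expression, or SELECT ... INTO a declared integer variable and return that. The one-line fix:

        return (select count(*) from old_ids);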

  • Postgresql 9.1: ERROR: type "citext" does not exist

    - by gotuskar
    I am trying to execute the following query through the PgAdmin utility:

        CREATE TABLE svcr."EventLogs" (
            "eventId" BIGINT NOT NULL,
            "eventTime" TIMESTAMP WITH TIME ZONE NOT NULL,
            "userid" CITEXT,
            "realmid" CITEXT NOT NULL,
            "onUserid" CITEXT,
            "description" TEXT,
            CONSTRAINT eventlogs_pkey PRIMARY KEY ("eventId"));

    And I get the following error:

        ERROR: type "citext" does not exist
        SQL state: 42704
        Character: 120

    However, the following query runs fine:

        CREATE TABLE svcr."CategoryMap" (
            "category" INT NOT NULL,
            "userData" INT NOT NULL);

    What is wrong with the first query?
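
    For reference: citext is a contrib type and has to be installed into each database that uses it. On 9.1, with the contrib package present on the server, that is one statement:

        CREATE EXTENSION citext;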

  • Calculating correlation coefficient using PostgreSQL?

    - by Dave
    I have worked out how to calculate the correlation coefficient between two fields if both are in the same table:

        SELECT corr(column1, column2) FROM table WHERE <my filters>;

    ...but I can't work out how to do it when the columns are from different tables (I need to apply the same filters to both tables). Any hints, please?
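
    A sketch of the cross-table case, assuming the two tables share a join key (the names here are hypothetical):

        SELECT corr(a.column1, b.column2)
        FROM table_a a
        JOIN table_b b ON b.common_id = a.common_id
        WHERE <my filters, applied to a and b alike>;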

  • PostgreSQL: keep a certain number of records in a table

    - by Alexander Farber
    Hello, I have an SQL table holding the last hands received by a player in a card game. A hand is represented by an integer (32 bits == 32 cards):

        create table pref_hand (
            id varchar(32) references pref_users,
            hand integer not NULL check (hand > 0),
            stamp timestamp default current_timestamp
        );

    The players are playing constantly, that data isn't important (just a gimmick to be displayed on player profile pages), and I don't want my database to grow too quickly, so I'd like to keep only up to 10 records per player id. So I'm trying to declare this PL/pgSQL procedure:

        create or replace function pref_update_game(_id varchar, _hand integer)
        returns void as $BODY$
        begin
            delete from pref_hand offset 10 where id=_id order by stamp;
            insert into pref_hand (id, hand) values (_id, _hand);
        end;
        $BODY$ language plpgsql;

    but unfortunately this fails with:

        ERROR: syntax error at or near "offset"

    because delete doesn't support offset. Does anybody have a better idea here? Thank you! Alex
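
    For context, one way to express "delete all but the newest 10" in a single statement is to pick the victims in a subquery; ctid (the physical row address) works as the handle:

        delete from pref_hand
        where ctid in (
            select ctid
            from pref_hand
            where id = _id
            order by stamp desc
            offset 10
        );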

  • Dynamic upsert in postgresql

    - by Daniel
    I have this upsert function that allows me to modify the fill_rate column of a row:

        CREATE FUNCTION upsert_fillrate_alarming(integer, boolean) RETURNS VOID AS '
        DECLARE
            num ALIAS FOR $1;
            dat ALIAS FOR $2;
        BEGIN
            LOOP
                -- First try to update.
                UPDATE alarming SET fill_rate = dat WHERE equipid = num;
                IF FOUND THEN
                    RETURN;
                END IF;
                -- Since it is not there, we try to insert the key.
                -- Note: if there were a concurrent key insertion we would error.
                BEGIN
                    INSERT INTO alarming (equipid, fill_rate) VALUES (num, dat);
                    RETURN;
                EXCEPTION WHEN unique_violation THEN
                    -- Loop and try the update again.
                END;
            END LOOP;
        END;
        ' LANGUAGE 'plpgsql';

    Is it possible to modify this function to take a column argument as well? Extra bonus points if there is a way to modify the function to take a column and a table.
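
    It should be doable with dynamic SQL: EXECUTE with quote_ident() for the identifiers and USING for the values (USING needs 8.4 or later). A sketch with the table name made a parameter as well; note that EXECUTE does not set FOUND, so the row count comes from GET DIAGNOSTICS instead:

        CREATE FUNCTION upsert_flag(tbl text, col text, num integer, dat boolean)
        RETURNS void AS $$
        DECLARE
            n integer;
        BEGIN
            LOOP
                EXECUTE 'UPDATE ' || quote_ident(tbl)
                     || ' SET ' || quote_ident(col) || ' = $1 WHERE equipid = $2'
                    USING dat, num;
                GET DIAGNOSTICS n = ROW_COUNT;
                IF n > 0 THEN
                    RETURN;
                END IF;
                BEGIN
                    EXECUTE 'INSERT INTO ' || quote_ident(tbl)
                         || ' (equipid, ' || quote_ident(col) || ') VALUES ($2, $1)'
                        USING dat, num;
                    RETURN;
                EXCEPTION WHEN unique_violation THEN
                    -- concurrent insert; loop and retry the update
                END;
            END LOOP;
        END;
        $$ LANGUAGE plpgsql;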

  • Postgresql constraint

    - by Ryan
    I cannot seem to get this right. I am trying to modify a field to be a foreign key with cascading delete... what am I doing wrong?

        ALTER TABLE my_table
            ADD CONSTRAINT $4
            FOREIGN KEY my_field
            REFERENCES my_foreign_table
            ON DELETE CASCADE;
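
    For reference: the column list after FOREIGN KEY needs parentheses, the constraint name must be a plain identifier, and the referenced column can be listed explicitly (if omitted, the other table's primary key is used). With made-up names:

        ALTER TABLE my_table
            ADD CONSTRAINT my_table_my_field_fkey
            FOREIGN KEY (my_field)
            REFERENCES my_foreign_table (id)
            ON DELETE CASCADE;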

  • Why is this postgresql query so slow?

    - by user315975
    I'm no database expert, but I have enough knowledge to get myself into trouble, as is the case here. This query

        SELECT DISTINCT p.*
        FROM points p, areas a, contacts c
        WHERE (p.latitude > 43.6511659465 AND p.latitude < 43.6711659465
           AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889)
          AND p.resource_type = 'Contact'
          AND c.user_id = 6

    is extremely slow. The points table has fewer than 2000 records, but it takes about 8 seconds to execute. There are indexes on the latitude and longitude columns. Removing the clause concerning the resource_type and user_id makes no difference. The latitude and longitude fields are both formatted as number(15,10); I need the precision for some calculations. There are many, many other queries in this project where points are compared, but no execution-time problems. What's going on?
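
    Worth noting: the FROM clause lists areas and contacts without any condition tying them to points, so the query builds a cartesian product of all three tables before DISTINCT collapses it; that, not the coordinate comparison, is the usual culprit for this shape of slowdown. A sketch of the joined form (the points-to-contacts link column is a guess):

        SELECT DISTINCT p.*
        FROM points p
        JOIN contacts c ON c.id = p.resource_id   -- hypothetical link column
        WHERE p.latitude  > 43.6511659465 AND p.latitude  < 43.6711659465
          AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889
          AND p.resource_type = 'Contact'
          AND c.user_id = 6;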

  • postgresql duplicate table names best practice

    - by veilig
    My company has a handful of apps that we deploy in the websites we build. Recently a very old app needed to be included alongside a newer app, and there was a conflict: a duplicate table name was needed by both apps. We are now in the process of updating an old app and there will be some DB updates. I'm curious what people consider best practice (or how you do it) to help ensure these name collisions don't happen. I've looked at schemas, but I'm not sure that's the right path to take. As the documentation prescribes, I don't want to "wire" a particular schema name into an application, and if I add schemas to the user's search path, how would it know which table I was referring to if two schemas have the same table name? Although maybe I'm reading too much into this. Any insights or words of wisdom would be greatly appreciated!
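
    For what it's worth, the resolution rule is deterministic: an unqualified table name binds to the first schema on search_path that has a table by that name, and anything else can still be reached with an explicit prefix. A small sketch:

        CREATE SCHEMA legacy_app;
        CREATE SCHEMA new_app;

        CREATE TABLE legacy_app.settings (id integer);
        CREATE TABLE new_app.settings (id integer);

        SET search_path TO new_app, legacy_app, public;

        SELECT * FROM settings;             -- resolves to new_app.settings
        SELECT * FROM legacy_app.settings;  -- explicit when the other is meant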

  • efficiently trimming postgresql tables

    - by agilefall
    I have about 10 tables with over 2 million records and one with 30 million. I would like to efficiently remove older data from each of these tables. My general algorithm is:

    1. create a temp table for each large table and populate it with newer data
    2. truncate the original tables
    3. copy the tmp data back to the original tables using "insert into originaltable (select * from tmp_table)"

    However, the last step, copying the data back, is taking longer than I'd like. I thought about deleting the original tables and making the temp tables "permanent", but I lose constraint/foreign key info. If I delete from the tables directly, it takes much longer. Given that I need to preserve all foreign keys and constraints, are there any faster ways of removing the older data? Thanks.
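
    One variation that skips the copy-back step while keeping most of the table definition (a sketch; the table and column names are placeholders, LIKE ... INCLUDING ALL needs 9.0 or the individual INCLUDING clauses on older releases, and foreign keys pointing *at* the table still have to be re-created by hand):

        BEGIN;
        CREATE TABLE big_table_new (LIKE big_table INCLUDING ALL);
        INSERT INTO big_table_new
            SELECT * FROM big_table
            WHERE created_at > now() - interval '90 days';
        DROP TABLE big_table CASCADE;   -- CASCADE drops incoming FK constraints
        ALTER TABLE big_table_new RENAME TO big_table;
        -- re-create the dropped incoming foreign keys here
        COMMIT;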

  • PostgreSQL function to iterate through/act on many rows with state

    - by Claudiu
    I have a database with columns looking like:

        session | order | atype | amt
        --------+-------+-------+-----
              1 |     0 | ADD   |  10
              1 |     1 | ADD   |  20
              1 |     2 | SET   |  35
              1 |     3 | ADD   |  10
              2 |     0 | SET   |  30
              2 |     1 | ADD   |  20
              2 |     2 | SET   |  55

    It represents actions happening. Each session starts at 0. ADD adds an amount, while SET sets it. I want a function to return the end value of a session, e.g.

        SELECT session_val(1); -- returns 45
        SELECT session_val(2); -- returns 55

    Is it possible to write such a function/query? I don't know how to do any iteration-like things with SQL, or if it's possible at all.
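
    This one is expressible without iteration: the ending value is the last SET in the session (or an implicit starting 0) plus every ADD after it, which collapses to "sum amt over all rows at or after the last SET". A sketch, with the table name assumed to be actions:

        CREATE FUNCTION session_val(integer) RETURNS integer AS $$
            SELECT sum(amt)::integer
            FROM actions
            WHERE session = $1
              AND "order" >= coalesce(
                    (SELECT max("order") FROM actions
                     WHERE session = $1 AND atype = 'SET'),
                    0);
        $$ LANGUAGE sql;

        -- SELECT session_val(1);  -- 45 (SET 35 at order 2, then ADD 10)
        -- SELECT session_val(2);  -- 55 (SET 55 at order 2, nothing after)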

  • Non-auto-increment rails/postgresql column

    - by Redian
    I'm trying to have a model/table with duplicate information in it. The reason for this is so that the same data can be written to the table under different users and found for each user. However, I want a quick easy way to identify which information is a duplicate of other information. I think the best way to do this would be to have an item_id of sorts that increments with each "set" of entries to the table. Is there a way to do this without including another table that stores the information without attributing it to users?
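
    A sequence used by hand does this without an extra table: pull one value per "set" of rows, then stamp the same value on each copy. A sketch (the names are made up):

        CREATE SEQUENCE item_set_seq;

        -- one call per batch of duplicated rows:
        SELECT nextval('item_set_seq');   -- suppose it returns 42

        -- every user's copy gets the same set id:
        INSERT INTO entries (set_id, user_id, body) VALUES (42, 1, 'shared text');
        INSERT INTO entries (set_id, user_id, body) VALUES (42, 2, 'shared text');

        -- duplicates are then trivial to group:
        SELECT set_id, count(*) FROM entries GROUP BY set_id;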

  • String literals and escape characters in postgresql

    - by rjohnston
    Attempting to insert an escape character into a table results in a warning. For example:

        create table EscapeTest (text varchar(50));
        insert into EscapeTest (text) values ('This is the first part \n And this is the second');

    Produces the warning:

        WARNING: nonstandard use of escape in a string literal

    (Using PSQL 8.2) Anyone know how to get around this?
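
    Two standard ways around it, for reference: mark the literal as an escape string with E'', which makes the backslash handling explicit and silences the warning, or turn on standard-conforming strings so backslashes are treated as plain characters:

        insert into EscapeTest (text) values (E'This is the first part \n And this is the second');

        -- or, treating backslashes literally for the session:
        set standard_conforming_strings = on;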
