Search Results

Search found 1505 results on 61 pages for 'postgresql 9 1'.


  • Return pre-UPDATE column values in PostgreSQL without using triggers, functions or other "magic"

    - by Python Larry
    I have a related question, but this is another part of my puzzle. I would like to get the OLD VALUE of a column from a row that was UPDATEd, WITHOUT using triggers (nor stored procedures, nor any other extra, non-SQL/-query entities). The query I have is like this:

        UPDATE my_table
        SET    processing_by = our_id_info -- unique to this instance
        WHERE  trans_nbr IN (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(trans_nbr) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               )
        RETURNING row_id

    If I could do "FOR UPDATE ON my_table" at the end of the subquery, that'd be divine (and fix my other question/problem). But that won't work: you can't have this AND a "GROUP BY" (which is necessary for figuring out the COUNT of trans_nbr's). Then I could just take those trans_nbr's and do a query first to get the (soon-to-be-) former processing_by values. I've tried doing it like this:

        UPDATE my_table
        SET    processing_by = our_id_info -- unique to this instance
        FROM   my_table old_my_table
        JOIN   (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(trans_nbr) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               ) sub_my_table
            ON old_my_table.trans_nbr = sub_my_table.trans_nbr
        WHERE  my_table.trans_nbr = sub_my_table.trans_nbr
           AND my_table.processing_by = old_my_table.processing_by
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by

    But that can't work; "old_my_table" is not viewable outside of the join, so the RETURNING clause is blind to it. I've long since lost count of all the attempts I've made; I have been researching this for literally hours. If I could just find a bullet-proof way to lock the rows in my subquery - and ONLY those rows, and WHEN the subquery happens - all the concurrency issues I'm trying to avoid would disappear...

    UPDATE: [WIPES EGG OFF FACE] Okay, so I had a typo in the non-generic version of the code above that I wrote "doesn't work"; it does... thanks to Erwin Brandstetter, below, who stated it would. I re-did it (after a night's sleep, refreshed eyes, and a banana for breakfast). Since it took me so long/hard to find this sort of solution, perhaps my embarrassment is worth it? At least this is on SO for posterity now. What I now have (that works) is like this:

        UPDATE my_table
        SET    processing_by = our_id_info -- unique to this instance
        FROM   my_table AS old_my_table
        WHERE  trans_nbr IN (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(*) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               )
           AND my_table.row_id = old_my_table.row_id
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by AS old_processing_by

    The COUNT(*) is per a suggestion from Flimzy in a comment on my other (linked above) question. (I was more specific than necessary. [In this instance.])
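    For reference, a minimal, generic sketch of the self-join RETURNING pattern used above, on a hypothetical two-column table (table and column names here are placeholders, not from the question; the usual caveats about concurrent updates under READ COMMITTED still apply):

        -- hypothetical table for illustration only
        CREATE TABLE t (id int PRIMARY KEY, status text);
        INSERT INTO t VALUES (1, 'old'), (2, 'old');

        -- join the target table to itself on the primary key;
        -- "old" still sees the pre-UPDATE row version, so RETURNING can expose both values
        UPDATE t
        SET    status = 'new'
        FROM   t AS old
        WHERE  t.id = old.id
        RETURNING t.id, old.status AS old_status, t.status AS new_status;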

    Read the article

  • Schemas and tables versus user-ids in a single table using PostgreSQL

    - by gvkv
    I'm developing a web app and I've come to a fork in the road with respect to database structure, and I don't know which direction to take. I have a database with user information that I can structure in one of two ways. The first is to create a schema and a set of tables for each user (duplicating the structure for each user); the second is to create a single set of tables and query information based on user-id. Suppose 100000 users. Here are my questions:

    - Considering security, performance, scalability and administration, where does each choice lie?
    - Would the answers change for 1000000 or 10000?
    - Is there a set of best practices that leads to one choice or the other?

    It seems to me that multiple schemas are more secure, since it's trivial to restrict user privileges, but what about performance and scalability? Administration seems like a wash, since dumping (and restoring) lots of schemas isn't any more difficult than dumping a few.

    Read the article

  • Encoding error PostgreSQL 8.4

    - by KandadaBoggu
    I am importing data from a CSV file. One of the fields has an accent (Telefónica O2 UK Limited). The application throws an error while inserting the data into the table:

        PGError: ERROR: invalid byte sequence for encoding "UTF8": 0xf36e6963
        HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
        : INSERT INTO "companies" ("name", "validated") VALUES(E'Telef?nica O2 UK Limited', 't')

    How do I work around this issue?
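    One possible fix, assuming the CSV is actually Latin-1/Windows-1252 encoded rather than UTF-8 (0xf3 is "ó" in Latin-1), is to declare that encoding for the session so the server converts the incoming bytes itself; a sketch:

        -- tell the server the bytes it receives are LATIN1;
        -- it will convert them to the database encoding (UTF8) on the way in
        SET client_encoding = 'LATIN1';

        INSERT INTO companies (name, validated)
        VALUES ('Telefónica O2 UK Limited', 't');

        RESET client_encoding;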

    Read the article

  • PostgreSQL XML text search

    - by cro
    I have a text column in a table. We store XML in this column. Now I want to search for tags and values. Example data:

        Citi Bank ..... ..... /

    I would like to run the following query:

        select *
        from   xxxx
        where  to_tsvector('english', xml_column) @@ to_tsquery('Citi Bank')

    This works fine, but it also matches tags like name1, or no tag at all. How do I have to set up my search so that I get an exact match for the tag and value?
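    One way to match a specific tag exactly is to parse the column as XML and test the element directly with xpath() instead of full-text search; a sketch, assuming the value lives in a hypothetical <name> element (the tag name and table name are placeholders):

        -- cast the text column to xml and compare a specific element's text content
        SELECT *
        FROM   xxxx
        WHERE  (xpath('//name/text()', xml_column::xml))[1]::text = 'Citi Bank';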

    Read the article

  • Add schema to path in PostgreSQL

    - by veilig
    I'm in the process of moving applications over from everything living in the public schema to each application having its own schema. For each application, I have a small script that will create the schema and then create the tables, functions, etc. in that schema. Is there any way to automatically add a newly created schema to the search_path? Currently, the only way I see is to find the user's current path:

        SHOW search_path;

    and then add the new schema to it:

        SET search_path TO xxx, yyy, zzz;

    I would like some way to just say "append schema zzz to the user's search_path". Is this possible?
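    A sketch of two ways to append a schema without retyping the whole list (zzz and some_user are placeholders): for the current session you can build on current_setting(), and to make it stick for future sessions you can set it on the role.

        -- session only: append to whatever the current path is
        SELECT set_config('search_path',
                          current_setting('search_path') || ', zzz',
                          false);

        -- persistent: new sessions for this role start with this path
        ALTER ROLE some_user SET search_path TO xxx, yyy, zzz;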

    Read the article

  • PostgreSQL 8.3 data types: xml vs varchar

    - by Sejanus
    There's an xml data type in Postgres; I've never used it before, so I'd like to hear opinions on the downsides and upsides vs using a regular varchar (or text) column to store XML. The text I'm going to store is XML: well-formed, UTF-8. There's no need to search by it (I've read that searching by xml is slow). This XML is actually data prepared for PDF generation with Apache FOP. The XML can be generated dynamically from data found elsewhere (other Postgres tables); it's stored as is only so that I won't need to generate it twice - kind of a backup #2 for already generated PDF documents. Anything else to know? Good practices, performance, maintenance, etc.?

    Read the article

  • Using outer query result in a subquery in PostgreSQL

    - by brad
    I have two tables, points and contacts, and I'm trying to get the average points.score per contact, grouped on a monthly basis. Note that points and contacts aren't related; I just want the sum of points created in a month divided by the number of contacts that existed in that month. So I need to sum points grouped by the created_at month, and I need to take the count of contacts FOR THAT MONTH ONLY. It's that last part that's tripping me up. I'm not sure how I can use a column from an outer query in the subquery. I tried something like this:

        SELECT SUM(score) AS points_sum,
               EXTRACT(month FROM created_at) AS month,
               date_trunc('MONTH', created_at) + INTERVAL '1 month' AS next_month,
               (SELECT COUNT(id)
                FROM   contacts
                WHERE  contacts.created_at <= next_month) AS contact_count
        FROM   points
        GROUP  BY month, next_month
        ORDER  BY month

    So I'm extracting the actual month that my points are being summed, and at the same time getting the beginning of the next_month, so that I can say "get me the count of contacts where their created_at is < next_month". But it complains that column next_month doesn't exist. This is understandable, as the subquery knows nothing about the outer query. Qualifying it as points.next_month doesn't work either. So can someone point me in the right direction of how to achieve this?

    Tables:

        Points
        score | created_at
        ------+-----------------------------
        10    | "2011-11-15 21:44:00.363423"
        11    | "2011-10-15 21:44:00.69667"
        12    | "2011-09-15 21:44:00.773289"
        13    | "2011-08-15 21:44:00.848838"
        14    | "2011-07-15 21:44:00.924152"

        Contacts
        id | created_at
        ---+-----------------------------
        6  | "2011-07-15 21:43:17.534777"
        5  | "2011-08-15 21:43:17.520828"
        4  | "2011-09-15 21:43:17.506452"
        3  | "2011-10-15 21:43:17.491848"
        1  | "2011-11-15 21:42:54.759225"

        sum, month and next_month (without the subselect)
        sum | month | next_month
        ----+-------+-----------------------
        14  |   7   | "2011-08-01 00:00:00"
        13  |   8   | "2011-09-01 00:00:00"
        12  |   9   | "2011-10-01 00:00:00"
        11  |  10   | "2011-11-01 00:00:00"
        10  |  11   | "2011-12-01 00:00:00"
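    A sketch of one way around this, assuming the tables shown above: a subquery can't see the outer SELECT's aliases, but it can see ordinary columns of a derived table, so computing the month boundaries in an inner query first lets the correlated subquery reference them directly.

        SELECT m.month,
               m.points_sum,
               (SELECT COUNT(c.id)
                FROM   contacts c
                WHERE  c.created_at < m.next_month) AS contact_count
        FROM (
            SELECT EXTRACT(month FROM created_at)                       AS month,
                   date_trunc('month', created_at) + INTERVAL '1 month' AS next_month,
                   SUM(score)                                           AS points_sum
            FROM   points
            GROUP  BY 1, 2
        ) m
        ORDER  BY m.month;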

    Read the article

  • General many-to-many relationship problem (PostgreSQL)

    - by David
    Hi, I have two tables:

        CREATE TABLE "public"."auctions" (
            "id" VARCHAR(255) NOT NULL,
            "auction_value_key" VARCHAR(255) NOT NULL,
            "ctime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            "mtime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            CONSTRAINT "pk_XXXX2" PRIMARY KEY("id"),
        );

    and

        CREATE TABLE "public"."auction_values" (
            "id" NUMERIC DEFAULT nextval('default_seq'::regclass) NOT NULL,
            "fk_auction_value_key" VARCHAR(255) NOT NULL,
            "key" VARCHAR(255) NOT NULL,
            "value" TEXT,
            "ctime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            "mtime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            CONSTRAINT "pk_XXXX1" PRIMARY KEY("id"),
        );

    If I want to create a many-to-many relationship on the auction_value_key like this:

        ALTER TABLE "public"."auction_values"
            ADD CONSTRAINT "auction_values_fk" FOREIGN KEY ("fk_auction_value_key")
            REFERENCES "public"."auctions"("auction_value_key")
            ON DELETE NO ACTION
            ON UPDATE NO ACTION
            NOT DEFERRABLE;

    I get this SQL error:

        ERROR: there is no unique constraint matching given keys for referenced table "auctions"

    Question: As you might see, I want "auction_values" to be "reused" by different auctions without duplicating them for every auction... so I don't want a key relation on the "id" field in the auctions table... Am I thinking wrong here, or what is the deal? ;) Thanks
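    The error itself is about a hard rule: the referenced column(s) of a foreign key must carry a PRIMARY KEY or UNIQUE constraint, and auctions.auction_value_key has neither. A sketch of one way to keep the intended sharing (the table and constraint names below are illustrative, not from the question): introduce a small table of value-group keys that both existing tables reference.

        -- hypothetical group-key table so both sides can reference something unique
        CREATE TABLE auction_value_groups (
            value_key VARCHAR(255) PRIMARY KEY
        );

        ALTER TABLE auctions
            ADD CONSTRAINT auctions_value_group_fk
            FOREIGN KEY (auction_value_key)
            REFERENCES auction_value_groups (value_key);

        ALTER TABLE auction_values
            ADD CONSTRAINT auction_values_value_group_fk
            FOREIGN KEY (fk_auction_value_key)
            REFERENCES auction_value_groups (value_key);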

    Read the article

  • Python + PostgreSQL + strange ascii = UTF8 encoding error

    - by Claudiu
    I have ascii strings which contain the character "\x80" to represent the euro symbol:

        >>> print "\x80"
        €

    When inserting string data containing this character into my database, I get:

        psycopg2.DataError: invalid byte sequence for encoding "UTF8": 0x80
        HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".

    I'm a unicode newbie. How can I convert my strings containing "\x80" to valid UTF-8 containing that same euro symbol? I've tried calling .encode and .decode on various strings, but run into errors:

        >>> "\x80".encode("utf-8")
        Traceback (most recent call last):
          File "<pyshell#14>", line 1, in <module>
            "\x80".encode("utf-8")
        UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 0: ordinal not in range(128)
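    On the database side, one option (a sketch, assuming the strings really are Windows-1252, where 0x80 is the euro sign, and that "mydb" is a placeholder database name) is to declare that encoding for the connection so the server converts the bytes to UTF-8 itself:

        -- per session: tell the server incoming bytes are Windows-1252
        SET client_encoding = 'WIN1252';

        -- or persistently for a database's new connections
        ALTER DATABASE mydb SET client_encoding = 'WIN1252';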

    Read the article

  • PostgreSQL: SELECT all fields, filter some

    - by Adam Matan
    Hi, in one of our databases there is a table with dozens of columns, one of which is a geometry column. I want to SELECT rows from the table, with the geometry transformed to another SRID. I want to use something like:

        SELECT *

    in order to avoid:

        SELECT col_a, col_b, col_c, col_d, col_e, col_f, col_g, col_h,
               transform(the_geom, NEW_SRID), ..., col_z

    Any ideas? Adam
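    There is no built-in "* except this column" syntax, but a sketch of a common compromise is to keep the * and append the transformed geometry as an extra column; transform() is the PostGIS function the question already uses (newer PostGIS versions call it ST_Transform), and the table name, alias and SRID 4326 below are placeholders:

        -- every original column, plus the reprojected geometry as an extra column
        SELECT t.*, transform(t.the_geom, 4326) AS the_geom_transformed
        FROM   my_table t;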

    Read the article

  • Connection error with heroku db:push with postgresql

    - by Toby Hede
    I have suddenly started seeing this strange error when trying to push my database to Heroku:

        > heroku db:push
        Auto-detected local database: postgres://infinity:infinity@localhost/infinity_development?encoding=utf8
        Failed to connect to database: Sequel::DatabaseConnectionError -> TypeError wrong argument type String (expected Array)

    My app works fine - the credentials are all set locally.

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id.

    - There are multiple users (distinct usr_id's).
    - time_stamp is not a unique identifier: sometimes user events (one per row in the table) will occur with the same time_stamp.
    - trans_id is unique only for very small time ranges: over time it repeats.
    - remaining_lives (for a given user) can both increase and decrease over time.

    Example:

        time_stamp | lives_remaining | usr_id | trans_id
        -----------+-----------------+--------+---------
        07:00      |        1        |    1   |    1
        09:00      |        4        |    2   |    2
        10:00      |        2        |    3   |    3
        10:00      |        1        |    2   |    4
        11:00      |        4        |    1   |    5
        11:00      |        3        |    1   |    6
        13:00      |        3        |    3   |    1

    As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this:

        time_stamp | lives_remaining | usr_id | trans_id
        -----------+-----------------+--------+---------
        11:00      |        3        |    1   |    6
        10:00      |        1        |    2   |    4
        13:00      |        3        |    3   |    1

    As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM   (SELECT usr_id, max(time_stamp) AS max_timestamp
                FROM   lives
                GROUP  BY usr_id
                ORDER  BY usr_id) a
        JOIN   lives b ON a.max_timestamp = b.time_stamp

    Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked-up query that I've gotten to work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM   (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid
                FROM   lives
                GROUP  BY usr_id
                ORDER  BY usr_id) a
        JOIN   lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id
        ORDER  BY b.usr_id

    Okay, so this works, but I don't like it. It requires a query within a query and a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBMSs and Postgres in particular, so I know that I need to make effective use of the proper indexes; I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated!

    P.S. You can use the following to create my example case:

        create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer);
        insert into lives values ('2000-01-01 07:00', 1, 1, 1);
        insert into lives values ('2000-01-01 09:00', 4, 2, 2);
        insert into lives values ('2000-01-01 10:00', 2, 3, 3);
        insert into lives values ('2000-01-01 10:00', 1, 2, 4);
        insert into lives values ('2000-01-01 11:00', 4, 1, 5);
        insert into lives values ('2000-01-01 11:00', 3, 1, 6);
        insert into lives values ('2000-01-01 13:00', 3, 3, 1);
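    One common Postgres-specific answer to this pattern (a sketch against the example table above) is DISTINCT ON, which keeps exactly one row per usr_id, chosen by the ORDER BY; an index that matches that ordering helps on large tables:

        -- one row per usr_id: the one with the greatest (time_stamp, trans_id)
        SELECT DISTINCT ON (usr_id)
               time_stamp, lives_remaining, usr_id, trans_id
        FROM   lives
        ORDER  BY usr_id, time_stamp DESC, trans_id DESC;

        -- index supporting the ordering above
        CREATE INDEX lives_usr_latest_idx ON lives (usr_id, time_stamp DESC, trans_id DESC);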

    Read the article

  • PostgreSQL load increasing over time, why?

    - by TravisO
    It's a CentOS server (I don't know the specs) and, just before anybody states the obvious, keep in mind these mitigating factors:

    - the server does a nightly VACUUM job
    - all the tables are indexed
    - it's pretty much read only (meaning the DBs are not increasing in size)
    - the number of queries being run has been the same every month

    Here's a graph of the server load:

    Read the article

  • How to give weight to full matches over partial matches (PostgreSQL)

    - by kagaku
    I've got a query that takes an input and searches for the closest match in zipcode/region/city/metrocode in a location table containing a few tens of thousands of entries (should be nearly every city in the US). The query I'm using is:

        select metrocode, region, postalcode, region_full, city
        from   dv_location
        where  (   region      ilike '%Chicago%'
                or postalcode  ilike '%Chicago%'
                or city        ilike '%Chicago%'
                or region_full ilike '%Chicago%')
          and  metrocode is not null

    Odd thing is, the result set I'm getting back looks like this:

        metrocode;region;postalcode;region_full;city
        862;CA;95712;California;Chicago Park
        862;CA;95712;California;Chicago Park
        602;IL;60611;Illinois;Chicago
        602;IL;60610;Illinois;Chicago

    What am I doing wrong? My thinking is that Chicago should have greater weight than Chicago Park, since Chicago is an exact match for the term (even though I'm asking for a wildcard match on the term).
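    Without an ORDER BY the rows come back in no particular order, so nothing ranks exact matches first. A sketch of one way to push exact city matches to the top is to sort on a boolean test (true sorts before false when descending):

        select metrocode, region, postalcode, region_full, city
        from   dv_location
        where  (   region      ilike '%Chicago%'
                or postalcode  ilike '%Chicago%'
                or city        ilike '%Chicago%'
                or region_full ilike '%Chicago%')
          and  metrocode is not null
        order  by (city ilike 'Chicago') desc,  -- exact (case-insensitive) city match first
                  city;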

    Read the article

  • PostgreSQL - disabling constraints

    - by azp74
    I have a table with approx 5 million rows which has a fk constraint referencing the primary key of another table (also approx 5 million rows). I need to delete about 75000 rows from both tables. I know that if I try doing this with the fk constraint enabled, it's going to take an unacceptable amount of time. Coming from an Oracle background, my first thought was to disable the constraint, do the delete and then re-enable the constraint. Postgres appears to let me disable constraint triggers if I am a superuser (I'm not, but I am logging in as the user that owns/created the objects), but that doesn't seem to be quite what I want. The other option is to drop the constraint and then reinstate it. I'm worried that rebuilding the constraint is going to take ages given the size of my tables. Any thoughts?
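    A sketch of the drop-and-recreate route discussed above (all table, column and constraint names below are placeholders, and the WHERE clauses are stand-ins for the real selection). It's also worth noting that slow FK-checked deletes are very often caused by a missing index on the referencing column, so that index is shown first:

        -- often the real culprit is a missing index on the referencing column
        CREATE INDEX child_parent_id_idx ON child_table (parent_id);

        -- the drop / delete / re-create approach
        BEGIN;
        ALTER TABLE child_table DROP CONSTRAINT child_parent_fk;
        DELETE FROM child_table  WHERE batch_id = 42;   -- placeholder condition
        DELETE FROM parent_table WHERE batch_id = 42;   -- placeholder condition
        ALTER TABLE child_table
            ADD CONSTRAINT child_parent_fk
            FOREIGN KEY (parent_id) REFERENCES parent_table (id);
        COMMIT;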

    Read the article

  • Future proof Primary Key design in postgresql

    - by John P
    I've always used either auto_generated or Sequences in the past for my primary keys. With the current system I'm working on there is the possibility of having to eventually partition the data which has never been a requirement in the past. Knowing that I may need to partition the data in the future, is there any advantage of using UUIDs for PKs instead of the database's built-in sequences? If so, is there a design pattern that can safely generate relatively short keys (say 6 characters instead of the usual long one e6709870-5cbc-11df-a08a-0800200c9a66)? 36^6 keys per-table is more than sufficient for any table I could imagine. I will be using the keys in URLs so conciseness is important.
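    A sketch of one way to generate short random base-36 keys in plain SQL (purely illustrative; 36^6 is about 2.2 billion, but random collisions become likely long before that keyspace is exhausted, so a UNIQUE constraint and retry-on-conflict logic would still be needed; string_agg requires PostgreSQL 9.0 or later):

        -- build a random 6-character key from [0-9a-z]
        SELECT string_agg(
                   substr('0123456789abcdefghijklmnopqrstuvwxyz',
                          (floor(random() * 36))::int + 1,
                          1),
                   '')
        FROM   generate_series(1, 6);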

    Read the article

  • PostgreSQL - best way to return an array of key-value pairs

    - by Matt W
    I'm trying to select a number of fields, one of which needs to be an array with each element of the array containing two values. Each array item needs to contain a name (character varying) and an ID (numeric). I know how to return an array of single values (using the ARRAY keyword), but I'm unsure of how to return an array of an object which in itself contains two values. The query is something like:

        SELECT t.field1,
               t.field2,
               ARRAY(-- with each element containing two values i.e. {'TheName', 1}
               )
        FROM   MyTable t

    I read that one way to do this is by selecting the values into a type and then creating an array of that type. Problem is, the rest of the function is already returning a type (which means I would then have nested types - is that OK? If so, how would you read this data back in application code - i.e. with a .Net data provider like NPGSQL?) Any help is much appreciated.
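    A sketch of one approach (the table and column names other_table, parent_id, etc. are invented for illustration): build an array of row values with an ARRAY subquery and ROW(); casting to a named composite type keeps each element self-describing, which makes it easier to unpack on the client side.

        -- a named composite type for the (name, id) pairs
        CREATE TYPE name_id_pair AS (name varchar, id numeric);

        SELECT t.field1,
               t.field2,
               ARRAY(
                   SELECT ROW(o.name, o.id)::name_id_pair
                   FROM   other_table o
                   WHERE  o.parent_id = t.id
               ) AS pairs
        FROM   my_table t;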

    Read the article

  • Binding multiple arrays for WHERE IN in PostgreSQL

    - by Alec
    So I want to prepare a query something like:

        SELECT id FROM users WHERE (branch, cid) IN $1;

    But I then need to bind a variable-length list of arrays like (('a','b'),('c','d')) to it. How do I go about doing this? I've tried using ANY but can't seem to get the syntax right. Cheers, Alec

    Edit: After some fiddling around, this is valid syntactically:

        SELECT id FROM users WHERE (branch, cid) = ANY ($1::text[][]);

    and then binding the string '{{a,b},{c,d}}' to $1, but it throws the error "operator does not exist: record = text". Changing 'text' to 'record' then throws "input of anonymous composite types is not implemented". Any ideas?
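    A sketch of one workaround: bind two parallel arrays of equal length (one per column) and pair them up with unnest in a subquery; with equal-length arrays the two unnest calls advance in step, producing one (branch, cid) pair per array position.

        -- $1 = '{a,c}', $2 = '{b,d}'  (equal-length arrays)
        SELECT id
        FROM   users
        WHERE  (branch, cid) IN (
                   SELECT unnest($1::text[]), unnest($2::text[])
               );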

    Read the article

  • Speeding up PostgreSQL query where data is between two dates

    - by Roger
    I have a large table (50m rows) which has some data with an ID and timestamp. I need to query the table to select all rows with a certain ID where the timestamp is between two dates, but it currently takes over 2 minutes on a high end machine. I'd really like to speed it up. I have found this tip which recommends using a spatial index, but the example it gives is for IP addresses. However, the speed increase (436s to 3s) is impressive. How can I use this with timestamps?
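    A sketch of the usual first step for this access pattern (the table and column names below are guesses, not from the question): a composite b-tree index on (id, timestamp) lets the planner jump straight to the matching ID and scan only the requested date range.

        CREATE INDEX big_table_device_ts_idx ON big_table (device_id, recorded_at);

        SELECT *
        FROM   big_table
        WHERE  device_id   = 42
          AND  recorded_at >= '2010-01-01'
          AND  recorded_at <  '2010-02-01';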

    Read the article

  • IN statement performance in PostgreSQL (and in general)

    - by Vasil
    I know this has probably been asked before, but I can't find it with SO's search. Let's say I've got TABLE1 and TABLE2; what should I expect the performance of a query such as this to be:

        SELECT * FROM TABLE1 WHERE id IN SUBQUERY_ON_TABLE2;

    as the number of rows in TABLE1 and TABLE2 grows, where id is a primary key on TABLE1? Yes, I know using IN is such a n00b mistake, but TABLE2 has a generic relation (django generic relation) to multiple other tables so I can't think of another way to filter the data. At what (approximate) amount of rows in TABLE1 and TABLE2 should I expect to notice performance issues because of this? Will performance degrade linearly, exponentially, etc. depending on the number of rows?
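    For reference, a sketch of the two semi-join forms (column names here are hypothetical); in recent Postgres versions IN (subquery) is planned as a join rather than a per-row probe, so the pattern that tends to hurt is a huge literal IN (...) list or NOT IN, not IN with a subquery:

        -- IN with a subquery
        SELECT t1.*
        FROM   table1 t1
        WHERE  t1.id IN (SELECT t2.table1_id FROM table2 t2);

        -- equivalent EXISTS form
        SELECT t1.*
        FROM   table1 t1
        WHERE  EXISTS (SELECT 1 FROM table2 t2 WHERE t2.table1_id = t1.id);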

    Read the article

  • PostgreSQL help with PHP loop...

    - by KnockKnockWhosThere
    I keep getting a "Notice: Undefined index: did" error with this query, and I'm not understanding why... I'm much more used to MySQL, so maybe the syntax is wrong? This is the PHP query code:

        function get_demos() {
            global $session;
            $demo = array();
            $result = pg_query("SELECT DISTINCT(did,vid,iid,value) FROM dv");
            if(pg_num_rows($result) > 0) {
                while($r = pg_fetch_array($result)) {
                    switch($r['did']) {
                        case 1:
                            $demo['a'][$r['vid']] = $r['value'];
                            break;
                        case 2:
                            $demo['b'][$r['vid']] = $r['value'];
                            break;
                        case 3:
                            $demo['c'][$r['vid']] = $r['value'];
                            break;
                    }
                }
            } else {
                $session->session_setMessage(2);
            }
            return $demo;
        }

    When I run that query at the pg prompt, I get results:

        "(1,1,1,"A")"
        "(1,2,2,"B")"
        "(1,3,3,"C")"
        "(1,4,4,"D")"
        "(1,5,5,"E")"
        "(1,6,6,"F")"
        "(1,7,7,"G")"
        "(1,8,8,"H")"
        "(1,9,9,"I")"
        "(1,10,A,"J")"
        "(1,11,B,"K")"
        "(1,12,C,"L")"
        "(1,13,D,"M")"
        "(2,14,1,"A")"
        "(2,15,2,"B")"
        "(2,16,0,"C")"
        "(3,17,1,"A")"
        "(3,18,2,"B")"
        "(3,19,3,"C")"
        "(3,20,4,"D")"
        "(3,21,5,"E")"
        "(3,22,6,"F")"
        "(3,23,7,"G")"
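    The notice comes from the SQL rather than the PHP: DISTINCT(did,vid,iid,value) turns the four columns into a single composite-typed column (which is why the prompt shows rows like "(1,1,1,"A")"), so the fetched row has no 'did' index. A sketch of the query returning four separate columns instead:

        -- DISTINCT applies to the whole select list; no parentheses needed
        SELECT DISTINCT did, vid, iid, value
        FROM   dv;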

    Read the article
