Search Results

Search found 8664 results on 347 pages for 'rails postgresql'.

Page 222/347 | < Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >

  • Date arithmetic using integer values

    - by Dave Jarvis
    Problem: String concatenation is slowing down a query: date(extract(YEAR FROM m.taken)||'-1-1') d1, date(extract(YEAR FROM m.taken)||'-1-31') d2 This is realized in code as part of a string, which follows (where the p_ variables are integers): date(extract(YEAR FROM m.taken)||''-'||p_month1||'-'||p_day1||''') d1, date(extract(YEAR FROM m.taken)||''-'||p_month2||'-'||p_day2||''') d2 This part of the query runs in 3.2 seconds with the dates, and 1.5 seconds without, leading me to believe there is ample room for improvement. Question: What is a better way to create the date (presumably without concatenation)? Many thanks!
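
    One concatenation-free way to build those dates (a sketch, not from the article; p_month1/p_day1 etc. are the integer parameters mentioned in the question, and the table name is a placeholder) is to truncate the timestamp to the year and add intervals:

        SELECT (date_trunc('year', m.taken)
                  + (p_month1 - 1) * interval '1 month'
                  + (p_day1   - 1) * interval '1 day')::date AS d1,
               (date_trunc('year', m.taken)
                  + (p_month2 - 1) * interval '1 month'
                  + (p_day2   - 1) * interval '1 day')::date AS d2
          FROM measurement m;  -- table name assumed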

    Read the article

  • Inserting null fields with dbi:Pg

    - by User1
    I have a Perl script inserting data into Postgres according to a pipe delimited text file. Sometimes, a field is null (as expected). However, Perl makes this field into an empty string and the Postgres insert statement fails. Here's a snippet of code: use DBI; #Connect to the database. $dbh=DBI->connect('dbi:Pg:dbname=mydb','mydb','mydb',{AutoCommit=>1,RaiseError=>1,PrintError=>1}); #Prepare an insert. $sth=$dbh->prepare("INSERT INTO mytable (field0,field1) SELECT ?,?"); while (<>){ #Remove the whitespace chomp; #Parse the fields. @field=split(/\|/,$_); print "$_\n"; #Do the insert. $sth->execute($field[0],$field[1]); } And if the input is: a|1 b| c|3 EDIT: Use this input instead. a|1|x b||x c|3|x It will fail at b|. DBD::Pg::st execute failed: ERROR: invalid input syntax for integer: "" I just want it to insert a null on field1 instead. Any ideas? EDIT: I simplified the input at the last minute. The old input actually made it work for some reason. So now I changed the input to something that will make the program fail. Also note that field1 is a nullable integer datatype.
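
    One database-side workaround (a sketch, not from the article) is to let Postgres map empty strings to NULL in the statement itself, since field1 is a nullable integer; alternatively the Perl side can pass undef instead of '' before calling execute.

        -- NULLIF returns NULL when both arguments are equal, so an empty
        -- string becomes NULL before the cast to integer.
        INSERT INTO mytable (field0, field1)
        SELECT ?, NULLIF(?, '')::integer;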

    Read the article

  • Debugging SQL in PGAdmin3 when sql contains variables

    - by Mr Shoubs
    In SQL Server I could copy sql code out of an application and paste it into SSMS, declare & assign vars that exist in the sql and run.. yay great debugging scenario. e.g. (please note I am rusty and the syntax may be incorrect) declare @x as varchar(10) set @x = 'abc' select * from sometable where somefield = @x I want to do something similar with postgres in pgadmin3 (or another postgres tool, any recommendations?) I realise you can create pgscript, but it doesn't appear to be very good; for example, if I do the equivalent of the above, it doesn't put the single quotes around the value in @x, nor does it let me by doubling them up, and you don't get a table out afterwards - only text... Currently I have a piece of sql someone has written that has 3 unique variables in it which are used around 6 times each... So the question is: how do other people debug sql like this EFFICIENTLY, preferably in a similar fashion to my sql server days?
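
    One plain-SQL workaround (a sketch, not from the article) is to fake the variables with a CTE that is cross-joined into the query, so each value is declared exactly once and reused wherever it is needed:

        WITH vars AS (
            SELECT 'abc'::varchar(10) AS x
        )
        SELECT t.*
          FROM sometable t, vars
         WHERE t.somefield = vars.x;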

    Read the article

  • Inserting Newline from XML to Database

    - by blackmage
    I am trying to parse an xml document in which a newline is required for certain fields and must be inserted into the database with the newline, but I've been running into problems. 1) First problem: the \n character. The first problem I had was using \n like below. <javascript>jquery_ui.js\nshadowbox_modal.js\nuser_profile.js\ntablesorter.js</javascript> The problem was that in the database the field came out to be jquery_ui.js\nshadowbox_modal.js\n... and when output into html it was jquery_ui.jsnshadowbox_modal.jsn... 2) Then I tried actually having newlines in the xml: <javascript>jquery_ui.js shadowbox_modal.js user_profile.js tablesorter.js</javascript> The problem was that the output became %20%20%20%20%20%20%20%20%20%20shadowbox_modal.js, and so forth. So how can I get a newline to carry over from the xml when entered into the database, and then output with the newline still intact?
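
    On the database side at least (a sketch, not from the article; the pages/javascript names are placeholders), Postgres can store a real newline written with an E'' string, and a literal backslash-n sequence that has already been stored can be converted on the way out:

        -- store an actual newline character
        INSERT INTO pages (javascript)
        VALUES (E'jquery_ui.js\nshadowbox_modal.js');

        -- turn a stored literal "\n" sequence into a real newline
        SELECT replace(javascript, E'\\n', E'\n') AS javascript_unescaped
          FROM pages;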

    Read the article

  • postgres - regex_replace in distinct clause?

    - by n00b0101
    Ok... changing the question here... I'm getting an error when I try this: SELECT COUNT ( DISTINCT mid, regexp_replace(na_fname, '\\s*', '', 'g'), regexp_replace(na_lname, '\\s*', '', 'g')) FROM masterfile; Is it possible to use regexp in a distinct clause like this? The error is this: WARNING: nonstandard use of \\ in a string literal LINE 1: ...CT COUNT ( DISTINCT mid, regexp_replace(na_fname, '\\s*', ''...
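
    One alternative (a sketch, not from the article) that avoids the multi-column COUNT(DISTINCT ...) form entirely is to de-duplicate in a subquery; writing the patterns as E'' strings also silences the "nonstandard use of \\" warning:

        SELECT count(*)
          FROM (SELECT DISTINCT
                       mid,
                       regexp_replace(na_fname, E'\\s*', '', 'g') AS fname,
                       regexp_replace(na_lname, E'\\s*', '', 'g') AS lname
                  FROM masterfile) AS distinct_names;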

    Read the article

  • Strange: Planner takes decision with lower cost, but (very) long query runtime

    - by S38
    Facts: PGSQL 8.4.2, Linux I make use of table inheritance Each Table contains 3 million rows Indexes on joining columns are set Table statistics (analyze, vacuum analyze) are up-to-date Only used table is "node" with varios partitioned sub-tables Recursive query (pg = 8.4) Now here is the explained query: WITH RECURSIVE rows AS ( SELECT * FROM ( SELECT r.id, r.set, r.parent, r.masterid FROM d_storage.node_dataset r WHERE masterid = 3533933 ) q UNION ALL SELECT * FROM ( SELECT c.id, c.set, c.parent, r.masterid FROM rows r JOIN a_storage.node c ON c.parent = r.id ) q ) SELECT r.masterid, r.id AS nodeid FROM rows r QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------- CTE Scan on rows r (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1) CTE rows -> Recursive Union (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1) -> Index Scan using node_dataset_masterid on node_dataset r (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1) Index Cond: (masterid = 3533933) -> Hash Join (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3) Hash Cond: (c.parent = r.id) -> Append (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3) -> Seq Scan on node c (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3) -> Seq Scan on node_dataset c (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3) -> Seq Scan on node_stammdaten c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3) -> Seq Scan on node_stammdaten_adresse c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3) -> Seq Scan on node_testdaten c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3) -> Hash (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3) -> WorkTable Scan on rows r (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3) Total runtime: 172111.371 ms (16 rows) (END) So far so bad, the planner decides to choose hash joins (good) but no indexes (bad). 
Now after doing the following: SET enable_hashjoins TO false; The explained query looks like that: QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- CTE Scan on rows r (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1) CTE rows -> Recursive Union (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1) -> Index Scan using node_dataset_masterid on node_dataset r (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1) Index Cond: (masterid = 3533933) -> Nested Loop (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3) Join Filter: (r.id = c.parent) -> WorkTable Scan on rows r (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3) -> Append (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4) -> Seq Scan on node c (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4) -> Bitmap Heap Scan on node_dataset c (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4) Recheck Cond: (c.parent = r.id) -> Bitmap Index Scan on node_dataset_parent (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_stammdaten_parent on node_stammdaten c (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_testdaten_parent on node_testdaten c (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4) Index Cond: (c.parent = r.id) Total runtime: 49.349 ms (21 rows) (END) - incredibly faster, because indexes were used. Notice: Cost of the second query ist somewhat higher than for the first query. So the main question is: Why does the planner make the first decision, instead of the second? Also interesing: Via SET enable_seqscan TO false; i temp. disabled seq scans. Than the planner used indexes and hash joins, and the query still was slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.
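
    For what it's worth, the hash-join setting can be scoped to a single transaction instead of the whole session while experimenting (a sketch, not from the article; the GUC is spelled enable_hashjoin):

        BEGIN;
        SET LOCAL enable_hashjoin = off;  -- reverts automatically at COMMIT/ROLLBACK
        EXPLAIN ANALYZE
        WITH RECURSIVE rows AS (
            SELECT r.id, r.set, r.parent, r.masterid
              FROM d_storage.node_dataset r
             WHERE masterid = 3533933
            UNION ALL
            SELECT c.id, c.set, c.parent, r.masterid
              FROM rows r
              JOIN a_storage.node c ON c.parent = r.id
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r;
        COMMIT;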

    Read the article

  • SQL for selecting only the last item from a versioned table

    - by Jeremy
    I have a table with the following structure: CREATE TABLE items ( id serial not null, title character varying(255), version_id integer DEFAULT 1, parent_id integer, CONSTRAINT items_pkey PRIMARY KEY (id) ) So the table contains rows that look like so: id: 1, title: "Version 1", version_id: 1, parent_id: 1 id: 2, title: "Version 2", version_id: 2, parent_id: 1 id: 3, title: "Completely different record", version_id: 1, parent_id: 3 This is the SQL code I've got for selecting out all of the records with the most recent version ids: select * from items inner join ( select parent_id, max(version_id) as version_id from items group by parent_id) as vi on items.version_id = vi.version_id and items.parent_id = vi.parent_id Is there a way to write that SQL statement so that it doesn't use a subselect?
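
    PostgreSQL's DISTINCT ON offers a subselect-free version (a sketch, not from the article): keep the row with the highest version_id within each parent_id.

        SELECT DISTINCT ON (parent_id) *
          FROM items
         ORDER BY parent_id, version_id DESC;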

    Read the article

  • Can't install egenix-mx-base on Django production VPS

    - by Shane
    I have been following these instructions for setting up a Django production server with postgres, apache, nginx, and memcache. My problem is that I cannot get egenix-mx-base to install and without this I cannot get psycopg2 to work and therefore no database access :(. I am attempting this on a VPS running a clean install of Ubuntu Hardy (8.04) and have followed all instructions on the site to a T. The error message is as follows: $ easy_install egenix-mx-base Searching for egenix-mx-base Reading http://pypi.python.org/simple/egenix-mx-base/ Reading http://www.egenix.com/products/python/mxBase/ Reading http://www.lemburg.com/python/mxExtensions.html Reading http://www.egenix.com/ Best match: egenix-mx-base 3.1.3 Downloading http://downloads.egenix.com/python/egenix-mx-base-3.1.3.tar.gz Processing egenix-mx-base-3.1.3.tar.gz Running egenix-mx-base-3.1.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-iF7qzl/egenix-mx-base-3.1.3/egg-dist-tmp-laxvcS Warning: Can't read registry to find the necessary compiler setting Make sure that Python modules _winreg, win32api or win32con are installed. In file included from mx/TextTools/mxTextTools/mxte.c:42: mx/TextTools/mxTextTools/mxte_impl.h: In function ‘mxTextTools_TaggingEngine’: mx/TextTools/mxTextTools/mxte_impl.h:345: warning: pointer targets in initialization differ in signedness mx/TextTools/mxTextTools/mxte_impl.h:364: warning: pointer targets in initialization differ in signedness mx/URL/mxURL/mxURL.c: In function ‘mxURL_SetFromString’: mx/URL/mxURL/mxURL.c:676: warning: pointer targets in initialization differ in signedness mx/UID/mxUID/mxUID.c: In function ‘mxUID_Verify’: mx/UID/mxUID/mxUID.c:333: warning: pointer targets in passing argument 1 of ‘sscanf’ differ in signedness mx/UID/mxUID/mxUID.c: In function ‘mxUID_New’: mx/UID/mxUID/mxUID.c:462: warning: pointer targets in passing argument 1 of ‘mxUID_CRC16’ differ in signedness error: Setup script exited with error: build/bdist.linux-x86_64-py2.5_ucs4/dumb/egenix_mx_base-3.1.3-py2.5.egg-info: Is a directory Thank you to anyone who takes the time to try to help me.

    Read the article

  • Generic Method to find the tuples used for computation in Postgres?

    - by Rahul
    If I have a table col1 | name | pay ------+------------------+------ 1 | Steve Jobs | 1006 2 | Mike Markkula | 1007 3 | Mike Scott | 1978 4 | John Sculley | 1983 5 | Michael Spindler | 1653 The user executes a sum query which sums the pay of people getting paid more than $1500. Is there a way to also implicitly know which tuples have been used which satisfy the condition for sum ? I know you can separately write another query to just return the primary key ids which satisfy the condition. But, Is there any other way to do that in the same query ? probably rewrite the query in some way ? or... any suggestion ?
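
    One way to get the aggregate and the contributing keys from the same statement (a sketch, not from the article; the table name is a placeholder) is to aggregate the primary keys alongside the sum:

        SELECT sum(pay)        AS total_pay,
               array_agg(col1) AS contributing_ids
          FROM salaries
         WHERE pay > 1500;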

    Read the article

  • Porting join from Oracle to Postgres

    - by Grasper
    INSERT INTO MISSION_OBJECTIVE( MSN_INT_ID, MO_INT_ID, MO_MSN_CLASS_NM, MO_MSN_CLASS_CD, MO_MSN_TYPE, MO_PRIORITY, MO_COMMENT, MO_START_DT, MO_END_DT, ASP_AIRSPACE_NM, MO_OBJ_LOCATION, MO_ALO_LEG_ID, MO_ALO_ARRIVE_LOC) SELECT '1025', '1', 'AIRDROP', 'ADP', 'LAPES', NULL, COALESCE( NULL, ' '), TO_TIMESTAMP( '1002260900', 'YYMMDDHH24MI'), TO_TIMESTAMP( '1002260915', 'YYMMDDHH24MI'), 'TRANSIT ALPHA', 'TRANSIT ALPHA', '1', 'TRANSIT ALPHA' FROM AIRSPACE ASP, apsmain .MISSION_CLASS MC WHERE ASP.ASP_AIRSPACE_NM(+)= 'TRANSIT ALPHA' AND MC.MCS_MISSION_CLASS_NAME= 'AIRDROP' AND 'TRANSIT ALPHA' IS NOT NULL Is that exactly the same as: INSERT INTO MISSION_OBJECTIVE( MSN_INT_ID, MO_INT_ID, MO_MSN_CLASS_NM, MO_MSN_CLASS_CD, MO_MSN_TYPE, MO_PRIORITY, MO_COMMENT, MO_START_DT, MO_END_DT, ASP_AIRSPACE_NM, MO_OBJ_LOCATION, MO_ALO_LEG_ID, MO_ALO_ARRIVE_LOC) SELECT '1025', '1', 'AIRDROP', 'ADP', 'LAPES', NULL, COALESCE( NULL, ' '), TO_TIMESTAMP( '1002260900', 'YYMMDDHH24MI'), TO_TIMESTAMP( '1002260915', 'YYMMDDHH24MI'), 'TRANSIT ALPHA', 'TRANSIT ALPHA', '1', 'TRANSIT ALPHA' FROM AIRSPACE ASP, apsmain .MISSION_CLASS MC WHERE ASP.ASP_AIRSPACE_NM = 'TRANSIT ALPHA' AND MC.MCS_MISSION_CLASS_NAME= 'AIRDROP' AND 'TRANSIT ALPHA' IS NOT NULL I just deleted the (+). The part that is confusing me is that ASP.ASP_AIRSPACE_NM is being right joined to a constant.
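
    For what it's worth, the (+) marks AIRSPACE as the optional side of an Oracle outer join, so simply deleting it changes the semantics whenever no 'TRANSIT ALPHA' airspace row exists. A sketch of the ANSI LEFT JOIN translation of the FROM/WHERE part (not from the article; the SELECT list stays as in the question):

        SELECT MC.MCS_MISSION_CLASS_NAME, ASP.ASP_AIRSPACE_NM
          FROM apsmain.MISSION_CLASS MC
          LEFT JOIN AIRSPACE ASP
                 ON ASP.ASP_AIRSPACE_NM = 'TRANSIT ALPHA'
         WHERE MC.MCS_MISSION_CLASS_NAME = 'AIRDROP'
           AND 'TRANSIT ALPHA' IS NOT NULL;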

    Read the article

  • I don't find the sql request

    - by user301089
    Hi everybody, Here is my problem: I have a list of the following measures: src1 dst2 24th december 2009 src1 dst3 22th december 2009 src1 dst2 18th december 2009 I would like to get just the latest measure for each pair with a sql request - the first 2 lines in my case, because the pairs (src and dst) aren't the same. I tried to use DISTINCT but I only get the first 2 columns, and I want all columns. I also tried GROUP BY but without success. Can anyone help me? Thx Narglix
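
    A sketch of one approach (not from the article; measure, src, dst and measured_at are assumed names): rank the rows within each (src, dst) pair by date and keep the newest one, which returns all columns.

        SELECT src, dst, measured_at
          FROM (SELECT m.*,
                       row_number() OVER (PARTITION BY src, dst
                                          ORDER BY measured_at DESC) AS rn
                  FROM measure m) ranked
         WHERE rn = 1;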

    Read the article

  • PL/PGSQL function, having trouble accessing a returned result set from psycopg2...

    - by Paul
    I have this pl/pgsql function: CREATE OR REPLACE FUNCTION get_result(id integer) RETURNS SETOF my_table AS $ DECLARE result_set my_table%ROWTYPE; BEGIN IF id=0 THEN SELECT INTO result_set my_table_id, my_table_value FROM my_table; ELSE SELECT INTO result_set my_table_id, my_table_value FROM my_table WHERE my_table_id=id; END IF; RETURN; END; $ LANGUAGE plpgsql; I am trying to use this with Python's psycopg2 library. Here is the python code: import psycopg2 as pg conn = pg.connect(host='myhost', database='mydatabase', user='user', password='passwd') cur = conn.cursor() return cur.execute("SELECT * FROM get_result(0);") # returns NoneType However, if i just do the regular query, I get the correct set of rows back: ... return cur.execute("SELECT my_table_id, my_table_value FROM mytable;") # returns iterable result set Theres obviously something wrong with my pl/pgsql function, but I can't seem to get it right. I also tried using RETURN result_set; instead of just RETURN in the 10th line of my plpgsql function, but got an error from postgres.
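
    A sketch of a corrected function (not from the article): a SETOF function has to emit rows with RETURN NEXT or RETURN QUERY (a bare RETURN just ends it with zero rows), the body needs $$ dollar-quoting, and on the Python side cur.fetchall() is needed after execute(), since execute() itself returns None.

        CREATE OR REPLACE FUNCTION get_result(id integer)
        RETURNS SETOF my_table AS $$
        BEGIN
            IF id = 0 THEN
                RETURN QUERY SELECT * FROM my_table;
            ELSE
                RETURN QUERY SELECT * FROM my_table WHERE my_table_id = id;
            END IF;
            RETURN;
        END;
        $$ LANGUAGE plpgsql;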

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem 72 child tables, each having a year index and a station index, are defined as follows: CREATE TABLE climate.measurement_12_013 ( -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass), -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL, -- Inherited from table climate.measurement_12_013: taken date NOT NULL, -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL, -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL, -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying, CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7), CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12) ) INHERITS (climate.measurement) CREATE INDEX measurement_12_013_s_idx ON climate.measurement_12_013 USING btree (station_id); CREATE INDEX measurement_12_013_y_idx ON climate.measurement_12_013 USING btree (date_part('year'::text, taken)); (Foreign key constraints to be added later.) The following query runs abysmally slow due to a full table scan: SELECT count(1) AS measurements, avg(m.amount) AS amount FROM climate.measurement m WHERE m.station_id IN ( SELECT s.id FROM climate.station s, climate.city c WHERE -- For one city ... -- c.id = 5182 AND -- Where stations are within an elevation range ... -- s.elevation BETWEEN 0 AND 3000 AND 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 ) AND -- -- Begin extracting the data from the database. -- -- The data before 1900 is shaky; insufficient after 2009. -- extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND -- Whittled down by category ... -- m.category_id = 1 AND m.taken BETWEEN -- Start date. (extract( YEAR FROM m.taken )||'-01-01')::date AND -- End date. Calculated by checking to see if the end date wraps -- into the next year. If it does, then add 1 to the current year. -- (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date GROUP BY extract( YEAR FROM m.taken ) The sluggishness comes from this part of the query: m.taken BETWEEN /* Start date. */ (extract( YEAR FROM m.taken )||'-01-01')::date AND /* End date. Calculated by checking to see if the end date wraps into the next year. If it does, then add 1 to the current year. */ (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables. Question What is the proper way to index the dates to avoid full table scans? Options I have considered: GIN GiST Rewrite the WHERE clause Separate year_taken, month_taken, and day_taken columns to the tables What are your thoughts? Thank you!
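
    As a sketch of the "rewrite the WHERE clause" option (not from the article, with the station subquery dropped for brevity): the same year-bounded range can be expressed without string concatenation by truncating to the year and adding an interval, which keeps the comparison a plain date range.

        SELECT count(1)      AS measurements,
               avg(m.amount) AS amount
          FROM climate.measurement m
         WHERE m.category_id = 1
           AND extract(YEAR FROM m.taken) BETWEEN 1900 AND 2009
           AND m.taken BETWEEN date_trunc('year', m.taken)::date
                           AND (date_trunc('year', m.taken)
                                  + interval '1 year' - interval '1 day')::date
         GROUP BY extract(YEAR FROM m.taken);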

    Read the article

  • jdbc query - date ranges as parameters

    - by pstanton
    Hi all, I'd like to write a single JDBC statement that can handle the equivalent of any number of NOT BETWEEN date1 AND date2 where clauses. By single query, i mean that the same SQL string will be used to create the JDBC statements and then have different parameters supplied. This is so that the underlying frameworks can efficiently cache the query (i've been stung by that before). Essentially, I'd like to find a query that is the equivalent of SELECT * FROM table WHERE mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? and at the same time could be used with fewer parameters: SELECT * FROM table WHERE mydate NOT BETWEEN ? AND ? or more parameters SELECT * FROM table WHERE mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? I will consider using a temporary table if that will be simpler and more efficient. thanks for the help!
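
    A sketch of the temporary-table variant mentioned at the end (not from the article; table and column names are placeholders): the statement text never changes, only the rows in the temp table do, so the driver can keep reusing the same prepared statement.

        CREATE TEMP TABLE excluded_ranges (range_start date, range_end date);
        -- insert one row per excluded range with a plain two-parameter INSERT, then:
        SELECT t.*
          FROM mytable t
         WHERE NOT EXISTS (SELECT 1
                             FROM excluded_ranges r
                            WHERE t.mydate BETWEEN r.range_start AND r.range_end);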

    Read the article

  • Table with a lot of attributes

    - by Robert
    Hi, I'm planning to build a database project. One of the tables has a lot of attributes. My question is: what is better, to divide the class into 2 separate tables or to put all of the attributes into one table? Below is an example: create table User { id, name, surname,... show_name, show_photos, ...) or create table User { id, name, surname,... ) create table UserPrivacy {usr_id, show_name, show_photos, ...) The performance, I suppose, is similar, since I can use an index.
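
    In actual DDL the split variant would look roughly like this (a sketch, not from the article), with the 1:1 relationship enforced by making usr_id both the primary key and a foreign key:

        CREATE TABLE users (
            id      serial PRIMARY KEY,
            name    varchar(100),
            surname varchar(100)
        );

        CREATE TABLE user_privacy (
            usr_id      integer PRIMARY KEY REFERENCES users(id),
            show_name   boolean NOT NULL DEFAULT true,
            show_photos boolean NOT NULL DEFAULT true
        );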

    Read the article

  • Escaping colons in hibernate createSQLQuery

    - by Stratosgear
    I am confused on how I can create an SQL statement containing colons. I am trying to create a view and I am using (notice the double colons): create view MyView as ( SELECT tableA.colA as colA, tableB.colB as colB, round(tableB.colD / 1024)::numeric, 2) as calcValue, FROM tableA, tableB WHERE tableA.colC = 'someValue' ); This is a postgres query and I am forced to use the double colons (::) in order to correctly run the statement. I then pass the above statement through: s.createSQLQuery(myQuery).executeUpdate(); and I get a: Exception in thread "main" org.hibernate.exception.DataException: \ could not execute native bulk manipulation query at org.hibernate.exception.SQLStateConverter.convert(\ SQLStateConverter.java:102) ... more stacktrace... with an output of my above statement changed as (notice the question mark): create view MyView as ( SELECT tableA.colA as colA, tableB.colB as colB, round(tableB.colD / 1024)?, 2) as calcValue, FROM tableA, tableB WHERE tableA.colC = 'someValue' ); Obviously, hibernate confuses my colons with named parameters. Is there a way to escape the colons (a google suggestion that mentions that a single colon is escaped as a double colon does NOT work) or another way of running this statement? Thanks.
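
    The usual workaround (a sketch, not from the article) is to remove the colons altogether by writing the cast with CAST(... AS ...), which Postgres treats identically to ::, so Hibernate has nothing left to mistake for a named parameter:

        CREATE VIEW MyView AS
        SELECT tableA.colA AS colA,
               tableB.colB AS colB,
               round(CAST(tableB.colD / 1024 AS numeric), 2) AS calcValue
          FROM tableA, tableB
         WHERE tableA.colC = 'someValue';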

    Read the article

  • INSERT and transaction serialization in PostgreSQL

    - by Alexander
    I have a question. The transaction isolation level is set to serializable. When one user opens a transaction and INSERTs or UPDATEs data in "table1", and then another user opens a transaction and tries to INSERT data into the same table, does the second user need to wait 'til the first user commits the transaction?

    Read the article

  • Error in Postgres execute

    - by RAJA
    I'm using this function... -- Function: dbo.sp_acc_createaccount(character varying, integer, integer, character varying, character varying, character varying, character varying) -- DROP FUNCTION dbo.sp_acc_createaccount(character varying, integer, integer, character varying, character varying, character varying, character varying); CREATE OR REPLACE FUNCTION dbo.sp_acc_createaccount(IN in_orgmgrtype character varying, INOUT in_parentid integer, IN in_levelid integer, IN in_name character varying, IN in_phone character varying, IN in_webpage character varying, IN in_owner character varying, OUT out_accountid integer) RETURNS record AS $BODY$ DECLARE l_CoID int; l_CurrID int; l_OrgMgrId int; errmsg varchar(250); BEGIN IF in_ParentID = -1 THEN errmsg := 'execute sp_Acc_GetCompanyIDForUser failed'; l_CoID := dbo.sp_Acc_GetCompanyIDForUser(in_user); IF l_CoID = -2 THEN RAISE EXCEPTION 'execute sp_Acc_GetCompanyIDForUser failed'; END IF; errmsg := 'execute sp_Acc_GetOrgMgrIDForCompany failed'; l_OrgMgrID := dbo.sp_Acc_GetOrgMgrIDForCompany(in_OrgMgrType, l_CoID); IF l_OrgMgrID = -2 THEN RAISE EXCEPTION 'execute sp_Acc_GetOrgMgrIDForCompany failed'; END IF; in_ParentID := l_OrgMgrID; ELSE errmsg := 'Select orgmgrid failed'; SELECT OrgMgrID INTO l_CurrID FROM dbo.OrgMgr WHERE Name = in_Name AND ParentID = in_ParentID; END IF; -- if not, add it IF l_CurrID IS NULL THEN errmsg := 'Insert into orgmgr(account creation) failed'; INSERT INTO dbo.OrgMgr (ParentID, LevelID, Name, PrimaryPhone, WebPage, Owner) VALUES (in_ParentID, in_LevelID, in_Name, in_Phone, in_WebPage, in_Owner); out_AccountID := currval('dbo.OrgMgr_accountid_seq'); ELSE out_AccountID := -1; END IF; COMMIT; EXCEPTION WHEN RAISE_EXCEPTION THEN out_AccountID := 99; RAISE NOTICE 'ERROR : %',errmsg; WHEN OTHERS THEN out_AccountID := 99; RAISE EXCEPTION 'ERROR : %',errmsg; END $BODY$ LANGUAGE 'plpgsql' VOLATILE COST 100; ALTER FUNCTION dbo.sp_acc_createaccount(character varying, integer, integer, character varying, character varying, character varying, character varying) OWNER TO postgres; But.. it's showing error in execute time .. ERROR: SPI_execute_plan failed executing query "ROLLBACK": SPI_ERROR_TRANSACTION
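
    For what it's worth, SPI_ERROR_TRANSACTION usually points at transaction control inside the function: a plpgsql function always runs inside the caller's transaction, so the COMMIT in the body (and any ROLLBACK issued around the exception handler) is not allowed there. A minimal sketch of the rule (not from the article; table and function names are placeholders):

        CREATE TABLE demo_log (msg text);

        CREATE OR REPLACE FUNCTION demo_no_commit() RETURNS void AS $$
        BEGIN
            INSERT INTO demo_log (msg) VALUES ('created');
            -- no COMMIT here: the row becomes visible when the caller commits
        END;
        $$ LANGUAGE plpgsql;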

    Read the article

  • Is there anything as good as TOAD for Postgres (Windows)?

    - by misc090912
    Hi guys, I'm just looking for a management tool like TOAD for Postgres. Anyone used a good one? Edit - I work mostly within the data itself and the database already has a mature model/design. I use the edit windows the most (well, in TOAD for Oracle anyway.) As far as I know, Toad only exists naturally for: Oracle, MS SQL, DB2 and MySQL... --JS

    Read the article

  • Database query optimization

    - by hdx
    Ok my Giant friends once again I seek a little space in your shoulders :P Here is the issue, I have a python script that is fixing some database issues but it is taking way too long, the main update statement is this: cursor.execute("UPDATE jiveuser SET username = '%s' WHERE userid = %d" % (newName,userId)) That is getting called about 9500 times with different newName and userid pairs... Any suggestions on how to speed up the process? Maybe somehow a way where I can do all updates with just one query? Any help will be much appreciated! PS: Postgres is the db being used.
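
    A sketch of a single-statement variant (not from the article; the literal pairs are placeholders): send all (userid, newName) pairs in one UPDATE ... FROM (VALUES ...), and pass values as bound parameters rather than with % string formatting, which also avoids quoting and injection problems.

        UPDATE jiveuser AS u
           SET username = v.new_name
          FROM (VALUES (1, 'newname1'),
                       (2, 'newname2')   -- one row per (userid, newName) pair
               ) AS v(userid, new_name)
         WHERE u.userid = v.userid;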

    Read the article

  • Possible to rank partial matches in Postgres full text search?

    - by Joe
    I'm trying to calculate a ts_rank for a full-text match where some of the terms in the query may not be in the ts_vector against which it is being matched. I would like the rank to be higher in a match where more words match. Seems pretty simple? Because not all of the terms have to match, I have to | the operands, to give a query such as to_tsquery('one|two|three') (if it was &, all would have to match). The problem is, the rank value seems to be the same no matter how many words match. In other words, it's maxing rather than multiplying the clauses. select ts_rank('one two three'::tsvector, to_tsquery('one')); gives 0.0607927. select ts_rank('one two three'::tsvector, to_tsquery('one|two|three|four')); gives the expected lower value of 0.0455945 because 'four' is not the vector. But select ts_rank('one two three'::tsvector, to_tsquery('one|two')); gives 0.0607927 and likewise select ts_rank('one two three'::tsvector, to_tsquery('one|two|three')); gives 0.0607927 I would like the result of ts_rank to be higher if more terms match. Possible? To counter one possible response: I cannot calculate all possible subsequences of the search query as intersections and then union them all in a query because I am going to be working with large queries. I'm sure there are plenty of arguments against this anyway! Edit: I'm aware of ts_rank_cd but it does not solve the above problem.
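
    One crude workaround (a sketch, not from the article): rank each term separately and add the results, so documents matching more of the terms score higher; whether this scales to very large queries is another matter.

        SELECT ts_rank('one two three'::tsvector, to_tsquery('one'))
             + ts_rank('one two three'::tsvector, to_tsquery('two'))
             + ts_rank('one two three'::tsvector, to_tsquery('four')) AS summed_rank;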

    Read the article
