Search Results

Search found 1505 results on 61 pages for 'postgresql 8 4'.


  • SQL get data out of BEGIN; ...; END; block in python

    - by Claudiu
    I want to run many select queries at once by putting them between BEGIN; END;. I tried the following:

        cur = connection.cursor()
        cur.execute(""" BEGIN; SELECT ...; END;""")
        res = cur.fetchall()

    However, I get the error: psycopg2.ProgrammingError: no results to fetch. How can I actually get data this way? Likewise, if I just have many selects in a row, I only get data back from the latest one. Is there a way to get data out of all of them?

    Read the article

  • How can I execute a SQL query in emacs lisp?

    - by Chris R
    I want to execute an SQL query and get its result in elisp:

        (let ((results (do-sql-query "SELECT * FROM a_table")))
          (do-something-with results))

    I'm using Postgres, and I already know all of my connection information (host, username, password, db, et al.). I just want to execute the query and get the result back, synchronously.

    Read the article

  • What is the best Django syncdb crash debugging technique?

    - by user367752
    What is the best Django syncdb crash debugging technique? I've previously asked a question about a problem with manage.py syncdb returning an exception, and the answer was that the app has a wrong import. http://stackoverflow.com/questions/2734721/django-manage-py-syncdb-not-working I'd like to know the technique used to find the place where there is a wrong import. I tried ./manage.py syncdb --verbosity=2 but I didn't get any more information that way.

    Read the article

  • postgres store with composite value type, or a better way of attributing an inverted index

    - by Hassan Syed
    I can't seem to figure out the syntax for populating an hstore with a value of composite type -- note: I do not want to convert a record to an hstore.

        select hstore('hello => ROW(1,2)');

    I know it's something simple; however, Google is not my friend today. Use case: a custom inverted index. The data is modelling an inverted index of lexemes, and the composite data types are various probabilities related to the lexemes which I will use to implement document clustering. Does anyone know a better way of doing this? I'm open to using an external system if it allows attaching attributes to key-posting pairs in the inverted index. I'd use something external if it had solid support for what I am trying to do; I suspect that sticking 3-10k lexemes per tuple and then doing batch processing on them is gonna be nasty, as the whole hstore will have to be parsed and converted.
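
    A minimal sketch of one way to do this, assuming a hypothetical composite type lexeme_stats and an hstore module that provides the two-argument hstore(text, text) constructor: hstore values are plain text, so the composite has to round-trip through a text cast rather than being stored natively.

        CREATE TYPE lexeme_stats AS (p_doc real, p_cluster real);

        -- Store the composite by serialising it to text...
        SELECT hstore('hello', ROW(0.1, 0.2)::lexeme_stats::text) AS idx;

        -- ...and recover it by casting the text value back to the composite type.
        SELECT (hstore('hello', ROW(0.1, 0.2)::lexeme_stats::text) -> 'hello')::lexeme_stats;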

    Read the article

  • Transaction Isolation on select, insert, delete

    - by Bradford
    What could possibly go wrong with the following transaction if executed by concurrent users in the default isolation level of READ COMMITTED?

        BEGIN TRANSACTION
        SELECT * FROM t WHERE pid = 10 and r between 40 and 60
        -- ... this returns tid = 1, 3, 5
        -- ... process returned data ...
        DELETE FROM t WHERE tid in (1, 3, 5)
        INSERT INTO t (tid, pid, r) VALUES (77, 10, 35)
        INSERT INTO t (tid, pid, r) VALUES (78, 10, 37)
        INSERT INTO t (tid, pid, r) VALUES (79, 11, 39)
        COMMIT
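
    Under READ COMMITTED, two concurrent runs can both read tid 1, 3, 5 and both process the same data; whichever deletes second simply finds the rows already gone. One hedged way to narrow that window is to lock the rows being processed, as in the sketch below (this does not stop new matching rows from appearing concurrently; for that, SERIALIZABLE or an explicit table lock would be needed):

        BEGIN;
        -- Lock the rows this transaction is about to process, so a concurrent
        -- run blocks here instead of processing the same tids.
        SELECT * FROM t WHERE pid = 10 AND r BETWEEN 40 AND 60 FOR UPDATE;
        DELETE FROM t WHERE tid IN (1, 3, 5);
        INSERT INTO t (tid, pid, r) VALUES (77, 10, 35);
        INSERT INTO t (tid, pid, r) VALUES (78, 10, 37);
        INSERT INTO t (tid, pid, r) VALUES (79, 11, 39);
        COMMIT;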

    Read the article

  • Why would a TableAdapter populate a DataSet with "1/1/2000" for an entire timestamp column?

    - by Rob
    I have a TableAdapter filling a DataSet, and for some reason every select query populates my timestamp column with the value 1/1/2000 for every selected row. I first verified that original values are intact on the DB side; for the most part, they are, although it seems a few rows lost their original timestamp because of update queries performed programmatically before the issue was discovered. The DataColumn type is DateType, while the database (Postgres) column type is timestamp. Up until recently, this was all playing very nicely. I noticed the issue in a bound DataGridView control, and verified that this is not related to data binding by utilizing the 'Preview Data' option in the VS DataSet Editor. Usually when I notice unexpected values popping up in my application it's related to a mis-configured property, type conflict, or another silly mistake I've made. So after checking properties and types, and even recreating the TableAdapter from scratch, to say I'm a little baffled is an understatement. Does anyone have any ideas of what I could do to fix the issue and/or diagnose the cause?

    Read the article

  • Foreign key referring to primary keys across multiple tables?

    - by sanjay bharkatiya
    Hi, I have three tables, say city, state and road:

        1) city - city_id (PK), name
        2) state - Stt_id (PK), name
        3) Road - Edge_id (PK), Admin_id (FK)

    where Admin_id refers to both city_id and Stt_id. This is done because the tables are too huge. Say city_id contains 1, 2, 3 and Stt_id contains 4, 5, 6; now if I am inserting 1, 2, 3, 4, 5, 6 in Admin_id it is throwing an error. What is the solution to my problem? Regards, Sanjay
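
    A foreign key can only reference one table, so a common workaround is to split Admin_id into two nullable columns, one per parent, and enforce that exactly one of them is set. A minimal sketch, with assumed names for the city and state key columns:

        CREATE TABLE road (
            edge_id  integer PRIMARY KEY,
            city_id  integer REFERENCES city (city_id),
            stt_id   integer REFERENCES state (stt_id),
            -- exactly one of the two parent references must be filled in
            CHECK ((city_id IS NULL) <> (stt_id IS NULL))
        );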

    Read the article

  • Is SQL Server the best DB for storing and comparing images in a database for a small ecommerce application?

    - by iecut
    I have been trying to create a small e-commerce web-based application using the MS .NET framework. The application will let users store the image of a product that they want to sell or purchase; they will then have the option to upload the image of a particular product and compare that image with similar images in the database. So my two main concerns are:

        - Is MS SQL a good option to store and compare the images?
        - Is there any other better database that can do the same work with less complexity, and that is also easy to integrate with the MS .NET framework?

    Read the article

  • Tomcat Postgres Connection

    - by user191207
    Hi, I'm using a singleton class for a PostgreSQL connection inside a servlet. The problem is that once it is open it works for a while (I guess until some timeout), and then it starts throwing an I/O exception. Any idea what is happening to the singleton class inside the Tomcat VM? Thanks

    Read the article

  • Which database and language is better at handling Unicode?

    - by user187809
    Which database should I use if my application is going to be in multiple languages (including Chinese, Japanese etc.)? In other words, is MySQL better or worse than Postgres at handling Unicode? (These are the only two databases my hosting company has.) Also, which language is better for handling Unicode, PHP or Ruby/Rails?

    Read the article

  • Firing Postgres triggers on different table columns

    - by aatifh
    CONTENT_TABLE

        id | author | timestamp | title | description
        (0 rows)

    SEARCH_TABLE

        id | content_type_id | object_id | tsvector_title | tsvector_description
        (0 rows)

    I have to fire a trigger whenever CONTENT_TABLE is UPDATED/INSERTED. Something like this:

        CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE ON course_course
        FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger(SHOULD_BE_THE_COLUMN_OF_SEARCH_TABLE(tsvector_description), 'pg_catalog.english', description);

    Actually, I have to add tsvectors for the title and description of CONTENT_TABLE to the SEARCH_TABLE columns tsvector_title and tsvector_description. Can I just fire one trigger for it? Any sort of help will be appreciated. Thanks in advance.
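
    The built-in tsvector_update_trigger can only write into a column of the table that fired it, so keeping a separate search table in sync takes a small custom trigger function. A hedged sketch, assuming lower-cased table names content_table and search_table with the columns shown above:

        CREATE OR REPLACE FUNCTION content_search_sync() RETURNS trigger AS $$
        BEGIN
          IF TG_OP = 'UPDATE' THEN
            UPDATE search_table
               SET tsvector_title       = to_tsvector('pg_catalog.english', coalesce(NEW.title, '')),
                   tsvector_description = to_tsvector('pg_catalog.english', coalesce(NEW.description, ''))
             WHERE object_id = NEW.id;
          ELSE
            INSERT INTO search_table (object_id, tsvector_title, tsvector_description)
            VALUES (NEW.id,
                    to_tsvector('pg_catalog.english', coalesce(NEW.title, '')),
                    to_tsvector('pg_catalog.english', coalesce(NEW.description, '')));
          END IF;
          RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER content_search_sync
        AFTER INSERT OR UPDATE ON content_table
        FOR EACH ROW EXECUTE PROCEDURE content_search_sync();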

    Read the article

  • Postgres database ignoring created index?!

    - by drasto
    I have a Postgres database and a table called my_table. There are 4 columns in that table (id, column1, column2, column3). The id column is the primary key; there are no other constraints or indexes on the columns. The table has about 200000 rows. I want to print out all rows whose column2 value is equal (case insensitive) to 'value12'. I use this:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    Here is the execution plan for this statement (result of set enable_seqscan=on; EXPLAIN SELECT * FROM my_table WHERE column2 = lower('value12')):

        Seq Scan on my_table  (cost=0.00..4676.00 rows=10000 width=55)
          Filter: ((column2)::text = 'value12'::text)

    I consider this to be too slow, so I create an index on column column2 for better performance of searches:

        CREATE INDEX my_index ON my_table (lower(column2))

    Now I ran the same select:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    and I expect it to be much faster because it can use the index. However it is not faster; it is as slow as before. So I check the execution plan and it is the same as before (see above). So it still uses a sequential scan and it ignores the index! Where is the problem?
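
    An expression index is only considered when the query's predicate uses the same expression, so an index on lower(column2) cannot serve a filter on the bare column2. A sketch of the query shape that should let the planner use my_index (which also makes the comparison genuinely case-insensitive):

        -- The index was built on the expression lower(column2)...
        CREATE INDEX my_index ON my_table (lower(column2));

        -- ...so the WHERE clause has to reference lower(column2) as well:
        SELECT * FROM my_table WHERE lower(column2) = lower('value12');

        -- Verify with EXPLAIN that an index scan or bitmap index scan on my_index is chosen:
        EXPLAIN SELECT * FROM my_table WHERE lower(column2) = lower('value12');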

    Read the article

  • Lazarus Pascal - DB Connection - clarification

    - by itsols
    The following code is from the docs here:

        Program ConnectDB
        var
          AConnection : TSQLConnection;

        Procedure CreateConnection;
        begin
          AConnection := TIBConnection.Create(nil);
          AConnection.Hostname := 'localhost';
          AConnection.DatabaseName := '/opt/firebird/examples/employee.fdb';
          AConnection.UserName := 'sysdba';
          AConnection.Password := 'masterkey';
        end;

        begin
          CreateConnection;
          AConnection.Open;
          if Aconnection.Connected then
            writeln('Succesful connect!')
          else
            writeln('This is not possible, because if the connection failed, ' +
                    'an exception should be raised, so this code would not ' +
                    'be executed');
          AConnection.Close;
          AConnection.Free;
        end.

    The main body of the code makes sense to me BUT I don't get where TSQLConnection came from. I cannot use CTRL + Space to autocomplete it either, which means my program has no reference to it. I'm trying to connect to Postgres by the way. Can someone please state what TSQLConnection is? Thanks!

    Read the article

  • Excessive httpd processes stacking up on my Rails + Apache2 + Passenger production setup?

    - by LeoAlmighty
    I have a Rails + Apache2 + Postgres + Passenger application running in production mode on OSX Snow Leopard. The application serves as a data warehouse for another application in the cloud, so I'm constantly getting API calls to my OSX production build. After a recent reboot, I'm finding a ton of httpd processes stacking up and eventually requiring an Apache restart. I haven't changed any settings; everything was running fine before. Any ideas on the best way to troubleshoot this?

        $ ps -ef | grep httpd
         0 6203    1 0 0:00.20 ?? 0:00.47 /usr/sbin/httpd -D FOREGROUND
        70 6222 6203 0 0:00.05 ?? 0:00.11 /usr/sbin/httpd -D FOREGROUND
        70 6224 6203 0 0:00.31 ?? 0:00.50 /usr/sbin/httpd -D FOREGROUND
        70 6233 6203 0 0:00.05 ?? 0:00.10 /usr/sbin/httpd -D FOREGROUND
        70 6234 6203 0 0:00.43 ?? 0:00.64 /usr/sbin/httpd -D FOREGROUND
        70 6243 6203 0 0:00.02 ?? 0:00.03 /usr/sbin/httpd -D FOREGROUND
        70 6319 6203 0 0:00.08 ?? 0:00.16 /usr/sbin/httpd -D FOREGROUND
        70 6334 6203 0 0:00.02 ?? 0:00.05 /usr/sbin/httpd -D FOREGROUND
        70 6469 6203 0 0:00.04 ?? 0:00.08 /usr/sbin/httpd -D FOREGROUND
        70 6487 6203 0 0:00.36 ?? 0:00.48 /usr/sbin/httpd -D FOREGROUND
        70 6593 6203 0 0:00.36 ?? 0:00.48 /usr/sbin/httpd -D FOREGROUND
        70 6709 6203 0 0:00.04 ?? 0:00.08 /usr/sbin/httpd -D FOREGROUND
        70 6718 6203 0 0:00.04 ?? 0:00.10 /usr/sbin/httpd -D FOREGROUND
        70 6834 6203 0 0:00.01 ?? 0:00.03 /usr/sbin/httpd -D FOREGROUND
        70 6852 6203 0 0:00.00 ?? 0:00.00 /usr/sbin/httpd -D FOREGROUND
        70 6853 6203 0 0:00.01 ?? 0:00.02 /usr/sbin/httpd -D FOREGROUND

    Read the article

  • PHP error I can't figure out - something to do with SQL stuff, I think

    - by MrEnder
    OK, the error is showing up somewhere in this here code:

        if($error==false) {
            $query = pg_query("INSERT INTO chatterlogins(firstName, lastName, gender, password, ageMonth, ageDay, ageYear, email, createDate) VALUES('$firstNameSignup', '$lastNameSignup', '$genderSignup', md5('$passwordSignup'), $monthSignup, $daySignup, $yearSignup, '$emailSignup', now());");
            $query = pg_query("INSERT INTO chatterprofileinfo(email, lastLogin) VALUES('$email', now())";);
            $_SESSION['$userNameSet'] = $email;
            header('Location: signup_step2.php'.$rdruri);
        }

    Anyone see what I did wrong? Sorry for being so unspecific, but I've been staring at it for 10 mins and I can't figure it out.

    Read the article

  • Updating records in Postgres using FROM clause

    - by Summer
    Hi, I'm changing my db schema, and moving column 'seat' from old_table to new_table. First I added a 'seat' column to new_table. Now I'm trying to populate the column with the values from old_table.

        UPDATE new_table SET seat = seat FROM old_table WHERE old_table.id = new_table.ot_id;

    This returns ERROR: column reference "seat" is ambiguous.

        UPDATE new_table nt SET nt.seat = ot.seat FROM old_table ot WHERE ot.id = nt.ot_id;

    Returns ERROR: column "nt" of relation "new_table" does not exist. Ideas?
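
    In PostgreSQL's UPDATE ... FROM syntax, the target column on the left of SET is never table-qualified (which is what produces the second error), while columns on the right can and should be qualified to resolve the ambiguity. A sketch of the working shape:

        UPDATE new_table nt
           SET seat = ot.seat          -- target column without the nt. prefix
          FROM old_table ot
         WHERE ot.id = nt.ot_id;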

    Read the article

  • How to change SRID of geometry column?

    - by Z77
    I have a table where one of the columns is a geometry column, the_geom, holding polygons with an SRID. I added a new column to the same table with exactly the same geometry data as in the_geom. This second column is named the_geom4258 because I want to set up another SRID = 4258 here. So what is the procedure to change the geometry to the other SRID (another coordinate system)? Is it enough to apply the following query:

        UPDATE table SET the_geom4258=ST_SetSRID(the_geom4258,4258);
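
    ST_SetSRID only relabels the metadata; it does not move any coordinates. To actually convert the geometries into the EPSG:4258 coordinate system, the reprojection function is ST_Transform. A hedged sketch (the table name my_table is an assumption; depending on the PostGIS version, UpdateGeometrySRID or adjusting the enforce_srid check constraint on the column may also be needed):

        -- Relabel only: coordinates stay exactly as they are.
        UPDATE my_table SET the_geom4258 = ST_SetSRID(the_geom4258, 4258);

        -- Reproject: coordinates are converted from the_geom's SRID into 4258.
        UPDATE my_table SET the_geom4258 = ST_Transform(the_geom, 4258);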

    Read the article

  • SQL hidden techniques?

    - by AlexRednic
    What are those pro/subtle techniques that SQL provides, that not many know about, and that also cut code and improve performance? E.g. I have just learned how to use CASE statements inside aggregate functions and it totally changed my approach to things. Are there others?
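
    For readers who haven't met the CASE-inside-aggregate idiom the question mentions, a small sketch over a hypothetical orders table: one pass over the data produces several conditional counts and sums at once.

        SELECT customer_id,
               COUNT(CASE WHEN status = 'open'   THEN 1 END)           AS open_orders,
               COUNT(CASE WHEN status = 'closed' THEN 1 END)           AS closed_orders,
               SUM(CASE WHEN status = 'closed' THEN amount ELSE 0 END) AS closed_total
        FROM orders
        GROUP BY customer_id;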

    Read the article

  • calling a stored postgres function from php

    - by KittyYoung
    Just a little confused here... I have a function in Postgres, and when I'm at the pg prompt, I just do:

        SELECT zp('zc',10,20,90);
        FETCH ALL FROM zc;

    I'm wondering how to do this from PHP? I thought I could just do:

        $q = pg_query("SELECT zp('zc',10,20,90)");

    But, how do I "fetch" from that query?
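
    Assuming zp opens a refcursor named after its first argument, the FETCH only works inside the same transaction that opened the cursor, so from any client the same two statements have to be sent within one explicit transaction, e.g.:

        BEGIN;
        SELECT zp('zc', 10, 20, 90);   -- opens the cursor named zc
        FETCH ALL FROM zc;             -- issue this as a second query on the same connection
        COMMIT;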

    Read the article

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data, 500M entries now, and 200K new records every day. The total size is around 15GB for now. My clients are mostly querying the table via a PHP script, and the size of the result set is around 10K records (not very large):

        select * from T where timestamp > X and timestamp < Y and additionFilters

    And I want this operation cheap. Currently my table is hosted in Postgres 7, on a single box with 16G of memory, and I would love to see some good suggestions for hosting this at low cost while also allowing me to scale up for performance if needed. The table serves:

        1. Query:  90%
        2. Insert: 9.9%
        3. Update: 0.1%  <-- very rare
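
    Whatever the hosting choice, the query pattern above (a narrow timestamp range over a huge append-mostly table) is usually served by an index on the timestamp column, or by range partitioning so that old data can be pruned and archived cheaply. A sketch under the assumption of inheritance-based partitioning, the approach available on old Postgres versions:

        -- An index on the range column lets the planner avoid scanning the whole table.
        CREATE INDEX t_timestamp_idx ON T ("timestamp");

        -- Optional: one child table per month; with constraint_exclusion enabled,
        -- partitions whose CHECK range cannot match the WHERE clause are skipped.
        CREATE TABLE t_2010_06 (
            CHECK ("timestamp" >= '2010-06-01' AND "timestamp" < '2010-07-01')
        ) INHERITS (T);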

    Read the article

  • Postgres error with Sinatra/Haml/DataMapper on Heroku

    - by sevennineteen
    I'm trying to move a simple Sinatra app over to Heroku. Migration of the Ruby app code and existing MySQL database using Taps went smoothly, but I'm getting the following Postgres error:

        PostgresError - ERROR: operator does not exist: text = integer
        LINE 1: ...d_at", "post_id" FROM "comments" WHERE ("post_id" IN (4, 17,...
                                                             ^
        HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.

    It's evident that the problem is related to a type mismatch in the query, but this is being issued from a Haml template by the DataMapper ORM at a very high level of abstraction, so I'm not sure how I'd go about controlling this... Specifically, this seems to be throwing up on a call of p.comments from my Haml template, where p represents a given post. The DataMapper models are related as follows:

        class Post
          property :id, Serial
          ...
          has n, :comments
        end

        class Comment
          property :id, Serial
          ...
          belongs_to :post
        end

    This works fine on my local and current hosted environment using MySQL, but Postgres is clearly more strict. There must be hundreds of DataMapper & Haml apps running on Postgres DBs, and this model relationship is super-conventional, so hopefully someone has seen (and determined how to fix) this. Thanks!
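
    The error means comments.post_id arrived in Postgres as a text column (quite possibly an artifact of the Taps migration) while the list of post ids is integer, and Postgres will not compare the two implicitly. One hedged fix at the database level is to convert the column type in place; column and table names are taken from the error message, and the USING clause needs PostgreSQL 8.0+:

        ALTER TABLE comments
          ALTER COLUMN post_id TYPE integer
          USING post_id::integer;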

    Read the article

  • named_scope + average is causing the table to be specified more than once in the SQL query run on Postgres

    - by hadees
    I have a named_scope like so...

        named_scope :gender, lambda { |gender|
          { :joins => {:survey_session => :profile },
            :conditions => { :survey_sessions => { :profiles => { :gender => gender } } } }
        }

    and when I call it everything works fine. I also have this average method I call...

        Answer.average(:rating, :include => {:survey_session => :profile}, :group => "profiles.career")

    which also works fine if I call it like that. However if I were to call it like so...

        Answer.gender('m').average(:rating, :include => {:survey_session => :profile}, :group => "profiles.career")

    I get...

        ActiveRecord::StatementInvalid: PGError: ERROR: table name "profiles" specified more than once
        : SELECT avg("answers".rating) AS avg_rating, profiles.career AS profiles_career FROM "answers"
          LEFT OUTER JOIN "survey_sessions" survey_sessions_answers ON "survey_sessions_answers".id = "answers".survey_session_id
          LEFT OUTER JOIN "profiles" ON "profiles".id = "survey_sessions_answers".profile_id
          INNER JOIN "survey_sessions" ON "survey_sessions".id = "answers".survey_session_id
          INNER JOIN "profiles" ON "profiles".id = "survey_sessions".profile_id
          WHERE ("profiles"."gender" = E'm') GROUP BY profiles.career

    which is a little hard to read but says I'm including the table profiles twice. If I were to just remove the include from average it works, but that isn't really practical because average is actually being called inside a method which gets passed the scope. So sometimes gender or average might get called without each other, and if either was missing the profile include it wouldn't work. So either I need to know how to fix this apparent bug in Rails or figure out a way to know what scopes were applied to an ActiveRecord::NamedScope::Scope object, so that I could check to see if they have been applied and if not add the include for average.

    Read the article

  • Problem with Postgres FOR LOOP

    - by user341831
    Hi all, I have a problem in a Postgres function:

        CREATE OR REPLACE FUNCTION linkedRepoObjects(id bigint) RETURNS int AS $$
        DECLARE catNumber int DEFAULT 0;
        DECLARE cat RECORD;
        BEGIN
          WITH RECURSIVE children(categoryid,category_fk) AS (
            SELECT categoryid, category_fk FROM b2m.category_tab WHERE categoryid = 1
            UNION ALL
            SELECT c1.categoryid,c1.category_fk FROM b2m.category_tab c1, children
            WHERE children.categoryid = c1.category_fk
          )
          FOR cat IN SELECT * FROM children LOOP
            IF EXISTS (SELECT 1 FROM b2m.repoobject_tab WHERE category_fk = cat.categoryid) THEN
              catNumber = catNumber +1
            END IF;
          END LOOP;
          RETURN catNumber;
        END;
        $$ LANGUAGE 'plpgsql';

    I get the error:

        FEHLER: Syntaxfehler bei »FOR«
        LINE 1: ...dren WHERE children.categoryid = c1.category_fk ) FOR $2 I...

    (that is, ERROR: syntax error at »FOR«). I'm a newbie in Postgres. Please help. Thanks in advance.
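
    The syntax error comes from placing the WITH RECURSIVE clause in front of the FOR loop: in PL/pgSQL the CTE has to be part of the query that the FOR iterates over. A sketch of the corrected shape, with the logic otherwise unchanged and statement terminators added:

        CREATE OR REPLACE FUNCTION linkedRepoObjects(id bigint) RETURNS int AS $$
        DECLARE
          catNumber int := 0;
          cat RECORD;
        BEGIN
          FOR cat IN
            WITH RECURSIVE children(categoryid, category_fk) AS (
              SELECT categoryid, category_fk FROM b2m.category_tab WHERE categoryid = 1
              UNION ALL
              SELECT c1.categoryid, c1.category_fk
              FROM b2m.category_tab c1, children
              WHERE children.categoryid = c1.category_fk
            )
            SELECT * FROM children
          LOOP
            IF EXISTS (SELECT 1 FROM b2m.repoobject_tab WHERE category_fk = cat.categoryid) THEN
              catNumber := catNumber + 1;
            END IF;
          END LOOP;
          RETURN catNumber;
        END;
        $$ LANGUAGE plpgsql;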

    Read the article

  • Distribute budget over ranked components in SQL

    - by Lee
    Assume I have a budget of $10 (any integer) and I want to distribute it over records which have a rank field, with varying needs. Example:

        rank  Req.  Fulfilled?
        1     $3    Y
        2     $4    Y
        3     $2    Y
        4     $3    N

    Ranks 1 to 3 should be fulfilled because they are within budget, whereas the one ranked 4 should not. I want an SQL query to solve that. Below is my initial script:

        CREATE TABLE budget (
            id VARCHAR (32),
            budget INTEGER,
            PRIMARY KEY (id));

        CREATE TABLE component (
            id VARCHAR (32),
            rank INTEGER,
            req INTEGER,
            satisfied BOOLEAN,
            PRIMARY KEY (id));

        INSERT INTO budget (id,budget) VALUES ('1',10);
        INSERT INTO component (id,rank,req) VALUES ('1',1,3);
        INSERT INTO component (id,rank,req) VALUES ('2',2,4);
        INSERT INTO component (id,rank,req) VALUES ('3',3,2);
        INSERT INTO component (id,rank,req) VALUES ('4',4,3);

    Thanks in advance for your help. Lee
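
    A running total over the requirements in rank order answers this directly with a window function (PostgreSQL 8.4+): a component is fulfilled while the cumulative requirement, including its own, still fits in the budget. A sketch against the tables above, assuming the single budget row shown:

        SELECT c.id,
               c.rank,
               c.req,
               SUM(c.req) OVER (ORDER BY c.rank) <= b.budget AS fulfilled
        FROM component c
        CROSS JOIN budget b
        ORDER BY c.rank;

        -- With the sample data this yields ranks 1-3 fulfilled (running totals 3, 7, 9)
        -- and rank 4 not fulfilled (running total 12 > 10).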

    Read the article
