Search Results

Search found 3721 results on 149 pages for 'postgresql newbie'.

  • Finding group maxes in SQL join result

    - by Gene
    Two SQL tables. One contestant has many entries:

        Contestants             Entries
        Id  Name                Id  Contestant_Id  Score
        --  --------            --  -------------  -----
        1   Fred                1   3              100
        2   Mary                2   3              22
        3   Irving              3   1              888
        4   Grizelda            4   4              123
                                5   1              19
                                6   3              50

    Low score wins. I need to retrieve the current best scores of all contestants, ordered by score:

        Best Entries Report
        Name      Entry_Id  Score
        --------  --------  -----
        Fred      5         19
        Irving    2         22
        Grizelda  4         123

    I can certainly get this done with many queries. My question is whether there's a way to get the result with one efficient SQL query. I can almost see how to do it with GROUP BY, but not quite. In case it's relevant, the environment is Rails ActiveRecord and PostgreSQL.
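
    One single-query approach (a sketch using PostgreSQL's DISTINCT ON; table and column names follow the example above) picks each contestant's lowest-scoring entry, then orders the report by score:

        SELECT name, entry_id, score
        FROM (
            SELECT DISTINCT ON (c.id)
                   c.name, e.id AS entry_id, e.score
            FROM contestants c
            JOIN entries e ON e.contestant_id = c.id
            ORDER BY c.id, e.score    -- keeps the lowest score per contestant
        ) best
        ORDER BY score;

    Contestants with no entries (Mary) drop out via the inner join, matching the expected report.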

  • get n records at a time from a temporary table

    - by Claudiu
    I have a temporary table with about 1 million entries. The temporary table stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries such that I get the first 1000 rows, then the next 1000, and so on? They are not inherently ordered, but the temporary table just has one column with an ID, so I can order it if necessary. I was thinking of adding an extra column to the temporary table to number all the rows, something like:

        CREATE TEMP TABLE tmptmp AS
        SELECT ##autonumber somehow##, id
        FROM ....  -- complicated query

    Then I can do:

        SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000

    and so on. How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
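
    A minimal sketch of the numbering step, assuming PostgreSQL 8.4+ for the row_number() window function:

        CREATE TEMP TABLE tmptmp AS
        SELECT row_number() OVER (ORDER BY id) AS autonumber, id
        FROM ....  -- the complicated query goes here

        -- batches are then simple ranges (row_number() starts at 1):
        SELECT * FROM tmptmp WHERE autonumber >= 1 AND autonumber <= 1000;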

  • create temporary table from cursor

    - by Claudiu
    Is there any way, in PostgreSQL accessed from Python using SQLObject, to create a temporary table from the results of a cursor? Previously, I had a query, and I created the temporary table directly from the query. I then had many other queries interacting with that temporary table. Now I have much more data, so I want to process only 1000 rows at a time or so. However, I can't do CREATE TEMP TABLE ... AS ... from a cursor, as far as I can see. Is the only thing to do something like:

        rows = cur.fetchmany(1000)
        cur2 = conn.cursor()
        cur2.execute("""CREATE TEMP TABLE foobar (id INTEGER)""")
        for row in rows:
            cur2.execute("""INSERT INTO foobar VALUES (%d)""" % row)

    or is there a better way? This seems awfully inefficient.
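
    One way to avoid the client-side round trip altogether (a sketch; it assumes the large query can be embedded server-side rather than consumed through the cursor) is to populate the temp table with a single INSERT ... SELECT and batch from there:

        CREATE TEMP TABLE foobar (id INTEGER);

        INSERT INTO foobar (id)
        SELECT id FROM ....   -- the original large query
        LIMIT 1000;           -- or loop, advancing with OFFSET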

  • rake test and test_structure.sql

    - by korinthe
    First of all, I have to run "rake RAILS_ENV=test ..." to get the test suites to hit my test DB. Annoying but ok to live with. However when I do so, I get a long stream of errors like so:

        > rake RAILS_ENV=test -I test test:units
        psql:/path/to/project/db/test_structure.sql:33: ERROR:  function "armor" already exists with same argument types
        [and many more]

    It looks like some DB definitions are getting unnecessarily reloaded. I can't find any mention of this on Google, so I was wondering whether others have seen this? I am using a PostgreSQL database with the following in my environment.rb:

        config.active_record.schema_format = :sql

    and using Rails 2.3.5 with rake 0.8.7.

  • select row from table and substitute a field with one from another column if it exists

    - by EarthMind
    I'm trying to construct a PostgreSQL query that does the following, but so far my efforts have been in vain. Problem: there are two tables, A and B. I'd like to select all columns from table A (having columns: id, name, description) and substitute the A.name column with the value of the column B.title from table B (having columns: id, table_A_id, title, langcode) where B.table_A_id is 5 and B.langcode is 'nl' (if there are any rows). My attempts:

        SELECT A.name,
               CASE WHEN EXISTS (SELECT title FROM B WHERE table_A_id = 5 AND langcode = 'nl')
                    THEN B.title
                    ELSE A.name
               END
        FROM A, B
        WHERE A.id = 5 AND B.table_A_id = 5 AND B.langcode = 'nl'

        -- second try:
        SELECT COALESCE(B.title, A.name) AS name
        FROM A, B
        WHERE A.id = 5 AND B.table_A_id = 5
          AND EXISTS (SELECT title FROM B WHERE table_A_id = 5 AND langcode = 'nl')

    I've tried using CASE and COALESCE() but failed due to my inexperience with both concepts. Thanks in advance.
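
    The usual pattern here (a minimal sketch, reusing the table and column names above) is a LEFT JOIN, so that COALESCE falls back to A.name whenever no translation row exists:

        SELECT A.id,
               COALESCE(B.title, A.name) AS name,
               A.description
        FROM A
        LEFT JOIN B ON B.table_A_id = A.id
                   AND B.langcode = 'nl'
        WHERE A.id = 5;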

  • How to provide an API client with 1,000,000 database results?

    - by Chris Dutrow
    What is a good way to provide an API client with 1,000,000 database results? We are currently using PostgreSQL. A few suggested methods:

    * Paging using cursors
    * Paging using random numbers (add "GREATER THAN ORDER BY" to each query)
    * Save the information to a file and let the client download it
    * Iterate through the results, then POST the data to the client server
    * Return only keys to the client, then let the client request the objects from cloud files like Amazon S3 (this may still require paging just to get the file names)

    What haven't I thought of that is stupidly simple and way better than any of these options?
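
    One stupidly simple option not on the list is keyset ("seek") paging, where each request passes the last key it saw. A sketch, assuming an indexed unique id column (the table and column names here are made up):

        SELECT id, payload
        FROM results            -- hypothetical table
        WHERE id > 12345        -- last id seen on the previous page; 0 on the first request
        ORDER BY id
        LIMIT 1000;

    Unlike OFFSET paging, the cost per page stays flat no matter how deep the client reads.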

  • SQL Select between two fields depending on the value of one field

    - by Filip
    Hi. I am using a PostgreSQL database, and in a table representing some measurements I have two columns: measurement and interpolated. The first holds the observation, and the second an interpolated value derived from nearby values. Every record with an original value also has an interpolated value. However, there are a lot of records without "original" observations (NULL); for these, only the interpolated value in the second column exists. So basically there are just two cases in the database:

        measurement  interpolated
        -----------  ------------
        Value        Value
        NULL         Value

    Of course, it is preferable to use the value from the first column if available, so I need to build a query that selects the data from the first column and, if it is not available (NULL), returns the value from the second column for the record in question. I have no idea how to build the SQL query. Please help. Thanks.
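
    This is exactly what COALESCE is for: it returns its first non-NULL argument. A one-line sketch (the table name is assumed):

        SELECT COALESCE(measurement, interpolated) AS value
        FROM measurements;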

  • How to delete unused sequences?

    - by user1023877
    We are using PostgreSQL. My requirement is to delete unused sequences from my database. For example, if I create any table through my application, one sequence is created, but when deleting the table we do not delete the sequence, too. If I then create the same table again, another sequence is created. Example: for a table file, the automatically created sequence for the id column is file_id_seq. When I delete the table file and create it with the same name again, a new sequence is created (i.e. file_id_seq1). I have accumulated a huge number of unused sequences in my application database this way. How can I delete these unused sequences?
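
    A sketch for finding the orphans: a sequence owned by a serial column carries an "auto" dependency row in pg_depend, so sequences without one are candidates for dropping (review the list before running any DROP SEQUENCE):

        SELECT c.relname
        FROM pg_class c
        WHERE c.relkind = 'S'                        -- 'S' = sequence
          AND NOT EXISTS (
              SELECT 1 FROM pg_depend d
              WHERE d.classid = 'pg_class'::regclass
                AND d.objid   = c.oid
                AND d.deptype = 'a'                  -- auto dependency on a table column
          );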

  • How to save an order (permutation) in an sql db

    - by Bendlas
    I have a tree structure in an SQL table like so:

        CREATE TABLE containers (
            container_id serial NOT NULL PRIMARY KEY,
            parent integer REFERENCES containers (container_id)
        )

    Now I want to define an ordering between nodes with the same parent. I have thought of adding a node_index column to ORDER BY, but that seems suboptimal, since it involves modifying the index of a lot of nodes when modifying the structure. That could include adding, removing, reordering, or moving nodes from one subtree to another. Is there an SQL datatype for an ordered sequence, or an efficient way to emulate one? It doesn't need to be fully standard SQL; I just need a solution for MSSQL and hopefully PostgreSQL. EDIT: To make it clear, the ordering is arbitrary. Actually, the user will be able to drag'n'drop tree nodes in the GUI.
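
    A common emulation (a sketch in PostgreSQL syntax, though the same idea carries over to MSSQL; the numbers are illustrative) is a sort key with gaps, so a drag'n'drop usually touches only the moved row:

        ALTER TABLE containers ADD COLUMN sort_key numeric NOT NULL DEFAULT 0;

        -- siblings are spaced out: 100, 200, 300, ...
        -- dropping a node between the siblings keyed 100 and 200:
        UPDATE containers SET sort_key = 150 WHERE container_id = 42;

        -- reading one level of the tree in order:
        SELECT * FROM containers WHERE parent = 7 ORDER BY sort_key;

    Occasional renumbering is still needed when a gap runs out, but it can be deferred and localized to the children of one parent.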

  • sql select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the IDs of a subset of Foo. I then want to run a more complicated set of queries, but only on those IDs. Is there an efficient way to do this? The best I can think of is creating a query such as:

        SELECT ...  -- complicated stuff
        WHERE ...   -- more stuff
          AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393)

    That is, I construct a huge IN clause. Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the IDs? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the IDs.
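
    One alternative (a sketch; the filter is a placeholder) is to stage the IDs server-side once and join against them, instead of shipping a giant literal list back into every query:

        CREATE TEMP TABLE wanted_ids AS
        SELECT id FROM Foo WHERE /* the initial subset condition */ true;

        SELECT f.*
        FROM Foo f
        JOIN wanted_ids w USING (id);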

  • Efficient job progress update in web application

    - by Endru6
    Hi, I am creating a web application (Django in my case, but I think the question is more general) that administrates a cluster of workers doing queued jobs, and there is a need to track each job's progress. When I did this with database UPDATEs (PostgreSQL in this case), it severely hit database performance, because each UPDATE creates a new row version in the table, and in my case only vacuuming the DB removes the obsolete ones. With 30 jobs running and reporting progress every minute, the DB may require vacuuming (which means huge slowdowns on the front-end side for all the employees working with the system) every 10 days. Because the progress information isn't critical, i.e. it doesn't have to be persistent, how would you do progress updates from jobs without the overhead a database implies? There are 30 worker servers, each doing 1 or 2 jobs simultaneously, 1 front-end server which serves the web application to users, and 1 database server.
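
    If it has to stay in the database, one compromise is a small disposable table that skips the WAL; a sketch (UNLOGGED tables need PostgreSQL 9.1+, so this is version-dependent):

        -- one row per job; contents are lost on crash, which is fine here
        CREATE UNLOGGED TABLE job_progress (
            job_id  integer PRIMARY KEY,
            percent smallint NOT NULL DEFAULT 0
        );

        -- each worker overwrites its single row
        UPDATE job_progress SET percent = 37 WHERE job_id = 5;

    Dead row versions still accumulate, but autovacuum handles a 30-row table cheaply; since persistence isn't required, a purely in-memory store (e.g. memcached) remains the zero-database-overhead option.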

  • how to use constants in SQL CREATE TABLE?

    - by kchiu
    Hi, I have 3 SQL tables, defined as follows:

        CREATE TABLE organs(
            abbreviation VARCHAR(16),
            -- ... other stuff
        );

        CREATE TABLE blocks(
            abbreviation VARCHAR(16),
            -- ... other stuff
        );

        CREATE TABLE slides(
            title VARCHAR(16),
            -- ... other stuff
        );

    The 3 fields above all use VARCHAR(16) because they're related and have the same length restriction. Is there a (preferably portable) way to put '16' into a constant / variable and reference that instead in CREATE TABLE? E.g. something like this would be nice:

        CREATE TABLE slides(
            title VARCHAR(MAX_TITLE_LENGTH),
            -- ... other stuff
        );

    I'm using PostgreSQL 8.4. Thanks a lot, and Happy New Year! Cheers.
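
    One PostgreSQL-specific sketch: a DOMAIN gives the shared type a single name (not portable everywhere, and widening it later means recreating the domain, but it keeps the length in one place):

        CREATE DOMAIN short_label AS VARCHAR(16);

        CREATE TABLE slides(
            title short_label
            -- ... other stuff
        );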

  • How to design a database schema for storing text in multiple languages?

    - by stach
    We have a PostgreSQL database, and we have several tables which need to keep certain data in several languages (the list of possible languages is thankfully defined system-wide). For example, let's start with:

        create table blah (id serial, foo text, bar text);

    Now, let's make it multilingual. How about:

        create table blah (
            id serial,
            foo_en text, foo_de text, foo_jp text,
            bar_en text, bar_de text, bar_jp text
        );

    That would be good for full-text search in Postgres: just add a tsvector column for each language. But is it optimal? Maybe we should use another table to keep the translations, like:

        create table texts (id serial, colspec text, obj_id int, language text, data text);

    Maybe, just maybe, we should use something else - something outside the SQL world? Any help is appreciated.
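
    For the separate-table design, a lookup sketch (reusing the texts columns above) that fetches one field in German and falls back to English when no translation exists:

        SELECT b.id,
               COALESCE(t_de.data, t_en.data) AS foo
        FROM blah b
        LEFT JOIN texts t_de ON t_de.obj_id = b.id AND t_de.colspec = 'foo' AND t_de.language = 'de'
        LEFT JOIN texts t_en ON t_en.obj_id = b.id AND t_en.colspec = 'foo' AND t_en.language = 'en';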

  • PostgreSQL - can't save items - "type integer but expression is of type character"

    - by user984621
    I keep getting this error over and over again. The column age has the type integer, and I am saving an integer value into this column. I also tried not saving anything into this column at all, but I still get this error... Could anyone help me fix it?

        PG::Error: ERROR:  column "age" is of type integer but expression is of type character varying at character 102
        HINT:  You will need to rewrite or cast the expression.
        : INSERT INTO "user_details" ("created_at", "age", "updated_at", "user_id")
          VALUES ($1, $2, $3, $4) RETURNING "id"
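
    The error says the value bound for age arrives as character varying, so Postgres refuses the implicit conversion. A sketch of the cast the HINT suggests (the values are placeholders), though checking the real column type first with \d user_details is worth doing, since Rails would normally bind an integer for an integer column:

        INSERT INTO user_details (created_at, age, updated_at, user_id)
        VALUES (now(), '30'::integer, now(), 1)
        RETURNING id;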

  • Cassandra or MySQL/PostgreSQL?

    - by Ivri
    Hi! I have a huge database (kind of like WordNet) and want to know if it's easier to use Cassandra instead of MySQL or PostgreSQL. All my life I have been using MySQL and PostgreSQL, and I can easily think in terms of relational algebra, but several weeks ago I learned about Cassandra and that it's used at Facebook and Twitter. Is it more convenient? What DBMSs are usually used nowadays to store social network data, relationships between objects, or WordNet-like data?

  • Unit Test For NpgsqlCommand With Rhino Mocks

    - by J Pollack
    My unit test keeps getting the following error: "System.InvalidOperationException: The Connection is not open."

    The test:

        [TestFixture]
        public class Test
        {
            [Test]
            public void Test1()
            {
                NpgsqlConnection connection = MockRepository.GenerateStub<NpgsqlConnection>();
                // Tried to fake the open connection
                connection.Stub(x => x.State).Return(ConnectionState.Open);
                connection.Stub(x => x.FullState).Return(ConnectionState.Open);

                DbQueries queries = new DbQueries(connection);
                bool procedure = queries.ExecutePreProcedure("201003");
                Assert.IsTrue(procedure);
            }
        }

    Code under test:

        using System.Data;
        using Npgsql;

        public class DbQueries
        {
            private readonly NpgsqlConnection _connection;

            public DbQueries(NpgsqlConnection connection)
            {
                _connection = connection;
            }

            public bool ExecutePreProcedure(string date)
            {
                var command = new NpgsqlCommand("name_of_procedure", _connection);
                command.CommandType = CommandType.StoredProcedure;
                NpgsqlParameter parameter = new NpgsqlParameter { DbType = DbType.String, Value = date };
                command.Parameters.Add(parameter);
                command.ExecuteScalar();
                return true;
            }
        }

    How would you test the code using Rhino Mocks 3.6? PS. NpgsqlConnection is a connection to a PostgreSQL server.

  • Faster way to transfer table data from linked server

    - by spender
    After much fiddling, I've managed to install the right ODBC driver and have successfully created a linked server on SQL Server 2008, by which I can access my PostgreSQL db from SQL server. I'm copying all of the data from some of the tables in the PgSQL DB into SQL Server using merge statements that take the following form:

        with mbRemote as
        (
            select *
            from openquery(someLinkedDb, 'select * from someTable')
        )
        merge into someTable mbLocal
        using mbRemote
        on mbLocal.id = mbRemote.id
        when matched
        /*edit*/
        /*clause below really speeds things up when many rows are unchanged*/
        /*can you think of anything else?*/
        and not (mbLocal.field1 = mbRemote.field1
             and mbLocal.field2 = mbRemote.field2
             and mbLocal.field3 = mbRemote.field3
             and mbLocal.field4 = mbRemote.field4)
        /*end edit*/
        then update
        set mbLocal.field1 = mbRemote.field1,
            mbLocal.field2 = mbRemote.field2,
            mbLocal.field3 = mbRemote.field3,
            mbLocal.field4 = mbRemote.field4
        when not matched then insert
        (id, field1, field2, field3, field4)
        values
        (mbRemote.id, mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4)
        when not matched by source then delete;

    After this statement completes, the local (SQL Server) copy is fully in sync with the remote (PgSQL server). A few questions about this approach: Is it sane? It strikes me that an update will be run over all fields in local rows that haven't necessarily changed. The only prerequisite is that the local and remote id fields match. Is there a more fine-grained approach / a way of constraining the merge statement to only update rows that have actually changed?
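
    One refinement sketch (T-SQL, same table and field names as above): EXCEPT compares whole rows NULL-safely, so the matched branch fires only when something really differs - unlike the AND NOT (...) guard, which misses changes where either side is NULL:

        merge into someTable mbLocal
        using (select * from openquery(someLinkedDb, 'select * from someTable')) mbRemote
        on mbLocal.id = mbRemote.id
        when matched and exists
        (
            select mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4
            except
            select mbLocal.field1, mbLocal.field2, mbLocal.field3, mbLocal.field4
        )
        then update
        set mbLocal.field1 = mbRemote.field1,
            mbLocal.field2 = mbRemote.field2,
            mbLocal.field3 = mbRemote.field3,
            mbLocal.field4 = mbRemote.field4
        when not matched then insert (id, field1, field2, field3, field4)
        values (mbRemote.id, mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4)
        when not matched by source then delete;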

  • How can I convert a projection that's not part of spatial_ref_sys?

    - by Summer
    Hi, I'm importing shapefiles into a Postgres+PostGIS database. Here's my usual procedure:

    * Find an srid in the spatial_ref_sys table where srtext appears to match the shapefile's .prj file
    * Upload the data into a new table using the shp2pgsql utility, specifying the srid with the -s flag
    * Add the new table to my main geometry table, converting along the way to srid 4269 (the Census standard projection) using ST_Transform

    Unfortunately, the spatial_ref_sys table doesn't include Mississippi state's standard projection. The contents of their .prj file are as follows:

        PROJCS["mstm",GEOGCS["GCS_North_American_1983",DATUM["D_North_American_1983",
        SPHEROID["GRS_1980",6378137.0,298.257222101]],PRIMEM["Greenwich",0.0],
        UNIT["Degree",0.0174532925199433]],PROJECTION["Transverse_Mercator"],
        PARAMETER["False_Easting",500000.0],PARAMETER["False_Northing",1300000.0],
        PARAMETER["Central_Meridian",-89.75],PARAMETER["Scale_Factor",0.9998335],
        PARAMETER["Latitude_Of_Origin",32.5],UNIT["Meter",1.0]]

    I eventually found the ogr2ogr utility, and especially with the "peace and joy" promises, I decided to give it a try. I tried this command:

        ogr2ogr -update -f "PostgreSQL" PG:"Connection details" "File name.shp" -t_srs EPSG:4269 -nln Table_Name

    I am now getting the error "Terminating translation prematurely after failed translation of layer", which seems to indicate that ogr2ogr is not going to be the savior I imagined for getting arbitrary .prj files neatly into the 4269 projection. Any ideas about what to do?
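
    One fallback (a sketch; the srid 990001 is an arbitrary pick in the user range, and the proj4 string below is my reading of the PROJCS parameters above, so verify it) is to register the projection in spatial_ref_sys yourself and then proceed with the usual shp2pgsql + ST_Transform workflow:

        INSERT INTO spatial_ref_sys (srid, auth_name, auth_srid, srtext, proj4text)
        VALUES (
            990001, 'custom', 990001,
            'PROJCS["mstm", ... ]',   -- the full .prj text from above
            '+proj=tmerc +lat_0=32.5 +lon_0=-89.75 +k=0.9998335 +x_0=500000 +y_0=1300000 +datum=NAD83 +units=m +no_defs'
        );

        -- then load with: shp2pgsql -s 990001 ...
        -- and convert with: ST_Transform(the_geom, 4269)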

  • Using multiple aggregate functions in an (ANSI) SQL statement

    - by morpheous
    I have aggregate functions foo(), foobar(), fredstats(), barneystats(). I want to create a domain-specific query language (DSQL) above my DB, to facilitate using a domain language to query the DB. The 'language' consists of algebraic expressions (or more specifically, SQL-like criteria) which I use to generate (ANSI) SQL statements that are sent to the DB engine. The following lines are examples of what the language statements will look like, and hopefully they will further clarify the concept:

        Example 1
        DQL statement: foobar('yellow') between 1 and 3 and fredstats('weight') > 42
        Translation: fetch all rows in an underlying table where the computed value for
        aggregate function foobar() is between 1 and 3 AND the computed value for
        aggregate function fredstats() is greater than 42

        Example 2
        DQL statement: fredstats('weight') < barneystats('weight') AND foo('fighter') in (9,10,11) AND foobar('green') <> 42
        Translation: fetch all rows where the specified criteria match

        Example 3
        DQL statement: foobar('green') / foobar('red') <> 42
        Translation: fetch all rows where the specified criteria match

        Example 4
        DQL statement: foobar('green') - foobar('red') >= 42
        Translation: fetch all rows where the specified criteria match

    Given the following information:

    * The table upon which the queries above are executed is called 'tbl'
    * Table 'tbl' has the structure (id int, name varchar(32), weight float)
    * The result set returns only tbl.id, tbl.name and the names of the aggregate functions as columns - so for example the foobar() column will be called foobar in the result set

    So, for example, the first DQL query will return a result set with the following columns: id, name, foobar, fredstats.

    Given the above, my questions are:

    * What would be the underlying SQL required for Example 1?
    * What would be the underlying SQL required for Example 3?
    * Given an algebraic equation comprising aggregate functions, is there a way of generalizing the algorithm needed to generate the required ANSI SQL statement(s)?

    I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible.
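
    A sketch for Example 1, assuming foo(), foobar(), etc. are user-defined aggregates evaluated over groups of tbl rows: aggregate criteria map to HAVING, and since standard SQL cannot reference select-list aliases there, the expressions are repeated:

        SELECT id, name,
               foobar('yellow')    AS foobar,
               fredstats('weight') AS fredstats
        FROM tbl
        GROUP BY id, name
        HAVING foobar('yellow') BETWEEN 1 AND 3
           AND fredstats('weight') > 42;

    Example 3 would follow the same shape with HAVING foobar('green') / foobar('red') <> 42, which suggests a general algorithm: collect each distinct aggregate call into the select list, and translate the DSQL expression verbatim into the HAVING clause.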

  • PLPGSQL : How to return a record from function executed by INSERT/UPDATE rule?

    - by seas
    I have the following schema for my database:

        create sequence data_sequence;

        create table data_table
        (
            id integer primary key,
            field varchar(100)
        );

        create view data_view as select id, field from data_table;

        create function data_insert(_new data_view) returns data_view as $$
        declare
            _id integer;
            _result data_view%rowtype;
        begin
            _id := nextval('data_sequence');
            insert into data_table(id, field) values (_id, _new.field);
            select * into _result from data_view where id = _id;
            return _result;
        end;
        $$ language plpgsql;

        create rule insert as on insert to data_view do instead select data_insert(new);

    Then I type in psql:

        insert into data_view(field) values('abc');

    I would like to see something like:

         id | field
        ----+-------
          1 | abc

    Instead I see:

         data_insert
        -------------
         (1,"abc")

    Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I could obtain the id of a just-inserted record without selecting for it from scratch. Something like:

        insert into data_view(field) values('abc') returning id into my_variable

    would be nice, but doesn't work, with the error:

        ERROR:  cannot perform INSERT RETURNING on relation "data_view"
        HINT:  You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

    I don't really understand that HINT. I am using PostgreSQL 8.4.
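
    What the HINT asks for, as a sketch: replace the function-calling rule with an unconditional DO INSTEAD INSERT whose RETURNING list matches the view's columns (the rule is given a non-reserved name here, and the old rule must be dropped first); INSERT ... RETURNING then works directly against the view:

        create rule data_view_insert as
        on insert to data_view do instead
            insert into data_table (id, field)
            values (nextval('data_sequence'), new.field)
            returning id, field;

        -- now this returns the generated id:
        insert into data_view(field) values('abc') returning id;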

  • SQL: Speed Improvement - Cluttered union query

    - by vol7ron
    My current query:

        SELECT *
        FROM (
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (a.user_id = b.user_id)
            UNION
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (lower(a.f_name) = lower(b.f_name)
                                    AND lower(a.l_name) = lower(b.l_name))
        ) foo

        UNION

        SELECT a.user_id, a.f_name, a.l_name, '', '', ''
        FROM current_tbl a
        WHERE a.user_id NOT IN
        (
            SELECT user_id FROM
            (
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b ON (a.user_id = b.user_id)
                UNION
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b ON (lower(a.f_name) = lower(b.f_name)
                                        AND lower(a.l_name) = lower(b.l_name))
            ) bar
        )
        ORDER BY user_id

    Example of table population:

        current_tbl:
        ---------------------------
         user_id | f_name | l_name
        ---------+--------+--------
         A1      | Adam   | Acorn
         A2      | Beth   | Berry
         A3      | Calv   | Chard

        import_tbl:
        ---------------------------
         user_id | f_name | l_name
        ---------+--------+--------
         A1      | Adam   | Acorn
         A2      | Beth   | Butcher   <- last name different

    Expected output:

        ----------------------------------------------------------------
         user_id1 | f_name1 | l_name1 | user_id2 | f_name2 | l_name2
        ----------+---------+---------+----------+---------+-----------
         A1       | Adam    | Acorn   | A1       | Adam    | Acorn
         A2       | Beth    | Berry   | A2       | Beth    | Butcher
         A3       | Calv    | Chard   |          |         |

    Doing it this way gets rid of duplicate conditions where the row would be A2 | Beth | Berry | A2 | Beth | Butcher, but it keeps the A3 row. I hope this makes sense and I haven't overly simplified it. This is a continuation question from my other question. The succession of these improvements has dropped the query down from ~32000ms to where it's at now, ~1200ms - quite an improvement. I suspect I can optimize by using UNION ALL in the subquery and of course the usual index optimizations, but I'm looking for the best SQL optimization. FYI this particular case is for PostgreSQL.
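
    One candidate rewrite (a sketch; whether it actually beats the UNION depends on indexes, so it is worth an EXPLAIN ANALYZE) is a single LEFT JOIN with both match conditions, which keeps unmatched current_tbl rows without the NOT IN subquery:

        SELECT a.user_id, a.f_name, a.l_name,
               b.user_id, b.f_name, b.l_name
        FROM current_tbl a
        LEFT JOIN import_tbl b
               ON a.user_id = b.user_id
               OR (lower(a.f_name) = lower(b.f_name)
               AND lower(a.l_name) = lower(b.l_name))
        ORDER BY a.user_id;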

  • Asynchronous callback - gwt

    - by sprasad12
    Hi, I am using GWT and Postgres for my project. On the front end I have a few widgets whose data I am trying to save to tables at the back end when I click the "save project" button (this also takes the name for the created project). In the asynchronous callback part I am setting more than one table, but it is not sending the data properly. I am getting the following error:

        org.postgresql.util.PSQLException: ERROR: insert or update on table "entitytype" violates foreign key constraint "entitytype_pname_fkey"
        Detail: Key (pname)=(Project Name) is not present in table "project".

    But when I run a select statement on the project table I can see that the project name is present. Here is what the callback part looks like:

        oksave.addClickHandler(new ClickHandler() {
            @Override
            public void onClick(ClickEvent event) {
                if (erasync == null)
                    erasync = GWT.create(EntityRelationService.class);
                AsyncCallback<Void> callback = new AsyncCallback<Void>() {
                    @Override
                    public void onFailure(Throwable caught) {
                    }

                    @Override
                    public void onSuccess(Void result) {
                    }
                };
                erasync.setProjects(projectname, callback);

                for (int i = 0; i < boundaryPanel.getWidgetCount(); i++) {
                    top = new Integer(boundaryPanel.getWidget(i).getAbsoluteTop()).toString();
                    left = new Integer(boundaryPanel.getWidget(i).getAbsoluteLeft()).toString();
                    if (widgetTitle.startsWith("ATTR")) {
                        type = "regular";
                        erasync.setEntityAttribute(name1, name, type, top, left, projectname, callback);
                    } else {
                        erasync.setEntityType(name, top, left, projectname, callback);
                    }
                }
            }
        });

    Question: is it wrong to set more than one table in the asynchronous callback when all the other tables depend on a particular table? When I call setProjects in the code above, isn't it completed first before moving on to the next call? Any input will be greatly appreciated. Thank you.

  • Stored insert procedure in plpgsql

    - by crazyphoton
    I want to do something like this in PostgreSQL. I tried this:

        CREATE OR REPLACE FUNCTION create_patient(
            _name text, _email text, _phone text, _password text,
            _field1 text, _field2 text, _field3 timestamp, _field4 text,
            OUT _pid integer, OUT _id integer) RETURNS record AS $$
        DECLARE
            -- _pid and _id are already declared as OUT parameters above;
            -- redeclaring them here would shadow them and leave the outputs NULL
            _type text;
        BEGIN
            _type := 'patient';

            INSERT INTO patients (name, email, phone, field1, field2, field3)
            VALUES (_name, _email, _phone, _field1, _field2, _field3)
            RETURNING id INTO _pid;

            INSERT INTO users (username, password, type, pid, phone, language)
            VALUES (_email, _password, _type, _pid, _phone, _field4)
            RETURNING id INTO _id;
        END;
        $$ LANGUAGE plpgsql;

    But there are a lot of instances where I would not want to specify some of field1/field2/field3/field4, and I would want the unspecified fields to use the default value in the table. Currently that is not possible, because to call this function I need to specify all fields. TL;DR: is there a simple way to create a wrapper procedure for INSERT in PL/pgSQL where I can specify which fields I want to insert?
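
    One route (a sketch; parameter DEFAULTs need PostgreSQL 8.4+, and a NULL default is not the same as letting the table apply its own column default, so the table defaults are mirrored by hand here with made-up values):

        CREATE OR REPLACE FUNCTION create_patient(
            _name text, _email text, _phone text, _password text,
            _field1 text DEFAULT 'none',          -- mirror the table defaults here
            _field2 text DEFAULT 'none',
            _field3 timestamp DEFAULT now(),
            _field4 text DEFAULT 'en',
            OUT _pid integer, OUT _id integer) RETURNS record AS $$
        BEGIN
            INSERT INTO patients (name, email, phone, field1, field2, field3)
            VALUES (_name, _email, _phone, _field1, _field2, _field3)
            RETURNING id INTO _pid;

            INSERT INTO users (username, password, type, pid, phone, language)
            VALUES (_email, _password, 'patient', _pid, _phone, _field4)
            RETURNING id INTO _id;
        END;
        $$ LANGUAGE plpgsql;

        -- trailing arguments can then simply be omitted:
        SELECT * FROM create_patient('Ann', 'ann@example.org', '555-0100', 'pw');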
