Search Results

Search found 1505 results on 61 pages for 'postgresql 9 3'.


  • MySQL dual license behavior

    - by jromero
    Hi SO, I'm doing commercial (closed-source) web app development for the first time. Initially I considered MySQL the most feasible option for a DB, until I got quite confused about its dual-license behavior. If I want a commercial application, can I still use the GPL version of MySQL, or must I get a license? The same question put differently: if I use MySQL's GPL version, does that force me to license the whole app under the GPL? Failing that I would go with PostgreSQL; I just want to make really, really sure about this. Even on SO I've seen related ("duplicate") questions but never a clear answer... All the other tools I'm going to use to code the project are licensed under BSD or MIT. Just in case it matters, the role of MySQL in the project is merely as a relational DB to store persistent data and query it. I'd really appreciate it if someone could clarify this for me. Regards, thanks in advance.

    Read the article

  • Postgres pg_dump dumps database in a different order every time

    - by behrk2
    Hello, I am writing a PHP script (which also uses Linux bash commands) that runs through test cases as follows (I am using a PostgreSQL database, 8.4.2):
    1. Create a DB.
    2. Modify the DB.
    3. Store a database dump of the DB (pg_dump).
    4. Do regression testing by repeating steps 1 and 2, then take another database dump and compare it (diff) with the original dump from step 3.
    However, I am finding that pg_dump will not always dump the database in the same way: it dumps things in a different order every time. Therefore, when I diff the two dumps, the comparison reports the two files as different, when they are actually the same, just in a different order. Is there a different way I can go about doing the pg_dump? Thanks!
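
    Since pg_dump makes no guarantee about row order, one alternative is to compare the data in SQL rather than diffing dump files. A rough sketch, with hypothetical table names (snapshot_orders holding the reference copy, orders holding the regenerated data); set-based comparison is order-insensitive:

        -- rows present in one copy but not the other, in either direction
        (SELECT * FROM snapshot_orders EXCEPT SELECT * FROM orders)
        UNION ALL
        (SELECT * FROM orders EXCEPT SELECT * FROM snapshot_orders);

    An empty result means the two copies hold the same rows.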

    Read the article

  • How to implement a system to determine if a milestone has been reached

    - by Luc M
    I have a table named stats:

        player_id  team_id  match_date  goal  assist
        1          8        2010-01-01  1     1
        1          8        2010-01-01  2     0
        1          9        2010-01-01  0     5
        ...

    I would like to know when a player reaches a milestone (e.g. 100 goals, 100 assists, 500 goals...). I would also like to know when a team reaches a milestone, and which player or team reached 100 goals first, second, third... I thought of using triggers with tables that accumulate the totals. The player_accumulator (and team_accumulator) tables would be:

        player_id  total_goals  total_assists
        1          3            6

        team_id  total_goals  total_assists
        8        3            1
        9        0            5

    Each time a row is inserted into the stats table, a trigger would insert into or update the player_accumulator and team_accumulator tables. This trigger could also verify whether the player or team has reached a milestone listed in a milestone table containing the numbers:

        milestone
        100
        500
        1000
        ...

    A player_milestone table would contain the milestones reached by each player:

        player_id  stat    milestone  date
        1          goal    100        2013-04-02
        1          assist  100        2012-11-19

    Is there a better way to implement a "milestone"? Is there an easier way without triggers? I'm using PostgreSQL.
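
    A minimal sketch of the player-side trigger, assuming the stats and player_accumulator tables described above (the team side and the milestone check would follow the same pattern):

        CREATE OR REPLACE FUNCTION bump_player_accumulator() RETURNS trigger AS $$
        BEGIN
            -- try to update an existing accumulator row first
            UPDATE player_accumulator
               SET total_goals   = total_goals   + NEW.goal,
                   total_assists = total_assists + NEW.assist
             WHERE player_id = NEW.player_id;
            -- first stat line for this player: create the accumulator row
            IF NOT FOUND THEN
                INSERT INTO player_accumulator (player_id, total_goals, total_assists)
                VALUES (NEW.player_id, NEW.goal, NEW.assist);
            END IF;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER stats_accumulate
            AFTER INSERT ON stats
            FOR EACH ROW EXECUTE PROCEDURE bump_player_accumulator();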

    Read the article

  • How to avoid timestamp issue in a long query?

    - by pingi
    Hi, I have the following 2 tables:

        items:
            id   int   primary key
            bla  text

        events:
            id_items  int
            num       int
            when      timestamp without time zone
            ble       text
            composite primary key: (id_items, num)

    I want to select, for each item, the most recent event (the newest 'when'). I wrote a query, but I don't know if it could be written more efficiently. Also, on PostgreSQL there is an issue with comparing timestamp values (2010-05-08T10:00:00.123 == 2010-05-08T10:00:00.321), so I select with MAX(num). Any thoughts on how to make it better? Thanks.

        SELECT i.*, ea.*
        FROM items AS i
        JOIN (
            SELECT t.s AS t_s, t.c AS t_c, max(e.num) AS o
            FROM events AS e
            JOIN (
                SELECT DISTINCT id_item AS s, MAX(when) AS c
                FROM events
                GROUP BY s
                ORDER BY c
            ) AS t ON t.s = e.id_item AND e.when = t.c
            GROUP BY t.s, t.c
        ) AS tt ON tt.t_s = i.id
        JOIN events AS ea
            ON ea.id_item = tt.t_s AND ea.cas = tt.t_c AND ea.num = tt.o;
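
    A sketch of the "latest event per item" pattern using PostgreSQL's DISTINCT ON, assuming the column names from the table definition above (id_items, and a quoted "when", since WHEN is a reserved word):

        SELECT i.*, e.*
        FROM items AS i
        JOIN (
            SELECT DISTINCT ON (id_items) *
            FROM events
            ORDER BY id_items, "when" DESC, num DESC
        ) AS e ON e.id_items = i.id;

    DISTINCT ON keeps only the first row per id_items according to the ORDER BY, so ties on "when" are broken by the highest num.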

    Read the article

  • How do I change the effective user of psql?

    - by gvkv
    I'm using psql to run a simple set of COPY statements contained in a file: psql -d mydb -f 'wbf_queries.data.sql' where wbf_queries.data.sql contains lines: copy <my_query> to '/home/gvkv/mydata' delimiter ',' null ''; ... but I get a permission denied error: ... ERROR: could not open file ... for writing: Permission denied I'm connecting under my user account (gvkv) which is also a superuser in PostgreSQL. Obviously, psql is running under a different (effective) user but I don't know how to change this. Can it be done within psql or do I need some unix-fu?
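
    The permission error is not about the psql user: a server-side COPY ... TO <file> is written by the PostgreSQL server process under its own OS account, which usually cannot write into another user's home directory. psql's client-side \copy writes the file as whoever invoked psql. A sketch of the same line rewritten for \copy (keeping the placeholder query; each \copy must fit on one line):

        \copy <my_query> to '/home/gvkv/mydata' delimiter ',' null ''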

    Read the article

  • Question about joins and tables with millions of rows

    - by xRobot
    I have to create 2 tables:

        Magazine (10 million rows, columns: id, title, genres, printing, price)
        Author (180 million rows, columns: id, name, magazine_id)

    Every author writes for ONLY ONE magazine and every magazine has many authors. So if I want to know all authors of the Motors magazine, I have to use this query:

        SELECT *
        FROM Author, Magazine
        WHERE Author.magazine_id = Magazine.id
          AND genres = 'Motors'

    The same applies to the printing and price columns. To avoid these joins with tables of millions of rows, I thought of using these tables instead:

        Magazine (10 million rows, columns: id, title, genres, printing, price)
        Author (180 million rows, columns: id, name, magazine_id, genres, printing, price)

    and this query:

        SELECT * FROM Author WHERE genres = 'Motors'

    Is this a good approach? I can use PostgreSQL or MySQL.
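
    Before denormalizing, it may be worth measuring the plain join with supporting indexes; a sketch (the index names are made up here):

        CREATE INDEX author_magazine_id_idx ON Author (magazine_id);
        CREATE INDEX magazine_genres_idx    ON Magazine (genres);

        SELECT a.*
        FROM Author AS a
        JOIN Magazine AS m ON m.id = a.magazine_id
        WHERE m.genres = 'Motors';

    With the genre filter applied to the 10-million-row Magazine table first and an index on Author.magazine_id, the join only has to touch the authors of the matching magazines.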

    Read the article

  • Anyone using ASP.NET MembershipProvider with Nhibernate?

    - by JLago
    Hi, I'm trying to implement Membership controls in an MVC 2 application and I'm having trouble dealing with the MembershipUser class. I have my own data store (in PostgreSQL) and I'm using NHibernate to work with it from C#. The thing is, I have my own user class, but I can't use it with any provider I've found that implements Membership, because all the functions return the predefined MembershipUser class and cannot return my own. I'm losing my mind here. Is there any way I can work with this, or should I implement everything myself? Thanks in advance!

    Read the article

  • Conditionally set a column to its default value in Postgres

    - by Evgeny
    I've got a PostgreSQL 8.4 table with an auto-incrementing, but nullable, integer column. I want to update some column values and, if this column is NULL then set it to its default value (which would be an integer auto-generated from a sequence), but I want to return its value in either case. So I want something like this: UPDATE mytable SET incident_id = COALESCE(incident_id, DEFAULT), other = 'somethingelse' WHERE ... RETURNING incident_id Unfortunately, this doesn't work - it seems that DEFAULT is special and cannot be part of an expression. What's the best way to do this?
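
    DEFAULT is indeed only allowed as a bare value in INSERT/UPDATE, not inside an expression. One workaround is to call the column's sequence explicitly; a sketch, assuming incident_id is backed by an owned (serial-style) sequence that pg_get_serial_sequence can find, otherwise the sequence name has to be spelled out:

        UPDATE mytable
           SET incident_id = COALESCE(incident_id,
                                      nextval(pg_get_serial_sequence('mytable', 'incident_id'))),
               other = 'somethingelse'
         WHERE ...
        RETURNING incident_id;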

    Read the article

  • foreign key constraints on primary key columns - issues?

    - by zzzeek
    What are the pros and cons, from a performance/indexing/data-management perspective, of creating a one-to-one relationship between tables using the primary key on the child as the foreign key, versus a pure surrogate primary key on the child? The first approach seems to reduce redundancy and nicely constrains the one-to-one implicitly, while the second approach seems to be favored by DBAs, even though it creates a second index:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key references parent(id),
            data varchar(50)
        )

    Pure surrogate key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key,
            parent_id integer unique references parent(id),
            data varchar(50)
        )

    The platforms of interest here are PostgreSQL and Microsoft SQL Server.

    Read the article

  • bytea type & nulls, Postgres

    - by Thanatos
    I'm using a bytea type in PostgreSQL, which, to my understanding, contains just a series of bytes. However, I can't get it to play well with nulls. For example:

        =# select length(E'aa\x00aa'::bytea);
         length
        --------
              2
        (1 row)

    I was expecting 5. Also:

        =# select md5(E'aa\x00aa'::bytea);
                       md5
        ----------------------------------
         4124bc0a9335c27f086f24ba207a4912
        (1 row)

    That's the MD5 of "aa", not "aa\x00aa". Clearly, I'm Doing It Wrong, but I don't know what I'm doing wrong. I'm also on an older version of Postgres (8.1.11) for reasons outside of my control. (I'll see if this behaves the same on the latest Postgres as soon as I get home...)
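
    The \x00 escape is being interpreted by the string-literal parser, and a text string cannot carry a NUL byte, so the value is already cut down to "aa" before the bytea cast ever sees it. To get a NUL into a bytea with the escape input format, the backslash has to survive the literal parser, e.g. as an octal escape with a doubled backslash; a sketch:

        -- the literal parser turns \\ into \, then the bytea input parser turns \000 into a NUL byte
        select length(E'aa\\000aa'::bytea);   -- 5
        select md5(E'aa\\000aa'::bytea);      -- digest of all five bytes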

    Read the article

  • Cannot save model due to bad transaction? Django

    - by Kenneth Love
    Trying to save a model in Django admin and I keep getting the error: Transaction managed block ended with pending COMMIT/ROLLBACK I tried restarting both the Django (1.2) and PostgreSQL (8.4) processes but nothing changed. I added "autocommit": True to my database settings but that didn't change anything either. Everything that Google has turned up has either not been answered or the answer involved not having records in the users table, which I definitely have. The model does not have a custom save method and there are no pre/post save signals tied to it. Any ideas or anything else I can provide to make answering this easier?

    Read the article

  • DataMapper: using auto_migrate! with many-to-many dependencies?

    - by pschuegr
    Hi, I'm trying to migrate my app from MySQL to PostgreSQL, using Rails3-pre and the latest DataMapper. I have several models which are related through many-to-many relationships using :through => Resource, which means that DataMapper creates a join table with foreign keys for both models. I can't auto_migrate! these changes, because I keep getting this:

        ERROR:  cannot drop table users because other objects depend on it
        DETAIL:  constraint artist_users_owner_fk on table artist_users depends on table users
                 constraint site_users_owner_fk on table site_users depends on table users
        HINT:  Use DROP ... CASCADE to drop the dependent objects too.

    I have tried everything I can think of, and thought I had things working when I added :constraint => :skip to the field definition, but I keep getting that error back when I try to run auto_migrate. I thought that :skip meant it would ignore the dependents, but maybe that only applies to deleting rows and not dropping tables? I should mention that I can run auto_migrate once after I nuke the db, but after that, errors. Any suggestions or advice much appreciated.
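
    For what it's worth, the HINT in that error points at the manual escape hatch: dropping the table with CASCADE removes the dependent foreign-key constraints as well, so a subsequent auto_migrate! can recreate everything from scratch. A sketch of the raw SQL equivalent:

        DROP TABLE users CASCADE;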

    Read the article

  • SQL update fields of one table from fields of another one.

    - by Nir
    I have two tables:

        A [ID, column1, column2, column3]
        B [ID, column1, column2, column3, column4]

    A will always be a subset of B (meaning all columns of A are also in B). I want to update a record with a specific ID in B with its data from A, for all columns of A. This ID exists both in A and B. Is there an UPDATE syntax, or any other way, to do that without specifying the column names, just saying "set all columns of A"? I'm using PostgreSQL, so a specific non-standard command is also accepted (however, not preferred). Thanks.
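
    PostgreSQL's UPDATE ... FROM form does the per-row join, though the columns of A still have to be listed once; a sketch (the literal 123 is just a placeholder for the specific ID):

        UPDATE b
           SET column1 = a.column1,
               column2 = a.column2,
               column3 = a.column3
          FROM a
         WHERE a.id = b.id
           AND b.id = 123;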

    Read the article

  • Get the highest odds from the last update

    - by Frankie Yale
    I have these tables in a PostgreSQL database:

        bookmakers
        id | name
        ---+--------
         1 | Unibet
         2 | 888

        odds
        id | odds_type | odds_index | bookmaker_id | created_at
        ---+-----------+------------+--------------+------------------
         1 | 1         | 1.55       | 1            | 2012-06-02 10:30
         2 | 2         | 3.22       | 2            | 2012-06-02 10:30
         3 | X         | 3.00       | 1            | 2012-06-02 10:30
         4 | 2         | 1.25       | 1            | 2012-05-27 09:30
         5 | 1         | 2.30       | 2            | 2012-05-27 09:30
         6 | X         | 2.00       | 2            | 2012-05-27 09:30

    What I am trying to query is the following: give me the 1/X/2 odds from the latest update (created_at) from ALL bookmakers, and from that last update, give me the highest odds for each odds_type ('1', '2', 'X'). On my website I display them as:

        Best odds right now:
          1   |   X   |   2
        --------------------
        2.30  | 3.00  | 3.22

    I have to first get the latest, because the odds from the update from yesterday are no longer valid. Then from that last update, I have - in this case - 2 odds from 2 different bookmakers, so I need to get the best one for type '1', '2', 'X'. Pseudo SQL would be something like:

        SELECT MAX(odds_index)
        WHERE odds_type = '1'
        ORDER BY created_at DESC, odds_index DESC

    But that doesn't work, because I would always get the latest odds (and not the highest/best from those latest). I hope I'm making sense.
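
    A sketch using DISTINCT ON, assuming "latest update" means the single most recent created_at value in the table:

        SELECT DISTINCT ON (odds_type)
               odds_type, odds_index, bookmaker_id
        FROM odds
        WHERE created_at = (SELECT max(created_at) FROM odds)
        ORDER BY odds_type, odds_index DESC;

    The WHERE clause restricts the rows to the newest batch, and DISTINCT ON ... ORDER BY odds_index DESC keeps only the highest odds per odds_type from that batch.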

    Read the article

  • Exploring search options for PHP

    - by Joshua
    I have an InnoDB table using numerous foreign keys, but we just want to look up some basic info out of it. I've done some research but I'm still lost.
    1) How can I tell if my host has Sphinx installed already? I don't see it as an option for the table storage engine (i.e. InnoDB, MyISAM).
    2) Is Zend_Search_Lucene responsive enough for AJAX functionality over millions of records?
    3) Mirror my InnoDB table with a MyISAM one? Make every InnoDB transaction end with a write to the MyISAM copy, then use 1:1 lookups? How would I do this automagically? This should make MyISAM ACID-compliant and free(er) from corruption, no?
    4) PostgreSQL full-text queries don't even look like SQL to me, wtf; I don't have time to learn a new SQL syntax, I need noob options.
    5) ????????????????????
    This is a high-volume site on a decently-equipped VPS. Thanks very much for any ideas.
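
    On point 4, PostgreSQL full-text search is still ordinary SQL, just with two extra functions and the @@ match operator; a small sketch against a hypothetical articles(body) table:

        -- rows whose body matches both words
        SELECT *
        FROM articles
        WHERE to_tsvector('english', body) @@ plainto_tsquery('english', 'foreign keys');

    A GIN index on to_tsvector('english', body) is what makes this fast on large tables.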

    Read the article

  • Dirty Reads in Postgres

    - by User1
    I have a long running function that should be inserting new rows. How do I check the progress of this function? I was thinking dirty reads would work so I read http://www.postgresql.org/docs/8.4/interactive/sql-set-transaction.html and came up with the following code and ran it in a new session: SET SESSION CHARACTERISTICS AS SERIALIZABLE; SELECT * FROM MyTable; Postgres gives me a syntax error. What am I doing wrong? If I do it right, will I see the inserted records while that long function is still running? Thanks
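
    The syntax error is because the statement needs the full TRANSACTION ISOLATION LEVEL clause, but even with that fixed it won't help: PostgreSQL treats READ UNCOMMITTED as READ COMMITTED, so rows inserted by a still-uncommitted function are never visible from another session. A sketch of the corrected statement, for reference:

        SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED;
        SELECT count(*) FROM MyTable;

    To watch progress, the long-running function has to report it through something outside its own transaction (for example RAISE NOTICE messages, or progress rows written over a separate dblink connection).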

    Read the article

  • Display another field in the referenced table for multiple columns with performance issues in mind

    - by israkir
    I have an edge table like this:

        id | arg1 | relation | arg2
        ---+------+----------+-----
         1 |  1   |    3     |  4
         2 |  2   |    6     |  5

    where arg1, relation and arg2 reference the ids of objects in another table, object:

        id | object_name
        ---+------------
         1 | book
         2 | pen
         3 | on
         4 | table
         5 | bag
         6 | in

    What I want to do, keeping performance in mind (it's a very big table, more than 50 million entries), is display the object_name for each edge entry rather than the id, such as:

        arg1 | relation | arg2
        -----+----------+------
        book | on       | table
        pen  | in       | bag

    What is the best select query to do this? Also, I am open to suggestions for optimizing the query - adding more indexes on the tables, etc...
    EDIT: Based on the comments below:
    1) @Craig Ringer: PostgreSQL version: 8.4.13, and the only index is id on both tables.
    2) @andrefsp: edge is almost 2x bigger than object.
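
    The usual shape is simply to join the lookup table three times, once per referencing column; a sketch (the aliases are arbitrary):

        SELECT a1.object_name AS arg1,
               r.object_name  AS relation,
               a2.object_name AS arg2
        FROM edge AS e
        JOIN object AS a1 ON a1.id = e.arg1
        JOIN object AS r  ON r.id  = e.relation
        JOIN object AS a2 ON a2.id = e.arg2;

    Since each join is an equality lookup on object.id, the existing index on object's id column is the one that matters here.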

    Read the article

  • Trying to modify a constraint in PostgreSQL

    - by MISMajorDeveloperAnyways
    Postgres is getting quite annoying lately. I have checked the documentation provided by Oracle and found a way to do this without dropping the table. Problem is, it errors out at MODIFY as it does not recognize the keyword. Using EMS SQL Manager for PostgreSQL.

        Alter table public.public_insurer_credit
            MODIFY CONSTRAINT public_insurer_credit_fk1
            deferrable, initially deferred;

    I was able to work around it by dropping the constraint using:

        ALTER TABLE "public"."public_insurer_credit"
            DROP CONSTRAINT "public_insurer_credit_fk1" RESTRICT;

        ALTER TABLE "public"."public_insurer_credit"
            ADD CONSTRAINT "public_insurer_credit_fk1"
            FOREIGN KEY ("branch_id", "order_id", "public_insurer_id")
            REFERENCES "public"."order_public_insurer" ("branch_id", "order_id", "public_insurer_id")
            ON UPDATE CASCADE
            ON DELETE NO ACTION
            DEFERRABLE INITIALLY DEFERRED;

    Read the article

  • Is it possible to use Django's testing framework without having CREATE DATABASE rights?

    - by superjoe30
    Since I don't have a hundred bazillion dollars, my Django app lives on a shared host, where all kinds of crazy rules are in effect. Fortunately, they gave me shell access, which has allowed me to kick butts and take names. However I can't do anything about not having CREATE DATABASE rights. I'm using postgresql and have a killer test suite, but am unable to run it due to the code not being able to create a new database. However I am able to create said database beforehand via cPanel and use it with Django. I just don't have CREATE DATABASE rights. Is there a way I can still run my test suite?
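
    The test runner needs the CREATEDB privilege on the role Django connects as, and only the host's superuser can grant that; a sketch of the grant you would have to request (the role name is hypothetical):

        ALTER ROLE myappuser CREATEDB;

    Short of that, the usual route is a custom test runner that reuses a pre-created test database instead of creating one, which is outside plain SQL.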

    Read the article

  • Architecture for multiple web apps and databases.

    - by Matt
    We used to have only one web app, but now we are breaking it down into multiple ones. Each one will be packaged as a separate product (web app). Some have things in common, some do not. It was originally coded in PHP, using PostgreSQL 8.4 and CodeIgniter as the framework. I am looking for good suggestions on how I should set up multiple web apps. They all have their own somewhat unique data. Some data in the databases can be common to some apps but not all. All the apps will be on one server and will have some kind of API to manipulate data. I want it to be structured such that one user account can access any product they purchase (kind of like Google accounts). I do not know if it's a good idea to have multiple databases, or just one big one. Eventually we will be using S3 for some videos and other images. Your thoughts and suggestions are much appreciated.

    Read the article

  • Psycopg2 doesn't like table names that start with an upper case letter

    - by Count Boxer
    I am running ActiveState's ActivePython 2.6.5.12 and PostgreSQL 9.0 Beta 1 under Windows XP. If I create a table whose name starts with an upper-case letter (e.g. Books), psycopg2 returns a 'ProgrammingError: relation "books" does not exist' error when I run the select statement execute("SELECT * FROM Books"). The same error is returned if I run execute("SELECT * FROM books"). However, if I create the table with a lower-case first letter (e.g. books), then either of the above statements works. Are table names supposed to start with a lower-case letter? Is this a setting or a feature or a bug? Am I missing something obvious?
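
    This is PostgreSQL's identifier case folding rather than psycopg2: unquoted names are folded to lower case, while quoted names keep their case and must always be quoted the same way afterwards. A small sketch:

        CREATE TABLE "Books" (id serial PRIMARY KEY, title text);

        SELECT * FROM "Books";   -- works: quoted, exact case
        SELECT * FROM Books;     -- folded to books, fails: relation "books" does not exist
        SELECT * FROM books;     -- same failure, for the same reason

    If the table is created without quotes (CREATE TABLE Books ...), the name is stored as books and any unquoted spelling of it will work.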

    Read the article

  • Finding group maxes in SQL join result

    - by Gene
    Two SQL tables. One contestant has many entries:

        Contestants          Entries
        Id  Name             Id  Contestant_Id  Score
        --  ----             --  -------------  -----
        1   Fred             1   3              100
        2   Mary             2   3              22
        3   Irving           3   1              888
        4   Grizelda         4   4              123
                             5   1              19
                             6   3              50

    Low score wins. Need to retrieve current best scores of all contestants ordered by score:

        Best Entries Report
        Name      Entry_Id  Score
        ----      --------  -----
        Fred      5         19
        Irving    2         22
        Grizelda  4         123

    I can certainly get this done with many queries. My question is whether there's a way to get the result with one, efficient SQL query. I can almost see how to do it with GROUP BY, but not quite. In case it's relevant, the environment is Rails ActiveRecord and PostgreSQL.
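
    A sketch using DISTINCT ON to pick the lowest-scoring entry per contestant, then ordering the winners by score (column names follow the tables above, lower-cased as PostgreSQL folds them):

        SELECT name, entry_id, score
        FROM (
            SELECT DISTINCT ON (c.id)
                   c.name, e.id AS entry_id, e.score
            FROM contestants AS c
            JOIN entries AS e ON e.contestant_id = c.id
            ORDER BY c.id, e.score
        ) AS best
        ORDER BY score;

    The inner ORDER BY c.id, e.score makes DISTINCT ON (c.id) keep each contestant's lowest score; the outer query just re-sorts the per-contestant winners. Contestants with no entries (Mary) drop out because of the inner join.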

    Read the article

  • get n records at a time from a temporary table

    - by Claudiu
    I have a temporary table with about 1 million entries. The temporary table stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries such that I get the first 1000 rows, then the next 1000, etc.? They are not inherently ordered, but the temporary table just has one column with an ID, so I can order it if necessary. I was thinking of creating an extra column in the temporary table to number all the rows, something like:

        CREATE TEMP TABLE tmptmp AS
        SELECT ##autonumber somehow##, id
        FROM ....   -- complicated query

    then I can do:

        SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000

    etc... How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
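
    row_number() fills the ##autonumber somehow## slot directly (it needs PostgreSQL 8.4 or later for window functions); a sketch, keeping the placeholder for the larger query:

        CREATE TEMP TABLE tmptmp AS
        SELECT row_number() OVER (ORDER BY id) AS autonumber, id
        FROM ....   -- complicated query

        SELECT * FROM tmptmp WHERE autonumber >= 1    AND autonumber < 1001;
        SELECT * FROM tmptmp WHERE autonumber >= 1001 AND autonumber < 2001;
        -- ... and so on, 1000 rows per batch

    row_number() starts at 1, so the batches run 1-1000, 1001-2000, etc. An alternative that avoids the extra column is keyset paging on id itself (WHERE id > last_seen ORDER BY id LIMIT 1000).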

    Read the article

  • create temporary table from cursor

    - by Claudiu
    Is there any way, in PostgreSQL accessed from Python using SQLObject, to create a temporary table from the results of a cursor? Previously I had a query, and I created the temporary table directly from the query. I then had many other queries interacting with that temporary table. Now I have much more data, so I want to process only about 1000 rows at a time. However, I can't do CREATE TEMP TABLE ... AS ... from a cursor, not as far as I can see. Is the only thing to do something like:

        rows = cur.fetchmany(1000)
        cur2 = conn.cursor()
        cur2.execute("""CREATE TEMP TABLE foobar (id INTEGER)""")
        for row in rows:
            cur2.execute("""INSERT INTO foobar (id) VALUES (%s)""", row)

    or is there a better way? This seems awfully inefficient.
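
    If the batching can be expressed in SQL, it may be cheaper to skip the Python round-trip entirely and fill the temp table server-side in slices; a sketch, keeping a placeholder for the original big query:

        CREATE TEMP TABLE foobar (id INTEGER);

        -- one batch; bump the OFFSET by 1000 for each subsequent batch
        INSERT INTO foobar (id)
        SELECT id
        FROM ( ... the original big query ... ) AS big
        ORDER BY id
        LIMIT 1000 OFFSET 0;

    The ORDER BY makes the batches deterministic; without it, LIMIT/OFFSET slices are not guaranteed to be stable between statements.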

    Read the article

  • rake test and test_structure.sql

    - by korinthe
    First of all, I have to run "rake RAILS_ENV=test ..." to get the test suites to hit my test DB. Annoying but ok to live with. However when I do so, I get a long stream of errors like so:

        > rake RAILS_ENV=test -I test test:units
        psql:/path/to/project/db/test_structure.sql:33: ERROR: function "armor" already exists with same argument types
        [and many more]

    It looks like some DB definitions are getting unnecessarily reloaded. I can't find any mention of this on Google, so I was wondering whether others have seen this? I am using a PostgreSQL database with the following in my environment.rb:

        config.active_record.schema_format = :sql

    and using Rails 2.3.5 with rake 0.8.7.

    Read the article
