Search Results

Search found 1505 results on 61 pages for 'postgresql 9 0'.

Page 9/61

  • Seam/Hibernate and PostgreSQL -- Any issues?

    - by Shadowman
    I'm currently working on a project that makes use of Seam/Hibernate (JPA) on MySQL. I'm reconsidering moving towards PostgreSQL after investigating some of the features that it provides. My question is, is there anything I need to worry about with this configuration? Limitations? Gotchas? Things to watch out for? There will be some BLOBs stored in the database (images, X.509 certificates, etc.) Will that be a problem using PostgreSQL? Are there any particular configuration changes or tweaks that I should make in my Hibernate configuration? Thanks for any advice you can give!
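
    A note on the BLOB question: PostgreSQL has two mechanisms for binary data, the bytea column type and the large-object facility. The classic gotcha is that Hibernate's @Lob mapping may target the large-object (oid) route depending on the dialect, so it is worth checking which one your mappings actually produce. A minimal sketch of the bytea route (table and column names are made up):

        CREATE TABLE certificates (
            id   serial PRIMARY KEY,
            name varchar(255) NOT NULL,
            body bytea NOT NULL  -- raw DER bytes of an X.509 certificate
        );

        -- bytea values are TOASTed (compressed/stored out of line) automatically,
        -- so images of a few MB are fine without the large-object API.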

    Read the article

  • Postgresql count+sort performance

    - by invictus
    I have built a small inventory system using PostgreSQL and psycopg2. Everything works great, except that when I want to create aggregated summaries/reports of the content, I get really bad performance due to count()'ing and sorting. The DB schema is as follows:

        CREATE TABLE hosts (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255)
        );
        CREATE TABLE items (
            id SERIAL PRIMARY KEY,
            description TEXT
        );
        CREATE TABLE host_item (
            id SERIAL PRIMARY KEY,
            host INTEGER REFERENCES hosts(id) ON DELETE CASCADE ON UPDATE CASCADE,
            item INTEGER REFERENCES items(id) ON DELETE CASCADE ON UPDATE CASCADE
        );

    There are some other fields as well, but those are not relevant. I want to extract 2 different reports:

    - List of all hosts with the number of items per host, ordered from highest to lowest count
    - List of all items with the number of hosts per item, ordered from highest to lowest count

    I have used 2 queries for the purpose. Items with host count:

        SELECT i.id, i.description, COUNT(hi.id) AS count
        FROM items AS i
        LEFT JOIN host_item AS hi ON (i.id = hi.item)
        GROUP BY i.id
        ORDER BY count DESC
        LIMIT 10;

    Hosts with item count:

        SELECT h.id, h.name, COUNT(hi.id) AS count
        FROM hosts AS h
        LEFT JOIN host_item AS hi ON (h.id = hi.host)
        GROUP BY h.id
        ORDER BY count DESC
        LIMIT 10;

    Problem is: the queries run for 5-6 seconds before returning any data. As this is a web-based application, 6 seconds is just not acceptable. The database is heavily populated, with approximately 50k hosts, 1,000 items and 400,000 host/item relations, and will likely grow significantly when (or perhaps if) the application is put to use. After playing around, I found that by removing the "ORDER BY count DESC" part, both queries would execute instantly, without any delay whatsoever (less than 20 ms to finish the queries). Is there any way I can optimize these queries so that I can get the result sorted without the delay? I tried different indexes, but since the count is computed I don't see how an index could be used for this. I have read that count()'ing in PostgreSQL is slow, but it's the sorting that is causing me problems... My current workaround is to run the queries above as an hourly job, putting the result into a new table with an index on the count column for quick lookup. I use PostgreSQL 9.2.
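
    A trigger-maintained counter table is one way to avoid aggregating 400,000 rows per page view: the counts are kept precomputed, so the top-10 query becomes an ordered index scan. A hedged sketch for the items side, against the schema above (the counter table and function names are invented; the hosts side is symmetrical):

        CREATE TABLE item_counts (
            item  INTEGER PRIMARY KEY REFERENCES items(id) ON DELETE CASCADE,
            count INTEGER NOT NULL DEFAULT 0
        );
        CREATE INDEX item_counts_count_idx ON item_counts (count DESC);

        -- seed once from the existing data
        INSERT INTO item_counts (item, count)
        SELECT item, COUNT(*) FROM host_item GROUP BY item;

        CREATE OR REPLACE FUNCTION bump_item_count() RETURNS trigger AS $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                UPDATE item_counts SET count = count + 1 WHERE item = NEW.item;
                IF NOT FOUND THEN
                    INSERT INTO item_counts (item, count) VALUES (NEW.item, 1);
                END IF;
            ELSIF TG_OP = 'DELETE' THEN
                UPDATE item_counts SET count = count - 1 WHERE item = OLD.item;
            END IF;
            RETURN NULL;  -- AFTER trigger, return value is ignored
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER host_item_counts_trg
            AFTER INSERT OR DELETE ON host_item
            FOR EACH ROW EXECUTE PROCEDURE bump_item_count();

    The report then joins item_counts to items and does ORDER BY count DESC LIMIT 10, walking the index instead of sorting a full aggregate. Note the upsert-style branch is not safe under heavy concurrent inserts of the same item without extra locking.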

    Read the article

  • Postgresql error: could not open segment 1 of relation base/20983/2416

    - by kokonut
    I'm running a PostgreSQL query and getting the following error:

        ActiveRecord::StatementInvalid (PGError: ERROR: could not open segment 1 of relation base/20983/24161 (target block 5046584): No such file or directory

    The query is of the format 'SELECT "locations".* FROM "locations" WHERE ("locations"."id" IN (115990, 78330, 77891, 78248, ...)' with about 600 ids in the IN clause - not an optimal query, I know, but it's what I have to work with for the moment! The server is running PostgreSQL 8.4.6 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 4.4.1-4ubuntu9) 4.4.1, 64-bit. PostGIS 1.5 is also installed, and the locations table contains a geometry column. Anyone have any idea what could be causing the error? Thanks!
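
    An error of this shape usually means a data file for some relation has vanished from the data directory, often pointing at a corrupted index or table segment. As a first diagnostic, the catalogs can say which relation filenode 24161 belongs to (a standard catalog lookup, using the number from the message above):

        SELECT relname, relkind
        FROM pg_class
        WHERE relfilenode = 24161;

    If it resolves to an index (relkind 'i'), a REINDEX of that index may repair things; if it is the table itself, restoring from a backup is usually the safer route.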

    Read the article

  • PostgreSQL 9.1 on Ubuntu Lucid fails to start - how to debug?

    - by Tom Fakes
    I'm using Vagrant with Chef Solo to set up a Lucid 64 box. I'm using a Chef recipe to install PostgreSQL 9.1 from Martin Pitt's backports. The install goes OK until the point where the database is started with

        /etc/init.d/postgresql start

    There's a long pause and the command fails. If I run pg_ctl manually, the database starts! The entire contents of my postgresql-9.1-main log file is:

        2012-05-07 11:01:18 PDT LOG: database system was shut down at 2012-05-07 11:01:16 PDT
        2012-05-07 11:01:18 PDT LOG: database system is ready to accept connections
        2012-05-07 11:01:18 PDT LOG: autovacuum launcher started
        2012-05-07 11:01:18 PDT LOG: incomplete startup packet
        2012-05-07 11:01:26 PDT LOG: received fast shutdown request
        2012-05-07 11:01:26 PDT LOG: aborting any active transactions
        2012-05-07 11:01:26 PDT LOG: autovacuum launcher shutting down
        2012-05-07 11:01:26 PDT LOG: shutting down
        2012-05-07 11:01:26 PDT LOG: database system is shut down

    I've tried to change the PostgreSQL config file to get more info into the logfile, but that hasn't worked at all. How do I debug this to find out what is failing so I can fix it?

    Read the article

  • PostgreSQL: Auto-partition a table

    - by Adam Matan
    Hi, I have a huge database which holds pairs of numbers (A,B), each ranging from 0 to 10,000 and stored as floats, e.g. (1, 9984.4), (2143.44, 124.243), (0.55, 0), ... Since the PostgreSQL table which stores these pairs grew quite large, I have decided to partition it into inheriting sub-tables. I intend to create 100 such tables, each storing a range of 1000x1000. The problem is that these numbers tend to come in large chunks of nearby numbers, which means that in the future some tables will be nearly empty and some will hold a very large portion of the database. Unfortunately, the distribution of future pairs is yet unknown. I am looking for a way to automatically repartition my table: if a certain subtable holds more than a specific number of pairs, it will be automatically partitioned into four sub-sub tables, and so on. My questions are:

    - Is recursive partitioning and inheritance possible in PostgreSQL 8.3? Will indexes and query plans understand it?
    - What's the best way to split a subtable once it has grown too large?

    I should point out that this isn't a live database, so a downtime of a few hours every week is totally acceptable. Thanks in advance, Adam
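
    For reference, the 8.3-era pattern this builds on is inheritance plus CHECK constraints plus an insert trigger, with constraint_exclusion enabled so the planner can skip non-matching children. A minimal sketch for a single child covering one 1000x1000 cell (names and ranges are illustrative):

        CREATE TABLE pairs (a real NOT NULL, b real NOT NULL);

        CREATE TABLE pairs_0_0 (
            CHECK (a >= 0 AND a < 1000 AND b >= 0 AND b < 1000)
        ) INHERITS (pairs);

        CREATE OR REPLACE FUNCTION pairs_insert() RETURNS trigger AS $$
        BEGIN
            IF NEW.a < 1000 AND NEW.b < 1000 THEN
                INSERT INTO pairs_0_0 VALUES (NEW.*);
            -- ELSIF ... route the other cells here ...
            ELSE
                RAISE EXCEPTION 'no partition for (%, %)', NEW.a, NEW.b;
            END IF;
            RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER pairs_insert_trg
            BEFORE INSERT ON pairs
            FOR EACH ROW EXECUTE PROCEDURE pairs_insert();

    Nesting INHERITS another level does work, but the trigger routing and the CHECK constraints have to be maintained by hand, so splitting an overgrown child is typically done during the weekly window by creating the four new tables, INSERT ... SELECTing the rows across, and dropping the old child.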

    Read the article

  • Setting the comment of a column to that of another column in Postgresql

    - by dland
    Suppose I create a table in PostgreSQL with a comment on a column:

        create table t1 (
            c1 varchar(10)
        );
        comment on column t1.c1 is 'foo';

    Some time later, I decide to add another column:

        alter table t1 add column c2 varchar(20);

    I want to look up the comment contents of the first column and associate it with the new column:

        select comment_text from (what?)
        where table_name = 't1' and column_name = 'c1'

    The (what?) is going to be a system table, but after having looked around in pgAdmin and searched the web I haven't learnt its name. Ideally I'd like to be able to write:

        comment on column t1.c1 is (select ...);

    but I have a feeling that's stretching things a bit far. Thanks for any ideas.

    Update: based on the suggestions I received here, I wound up writing a program to automate the task of transferring comments, as part of a larger process of changing the datatype of a PostgreSQL column. You can read about that on my blog.
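
    The catalog being asked about is pg_description, most conveniently read through the col_description() helper; copying the comment needs dynamic SQL, since COMMENT ON only accepts a literal. A sketch (format() and DO blocks assume a 9.x server; on older versions the same EXECUTE can live inside a plpgsql function):

        -- read the existing comment on t1.c1
        SELECT col_description('t1'::regclass,
               (SELECT attnum FROM pg_attribute
                 WHERE attrelid = 't1'::regclass AND attname = 'c1'));

        -- copy it onto the new column
        DO $$
        BEGIN
            EXECUTE format('COMMENT ON COLUMN t1.c2 IS %L',
                           col_description('t1'::regclass,
                               (SELECT attnum FROM pg_attribute
                                 WHERE attrelid = 't1'::regclass
                                   AND attname = 'c1')));
        END $$;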

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on the disk, 10 MB of indexes according to pgAdmin. The problem is that inserting them, by whatever method, literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts were in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on the disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. No WAL for the 180 s case, I was testing in psql directly; in Django, add ~50% overhead for WAL logging. Tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10x 16 MB log segments? Table layout:

    - id serial primary key, a bunch of int32 columns
    - 3 foreign keys, referencing: a small table (198 rows, 16 kB on disk), a large table (1.2M rows, 59 MB data + 89 MB index on disk), and a large table (2.2M rows, 198 MB + 210 MB)

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
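
    The usual workaround for bulk regeneration is to drop the constraints, load, and re-add them, trading 100k individual FK probes for one set-based validation at the end. A sketch with invented constraint and table names:

        BEGIN;
        ALTER TABLE stats DROP CONSTRAINT stats_small_fk;
        ALTER TABLE stats DROP CONSTRAINT stats_big1_fk;
        ALTER TABLE stats DROP CONSTRAINT stats_big2_fk;

        -- ... bulk load the ~100k rows here (COPY or INSERT ... SELECT) ...

        ALTER TABLE stats ADD CONSTRAINT stats_small_fk
            FOREIGN KEY (small_id) REFERENCES small_table (id);
        ALTER TABLE stats ADD CONSTRAINT stats_big1_fk
            FOREIGN KEY (big1_id) REFERENCES big_table_1 (id);
        ALTER TABLE stats ADD CONSTRAINT stats_big2_fk
            FOREIGN KEY (big2_id) REFERENCES big_table_2 (id);
        COMMIT;

    Each ADD CONSTRAINT validates the whole column in one pass, which is typically far cheaper than per-row checks; note the ALTERs take exclusive locks on the table for their duration.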

    Read the article

  • Drupal 7 configuration error with Postgresql in Mac OS 10.6.5

    - by Sam
    I am trying to configure Drupal 7 with PostgreSQL. At the database setup step, I get the following error:

        Warning: PDO::__construct(): [2002] No such file or directory (trying to connect via unix:///var/mysql/mysql.sock) in DatabaseConnection->__construct() (line 300 of /Users/shamod/Sites/drupal/7/includes/database/database.inc).
        In order for Drupal to work, and to continue with the installation process, you must resolve all issues reported below. For more help with configuring your database server, see the installation handbook. If you are unsure what any of this means you should probably contact your hosting provider.
        Failed to connect to your database server. The server reports the following message: SQLSTATE[HY000] [2002] No such file or directory. Is the database server running? Does the database exist, and have you entered the correct database name? Have you entered the correct username and password? Have you entered the correct database hostname?

    NOTE: I am trying to connect to PostgreSQL, but it fails with a /var/mysql/mysql.sock error. I have set up the database connection string in settings.php for PostgreSQL. It still does not work. Any idea?

    Read the article

  • PostgreSQL 9.2 final release: improved performance and extensibility, developer-oriented flexibility

    The PostgreSQL Global Development Group announces the release of PostgreSQL 9.2, the latest version of the reference open-source database management system. Since the beta announcement in May, developers and integrators have praised the advances in performance, flexibility and extensibility, and expect massive adoption of this release.

    [IMG]http://scheu.developpez.com/tutoriels/postgresql/log-shipping/images/logo-pgsql.png[/IMG]

    "PostgreSQL 9.2 integrates native JSON support, covering indexes, further improved performance and replication, and many other features. We have been eagerly awaiting this release. It will be available in "Early Access" upon its...
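
    As a quick illustration of the native JSON support the announcement mentions (a minimal sketch; 9.2 ships the json type with validation on input, while the richer operators arrived in later releases):

        CREATE TABLE events (
            id      serial PRIMARY KEY,
            payload json NOT NULL  -- input is validated as JSON on insert
        );

        INSERT INTO events (payload)
        VALUES ('{"user": "alice", "action": "login"}');

        -- building JSON from rows also works in 9.2
        SELECT row_to_json(e) FROM events AS e;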

    Read the article

  • postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12 GB memory, most of which (at least 10 GB) can be allocated solely to Postgres. The system also has a 6-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. Firstly, a single-threaded, single connection will do many inserts into 2-4 tables linked by foreign key. Secondly, many different complex queries are run against the resulting data, using GROUP BY extensively. This part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, therefore there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What configuration changes to the default Postgres config would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers etc. Relevant doco. Thanks!
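
    For the GROUP BY-heavy connections, per-session overrides are an easy way to experiment before committing values to postgresql.conf. The numbers below are illustrative guesses for a 12 GB box with at most 5 connections, not tuned recommendations:

        SET work_mem = '256MB';              -- memory per sort/hash node, per query
        SET maintenance_work_mem = '512MB';  -- helps CREATE INDEX and VACUUM

        SHOW work_mem;  -- confirm what the session is actually using

    Because work_mem is charged per sort or hash operation rather than per connection, a complex GROUP BY plan can use several multiples of it, which is why it is safer to raise it per session than globally.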

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means, and the answer is: as frequently as I reasonably can, but I will be pragmatic; as a benchmark let's say we are hoping for every 15 min) and feed it into a data warehouse.

    How much data? At peak times we are talking approx 80-100k rows per min hitting the OLTP side; off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.

    Best solution? From what I can piece together, the most practical solution is as follows:

    - Create a TRIGGER to write all DML activity to a rotating CSV log file
    - Perform whatever transformations are required
    - Use the native DW data pump tool to efficiently pump the transformed CSV into the DW

    Why this approach? TRIGGERS allow selective tables to be targeted rather than being system-wide, the output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach and the overhead is acceptable. CSV is easy and fast to transform, and easy to pump into the DW.

    Alternatives considered:

    - Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is it looked very verbose relative to what I needed, and was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log).
    - Querying the data directly via an ETL tool such as Talend and pumping it into the DW... the problem is the OLTP schema would need tweaking to support this, and that has many negative side-effects.
    - Using a tweaked/hacked SLONY - SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner.
    - Using the WAL.

    Has anyone done this before? Want to share your thoughts?
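
    A hedged sketch of the trigger step: capture DML into a staging table that the 15-minute job COPYs out as CSV (all names here are invented; a trigger cannot write files directly, so the rotating-file part belongs to the export job):

        CREATE TABLE etl_log (
            logged_at  timestamptz NOT NULL DEFAULT now(),
            table_name text NOT NULL,
            op         char(1) NOT NULL,   -- I/U/D
            row_data   text NOT NULL       -- whole row rendered as text
        );

        CREATE OR REPLACE FUNCTION etl_capture() RETURNS trigger AS $$
        BEGIN
            INSERT INTO etl_log (table_name, op, row_data)
            VALUES (TG_TABLE_NAME, substr(TG_OP, 1, 1),
                    -- the composite-to-text cast needs 8.4+; on 8.3,
                    -- list the columns explicitly instead
                    CASE WHEN TG_OP = 'DELETE' THEN OLD::text ELSE NEW::text END);
            RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER orders_etl
            AFTER INSERT OR UPDATE OR DELETE ON orders
            FOR EACH ROW EXECUTE PROCEDURE etl_capture();

        -- export job, every 15 min (server-side COPY requires superuser):
        COPY etl_log TO '/var/lib/postgresql/etl/batch.csv' WITH CSV;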

    Read the article

  • List stored functions using a table in PostgreSQL

    - by Paolo B.
    Just a quick and simple question: in PostgreSQL, how do you list the names of all stored functions/stored procedures that use a given table, using just a SELECT statement if possible? If a simple SELECT is insufficient, I can make do with a stored function. My question, I think, is somewhat similar to this other question, but that one is for SQL Server 2005: http://stackoverflow.com/questions/119679/list-of-stored-procedure-from-table (Optional) For that matter, how do you also list the triggers and constraints that use the same table in the same manner?
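
    The catalogs do not track dependencies from function bodies to tables, so the usual answer is a textual search over pg_proc (a sketch; 'my_table' stands in for the real table name, and string matching will yield false positives):

        SELECT n.nspname, p.proname
        FROM pg_proc p
        JOIN pg_namespace n ON n.oid = p.pronamespace
        WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
          AND p.prosrc ILIKE '%my_table%';

        -- triggers on the table can be read directly
        -- (tgisinternal needs 9.0+; older versions filter on tgisconstraint):
        SELECT tgname FROM pg_trigger
        WHERE tgrelid = 'my_table'::regclass AND NOT tgisinternal;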

    Read the article

  • Derby vs PostgreSql Performance Comparison

    - by Austin
    We are doing research right now on whether to switch our PostgreSQL db to an embedded Derby db. Both would be using GlassFish 3 for our data layer. Anybody have any opinions or knowledge that could help us decide? Thanks! Edit: we are writing some performance tests ourselves right now. Looking for answers based more on experience / first-hand knowledge.

    Read the article

  • PostgreSQL Crosstab Query

    - by schone
    Hi all, does anyone know how to create crosstab queries in PostgreSQL? For example, I have the following table:

        Section   Status     Count
        A         Active     1
        A         Inactive   2
        B         Active     4
        B         Inactive   5

    I would like the query to return the following crosstab:

        Section   Active   Inactive
        A         1        2
        B         4        5

    Is this possible? Thanks!
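
    The stock answer is the crosstab() function from the contrib module tablefunc (a sketch assuming the source table is named t; on 9.1+ use CREATE EXTENSION, on older releases run the contrib SQL script instead):

        CREATE EXTENSION tablefunc;

        SELECT *
        FROM crosstab(
            'SELECT section, status, count FROM t ORDER BY 1, 2',
            $$VALUES ('Active'), ('Inactive')$$
        ) AS ct (section text, active int, inactive int);

    The second argument pins the category list, so sections missing a status still line up, and the AS clause must declare one output column per category plus the row name.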

    Read the article

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way. I have an Articles table:

        CREATE TABLE [dbo].[Articles](
            [server_ref] [int] NOT NULL,
            [article_ref] [int] NOT NULL,
            [article_title] [varchar](400) NOT NULL,
            [category_ref] [int] NOT NULL,
            [size] [bigint] NOT NULL
        )

    Data (comma-delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing:

    - Indexes are disabled on the Articles table.
    - For each dumped text file: data is BULK copied to a temporary table; the temporary table is updated; old data for the server is dropped from the Articles table; the temporary table data is copied to the Articles table; the temporary table is dropped.
    - Once this process is complete for all servers, the indexes are built and the new database is copied to a web server.

    I am reasonably happy with this process, but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better, i.e.

        SELECT * FROM Articles
        WHERE server_ref = 33 AND article_title LIKE '%criteria%'

    has been satisfactory, but I want to improve the speed of searching. Obviously the LIKE is my problem here. Suggestions?

        SELECT * FROM Articles WHERE article_title LIKE '%criteria%'

    is horrendous. Partitioning is a feature of SQL Server Enterprise, but $$$, which is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred for the import process (drop data, insert data) and building indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal, but it works. I am putting thought into changing the hardware structure of the system. The thought process of having an import server and a web server is so that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would work to skip the copying-across-the-network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data. The databases would swap daily. Thoughts? This is a fantastic system, and believe it or not there is some method to my madness in giving it a big shake-up. UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.
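
    On the LIKE '%criteria%' point: PostgreSQL's built-in full-text search with a GIN index is the standard replacement for unanchored LIKE scans. A sketch against the table above (column and index names are invented, and note this matches whole words, so it changes search semantics as well as speed):

        ALTER TABLE articles ADD COLUMN title_tsv tsvector;
        UPDATE articles SET title_tsv = to_tsvector('english', article_title);
        CREATE INDEX articles_title_tsv_idx ON articles USING gin (title_tsv);

        SELECT * FROM articles
        WHERE server_ref = 33
          AND title_tsv @@ plainto_tsquery('english', 'criteria');

    A tsvector_update_trigger() keeps the column current on writes; with ~500 million rows, the one-off UPDATE and index build are heavy, so they are best folded into the rebuild step that already runs without indexes.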

    Read the article

  • PostgreSQL custom exceptions?

    - by Steve F
    In Firebird we can declare custom exceptions like so:

        CREATE EXCEPTION EXP_CUSTOM_0 'Exception: Custom exception';

    These are stored at the database level. In stored procedures, we can raise the exception like so:

        EXCEPTION EXP_CUSTOM_0;

    Is there an equivalent in PostgreSQL?
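
    PostgreSQL has no database-level exception objects; the closest equivalent is RAISE EXCEPTION in a PL/pgSQL function, optionally with a SQLSTATE that callers can trap (a sketch; the USING clause assumes a reasonably recent server):

        CREATE OR REPLACE FUNCTION raise_exp_custom_0() RETURNS void AS $$
        BEGIN
            RAISE EXCEPTION 'Exception: Custom exception'
                  USING ERRCODE = 'P0001';  -- default raise_exception code
        END;
        $$ LANGUAGE plpgsql;

        -- callers can trap it by SQLSTATE inside another function:
        -- EXCEPTION WHEN SQLSTATE 'P0001' THEN ...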

    Read the article

  • Load SQL dump in PostgreSQL without the password dependency

    - by Cédric Girard
    Hi, I want my unit test suite to load a SQL file into my database. I use a command like:

        "C:\Program Files\PostgreSQL\8.3\bin"\psql --host 127.0.0.1 --dbname unitTests --file C:\ZendStd\www\voo4\trunk\resources\sql\base_test_projectx.pg.sql --username postgres 2>&1

    It runs fine on the command line, but needs me to have a pgpass.conf. Since I need to run the unit test suite on each development PC, and on the development server, I want to simplify the deployment process. Is there any command line which includes the password? Thanks, Cédric

    Read the article

  • Ruby on Rails / PostgreSQL - Library not Loaded error when starting server - libpq.5.dylib

    - by Mike McCoy
    I have an app that is running Ruby 1.9.2, Rails 3, and PostgreSQL 8.3. It was originally set up and working with PostgreSQL 9.1, but I uninstalled 9.1 and installed and changed to 8.3 to ensure compatibility with a Heroku shared database setup. It was running OK, but it's not now. Now, when working on this app, if I run a database upgrade I get this error:

        dlopen(/Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle, 9): Library not loaded: libpq.5.dylib
          Referenced from: /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle
          Reason: no suitable image found. Did find:
          /usr/lib/libpq.5.dylib: no matching architecture in universal wrapper - /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle

    And when I try to run the server, I get this error message:

        /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg.rb:4:in `require': dlopen(/Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle, 9): Library not loaded: libpq.5.dylib (LoadError)
          Referenced from: /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle
          Reason: no suitable image found. Did find:
          /usr/lib/libpq.5.dylib: no matching architecture in universal wrapper - /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg.rb:4:in `<top (required)>'
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:68:in `require'
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:68:in `block (2 levels) in require'
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:66:in `each'
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:66:in `block in require'
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:55:in `each'
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:55:in `require'
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler.rb:122:in `require'
          from /Users/michaeljmccoy/www/mikemccoy/config/application.rb:7:in `<top (required)>'
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:53:in `require'
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:53:in `block in <top (required)>'
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:50:in `tap'
          from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:50:in `<top (required)>'
          from script/rails:6:in `require'
          from script/rails:6:in `<main>'

    I know they are very similar errors, and it probably has to do with a missing path. However, when I add the path to my .profile file and restart the terminal window, I get the same errors.

    Read the article

  • Schema qualified tables with SQLAlchemy, SQLite and Postgresql?

    - by Chris Reid
    I have a Pylons project and a SQLAlchemy model that implements schema-qualified tables:

        class Hockey(Base):
            __tablename__ = "hockey"
            __table_args__ = {'schema': 'winter'}
            hockey_id = sa.Column(sa.types.Integer,
                                  sa.Sequence('score_id_seq', optional=True),
                                  primary_key=True)
            baseball_id = sa.Column(sa.types.Integer,
                                    sa.ForeignKey('summer.baseball.baseball_id'))

    This code works great with PostgreSQL but fails on table and foreign key names when using SQLite (due to SQLite's lack of schema support):

        sqlalchemy.exc.OperationalError: (OperationalError) unknown database "winter"
        'PRAGMA "winter".table_info("hockey")' ()

    I'd like to continue using SQLite for dev and testing. Is there a way to have this fail gracefully on SQLite?

    Read the article

  • Registration form: check in PostgreSQL if username is already taken

    - by MrEnder
    Hey, I'm really new to PHP and PostgreSQL, or any database for that matter, so I'm at a loss how to do this. I need an if statement that says:

        if (the username the user just typed in is already in the database) {
            // my code here
        }

    The username the user just typed in is held in the variable $userNameSignup. How would I do that with PHP for PostgreSQL? Also, how might I redirect people to a new page once they have completed the form properly? Thanks, Shelby
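
    The database half of that check is a single parameterized query; the table and column names below are guesses, and the value should be bound as a parameter (e.g. with pg_query_params() in PHP) rather than pasted into the SQL string:

        -- returns a row exactly when the name is taken
        SELECT 1
        FROM users
        WHERE username = $1;

    If the result set is non-empty, branch into the "name taken" code; a UNIQUE constraint on users.username is still needed to make the check safe against two signups racing each other.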

    Read the article

  • Updating specific area of table in postgresql

    - by MrEnder
    OK, I am trying to update a specific area of a table in PostgreSQL. I want it to find the row that goes along with the user and then update the information; in this case, the email is the username it needs to look for. It needs to fill in areas like $aboutSelf, $hobbies, $music, $tv, $sports. I have no idea how to do this, lol ^.^ I only know how to add stuff from scratch, like creating a non-existing user.
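
    What is being described is a plain UPDATE with a WHERE clause on the email column; a sketch with guessed table and column names, taking the PHP values as bound parameters $1..$6:

        UPDATE profiles
           SET about_self = $1,
               hobbies    = $2,
               music      = $3,
               tv         = $4,
               sports     = $5
         WHERE email = $6;

    Run it the same way as a parameterized SELECT (e.g. pg_query_params() in PHP); rows matching the email are updated in place, and nothing is inserted if no row matches.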

    Read the article
