Search Results

Search found 3721 results on 149 pages for 'postgresql newbie'.

Page 9/149 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • SQLite and PostgreSQL LONGTEXT

    - by Nik
    Hello all, does anyone know how to change a column to LONGTEXT in SQLite and PostgreSQL? I have done so successfully in MySQL with "ALTER TABLE projects MODIFY description LONGTEXT;", but this clause doesn't seem to work on SQLite. I tried hard to find documentation for PostgreSQL, but its reference site is difficult to navigate. SQLite's website is better, but the only command I find relevant, ALTER TABLE, doesn't seem to support changing a column's data type at all (in fact, it doesn't even allow renaming a column). Thanks all!
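
    For reference, neither database has a LONGTEXT type; in both, an unlimited-length TEXT column plays that role. A sketch of the equivalent change, assuming projects has only an id and a description column:

        -- PostgreSQL: change the column's type in place
        ALTER TABLE projects ALTER COLUMN description TYPE TEXT;

        -- SQLite: ALTER TABLE cannot change a column's type, so rebuild the table
        CREATE TABLE projects_new (id INTEGER PRIMARY KEY, description TEXT);
        INSERT INTO projects_new SELECT id, description FROM projects;
        DROP TABLE projects;
        ALTER TABLE projects_new RENAME TO projects;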

    Read the article

  • Hibernate+PostgreSQL throws JDBCConnectionException: Cannot open connection

    - by Omer Salmanoglu
    [Hibernate configuration XML omitted here; its markup was stripped in formatting. The surviving values, in order, were: false true auto thread org.postgresql.Driver 1234 jdbc:postgresql://localhost/postgres postgres public org.hibernate.dialect.PostgreSQLDialect false true org.hibernate.transaction.JDBCTransactionFactory false false false false 100.]

    When I press the "Save" button I get the following exception:

        javax.servlet.ServletException: org.hibernate.exception.JDBCConnectionException: Cannot open connection
            javax.faces.webapp.FacesServlet.service(FacesServlet.java:325)
        root cause
        javax.faces.el.EvaluationException: org.hibernate.exception.JDBCConnectionException: Cannot open connection
            javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:102)
            com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:102)
            javax.faces.component.UICommand.broadcast(UICommand.java:315)
            javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:775)
            javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1267)
            com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:82)
            com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101)
            com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118)
            javax.faces.webapp.FacesServlet.service(FacesServlet.java:312)
        root cause
        org.hibernate.exception.JDBCConnectionException: Cannot open connection
            org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:98)
            org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:52)
            org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:449)
            org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167)
            org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:142)
            org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:85)
            org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1463)
            sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            java.lang.reflect.Method.invoke(Unknown Source)
            org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344)
            $Proxy108.beginTransaction(Unknown Source)
            com.yemex.beans.CompanyBean.saveOrUpdate(CompanyBean.java:52)
            sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            java.lang.reflect.Method.invoke(Unknown Source)
            org.apache.el.parser.AstValue.invoke(AstValue.java:196)
            org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:276)
            com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:98)
            javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:88)
            [same JSF lifecycle frames as above, down to FacesServlet.service(FacesServlet.java:312)]
        root cause
        java.sql.SQLException: No suitable driver found for jdbc:postgresql://localhost/postgres
            java.sql.DriverManager.getConnection(Unknown Source)
            java.sql.DriverManager.getConnection(Unknown Source)
            org.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:133)
            org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:446)
            [same Hibernate, reflection, EL and JSF frames as above]

    Read the article

  • Good Front end for PostgreSQL on Windows or Mac

    - by anjanb
    I'm considering using a PostgreSQL 8.4.x database on Windows/Mac for development and Linux for production, and I'm wondering what front-end tools are out there that are comparable to Toad (for Oracle). PostgreSQL comes with pgAdmin III. It's OK, but I feel there might be something better. I prefer free or open source, but if something is not too expensive (we are a non-profit), I would not mind evaluating it. The minimal tools I want are: 1) a connection manager; 2) a nice SQL editor; 3) explain plan; 4) help with embedding SQL into various languages; 5) E-R diagrams would be good, but this is not a killer feature for me. Has anything worked for you? We will be using Java 6 to build the application(s), if that is relevant at all. Thank you,

    Read the article

  • Not returning anything from a PostgreSQL function?

    - by netllama
    Is it possible for a PostgreSQL PL/pgSQL function to not return anything? I've created a function that performs a complex SQL query and inserts the results of that query into another table (SELECT INTO ...), so I have no need or interest in having the function return any output or value. Unfortunately, when I try to omit the RETURNS clause of the function declaration, I can't create the function.
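
    Declaring the function with RETURNS void covers this case; a minimal sketch (the table and query are illustrative):

        CREATE OR REPLACE FUNCTION refresh_summary() RETURNS void AS $$
        BEGIN
            -- populate the target table; nothing is handed back to the caller
            INSERT INTO summary_table (a, n)
            SELECT a, count(*) FROM source_table GROUP BY a;
        END;
        $$ LANGUAGE plpgsql;

        SELECT refresh_summary();  -- invoked like any function; the result is empty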

    Read the article

  • What is the best PHP framework for building a website around a heavily relational PostgreSQL database?

    - by Kenaniah
    First of all, the framework of choice needs to have excellent support for PostgreSQL. I don't care about MySQL, because it doesn't have half of the features the application I will be porting requires. (And when I say excellent support, I mean that their approach to database drivers has not been trained solely on MySQL.) The ideal framework:
    - should take full advantage of PHP 5.3 and PostgreSQL 8.4 features
    - should support new technologies such as OpenID and social networking
    - should support complex relationships between database relations
    - should have an intelligent validation system
    - should have a basic library of helpful views (such as pagination, navigation, etc.)
    - should probably be MVC based
    - should have excellent documentation and an active development community
    - should namespace classes intelligently
    What I'm looking for might be more of a library of utilities, as I really don't want to be restricted by the framework in what I can and cannot do. I have my own small library of core classes that take care of business logic, and I will most likely want to integrate those with the new framework as well. Thanks!

    Read the article

  • Copying Select Rows from One PostgreSQL Server to Another

    - by whollychao
    I am in need of an application that can periodically transmit selected rows from a PostgreSQL database across a network to a second PostgreSQL server. Typically this will be the most recent row added, pulled and transmitted every 10-30 seconds. The primary servers run in an MS Windows environment with a high-latency, and occasionally intermittent, network connection. Any application would therefore have to be tolerant of this and, ideally, automatically reconnect and resend data that could not be transmitted. Given the environment and the requirements, a full-blown replication package would be unnecessary. I appreciate any help anyone has with this problem.
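
    One hedged sketch of the receiving side, using the contrib dblink module (the connection string, table and columns are illustrative; a real job would track the last id copied and wrap the call in retry logic to cope with the flaky link):

        -- run on the secondary server every 10-30 seconds; requires contrib/dblink
        INSERT INTO readings (id, reading_time, value)
        SELECT t.id, t.reading_time, t.value
        FROM dblink(
            'host=primary.example.com dbname=mydb user=copier password=secret',
            'SELECT id, reading_time, value FROM readings WHERE id > 12345'
        ) AS t(id integer, reading_time timestamptz, value numeric)
        WHERE t.id NOT IN (SELECT id FROM readings);  -- skip rows already copied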

    Read the article

  • n-grams from text in PostgreSQL

    - by harshsinghal
    I am looking to create n-grams from a text column in PostgreSQL. I currently split the data (sentences) in a text column into arrays on whitespace:

        select regexp_split_to_array(sentenceData, E'\\s+') from tableName;

    Once I have this array, how do I go about creating a loop to find n-grams and writing each to a row in another table? Using unnest I can obtain all the elements of all the arrays on separate rows, and maybe I can then think of a way to get n-grams from a single column, but I'd lose the sentence boundaries, which I wish to preserve. Sample SQL code for PostgreSQL to set up the above scenario:

        create table tableName(sentenceData text);
        INSERT INTO tableName(sentenceData) VALUES('This is a long sentence');
        INSERT INTO tableName(sentenceData) VALUES('I am currently doing grammar, hitting this monster book btw!');
        INSERT INTO tableName(sentenceData) VALUES('Just tonnes of grammar, problem is I bought it in TAIWAN, and so there aint any englihs, just chinese and japanese');
        select regexp_split_to_array(sentenceData, E'\\s+') from tableName;
        select unnest(regexp_split_to_array(sentenceData, E'\\s+')) from tableName;
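
    A sketch that keeps sentence boundaries by slicing each sentence's word array in place, shown for bigrams (n = 2); the LATERAL form needs PostgreSQL 9.3+, and ngrams is a hypothetical one-column target table:

        INSERT INTO ngrams (ngram)
        SELECT array_to_string(t.words[i : i + 1], ' ')
        FROM (SELECT regexp_split_to_array(sentenceData, E'\\s+') AS words
              FROM tableName) AS t,
             LATERAL generate_series(1, array_length(t.words, 1) - 1) AS i;

    For general n, replace i + 1 with i + (n - 1) and the series bound with array_length(t.words, 1) - (n - 1).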

    Read the article

  • Rails and PostgreSQL: Role postgres does not exist

    - by Adam
    I have installed PostgreSQL on my Mac OS X Lion machine and am working on a Rails app. I use RVM to keep everything separate from my other Rails apps. For some reason, when I try to migrate the db for the first time, rake cannot find the postgres user; I get the error FATAL: role "postgres" does not exist. I have pgAdmin 3, so I can clearly see there is a postgres user in the DB (the admin account, in fact), so I'm not sure what else to do. I read somewhere about people having issues with PostgreSQL because of which path it was installed in, but I don't think I would have gotten that far if it couldn't find the db. Any clues would be gratefully received! Thanks, Adam
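
    A common cause on OS X is that the server Rails reaches is not the one pgAdmin connects to: installs like Homebrew's create a superuser named after your OS user rather than postgres. If that turns out to be the case, creating the missing role is a quick fix; a sketch, assuming you can connect as an existing superuser (the password is a placeholder):

        CREATE ROLE postgres SUPERUSER LOGIN PASSWORD 'changeme';

    Alternatively, point the username in config/database.yml at a role that does exist; SELECT rolname FROM pg_roles; lists them.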

    Read the article

  • Seam/Hibernate and PostgreSQL -- Any issues?

    - by Shadowman
    I'm currently working on a project that makes use of Seam/Hibernate (JPA) on MySQL, and I'm now considering moving to PostgreSQL after investigating some of the features it provides. My question is: is there anything I need to worry about with this configuration? Limitations? Gotchas? Things to watch out for? There will be some BLOBs stored in the database (images, X.509 certificates, etc.). Will that be a problem with PostgreSQL? Are there any particular configuration changes or tweaks I should make in my Hibernate configuration? Thanks for any advice you can give!
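
    One PostgreSQL-specific gotcha worth checking up front is binary data: PostgreSQL stores it either inline in bytea columns or out-of-line as large objects (oid), and Hibernate's @Lob mapping has historically differed between the two, so it is worth verifying which one your dialect and annotations produce. A sketch of the bytea route (the table is illustrative):

        CREATE TABLE certificates (
            id   serial PRIMARY KEY,
            name varchar(255),
            data bytea   -- inline binary; maps naturally to a Java byte[]
        );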

    Read the article

  • PostgreSQL count + sort performance

    - by invictus
    I have built a small inventory system using PostgreSQL and psycopg2. Everything works great, except that when I want to create aggregated summaries/reports of the content, I get really bad performance due to count()'ing and sorting. The DB schema is as follows:

        CREATE TABLE hosts (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255)
        );
        CREATE TABLE items (
            id SERIAL PRIMARY KEY,
            description TEXT
        );
        CREATE TABLE host_item (
            id SERIAL PRIMARY KEY,
            host INTEGER REFERENCES hosts(id) ON DELETE CASCADE ON UPDATE CASCADE,
            item INTEGER REFERENCES items(id) ON DELETE CASCADE ON UPDATE CASCADE
        );

    There are some other fields as well, but those are not relevant. I want to extract 2 different reports:
    - a list of all hosts with the number of items per host, ordered from highest to lowest count
    - a list of all items with the number of hosts per item, ordered from highest to lowest count

    I have used 2 queries for the purpose. Items with host count:

        SELECT i.id, i.description, COUNT(hi.id) AS count
        FROM items AS i
        LEFT JOIN host_item AS hi ON (i.id = hi.item)
        GROUP BY i.id
        ORDER BY count DESC LIMIT 10;

    Hosts with item count:

        SELECT h.id, h.name, COUNT(hi.id) AS count
        FROM hosts AS h
        LEFT JOIN host_item AS hi ON (h.id = hi.host)
        GROUP BY h.id
        ORDER BY count DESC LIMIT 10;

    The problem is that the queries run for 5-6 seconds before returning any data. As this is a web-based application, 6 seconds is just not acceptable. The database is heavily populated, with approximately 50k hosts, 1000 items and 400,000 host/item relations, and that will likely increase significantly when (or perhaps if) the application is used. After playing around, I found that by removing the "ORDER BY count DESC" part, both queries would execute instantly, without any delay whatsoever (less than 20 ms to finish). Is there any way I can optimize these queries so that I can get the result sorted without the delay? I tried different indexes, but since the count is computed on the fly it doesn't seem possible to utilize an index for this. I have read that count()'ing in PostgreSQL is slow, but it's the sorting that is causing me problems... My current workaround is to run the queries above as an hourly job, putting the result into a new table with an index on the count column for quick lookup. I use PostgreSQL 9.2.
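
    One way to make the sorted report index-friendly is to maintain the count as a real column; a sketch for the hosts side, under the schema above (the new names are illustrative, and an UPDATE that moves a host_item row between hosts would need an extra branch):

        ALTER TABLE hosts ADD COLUMN item_count integer NOT NULL DEFAULT 0;
        CREATE INDEX hosts_item_count_idx ON hosts (item_count DESC);

        CREATE OR REPLACE FUNCTION host_item_counter() RETURNS trigger AS $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                UPDATE hosts SET item_count = item_count + 1 WHERE id = NEW.host;
            ELSIF TG_OP = 'DELETE' THEN
                UPDATE hosts SET item_count = item_count - 1 WHERE id = OLD.host;
            END IF;
            RETURN NULL;  -- AFTER trigger; the return value is ignored
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER host_item_count AFTER INSERT OR DELETE ON host_item
            FOR EACH ROW EXECUTE PROCEDURE host_item_counter();

        -- the report becomes a top-N index scan:
        SELECT id, name, item_count FROM hosts ORDER BY item_count DESC LIMIT 10;

    The same pattern applies to items; the trade-off is extra write cost on host_item and contention on frequently updated hosts rows.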

    Read the article

  • PostgreSQL error: could not open segment 1 of relation base/20983/2416

    - by kokonut
    I'm running a PostgreSQL query and getting the following error:

        ActiveRecord::StatementInvalid (PGError: ERROR: could not open segment 1 of relation base/20983/24161 (target block 5046584): No such file or directory

    The query is of the form 'SELECT "locations".* FROM "locations" WHERE ("locations"."id" IN (115990, 78330, 77891, 78248, ...)' with about 600 ids in the IN clause (not an optimal query, I know, but it's what I have to work with for the moment!). The server is running PostgreSQL 8.4.6 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 4.4.1-4ubuntu9) 4.4.1, 64-bit. PostGIS 1.5 is also installed, and the locations table contains a geometry column. Does anyone have any idea what could be causing the error? Thanks!
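
    An error like this usually means PostgreSQL followed a pointer to a disk block that does not exist, which most often indicates a corrupted index or a missing relation file. A couple of hedged diagnostic steps (the filenode number is taken from the error message):

        -- which relation owns filenode 24161?
        SELECT relname, relkind FROM pg_class WHERE relfilenode = 24161;

        -- if it is the table or one of its indexes, rebuilding the indexes
        -- may clear a corrupt index pointer:
        REINDEX TABLE locations;

    If the missing segment belongs to the table's own heap, restoring from backup is the safe route, since that would mean actual data loss.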

    Read the article

  • PostgreSQL 9.1 on Ubuntu Lucid fails to start - how to debug?

    - by Tom Fakes
    I'm using Vagrant with Chef Solo to set up a Lucid 64 box, and a Chef recipe to install PostgreSQL 9.1 from Martin Pitt's backports. The install goes OK until the point where the database is started with /etc/init.d/postgresql start. There's a long pause, and then the command fails. If I run pg_ctl manually, the database starts! The entire contents of my postgresql-9.1-main log file are:

        2012-05-07 11:01:18 PDT LOG:  database system was shut down at 2012-05-07 11:01:16 PDT
        2012-05-07 11:01:18 PDT LOG:  database system is ready to accept connections
        2012-05-07 11:01:18 PDT LOG:  autovacuum launcher started
        2012-05-07 11:01:18 PDT LOG:  incomplete startup packet
        2012-05-07 11:01:26 PDT LOG:  received fast shutdown request
        2012-05-07 11:01:26 PDT LOG:  aborting any active transactions
        2012-05-07 11:01:26 PDT LOG:  autovacuum launcher shutting down
        2012-05-07 11:01:26 PDT LOG:  shutting down
        2012-05-07 11:01:26 PDT LOG:  database system is shut down

    I've tried to change the PostgreSQL config file to get more info into the logfile, but that hasn't worked at all. How do I debug this to find out what is failing so I can fix it?
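
    For what it's worth, the log shows the server starting cleanly and then receiving a fast-shutdown request eight seconds later, which suggests the init wrapper itself decided the start had failed (commonly because it could not reach the server on the port or socket it expected) and issued the stop. A hedged sketch of postgresql.conf settings for making the next attempt more talkative; the setting names are standard for 9.1, the values illustrative:

        log_min_messages = debug2      # much chattier server-side logging
        log_connections = on           # shows whether the init script ever connects
        log_line_prefix = '%t [%p] '   # timestamp and PID on every line
        port = 5432                    # confirm this matches what the wrapper expects
        unix_socket_directory = '/var/run/postgresql'  # where Debian/Ubuntu tools look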

    Read the article

  • PostgreSQL: Auto-partition a table

    - by Adam Matan
    Hi, I have a huge database which holds pairs of numbers (A, B), each ranging from 0 to 10,000 and stored as floats, e.g. (1, 9984.4), (2143.44, 124.243), (0.55, 0), ... Since the PostgreSQL table which stores these pairs grew quite large, I have decided to partition it into inheriting sub-tables. I intend to create 100 such tables, each storing a range of 1000x1000. The problem is that these numbers tend to come in large chunks of nearby numbers, which means that in the future some tables will be nearly empty and some will hold a very large portion of the database. Unfortunately, the distribution of future pairs is as yet unknown. I am looking for a way to automatically repartition my table: if a certain subtable holds more than a specific number of pairs, it will be automatically partitioned into four sub-sub-tables, and so on. My questions are:
    1. Is recursive partitioning and inheritance possible in PostgreSQL 8.3? Will indexes and query plans understand it?
    2. What's the best way to split a subtable once it has grown too large?
    I should point out that this isn't a live database, so a downtime of a few hours every week is totally acceptable. Thanks in advance, Adam
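
    For reference, a minimal sketch of the 8.3-era inheritance pattern for one cell of the grid (the names and the routing trigger are illustrative, and constraint_exclusion must be enabled for the planner to skip non-matching children):

        CREATE TABLE pairs (a float8 NOT NULL, b float8 NOT NULL);

        CREATE TABLE pairs_0_0 (
            CHECK (a >= 0 AND a < 1000 AND b >= 0 AND b < 1000)
        ) INHERITS (pairs);

        CREATE OR REPLACE FUNCTION pairs_insert() RETURNS trigger AS $$
        BEGIN
            IF NEW.a < 1000 AND NEW.b < 1000 THEN
                INSERT INTO pairs_0_0 VALUES (NEW.*);
            -- ... one branch per child table ...
            END IF;
            RETURN NULL;  -- row already routed; suppress the insert into the parent
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER pairs_route BEFORE INSERT ON pairs
            FOR EACH ROW EXECUTE PROCEDURE pairs_insert();

    Splitting an overfull child during a maintenance window is then mechanical: create four narrower children, INSERT INTO each SELECT the matching rows from the old child, drop the old child, and update the trigger's branches.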

    Read the article

  • Setting the comment of a column to that of another column in PostgreSQL

    - by dland
    Suppose I create a table in PostgreSQL with a comment on a column:

        create table t1 (
            c1 varchar(10)
        );
        comment on column t1.c1 is 'foo';

    Some time later, I decide to add another column:

        alter table t1 add column c2 varchar(20);

    I want to look up the comment contents of the first column and associate them with the new column:

        select comment_text from (what?) where table_name = 't1' and column_name = 'c1'

    The (what?) is going to be a system table, but after having looked around in pgAdmin and searched the web, I haven't learnt its name. Ideally I'd like to be able to write:

        comment on column t1.c2 is (select ...);

    but I have a feeling that's stretching things a bit far. Thanks for any ideas. Update: based on the suggestions I received here, I wound up writing a program to automate the task of transferring comments, as part of a larger process of changing the datatype of a PostgreSQL column. You can read about that on my blog.
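
    For reference, the read side is the col_description() helper over pg_attribute, and because COMMENT ON does not accept a subquery, the write side needs dynamic SQL, for example in a PL/pgSQL function (the function is a sketch; names are illustrative):

        -- read c1's comment
        SELECT col_description('t1'::regclass, attnum)
        FROM pg_attribute
        WHERE attrelid = 't1'::regclass AND attname = 'c1';

        -- copy a comment from one column to another via dynamic SQL
        CREATE OR REPLACE FUNCTION copy_comment(tbl regclass, src name, dst name)
        RETURNS void AS $$
        DECLARE
            txt text;
        BEGIN
            SELECT col_description(tbl, attnum) INTO txt
            FROM pg_attribute WHERE attrelid = tbl AND attname = src;
            EXECUTE 'COMMENT ON COLUMN ' || tbl::text || '.' || quote_ident(dst)
                    || ' IS ' || coalesce(quote_literal(txt), 'NULL');
        END;
        $$ LANGUAGE plpgsql;

        SELECT copy_comment('t1', 'c1', 'c2');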

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk and 10 MB of indexes according to pgAdmin. The problem is that inserting the rows by whatever method literally takes ages, up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts are in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2. After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by having 3 foreign keys. Disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL overhead in the 180-second case, as I was testing in psql directly; in Django, add ~50% overhead for WAL logging. I tried @commit_on_success, same slowness, and I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10x 16 MB log segments? Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys, to:
    - a small table, 198 rows, 16 kB on disk
    - a large table, 1.2M rows, 59 MB data + 89 MB index on disk
    - a large table, 2.2M rows, 198 + 210 MB
    So, am I doomed to either drop the foreign keys manually, or use the table in a very un-Django way by saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
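
    For what it's worth, the standard bulk-load workaround is to drop the constraints just around the hourly regeneration and re-add them afterwards, so the rows are validated in one set-based check per constraint instead of row by row (constraint, column and table names here are illustrative):

        BEGIN;
        ALTER TABLE new_table DROP CONSTRAINT new_table_small_id_fkey;
        ALTER TABLE new_table DROP CONSTRAINT new_table_big1_id_fkey;
        ALTER TABLE new_table DROP CONSTRAINT new_table_big2_id_fkey;

        INSERT INTO new_table SELECT * FROM staging_table;

        -- re-adding revalidates each FK as a single join against the referenced table
        ALTER TABLE new_table ADD CONSTRAINT new_table_small_id_fkey
            FOREIGN KEY (small_id) REFERENCES small_table (id);
        ALTER TABLE new_table ADD CONSTRAINT new_table_big1_id_fkey
            FOREIGN KEY (big1_id) REFERENCES big_table1 (id);
        ALTER TABLE new_table ADD CONSTRAINT new_table_big2_id_fkey
            FOREIGN KEY (big2_id) REFERENCES big_table2 (id);
        COMMIT;

    Declaring the constraints DEFERRABLE INITIALLY DEFERRED and loading inside one transaction is a lighter-touch variant worth benchmarking.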

    Read the article

  • Drupal 7 configuration error with PostgreSQL on Mac OS 10.6.5

    - by Sam
    I am trying to configure Drupal 7 with PostgreSQL. At the database setup step, I get the following error:

        Warning: PDO::__construct(): [2002] No such file or directory (trying to connect via unix:///var/mysql/mysql.sock) in DatabaseConnection->__construct() (line 300 of /Users/shamod/Sites/drupal/7/includes/database/database.inc).

        In order for Drupal to work, and to continue with the installation process, you must resolve all issues reported below. For more help with configuring your database server, see the installation handbook. If you are unsure what any of this means you should probably contact your hosting provider.

        Failed to connect to your database server. The server reports the following message: SQLSTATE[HY000] [2002] No such file or directory. Is the database server running? Does the database exist, and have you entered the correct database name? Have you entered the correct username and password? Have you entered the correct database hostname?

    NOTE: I am trying to connect to PostgreSQL, but it fails with a /var/mysql/mysql.sock error. I have set up the database connection string in settings.php for PostgreSQL, and it still does not work. Any idea?

    Read the article

  • PostgreSQL 9.2 final version released: increased performance and extensibility, developer-oriented flexibility

    The PostgreSQL Global Development Group announces the release of PostgreSQL 9.2, the latest version of the reference open-source database management system. Since the beta announcement in May, developers and integrators have been praising the advances in performance, flexibility and extensibility, and they expect massive adoption of this version. [IMG]http://scheu.developpez.com/tutoriels/postgresql/log-shipping/images/logo-pgsql.png[/IMG] "PostgreSQL 9.2 includes native JSON support, covering indexes, further improved performance and replication, and many other features. We are eagerly awaiting this release. It will be available in "Early Access" as soon as it...

    Read the article

  • Postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12 GB of memory, most of which (at least 10 GB) can be allocated solely to Postgres. The system also has a 6-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. Firstly, a single-threaded, single-connection process will do many inserts into 2-4 tables linked by foreign keys. Secondly, many different complex queries are run against the resulting data, using GROUP BY extensively; this part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, so there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What configuration changes to the default Postgres config would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers etc. (relevant doco) Thanks!
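
    A hedged starting point for postgresql.conf under those constraints (12 GB RAM, ~5 connections, GROUP BY-heavy queries); these are common rules of thumb to benchmark from, not definitive values:

        shared_buffers = 2GB           # roughly 25% of RAM is a common starting point
        work_mem = 512MB               # per sort/hash per connection; only safe because
                                       # the connection count is capped at ~5
        maintenance_work_mem = 1GB     # speeds up index builds and vacuum
        effective_cache_size = 9GB     # planner hint: OS cache plus shared_buffers
        checkpoint_segments = 32       # spreads checkpoint I/O during insert bursts

    With so few connections, a large per-connection work_mem is the usual lever for GROUP BY-heavy workloads, since it lets hash aggregates and sorts stay in memory.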

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means; the answer is as frequently as I reasonably can, but I will be pragmatic: as a benchmark, let's say we are hoping for every 15 min) and feed it into a data warehouse.

    How much data? At peak times we are talking approx 80-100k rows per min hitting the OLTP side; off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.

    Best solution? From what I can piece together, the most practical solution is as follows:
    - create a TRIGGER to write all DML activity to a rotating CSV log file
    - perform whatever transformations are required
    - use the native DW data-pump tool to efficiently pump the transformed CSV into the DW

    Why this approach? TRIGGERs allow selective tables to be targeted rather than being system-wide, output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach, and its overhead is acceptable. CSV is easy and fast to transform, and easy to pump into the DW.

    Alternatives considered:
    - Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is that it looked very verbose relative to what I needed, and it was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. It would certainly make the admin easier, as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log).
    - Querying the data directly via an ETL tool such as Talend and pumping it into the DW. The problem is that the OLTP schema would need to be tweaked to support this, and that has many negative side effects.
    - Using a tweaked/hacked SLONY. SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner.
    - Using the WAL.

    Has anyone done this before? Want to share your thoughts?
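
    As a point of reference for the TRIGGER step: plain PL/pgSQL triggers cannot write files, so one common shape is a change table that the 15-minute job drains to CSV with COPY, which 8.3 supports over an arbitrary query (all names here are illustrative; a real job would drain by id range so rows arriving between the COPY and the cleanup are not lost):

        CREATE TABLE orders_changes (op char(1), LIKE orders);

        CREATE OR REPLACE FUNCTION orders_audit() RETURNS trigger AS $$
        BEGIN
            IF TG_OP = 'DELETE' THEN
                INSERT INTO orders_changes SELECT 'D', OLD.*;
            ELSE
                INSERT INTO orders_changes SELECT substr(TG_OP, 1, 1), NEW.*;
            END IF;
            RETURN NULL;  -- AFTER trigger; the return value is ignored
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER orders_audit AFTER INSERT OR UPDATE OR DELETE ON orders
            FOR EACH ROW EXECUTE PROCEDURE orders_audit();

        -- the extraction job, every ~15 min (server-side path, so superuser only):
        COPY (SELECT * FROM orders_changes) TO '/var/dw/orders_batch.csv' WITH CSV;
        TRUNCATE orders_changes;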

    Read the article

  • List stored functions using a table in PostgreSQL

    - by Paolo B.
    Just a quick and simple question: in PostgreSQL, how do you list the names of all stored functions/stored procedures that use a given table, using just a SELECT statement if possible? If a simple SELECT is insufficient, I can make do with a stored function. My question, I think, is somewhat similar to this other question, but that one is for SQL Server 2005: http://stackoverflow.com/questions/119679/list-of-stored-procedure-from-table (Optional) For that matter, how do you also list the triggers and constraints that use the same table in the same manner?
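
    There is no catalog-level dependency from a function's body to the tables it touches (the body is stored as plain text), so the usual approach is a textual search over pg_proc, plus proper catalog joins for triggers and constraints; a sketch with 'mytable' as a placeholder:

        -- candidate functions whose source mentions the table
        SELECT n.nspname, p.proname
        FROM pg_proc p
        JOIN pg_namespace n ON n.oid = p.pronamespace
        WHERE p.prosrc ILIKE '%mytable%'
          AND n.nspname NOT IN ('pg_catalog', 'information_schema');

        -- triggers on the table (includes internal FK triggers on some versions)
        SELECT tgname FROM pg_trigger WHERE tgrelid = 'mytable'::regclass;

        -- constraints on, or referencing, the table
        SELECT conname FROM pg_constraint
        WHERE conrelid = 'mytable'::regclass OR confrelid = 'mytable'::regclass;

    The text match can return false positives (the name may appear in a comment or a longer identifier), so treat the first query as a candidate list rather than an exact answer.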

    Read the article

  • Derby vs PostgreSQL Performance Comparison

    - by Austin
    We are doing research right now on whether to switch our PostgreSQL db to an embedded Derby db. Both would be using GlassFish 3 for our data layer. Does anybody have any opinions or knowledge that could help us decide? Thanks! Edit: we are writing some performance tests ourselves right now; we're looking for answers based more on experience / first-hand knowledge.

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >