Search Results

Search found 1922 results on 77 pages for 'postgresql contrib'.

Page 22/77

  • database on SSD: data only or the DBM program too?

    - by simone
    I plan on moving the data I use for statistical analysis (100-ish GB) onto an SSD. The data is either sqlite single-file db's or PostgreSQL-managed data. The SSD is 240 GB, 550 MB/s read and 520 MB/s write. Should I reserve that space for the data only, or would it be a good idea to install the operating system (Mac OS X) and the application directory (Adobe Suite, Microsoft Office and the like) on the SSD too? And would it make a substantial speed difference if I also installed the PostgreSQL binaries on the SSD? I have plenty of other space (another 300 GB hard drive, and a 1 TB one). I don't know the specs of the non-SSD drives, though they're our standard equipment on all Macs, and they're definitely OK. Thanks.

    Read the article

  • What's the difference between sudo su - postgres and sudo -u postgres?

    - by Craig Ringer
    PostgreSQL uses peer authentication on unix sockets by default, where the unix user must be the same as the PostgreSQL user. So people frequently use su or sudo to become the postgres superuser. I often see people using constructs like: sudo su - postgres rather than sudo -u postgres -i and I'm wondering why. Similarly, I've seen: sudo su - postgres -c psql instead of sudo -u postgres psql. Without the leading sudo, the su versions would make some sense if you were on an old platform without sudo. But why, on anything less than a prehistoric UNIX or Linux, would you use sudo su?

    Read the article

  • Specifying a source in puppet doesn't seem to work

    - by Mr Wilde
    I have been attempting to create a manifest for installing postgres 9.1 using puppet on a CentOS 5 server. I have been adapting the instructions at http://wiki.postgresql.org/wiki/YUM_Installation, and when I go through the process manually I am able to install it. It would seem to me, therefore, that a puppet manifest containing package { 'postgresql91-server': ensure => installed, source => 'http://yum.postgresql.org/9.1/redhat/rhel-5-x86_64/pgdg-centos91-9.1-4.noarch.rpm' } should work. However, on attempting to apply this manifest I get: err: /Stage[main]//Package[postgresql91-server]/ensure: change from absent to present failed: Could not find package postgresql91-server. Any expert puppeteers out there able to help me?
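
    With the default yum provider, Puppet ignores the source URL and asks yum for the package, which fails because the PGDG repository isn't configured yet. One sketch of a fix (untested; the repo package name matches the RPM in the question) is to install the repository RPM with the rpm provider first, then let yum resolve the server package from it:

      package { 'pgdg-centos91':
        ensure   => installed,
        provider => 'rpm',
        source   => 'http://yum.postgresql.org/9.1/redhat/rhel-5-x86_64/pgdg-centos91-9.1-4.noarch.rpm',
      }

      package { 'postgresql91-server':
        ensure  => installed,
        require => Package['pgdg-centos91'],
      }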

    Read the article

  • How to scale out OpenStreetMap data efficiently

    - by Pierre
    For over a year now, I've been running an in-house PostGIS server filled with OSM data, used for both Mapnik-based tile generation and Nominatim-based geocoding, updated with daily diffs. This works pretty well. However, as usage is growing exponentially, I would like to achieve better reliability and performance by adding additional PostgreSQL servers. And I'm kind of lost. Since PostgreSQL doesn't seem to handle replication by itself, I would think about using a piece of middleware like PgPool-II to keep the servers in sync. But I'm afraid it would be overkill for this usage: a very high read-to-write ratio, where all writes are done at the same exact time every day. My questions are simple: What would you do to keep these servers in sync? And what is done for this at the OpenStreetMap Foundation, MapQuest, Mapbox or CloudMade? Thanks.
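
    For what it's worth, PostgreSQL has shipped built-in streaming replication with hot standby since 9.0, which fits a read-mostly workload like this: reads spread across standbys, all writes going to the primary. A minimal sketch of the 9.x-era settings (hostnames, user and addresses are placeholders):

      # postgresql.conf on the primary
      wal_level = hot_standby
      max_wal_senders = 3

      # pg_hba.conf on the primary: let the standby connect for replication
      host  replication  replicator  192.0.2.10/32  md5

      # recovery.conf on each standby
      standby_mode = 'on'
      primary_conninfo = 'host=primary.example.org user=replicator'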

    Read the article

  • Django freezes when adding objects through the admin

    - by Quartz
    I have a Django 1.1 website running via Apache/mod_wsgi with a PostgreSQL 8.3.1 database. Recently, when I added objects through the admin interface, the connection froze up and I lost several worker processes, so I had to restart Apache. Upon trying to replicate this, I found that it only happens through the admin: if I go into the Django shell and issue the same insert, it works fine. Also, performing an UPDATE operation works without issues, so it's just INSERTs. I've rebuilt indexes on PostgreSQL and run a full VACUUM. Error logs don't show anything, and I can't figure out for the life of me what's wrong. Anyone have any ideas?
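
    When an INSERT hangs like this, it is often waiting on a lock held by another (possibly idle) transaction. One way to check from psql while the admin request is stuck, assuming the 8.x catalogs (a sketch):

      -- list ungranted lock requests and the tables involved
      SELECT l.pid, l.mode, l.granted, c.relname
      FROM pg_locks l
      LEFT JOIN pg_class c ON c.oid = l.relation
      WHERE NOT l.granted;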

    Read the article

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine: 3 hard drives using RAID 5 (strip element size: 1M; read policy: Adaptive read ahead; write policy: Write Through), partitioned as /boot 200 MB ext2, / 15 GB ext3, swap 10 GB, LVM for the rest (~500 GB). I installed postgresql and created a big database (over 1 GB). I have an SQL request that takes a long time to run (a SELECT statement, so it only reads data from the database): approximately 5.5 seconds. Then I installed Xen and created a domU with another Debian install. On this OS I also installed postgresql, with the same database. The same SQL request takes only 2.5 seconds to run. I checked the kernel on both dom0 and domU: uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same; for those that are relevant, I changed their values to make them match on both systems using sysctl. I saw no changes (the requests still take the same amount of time). After this, I checked the file systems. I used ext3 on domU. Still no changes. I installed hdparm and ran hdparm -Tt on all my partitions on both systems, and I get similar results. Now I am stuck: I don't know what is different, and what could be the cause of such a big difference. Additional info: Debian runs on a Dell PowerEdge 2950 server; postgresql: 9.1.9 (both dom0 and domU); xen-linux-system: 3.2.0; xen-hypervisor: 4.1. Thanks. EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and write speed. Here is domU:

      root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
      ^C2020+0 records in
      2020+0 records out
      2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s
      root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
      2020+0 records in
      2020+0 records out
      2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s

    And here is dom0:

      root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
      ^C1693+0 records in
      1693+0 records out
      1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s
      root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
      1693+0 records in
      1693+0 records out
      1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s

    What can be the cause of this caching behaviour? And how can we "fix" it? Can we apply it to dom0? EDIT 2: I switched my virtual disk type. To do so I followed this article: I did dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M, then in /etc/xen/test1.cfg I changed the disk parameter to use file: instead of phy:. That should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres).
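
    One way to take the page cache out of such dd measurements (a sketch; the test file path is just an example) is to use direct I/O, and to make dd flush before it reports the write timing:

      # write test, bypassing the page cache and syncing before timing is reported
      dd if=/dev/zero of=/root/ddtest bs=1M count=2000 oflag=direct conv=fsync
      # read test, bypassing the page cache
      dd if=/root/ddtest of=/dev/null bs=1M count=2000 iflag=direct

    If domU still reads faster with direct I/O, the cache in play may be on the dom0 side of the virtual disk rather than inside the guest.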

    Read the article

  • How do you handle the task of changing the schema of a production MySQL database?

    - by Continuation
    One of the biggest complaints I have heard about MySQL is that it locks up a table if you try to change its schema, like adding a column or adding an index. By "locking up the table" does it mean I can neither read nor write to the table? Sometimes for hours? That seems a pretty severe limitation. I was going to use MySQL for my new project but this gives me pause. Is there a workaround for this? How do you handle the task of changing the schema of your production MySQL database? By the way, someone told me PostgreSQL doesn't have this problem. Is that true - can I both read and write to a PostgreSQL table while changing its schema? Is there any performance penalty incurred? Would love to hear your experiences.
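
    On the PostgreSQL side, the answer is nuanced: DDL still takes a lock, but some common changes hold it only for an instant because they only touch the catalogs. A sketch (table and index names are made up):

      -- Adding a nullable column with no default is a catalog-only change,
      -- so the exclusive lock is held only very briefly:
      ALTER TABLE orders ADD COLUMN note text;

      -- Index builds normally block writes, but PostgreSQL (8.2+) can build
      -- them without blocking writes, at the cost of a slower build:
      CREATE INDEX CONCURRENTLY orders_note_idx ON orders (note);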

    Read the article

  • Linux freezes every few seconds

    - by Zeppomedio
    We're having an issue where one of our Linux boxes (Ubuntu 10.04 LTS, running on an EC2 quadruple-large instance with 68 GB of RAM and 8 virtual cores at 3.25 GHz each) freezes up every few seconds. Typing in an ssh session will freeze, and running strace on one of the Postgresql processes usually shows: 02:37:41.567990 semop(7831581, {{3, -1, 0}}, 1 for a few seconds before it proceeds (it always gets stuck at that semop). OProfile shows that most of the time is spent in the kernel (60%) versus 37% in Postgresql. The result of these halts (which began suddenly a day ago) is that load on the box has gone from 0.7 to 10+, and it causes our entire stack to slow down. Any ideas on how to track down what's going on? iostat doesn't show the disks being particularly slow or overloaded, and top shows user cpu % spike from 8% to about 40% whenever these back-ups happen.

    Read the article

  • Postgres Remote Access

    - by boot-baby-boot
    I am trying to connect to postgres remotely. I have followed this tutorial http://www.cyberciti.biz/faq/howto-fedora-linux-install-postgresql-server/ and have executed the following commands to see if remote access is possible:

      [root@printmyworld ~]# egrep -i "(listen_addresses|port|tcpip_socket).*=.+" /var/lib/pgsql/data/postgresql.conf
      #listen_addresses = '*'  # what IP address(es) to listen on;
      #port = 5432
      [root@printmyworld ~]# lsof +c0 -anPiTCP -upostgres
      COMMAND    PID  USER     FD  TYPE DEVICE     SIZE NODE NAME
      postmaster 9323 postgres 3u  IPv4 2875987353 TCP  127.0.0.1:5432 (LISTEN)
      postmaster 9323 postgres 4u  IPv6 2875987354 TCP  [::1]:5432 (LISTEN)

    I am suspicious of this line:

      postmaster 9323 postgres 3u  IPv4 2875987353 TCP  127.0.0.1:5432 (LISTEN)

    My server IP address is 1yy.000.1xx.000. Should it be 1yy.000.1xx.000:5432?
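
    The grep output shows listen_addresses is still commented out, so the server falls back to listening on loopback only. A sketch of the usual fix (the client network below is a placeholder):

      # /var/lib/pgsql/data/postgresql.conf -- uncomment and set:
      listen_addresses = '*'
      port = 5432

      # /var/lib/pgsql/data/pg_hba.conf -- allow the remote clients:
      host  all  all  192.0.2.0/24  md5

      # then restart the server, e.g.:
      service postgresql restart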

    Read the article

  • Sphinx 2.0.8 with Postgresql 9.2.4

    - by Calvin
    I want to install Sphinx 2.0.8 from source on CentOS 5.6 with PostgreSQL 9.2.4. My server is: Linux localhost.localdomain 2.6.18-348.6.1.el5 #1 SMP Tue May 21 15:29:55 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux. First, I configure with: ./configure --prefix=/usr/local/sphinx --with-pgsql --without-mysql --with-pgsql-libs=/var/lib/pgsql/9.2/data/ --with-pgsql-includes=/usr/pgsql-9.2/include/ and that seems to work, but when I run make this error appears: /usr/bin/ld: cannot find -lpq I have installed postgresql92-devel and -libs, also libpqxx, and have wrestled with this error all day but haven't solved it yet. Thanks for helping me.
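
    "cannot find -lpq" means the linker can't locate libpq.so, and the configure line above points --with-pgsql-libs at the data directory rather than the library directory. A sketch of the corrected invocation (assuming the PGDG 9.2 packages, which install libraries under /usr/pgsql-9.2/lib):

      ./configure --prefix=/usr/local/sphinx --with-pgsql --without-mysql \
          --with-pgsql-libs=/usr/pgsql-9.2/lib \
          --with-pgsql-includes=/usr/pgsql-9.2/include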

    Read the article

  • How to make NSIS create a file in an %APPDATA% of another user?

    - by SCO
    I wrote an NSIS installer script for postgresql 9.1. The installer works properly, but after the reboot the service is not started (right after the install, the database started properly though). I guess this is because the postgres service user has no pgpass.conf file in its %APPDATA%. As far as I understand my install script, the pgpass.conf file is added to the %APPDATA% of the user running the installer (an administrator account in my case). This will not help. I tried the following, to add pgpass.conf for all users, but I guess this adds it to a kind of wildcard location, not to the %APPDATA% of each user: SetShellVarContext "all" SetOutPath "$APPDATA\postgresql" File config\pgpass.conf SetShellVarContext "current" SetOutPath "$APPDATA\postgresql" File config\pgpass.conf I couldn't find the macro name for c:/Users/postgres in the documentation. This could be a way to achieve it, but with Windows XP and 7 I need a portable way to address the /Users directory. I wish I could use something like SetShellVarContext "postgres", and then have NSIS write the pgpass.conf file in c:\Users\postgres\AppData\postgresql. Is there a way to do this? Thank you!
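
    NSIS has no per-user SetShellVarContext, but the machine's profiles directory can be read from the registry instead of hard-coding C:\Users. A sketch (untested; it assumes the postgres account's profile already exists and uses the Vista/7 AppData layout, so XP would need its own path):

      ReadRegStr $0 HKLM "SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" "ProfilesDirectory"
      ExpandEnvStrings $0 $0  ; the registry value is usually %SystemDrive%\Users
      SetOutPath "$0\postgres\AppData\Roaming\postgresql"
      File config\pgpass.conf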

    Read the article

  • Domain validates but won't save

    - by marko
    I have the following setup: a class, say, Car that has a CarPart (belongsTo = [car:Car]). When I'm creating a Car I also want to create some default CarParts, so I do: def car = new Car(bla bla bla) def part = new CarPart(car:car) Now, when I do car.validate() or part.validate() it seems fine. But when I do if (car.save() && part.save()) I get this exception:

      2012-03-24 14:02:21,943 [http-8080-4] ERROR util.JDBCExceptionReporter - Batch entry 0 insert into car_part (version, car_id, id) values ('0', '297', '298') was aborted. Call getNextException to see the cause.
      2012-03-24 14:02:21,943 [http-8080-4] ERROR util.JDBCExceptionReporter - ERROR: value too long for type character varying(6)
      2012-03-24 14:02:21,943 [http-8080-4] ERROR events.PatchedDefaultFlushEventListener - Could not synchronize database state with session
      org.hibernate.exception.DataException: Could not execute JDBC batch update
      Stacktrace follows:
      java.sql.BatchUpdateException: Batch entry 0 insert into car_part (version, deal_id, id) values ('0', '297', '298') was aborted. Call getNextException to see the cause.
          at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2621)
          at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1837)
          at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407)
          at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2754)
          at $Proxy20.flush(Unknown Source)
          at ristretto.DealController$_closure5.doCall(DealController.groovy:109)
          at ristretto.DealController$_closure5.doCall(DealController.groovy)
          at java.lang.Thread.run(Thread.java:722)

    Any ideas?
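
    The database is rejecting one of the inserted values because some column was created as varchar(6). One quick way to find which column that is (a sketch; standard information_schema, so it should work on any PostgreSQL):

      SELECT column_name, data_type, character_maximum_length
      FROM information_schema.columns
      WHERE table_name = 'car_part'
        AND character_maximum_length = 6;

    In Grails, a String property with a size or maxSize constraint maps to a varchar of that length, so the fix is usually to adjust the constraint and migrate the column.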

    Read the article

  • Security benefits from a second opinion, are there flaws in my plan to hash & salt user passwords vi

    - by Tchalvak
    Here is my plan, and goals. Overall goals: Security with a certain amount of simplicity and database-to-database transferability, 'cause I'm no expert and could mess it up, and I don't want to have to ask a lot of users to reset their passwords. Easy to wipe the passwords for publishing a "wiped" database of test data (e.g. I'd like to be able to use a postgresql statement to simply reset all passwords to something simple so that testers can use that testing data for themselves). Plan: Hashing the passwords: Account creation records the original email that an account is created with, forever. A global salt is used, e.g. "90fb16b6901dfceb73781ba4d8585f0503ac9391". An account-specific salt, the original email the account was created with, is used, e.g. "[email protected]". The user's password is used, e.g. "password123" (I'll be warning against weak passwords in the signup form). The combination of the global salt, account-specific salt, and password is hashed via some hashing method in postgresql (I haven't been able to find documentation for hashing functions in postgresql, but being able to use SHA-2 or something like that would be nice if I could find it). The hash gets stored in the database. Recovering an account: To change their password, they have to go through a standard password reset (and that reset email gets sent to the original email as well as the most recent account email that they have set). Flaws? Are there any flaws with this that I need to address? And are there best practices for doing hashing fully within postgresql?
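
    Hashing inside PostgreSQL lives in the pgcrypto contrib module. A sketch of both approaches (the literal salts and password are the example values from the plan above):

      -- on 9.1+: CREATE EXTENSION pgcrypto;
      -- on older versions, run the pgcrypto.sql script from contrib instead.

      -- the plan as written, using SHA-256 via digest():
      SELECT encode(digest('90fb16b6901dfceb73781ba4d8585f0503ac9391'
                           || '[email protected]' || 'password123', 'sha256'), 'hex');

      -- pgcrypto's crypt()/gen_salt() with bcrypt is generally better practice
      -- for passwords, since it is deliberately slow and salted per row:
      SELECT crypt('password123', gen_salt('bf', 8));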

    Read the article

  • Installing a .deb file manually?

    - by stef
    apt-get install gitosis --fix-missing on my Linode still leads to a 404 (Failed to fetch http://ftp.debian.org/debian/pool/main/g/gitosis/gitosis_0.2+20080825-2_all.deb 404 Not Found [IP: 130.89.148.12 80]). The correct file location seems to be http://ftp.debian.org/debian/pool/main/g/gitosis/gitosis_0.2+20090917-11_all.deb Is there any way I can install this without apt-get, or point apt-get in the right direction somehow? Several other packages on my Debian Linode also point to 404, both from the command line and Virtualmin. EDIT: Machine details: Debian 5.0 64bit (Latest 2.6 (2.6.39.1-x86_64-linode19)) EDIT 2: My sources list:

      # main repo
      deb http://ftp.debian.org/debian/ lenny main contrib non-free
      deb-src http://ftp.debian.org/debian/ lenny main contrib non-free
      deb http://security.debian.org/ lenny/updates main contrib non-free
      deb-src http://security.debian.org/ lenny/updates main contrib non-free
      deb http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free
      deb-src http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free
      # contrib & non-free repos
      #deb http://ftp.debian.org/debian/ lenny contrib non-free
      #deb-src http://ftp.debian.org/debian/ lenny contrib non-free
      #deb http://security.debian.org/debian/ lenny/updates contrib non-free
      #deb-src http://security.debian.org/debian/ lenny/updates contrib non-free
      deb http://software.virtualmin.com/gpl/debian/ virtualmin-lenny main
      deb http://software.virtualmin.com/gpl/debian/ virtualmin-universal main
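
    A .deb can be fetched and installed directly with dpkg, letting apt repair any missing dependencies afterwards (the URL is the working one from the question):

      wget http://ftp.debian.org/debian/pool/main/g/gitosis/gitosis_0.2+20090917-11_all.deb
      sudo dpkg -i gitosis_0.2+20090917-11_all.deb
      sudo apt-get -f install

    The 404s on several packages suggest stale package indexes, so running sudo apt-get update first may fix apt-get itself.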

    Read the article

  • Bad value for type timestamp on production server

    - by Juan Javaloyes
    I'm working with: seam 2.2.2 + hibernate + richfaces + jboss 5.1 + postgres. I have a module which needs to load some data from the database. Easy. The problem is, in development it works fine, 100%, but when I deploy to my production server and try to get the data, an error is raised:

      could not read column value from result set: fechahor9_504_; Bad value for type timestamp : [C@122e5cf
      SQL Error: 0, SQLState: 22007
      Bad value for type timestamp : [C@122e5cf
      javax.persistence.PersistenceException: org.hibernate.exception.DataException: could not execute query
      [more errors]
      Caused by: org.postgresql.util.PSQLException: Bad value for type timestamp : [C@122e5cf
          at org.postgresql.jdbc2.TimestampUtils.loadCalendar(TimestampUtils.java:232)
      [more errors]
      Caused by: java.lang.NumberFormatException: Trailing junk on timestamp: ''
          at org.postgresql.jdbc2.TimestampUtils.loadCalendar(TimestampUtils.java:226)

    I can't understand why it works on my machine (development) and not in production. Any clues? Has anyone gone through the same problem? It's exactly the same build.
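
    [C@122e5cf is Java's default toString() for a char[], which hints that the production column is a character type (or holds an empty string) where Hibernate expects a timestamp. One sketch of a check to run on both databases (the column prefix is a guess from the Hibernate alias in the error):

      SELECT table_name, column_name, data_type
      FROM information_schema.columns
      WHERE column_name LIKE 'fechahor%';

    If the types differ between development and production, the schema has drifted and the entity mapping is only valid for one of them.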

    Read the article

  • Deadlock error in INSERT statement

    - by Gnanam
    We've got a web-based application. There is a time-bound database operation (INSERTs and UPDATEs) in the application which takes a long time to complete, so this particular flow has been moved into a Java Thread so the request will not wait (block) for the whole database operation to finish. My problem is, if more than one user comes across this particular flow, I'm facing the following error thrown by PostgreSQL: org.postgresql.util.PSQLException: ERROR: deadlock detected Detail: Process 13560 waits for ShareLock on transaction 3147316424; blocked by process 13566. Process 13566 waits for ShareLock on transaction 3147316408; blocked by process 13560. The above error is consistently thrown in INSERT statements. Additional information: 1) I have a PRIMARY KEY defined on this table. 2) There are FOREIGN KEY references in this table. 3) A separate database connection is passed to each Java Thread. Technologies: Web server: Tomcat v6.0.10; Java v1.6.0 Servlet; Database: PostgreSQL v8.2.3; Connection management: pgpool II.
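
    In 8.x, every INSERT into a table with FOREIGN KEY references takes a share lock on the referenced parent rows, so two concurrent threads inserting children of the same parents in opposite orders can deadlock exactly like this. One common mitigation (a sketch; the table and ids are made up) is to lock the parent rows in a fixed order before inserting:

      BEGIN;
      -- lock parents in a deterministic order first
      SELECT 1 FROM parent WHERE id IN (4, 7) ORDER BY id FOR UPDATE;
      -- ... now perform the INSERTs that reference those parents ...
      COMMIT;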

    Read the article

  • Are there any issues with MySQL's i18n(indic language) support ?

    - by anjanb
    Hi All, We're evaluating MySQL and PostgreSQL for building our indic-language web application. One of my colleagues mentioned that MySQL had issues with i18n. I mostly come from the Oracle world, and although I've played a little with MySQL, I don't know enough to know whether there are issues with its i18n support. Does anyone know of issues with MySQL's i18n support, and whether PostgreSQL would be better placed for building an application with indic language support (Kannada, Telugu, Tamil, etc.)? Just so you know, we're going to be using J2EE to build this application and we will be using JDBC drivers to access the DB. P.S.: Will anything change if we were to use Rails to build the app instead of J2EE? Thank you,
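
    One data point: the Indic scripts mentioned all live in Unicode's Basic Multilingual Plane, so even MySQL's 3-byte utf8 character set can store them; problems historically showed up with collation and with connections silently defaulting to latin1. A sketch of an explicitly UTF-8 setup (names are placeholders):

      CREATE DATABASE myapp CHARACTER SET utf8 COLLATE utf8_general_ci;
      -- and a JDBC URL that forces the connection encoding to match:
      -- jdbc:mysql://localhost/myapp?useUnicode=true&characterEncoding=UTF-8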

    Read the article

  • Why does SQLAlchemy with psycopg2 use_native_unicode have poor performance?

    - by Bob Dover
    I'm having a difficult time figuring out why a simple SELECT query is taking such a long time with sqlalchemy using raw SQL (I'm getting 14600 rows/sec, but when running the same query through psycopg2 without sqlalchemy, I'm getting 38421 rows/sec). After some poking around, I realized that toggling sqlalchemy's use_native_unicode parameter in the create_engine call actually makes a huge difference. This query takes 0.5 secs to retrieve 7300 rows:

      from sqlalchemy import create_engine
      engine = create_engine("postgresql+psycopg2://localhost...", use_native_unicode=True)
      r = engine.execute("SELECT * FROM logtable")
      fetched_results = r.fetchall()

    This query takes 0.19 secs to retrieve the same 7300 rows:

      engine = create_engine("postgresql+psycopg2://localhost...", use_native_unicode=False)
      r = engine.execute("SELECT * FROM logtable")
      fetched_results = r.fetchall()

    The only difference between the 2 queries is use_native_unicode. But sqlalchemy's own docs state that it is better to keep use_native_unicode=True (http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html). Does anyone know why use_native_unicode is making such a big performance difference? And what are the ramifications of turning off use_native_unicode?

    Read the article

  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed up a particular file, including the current updates to that file - i.e., that the current contents of a file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy, I could compare the backup time with the last modification time of the file, but that method appears to be somewhat dubious. A couple of methods I could think of, but I don't know how: Compare the current file to CrashPlan's metadata about that file. This needs knowledge about the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable. Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way. I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs are backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet. I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the "archive_command" from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely). (And, yes, I'm doing regular pg_dumpall's, in addition to the above.)
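
    For reference, the archiving setup described above boils down to a couple of lines of PostgreSQL configuration (the archive directory is a placeholder):

      # postgresql.conf
      archive_mode = on
      archive_command = 'cp %p /var/backups/wal_archive/%f'

    Turning archive_command into a script that asks the backup tool to confirm the copy is the idea floated at the end; the missing piece is exactly the scriptable CrashPlan query being asked for.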

    Read the article

  • Cannot start Postgres daemon after installing with Yum

    - by Sean the Bean
    I was trying to install Postgres 9.1.4 on Fedora 17 using Yum. If I do:

      sudo yum install postgres-libs
      sudo yum install postgres
      sudo yum install postgis

    all the installs appear to complete successfully (i.e., no errors), but I cannot start the Postgres daemon using service postgresql initdb, as the official Postgres download guide (http://www.postgresql.org/download/linux/redhat/) says to do. The error says Unknown operation initdb. RPM tells me that it installed psql to /usr/bin/, which I confirmed. It turns out that only a few components installed correctly (psql, pg_dump, pg_config, and a few others), but most are missing (e.g., pg_ctl and postgres). I've tried several different configurations and had several of my coworkers (with more linux experience than me) look at it, but so far nothing has worked. Two of them have also run into similar issues installing Postgres using apt-get on Ubuntu, which makes me think the rpm isn't doing its job. It seems the only solution is to build it from source, which is more robust anyway, but of course it takes longer. I'm wondering, though, if anyone else has run into this issue and/or has successfully installed Postgres on either Fedora or Ubuntu using a package manager like yum or apt-get? Is the rpm broken?
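
    Two things stand out here, assuming Fedora's stock packaging: the server binaries (pg_ctl, postgres) live in a package named postgresql-server, not "postgres", and systemd-based Fedora dropped the SysV-style service postgresql initdb in favor of a setup helper. A sketch:

      sudo yum install postgresql-server postgresql-contrib
      sudo postgresql-setup initdb
      sudo systemctl enable postgresql.service
      sudo systemctl start postgresql.service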

    Read the article

  • Cannot log in to Postgres database despite setting password for user 'postgres'

    - by Serg
    I'm trying to use pgAdmin III to manage my Postgres database. Here are the commands I've run on my machine:

      sudo apt-get install postgresql

    Then I installed the pgAdmin III application:

      sudo apt-get install pgadmin3

    Next I focused on setting my username and password in order to log in:

      sudo -u postgres psql postgres

    Here I set my password:

      \password postgres

    Finally I just created my database:

      sudo -u postgres createdb repairsdatabase

    When I try to log in using pgAdmin III, I get the error: An error has occurred: Error connecting to the server: FATAL: Peer authentication failed for user "postgres"
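
    Peer authentication means connections over the local socket are checked against the operating-system username, so the password is never consulted. Two common ways out (a sketch; the pg_hba.conf path is the usual Ubuntu one):

      # Option 1: in pgAdmin, set Host to localhost so the connection uses
      # TCP, which Ubuntu's default pg_hba.conf authenticates with md5.

      # Option 2: switch the local rule from peer to md5 in
      # /etc/postgresql/<version>/main/pg_hba.conf:
      #   local   all   postgres   md5
      # then reload:
      sudo service postgresql reload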

    Read the article

  • "No version information available" - After installing Postgres

    - by intellidiot
    I have installed Postgres 9.1.4 on Ubuntu 12.04 (precise) 64-bit from http://www.openscg.com/se/postgresql/packages.jsp, but ever since installing, many commands (programs) throw these warnings in various combinations:

      /opt/postgres/9.1/lib/libxml2.so.2: no version information available
      /opt/postgres/9.1/lib/libcrypto.so.1.0.0: no version information available
      /opt/postgres/9.1/lib/libssl.so.1.0.0: no version information available

    Though this doesn't break anything, it gets very annoying. Is there a way to get rid of it without uninstalling Postgres?
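
    The warning appears when ordinary system binaries resolve libssl/libcrypto/libxml2 from the bundled /opt/postgres/9.1/lib copies, which lack Ubuntu's version symbols; that only happens if the installer put that directory on the global loader path. A sketch of how to check and undo that (the ld.so.conf.d file name is a guess):

      ldconfig -p | grep 'libssl\|libxml2'   # see which copies the loader prefers
      # if /opt/postgres/9.1/lib shows up, remove the file that adds it, e.g.:
      sudo rm /etc/ld.so.conf.d/postgres.conf
      sudo ldconfig
      # and set LD_LIBRARY_PATH only in the script that starts Postgres instead.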

    Read the article

  • decodeURIComponent for postgres

    - by maletin
    I use decodeURIComponent and encodeURIComponent in JavaScript. Before I store this data in a UTF-8 PostgreSQL database, I should decode it: $my_data = pg_escape_string(utf8_encode($_POST['my_data'])); I'm looking for a PostgreSQL function to convert JavaScript-encoded data.
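
    PostgreSQL has no built-in percent-decoder, but one can be written in PL/pgSQL. A minimal sketch (assumes a UTF-8 database; it decodes %XX escapes, including multi-byte UTF-8 sequences, and leaves everything else untouched):

      CREATE OR REPLACE FUNCTION uri_decode(input text) RETURNS text AS $$
      DECLARE
        result bytea := '';
        i int := 1;
        c text;
      BEGIN
        WHILE i <= length(input) LOOP
          c := substr(input, i, 1);
          IF c = '%' AND i + 2 <= length(input) THEN
            -- turn the two hex digits after % into a raw byte
            result := result || decode(substr(input, i + 1, 2), 'hex');
            i := i + 3;
          ELSE
            result := result || convert_to(c, 'UTF8');
            i := i + 1;
          END IF;
        END LOOP;
        RETURN convert_from(result, 'UTF8');
      END;
      $$ LANGUAGE plpgsql IMMUTABLE;

      -- usage: SELECT uri_decode('Caf%C3%A9');  -- returns 'Café'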

    Read the article
