Search Results

Search found 1505 results on 61 pages for 'postgresql 9 3'.

Page 13/61

  • How to write my own global lock / unlock functions for PostgreSQL

    - by rafalmag
    I have a PostgreSQL function (in plperlu), getTravelTime(integer, timestamp), which tries to select data for the specified ID and timestamp. If there is no data, or if the data is old, it downloads it from an external server (download time ~300 ms). Multiple processes use this database and this function. An error occurs when two processes both fail to find the data, both download it, and both try to insert into the travel_time table (the id/timestamp pair has to be unique).

    I thought about locks. Locking the whole table would block all processes and allow only one to proceed; I need to lock only on id and timestamp. pg_advisory_lock seems to lock only in the "current session", but my processes use their own sessions. So I tried to write my own lock/unlock functions. Am I doing it right? I use active waiting; how can I avoid this? Maybe there is a way to use pg_advisory_lock() as a global lock? My code:

        CREATE TABLE travel_time_locks (
            id_key integer NOT NULL,
            time_key timestamp without time zone NOT NULL,
            UNIQUE (id_key, time_key)
        );

        -- Function: mylock(integer, timestamp)
        DROP FUNCTION IF EXISTS mylock(integer, timestamp) CASCADE;
        -- Usage: SELECT mylock(1, '2010-03-28T19:45');
        -- Tries to take a global lock, similar to pg_advisory_lock(key, key).
        CREATE OR REPLACE FUNCTION mylock(id_input integer, time_input timestamp)
          RETURNS void AS
        $BODY$
        DECLARE
            rows int;
        BEGIN
            LOOP
                BEGIN
                    -- active waiting here !!!! :(
                    INSERT INTO travel_time_locks (id_key, time_key)
                    VALUES (id_input, time_input);
                EXCEPTION WHEN unique_violation THEN
                    CONTINUE;
                END;
                EXIT;
            END LOOP;
        END;
        $BODY$
          LANGUAGE plpgsql VOLATILE COST 1;

        -- Function: myunlock(integer, timestamp)
        DROP FUNCTION IF EXISTS myunlock(integer, timestamp) CASCADE;
        -- Usage: SELECT myunlock(1, '2010-03-28T19:45');
        -- Tries to release the global lock, similar to pg_advisory_unlock(key, key).
        CREATE OR REPLACE FUNCTION myunlock(id_input integer, time_input timestamp)
          RETURNS integer AS
        $BODY$
        BEGIN
            DELETE FROM ONLY travel_time_locks
            WHERE id_key = id_input AND time_key = time_input;
            RETURN 1;
        END;
        $BODY$
          LANGUAGE plpgsql VOLATILE COST 1;
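    For what it's worth, advisory locks are not per-session in the sense feared above: the key space is cluster-wide and visible to every session; "session" only describes who holds the lock until it is released or the connection ends. That makes them usable as the global lock wanted here, without active waiting. A minimal sketch, assuming an arbitrary namespace key of 42 and using hashtext() (an internal but long-standing function) to fold the id/timestamp pair into the second int4 key:

        -- Blocks until any other session holding the same (42, hash) key releases it.
        SELECT pg_advisory_lock(42, hashtext(1::text || '2010-03-28T19:45'));
        -- ...check travel_time, download and INSERT only if still missing...
        SELECT pg_advisory_unlock(42, hashtext(1::text || '2010-03-28T19:45'));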


  • C# ASP.NET EF MVC PostgreSQL error 23505: Duplicate key violates unique constraint

    - by user2721755
    EDIT: It was an issue with the database table; dropping and recreating the table's id column did the trick. Problem solved.

    I'm trying to build a web application that is connected to a PostgreSQL database. Results are displayed in a view with Kendo UI. When I try to add a new row (with the Kendo UI 'Add new record' button), I get error 23505: 'Duplicate key violates unique constraint'. My guess is that EF takes the id to insert from the beginning, not from the last row, because after 35 tries (35 is the number of rows in the table) - and 35 errors - adding works perfectly. Can someone help me understand what's wrong?

    Model:

        using System.ComponentModel.DataAnnotations;
        using System.ComponentModel.DataAnnotations.Schema;

        namespace MainConfigTest.Models
        {
            [Table("mainconfig", Schema = "public")]
            public class Mainconfig
            {
                [Column("id")] [Key] [Editable(false)]
                public int Id { get; set; }

                [Column("descr")] [Editable(true)]
                public string Descr { get; set; }

                [Column("hibversion")] [Required] [Editable(true)]
                public long Hibversion { get; set; }

                [Column("mckey")] [Required] [Editable(true)]
                public string Mckey { get; set; }

                [Column("valuexml")] [Editable(true)]
                public string Valuexml { get; set; }

                [Column("mcvalue")] [Editable(true)]
                public string Mcvalue { get; set; }
            }
        }

    Context:

        using System.Data.Entity;

        namespace MainConfigTest.Models
        {
            public class MainConfigContext : DbContext
            {
                public DbSet<Mainconfig> Mainconfig { get; set; }
            }
        }

    Controller:

        namespace MainConfigTest.Controllers
        {
            public class MainConfigController : Controller
            {
                #region Properties
                private Models.MainConfigContext db = new Models.MainConfigContext();
                private string mainTitle = "Mainconfig (Kendo UI)";
                #endregion

                #region Initialization
                public MainConfigController()
                {
                    ViewBag.MainTitle = mainTitle;
                }
                #endregion

                #region Ajax
                [HttpGet]
                public JsonResult GetMainconfig()
                {
                    int take = HttpContext.Request["take"] == null ? 5 : Convert.ToInt32(HttpContext.Request["take"]);
                    int skip = HttpContext.Request["skip"] == null ? 0 : Convert.ToInt32(HttpContext.Request["skip"]);
                    Array data = (from Models.Mainconfig c in db.Mainconfig select c)
                        .OrderBy(c => c.Id).ToArray().Skip(skip).Take(take).ToArray();
                    return Json(new Models.MainconfigResponse(data, db.Mainconfig.Count()),
                                JsonRequestBehavior.AllowGet);
                }

                [HttpPost]
                public JsonResult Create()
                {
                    try
                    {
                        Mainconfig itemToAdd = new Mainconfig()
                        {
                            Descr = Convert.ToString(HttpContext.Request["Descr"]),
                            Hibversion = Convert.ToInt64(HttpContext.Request["Hibversion"]),
                            Mckey = Convert.ToString(HttpContext.Request["Mckey"]),
                            Valuexml = Convert.ToString(HttpContext.Request["Valuexml"]),
                            Mcvalue = Convert.ToString(HttpContext.Request["Mcvalue"])
                        };
                        db.Mainconfig.Add(itemToAdd);
                        db.SaveChanges();
                        return Json(new { Success = true });
                    }
                    catch (InvalidOperationException ex)
                    {
                        return Json(new { Success = false, msg = ex });
                    }
                }

                // other methods
            }
        }

    Kendo UI script in the view:

        <script type="text/javascript">
            $(document).ready(function () {
                $("#config-grid").kendoGrid({
                    sortable: true,
                    pageable: true,
                    scrollable: false,
                    toolbar: ["create"],
                    editable: { mode: "popup" },
                    dataSource: {
                        pageSize: 5,
                        serverPaging: true,
                        transport: {
                            read: { url: '@Url.Action("GetMainconfig")', dataType: "json" },
                            update: {
                                url: '@Url.Action("Update")', type: "Post", dataType: "json",
                                complete: function (e) { $("#config-grid").data("kendoGrid").dataSource.read(); }
                            },
                            destroy: { url: '@Url.Action("Delete")', type: "Post", dataType: "json" },
                            create: {
                                url: '@Url.Action("Create")', type: "Post", dataType: "json",
                                complete: function (e) { $("#config-grid").data("kendoGrid").dataSource.read(); }
                            }
                        },
                        error: function (e) {
                            if (e.Success == false) { this.cancelChanges(); }
                        },
                        schema: {
                            data: "Data",
                            total: "Total",
                            model: {
                                id: "Id",
                                fields: {
                                    Id: { editable: false, nullable: true },
                                    Descr: { type: "string" },
                                    Hibversion: { type: "number", validation: { required: true } },
                                    Mckey: { type: "string", validation: { required: true } },
                                    Valuexml: { type: "string" },
                                    Mcvalue: { type: "string" }
                                }
                            }
                        }
                    } // end dataSource
                    // generate columns etc.

    Mainconfig table structure:

        id serial NOT NULL,
        descr character varying(200),
        hibversion bigint NOT NULL,
        mckey character varying(100) NOT NULL,
        valuexml character varying(8000),
        mcvalue character varying(200),
        CONSTRAINT mainconfig_pkey PRIMARY KEY (id),
        CONSTRAINT mainconfig_mckey_key UNIQUE (mckey)

    Any help will be appreciated.
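    For the record, the classic cause of this exact symptom (inserts fail N times, where N is the current row count, then start working) is a serial column whose sequence lags behind MAX(id), e.g. after rows were imported with explicit ids. A hedged one-liner to resynchronize it, assuming the default serial wiring:

        -- Make the next nextval() return MAX(id) + 1:
        SELECT setval(pg_get_serial_sequence('mainconfig', 'id'),
                      COALESCE(MAX(id), 0) + 1, false)
        FROM mainconfig;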


  • Errors with parameter datatype in PostgreSQL query

    - by John
    I'm trying to execute a query against PostgreSQL using the following code. It's written in C/C++, and I keep getting the following error when declaring a cursor:

        DECLARE CURSOR failed: ERROR: could not determine data type of parameter $1

    Searching on here and on Google, I can't find a solution. Can anyone find where I have made an error and why this is happening? Thanks!

        void searchdb(PGconn *conn, char *name, char *offset)
        {
            // Will hold the number of fields in the table
            int nFields;

            // Start a transaction block
            PGresult *res = PQexec(conn, "BEGIN");
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                printf("BEGIN command failed: %s", PQerrorMessage(conn));
                PQclear(res);
                exit_nicely(conn);
            }
            // Clear result
            PQclear(res);
            printf("BEGIN command - OK\n");

            // Set the values to use
            const char *values[3] = {(char *)name, (char *)RESULTS_LIMIT, (char *)offset};
            // Calculate the lengths of each of the values
            int lengths[3] = {strlen((char *)name), sizeof(RESULTS_LIMIT), sizeof(offset)};
            // State which parameters are binary
            int binary[3] = {0, 0, 1};

            res = PQexecParams(conn,
                "DECLARE emprec CURSOR for "
                "SELECT name, id, 'Events' as source FROM events_basic WHERE name LIKE '$1::varchar%' UNION ALL "
                "SELECT name, fsq_id, 'Venues' as source FROM venues_cache WHERE name LIKE '$1::varchar%' UNION ALL "
                "SELECT name, geo_id, 'Cities' as source FROM static_cities WHERE name LIKE '$1::varchar%' "
                "  OR FIND_IN_SET('$1::varchar%', alternate_names) != 0 "
                "LIMIT $2::int4 OFFSET $3::int4",
                3,       // number of parameters
                NULL,    // ignore the Oid field
                values,  // values to substitute $1, $2 and $3
                lengths, // the lengths, in bytes, of each of the parameter values
                binary,  // whether the values are binary or not
                0);      // we want the result in text format

            // Fetch rows from table
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                printf("DECLARE CURSOR failed: %s", PQerrorMessage(conn));
                PQclear(res);
                exit_nicely(conn);
            }
            // Clear result
            PQclear(res);

            res = PQexec(conn, "FETCH ALL in emprec");
            if (PQresultStatus(res) != PGRES_TUPLES_OK)
            {
                printf("FETCH ALL failed");
                PQclear(res);
                exit_nicely(conn);
            }

            // Get the field names
            nFields = PQnfields(res);

            // Prepare the header with the table field names
            printf("\nFetch record:");
            printf("\n********************************************************************\n");
            for (int i = 0; i < nFields; i++)
                printf("%-30s", PQfname(res, i));
            printf("\n********************************************************************\n");

            // Next, print out the record for each row
            for (int i = 0; i < PQntuples(res); i++)
            {
                for (int j = 0; j < nFields; j++)
                    printf("%-30s", PQgetvalue(res, i, j));
                printf("\n");
            }
            PQclear(res);

            // Close the cursor
            res = PQexec(conn, "CLOSE emprec");
            PQclear(res);

            // End the transaction
            res = PQexec(conn, "END");
            // Clear result
            PQclear(res);
        }
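    A few things stand out, hedged as diagnosis rather than a tested fix. Inside a quoted literal, '$1::varchar%' is just text; libpq never substitutes there, so the statement the server sees references $2 and $3 but never $1, and with no paramTypes supplied it cannot deduce $1's type, which is exactly the error reported. (Also, FIND_IN_SET is a MySQL function with no PostgreSQL counterpart, and sizeof(RESULTS_LIMIT)/sizeof(offset) measure the macro and the pointer, not the strings.) A sketch with the placeholders moved outside the literals and all three parameters passed as plain C strings (for text parameters the lengths and formats arrays may simply be NULL):

        /* Placeholders must sit outside the quotes; build the pattern in SQL. */
        res = PQexecParams(conn,
            "DECLARE emprec CURSOR FOR "
            "SELECT name, id, 'Events' AS source FROM events_basic "
            "  WHERE name LIKE $1::varchar || '%' UNION ALL "
            "SELECT name, fsq_id, 'Venues' AS source FROM venues_cache "
            "  WHERE name LIKE $1::varchar || '%' "
            "LIMIT $2::int4 OFFSET $3::int4",
            3, NULL, values, NULL, NULL, 0);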


  • Why can't I unblock postgres with shorewall?

    - by ryeguy
    I can't seem to unblock the port needed for Postgres using Shorewall. I am developing a PHP app on my Windows machine here, and then I upload it to my Linux box to actually use it. The Linux box runs the PHP files as well as hosting the DB server. Since I need it working from both machines, in my PHP code I refer to the database by its full IP instead of localhost. I can easily connect to Postgres from my Windows machine, but ironically, my PHP app can't connect to Postgres even though it's on the same box. Here's what I have in /etc/shorewall/rules:

        #macro/action      src  dest
        PostgreSQL/ACCEPT  net  $FW
        PostgreSQL/ACCEPT  loc  $FW
        PostgreSQL/ACCEPT  loc  dmz
        PostgreSQL/ACCEPT  net  dmz
        PostgreSQL/ACCEPT  loc  net
        PostgreSQL/ACCEPT  dmz  $FW
        PostgreSQL/ACCEPT  dmz  loc
        PostgreSQL/ACCEPT  dmz  net
        PostgreSQL/ACCEPT  dmz  dmz

    Clearly I have a ton of crap there. The first line is all I needed to make it allow a connection from my Windows machine. All the lines after it are just me trying everything to get it to work. What am I missing?


  • How do I fix the issue causing the "incomplete startup packet" log message when trying to implement replication in PostgreSQL?

    - by colour me brad
    I've got two cloud servers running Ubuntu 13.04 and PostgreSQL 9.2. I've primarily used this blog post to aid me in setting things up. However, to do the initial database dump to the slave I'm using the pg_start_backup/pg_stop_backup strategy from this other blog post. I've read through the docs and the Postgres wikis as well. I ran into several problems I was able to solve, but I can't get past this wretched "the database is starting up" failure. I'm not sure if seeing "cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory" after "consistent recovery state reached" is normal or the first sign of a problem. The searching I've done on "the database is starting up" and "incomplete startup packet" tells me that something is sending empty TCP packets to the slave. The only thing that even knows about the slave is the master, so I'm not sure why it's sending empty packets... Has anyone worked with this and have an idea what might be going wrong? The postgres log on the slave looks like this:

        2013-08-26 13:01:38 CDT LOG: entering standby mode
        2013-08-26 13:01:38 CDT LOG: restored log file "000000010000000000000039" from archive
        2013-08-26 13:01:38 CDT LOG: incomplete startup packet
        2013-08-26 13:01:39 CDT LOG: redo starts at 0/39000020
        2013-08-26 13:01:39 CDT LOG: consistent recovery state reached at 0/390000E0
        cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory
        2013-08-26 13:01:39 CDT LOG: streaming replication successfully connected to primary
        2013-08-26 13:01:39 CDT FATAL: the database system is starting up
        2013-08-26 13:01:39 CDT FATAL: the database system is starting up
        2013-08-26 13:01:40 CDT FATAL: the database system is starting up
        2013-08-26 13:01:40 CDT FATAL: the database system is starting up
        2013-08-26 13:01:41 CDT FATAL: the database system is starting up
        2013-08-26 13:01:42 CDT FATAL: the database system is starting up
        2013-08-26 13:01:42 CDT FATAL: the database system is starting up
        2013-08-26 13:01:43 CDT FATAL: the database system is starting up
        2013-08-26 13:01:43 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT LOG: incomplete startup packet
        2013-08-26 13:03:27 CDT FATAL: the database system is starting up
        2013-08-26 13:03:27 CDT FATAL: the database system is starting up
        2013-08-26 13:03:30 CDT FATAL: the database system is starting up
        2013-08-26 13:03:30 CDT FATAL: the database system is starting up

    thanks! brad
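    Two hedged observations: the cp error is normal on a standby (it polls the archive for a WAL segment that does not exist yet), and "FATAL: the database system is starting up" is what a 9.2 standby answers to every client connection while hot_standby is off, so anything probing the port (which would also explain the "incomplete startup packet" lines, typically a monitor or health check that connects and sends nothing) gets refused. A sketch of the two checks:

        -- In postgresql.conf on the slave, allow read-only connections:
        --   hot_standby = on
        -- On the master, confirm the slave is attached and streaming:
        SELECT client_addr, state, sent_location, replay_location
        FROM pg_stat_replication;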


  • PostgreSQL pg_hba.conf with "password" auth wouldn't work with PHP pg_connect?

    - by tftd
    I've recently experimented with the settings in pg_hba.conf. I read the PostgreSQL documentation and thought that the "password" auth method was what I wanted. There are many people with access to the server PostgreSQL runs on, so I don't want the "trust" method. So I changed it. But then PHP stopped working with the database. The message I get is:

        Warning: pg_connect(): Unable to connect to PostgreSQL server:
        FATAL: password authentication failed for user "myuser"
        in /my/path/to/connection/class.php on line 35

    It is kind of strange, because I can connect via phpPgAdmin without any problems, and I can also connect from my home computer with psql - again without any problems. This is my pg_hba.conf:

        # TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
        # "local" is for Unix domain socket connections only
        local   all       all                 password
        # IPv4 local connections:
        host    all       all   127.0.0.1/32  password
        # IPv6 local connections:
        host    all       all   ::1/128       password

    The connection string I'm using with pg_connect is:

        $connect_string = "host=localhost port=5432 dbname=mydbname user=auser password=apassword";
        $dbConnection = pg_connect($connection_string);

    Does anybody know why this is happening? Did I misconfigure something?
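    One detail worth checking, hedged since it may only be a transcription slip in the post: the snippet defines $connect_string but passes $connection_string to pg_connect(), so the connection may not be carrying the credentials you think it is. If the names do match in the real code, re-setting the password rules out a stale or differently-stored one:

        -- As a superuser; the reload also picks up any pg_hba.conf edits.
        ALTER USER auser WITH PASSWORD 'apassword';
        SELECT pg_reload_conf();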


  • Storing info in a PostgreSQL database issue

    - by MrEnder
    OK, I am making a registration system for my website. The first page asks for some personal info:

        if ($error == false)
        {
            $query = pg_query("INSERT INTO chatterlogins(firstName, lastName, gender, password, ageMonth, ageDay, ageYear, email, createDate)
                VALUES('$firstNameSignup', '$lastNameSignup', '$genderSignup', md5('$passwordSignup'), $monthSignup, $daySignup, $yearSignup, '$emailSignup', now());");

            $query = pg_query("INSERT INTO chatterprofileinfo(email, lastLogin) VALUES('$emailSignup', now());");

            $userNameSet = $emailSignup;
            $_SESSION['$userNameSet'] = $userNameSet;

            header('Location: signup_step2.php'.$rdruri);
        }

    The first query works. The second query runs but doesn't save the email. The session doesn't work, but the header works and sends me to the next page. I get no errors, even if I comment out the header. On the next page:

        @session_start();
        $conn = pg_connect("host=localhost dbname=brittains_db user=brittains password=XXXX");

        $signinCheck = false;
        $checkForm = "";

        if (isset($_SESSION['$userName']))
        {
            $userName = $_SESSION['$userName'];
            $signinCheck = true;
            $query = pg_query("UPDATE chatterprofileinfo SET lastLogin='now()' WHERE email='$userName'");
        }
        if (isset($_SESSION['$userNameSet']))
        {
            $userName = $_SESSION['$userNameSet'];
            $signinCheck = true;
            $query = pg_query("UPDATE chatterprofileinfo SET lastLogin='now()' WHERE email='$userName'");
        }

    That is the top of the page, starting the session depending on whether you're logged in or not. Then, if I enter the info here and put it through this:

        if ($error == false)
        {
            $query = pg_query("UPDATE chatterprofileinfo SET aboutSelf='$aboutSelf', hobbies='$hobbies', music='$music',
                tv='$tv', sports='$sports', lastLogin='now()' WHERE email='$userName'") or exit(pg_last_error());

            //header('Location: signup_step3.php'.$rdruri);
        }

    nothing shows up in my database from this. I have no idea where I went wrong. The website is http://opentech.durhamcollege.ca/~intn2201/brittains/chatter/
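    As an aside, interpolating form values straight into SQL like this invites injection, and $_SESSION['$userNameSet'] (single quotes) stores under the literal key "$userNameSet" rather than the variable's value. A hedged sketch of the first insert rewritten for pg_query_params(), which keeps user input out of the SQL text entirely (columns taken from the post):

        -- Run via: pg_query_params($conn, $sql, array($firstName, ..., $email));
        INSERT INTO chatterlogins
            (firstName, lastName, gender, password, ageMonth, ageDay, ageYear, email, createDate)
        VALUES ($1, $2, $3, md5($4), $5, $6, $7, $8, now());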


  • Date query with Hibernate on Timestamp Column in PostgreSQL

    - by Shashikant Kore
    A table has a timestamp column; a sample value could be 2010-03-30 13:42:42. With Hibernate, I am doing a range query Restrictions.between("column-name", fromDate, toDate). The Hibernate mapping for this column is as follows:

        <property name="orderTimestamp" column="order_timestamp" type="java.util.Date" />

    Let's say I want to find all the records that have the date 30th March 2010 or 31st March 2010. A range query on this field is done as follows:

        Date fromDate = new SimpleDateFormat("yyyy-MM-dd").parse("2010-03-30");
        Date toDate = new SimpleDateFormat("yyyy-MM-dd").parse("2010-03-31");
        Expression.between("orderTimestamp", fromDate, toDate);

    This doesn't work. The query is converted to the respective timestamps "2010-03-30 00:00:00" and "2010-03-31 00:00:00", so all the records for 31st March 2010 are ignored. A simple solution could be to use "2010-03-31 23:59:59" as the end date, but I would like to know if there is a way to match only the date part of the timestamp column. Also, is Expression.between() inclusive of both limits? The documentation doesn't throw any light on this.
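    Restrictions.between() translates to SQL BETWEEN, which is inclusive at both ends. Rather than matching "only the date part", the usual approach is a half-open range, sketched here in SQL against an assumed orders table; on the Hibernate side the same shape is Restrictions.ge(...) plus Restrictions.lt(...) with toDate set to midnight of the following day:

        -- Catches every instant on March 30th and 31st, whatever the time part:
        SELECT *
        FROM orders
        WHERE order_timestamp >= DATE '2010-03-30'
          AND order_timestamp <  DATE '2010-04-01';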


  • Installing psycopg2 (PostgreSQL) in a virtualenv on Windows

    - by StackUnderflow
    I installed psycopg2 in a virtualenv using easy_install psycopg2. I did not see any errors, and it looks like the installation went fine; there is an egg file created in the site-packages dir for psycopg2. But when I run import psycopg2 in the interpreter, I get the following error. Any clue how I can fix it? Is there any other way to install psycopg2 in a virtualenv?

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "build\bdist.win32\egg\psycopg2\__init__.py", line 69, in <module>
          File "build\bdist.win32\egg\psycopg2\_psycopg.py", line 7, in <module>
          File "build\bdist.win32\egg\psycopg2\_psycopg.py", line 6, in __bootstrap__

    Thanks.


  • Can't get Heroku to work with Rails 3.x and PostgreSQL

    - by framomo86
    I followed the official Heroku guides to push a Rails project to Heroku. The application.rb file is OK, and I added the pg gem and database.yml in the right way. When I push to Heroku I get:

        -----> Preparing app for Rails asset pipeline
               Detected manifest.yml, assuming assets were compiled locally

    But when I open the app via heroku open I get an error. I ran heroku logs and got this:

        Started GET "/" for 93.45.227.255 at 2012-10-11 13:28:04 +0000
        2012-10-11T13:28:04+00:00 app[web.1]: Processing by ProductsController#index as HTML
        2012-10-11T13:28:04+00:00 app[web.1]: : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull
        2012-10-11T13:28:04+00:00 app[web.1]: ActiveRecord::StatementInvalid (PG::Error: ERROR: relation "products" does not exist
        2012-10-11T13:28:04+00:00 app[web.1]: LINE 4: WHERE a.attrelid = '"products"'::regclass
        2012-10-11T13:28:04+00:00 heroku[router]: GET gift4.herokuapp.com/ dyno=web.1 queue=0 wait=0ms service=203ms status=500 bytes=643
        2012-10-11T13:28:04+00:00 app[web.1]: ):
        2012-10-11T13:28:04+00:00 app[web.1]: ON a.attrelid = d.adrelid AND a.attnum = d.adnum
        2012-10-11T13:28:04+00:00 app[web.1]: ^
        2012-10-11T13:28:04+00:00 app[web.1]: Completed 500 Internal Server Error in 72ms
        2012-10-11T13:28:04+00:00 app[web.1]: FROM pg_attribute a LEFT JOIN pg_attrdef d
        2012-10-11T13:28:04+00:00 app[web.1]: WHERE a.attrelid = '"products"'::regclass
        2012-10-11T13:28:04+00:00 app[web.1]: AND a.attnum > 0 AND NOT a.attisdropped
        2012-10-11T13:28:04+00:00 app[web.1]: ORDER BY a.attnum
        2012-10-11T13:28:04+00:00 app[web.1]: app/controllers/products_controller.rb:5:in `index'
        2012-10-11T13:28:04+00:00 heroku[router]: GET gift4.herokuapp.com/favicon.ico d

    So I tried heroku run rake db:reset and got:

        Heroku client internal error.
        !   Search for help at: https://help.heroku.com
        !   Or report a bug at: https://github.com/heroku/heroku/issues/new

            Error:       Operation timed out - connect(2) (Errno::ETIMEDOUT)
            Backtrace:   /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/client/rendezvous.rb:39:in `initialize'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/client/rendezvous.rb:39:in `open'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/client/rendezvous.rb:39:in `block in start'
                         /Users/francescochecco/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/timeout.rb:68:in `timeout'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/client/rendezvous.rb:31:in `start'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/command/run.rb:125:in `rendezvous_session'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/command/run.rb:112:in `run_attached'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/command/run.rb:21:in `index'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/command.rb:206:in `run'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/cli.rb:28:in `start'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/bin/heroku:16:in `<top (required)>'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/bin/heroku:19:in `load'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/bin/heroku:19:in `<main>'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/bin/ruby_noexec_wrapper:14:in `eval'
                         /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/bin/ruby_noexec_wrapper:14:in `<main>'
            Command:     heroku run rake db:reset
            Version:     heroku-gem/2.32.6 (x86_64-darwin11.3.0) ruby/1.9.3 autoupdate

    I tried everything. Can anyone help?
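    For what it's worth, the 'relation "products" does not exist' error is the classic sign that the migrations have never been run against Heroku's database (pushing the code does not run them), and heroku run attaches to a dyno over outbound port 5000, which the ETIMEDOUT rendezvous error above suggests is blocked on the local network. A hedged first step, once that port is reachable:

        # Create the schema on Heroku's database; a git push never runs migrations.
        heroku run rake db:migrate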


  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello. First time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and am not sure how to go about fixing it. The database itself has 44 tables, and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The database is built via JMDB using the flat files from IMDB. The SQL query I am about to show is from that program (which also experiences very slow search times). I have tried to include as much information as I can, such as the explain plan:

        "QUERY PLAN"
        "HashAggregate (cost=46492.52..46493.50 rows=98 width=46)"
        "  Output: public.movies.title, public.movies.movieid, public.movies.year"
        "  -> Append (cost=39094.17..46491.79 rows=98 width=46)"
        "     -> HashAggregate (cost=39094.17..39094.87 rows=70 width=46)"
        "          Output: public.movies.title, public.movies.movieid, public.movies.year"
        "          -> Seq Scan on movies (cost=0.00..39093.65 rows=70 width=46)"
        "               Output: public.movies.title, public.movies.movieid, public.movies.year"
        "               Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))"
        "     -> Nested Loop (cost=0.00..7395.94 rows=28 width=46)"
        "          Output: public.movies.title, public.movies.movieid, public.movies.year"
        "          -> Seq Scan on akatitles (cost=0.00..7159.24 rows=28 width=4)"
        "               Output: akatitles.movieid, akatitles.language, akatitles.title,"
        "               Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))"
        "          -> Index Scan using movies_pkey on movies (cost=0.00..8.44 rows=1 width=46)"
        "               Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid"
        "               Index Cond: (public.movies.movieid = akatitles.movieid)"

    The query itself:

        SELECT * FROM (
            (SELECT DISTINCT title, movieid, year
             FROM movies
             WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}'))
            UNION
            (SELECT movies.title, movies.movieid, movies.year
             FROM movies
             INNER JOIN akatitles ON movies.movieid = akatitles.movieid
             WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))
        ) AS union_tmp2;

    It returns 612 rows in 9078 ms. The database backup (plain text) is 1.61 GB. It's a really complex query and I am not fully cognizant of it; like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards, Anthoni
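    A hedged pointer: a leading-wildcard pattern like ILIKE '%Babe%' can never use a plain b-tree index, which is why both branches of the plan sequential-scan. On newer PostgreSQL (9.1+), trigram GIN indexes from the pg_trgm module can serve exactly these LIKE/ILIKE '%...%' filters; a sketch (index names invented):

        CREATE EXTENSION pg_trgm;  -- on pre-9.1, install contrib/pg_trgm by script instead
        -- Trigram GIN indexes accelerate substring pattern matches:
        CREATE INDEX movies_title_trgm ON movies USING gin (title gin_trgm_ops);
        CREATE INDEX akatitles_title_trgm ON akatitles USING gin (title gin_trgm_ops);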


  • PostgreSQL - pgloader installation?

    - by KittyYoung
    Granted, this is a dumb question, but it's still a mystery to someone like me, who's never done it before... I'm trying to install pgloader, but I can't seem to find any documentation. I'm running MAMP on Mac OS X. I've already installed tcllib, and am about to do:

        wget http://pgfoundry.org/frs/download.php/233/pgloader-1.0.tar.gz
        tar zxvf pgloader-1.0.tar.gz

    I'm wondering what directory I need to actually untar pgloader into. Is there anything else that I need to do to get it to work?


  • PostgreSQL GROUP_CONCAT equivalent?

    - by KnockKnockWhosThere
    I have a table and I'd like to pull one row per id with the field values concatenated. In my table, for example, I have this:

        TM67 |  4 | 32556
        TM67 |  9 | 98200
        TM67 | 72 | 22300
        TM99 |  2 | 23009
        TM99 |  3 | 11200

    And I'd like to output:

        TM67 | 4,9,72 | 32556,98200,22300
        TM99 | 2,3    | 23009,11200

    In MySQL I was able to use GROUP_CONCAT, but that doesn't seem to work here... Is there an equivalent, or another way to accomplish this?
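    For reference, a sketch of the standard equivalents (table and column names invented): string_agg() is the direct counterpart from PostgreSQL 9.0, and on 8.4 array_agg() plus array_to_string() produces the same result:

        -- 9.0 and later:
        SELECT id, string_agg(num::text, ','), string_agg(val::text, ',')
        FROM my_table
        GROUP BY id;

        -- 8.4:
        SELECT id,
               array_to_string(array_agg(num), ','),
               array_to_string(array_agg(val), ',')
        FROM my_table
        GROUP BY id;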


  • PostgreSQL: keep 2 sequences synchronized

    - by Giovanni Di Milia
    Is there a way to keep two sequences synchronized in Postgres? I mean, if I have:

        table_A_id_seq = 1
        table_B_id_seq = 1

    and I execute SELECT nextval('table_A_id_seq'::regclass), I want table_B_id_seq to take the same value as table_A_id_seq, and obviously it must work the same way in the other direction. I need two different sequences because I have to hack around some constraints I have in Django (which I cannot solve there).
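    A hedged workaround, sketched with assumed names: there is no built-in way to couple two sequences, but letting both tables draw from a single shared sequence gives ids that are unique and monotonic across both tables, which is usually what the synchronization is after:

        CREATE SEQUENCE shared_id_seq;
        ALTER TABLE table_a ALTER COLUMN id SET DEFAULT nextval('shared_id_seq');
        ALTER TABLE table_b ALTER COLUMN id SET DEFAULT nextval('shared_id_seq');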


  • Problems with $libdir on PostgreSQL

    - by Joe Germuska
    In short, my question is: why doesn't $libdir work on my PostgreSQL installation?

        CREATE FUNCTION st_box2d_in(cstring)
            RETURNS box2d
            AS '$libdir/liblwgeom', 'BOX2DFLOAT4_in'
            LANGUAGE c IMMUTABLE STRICT;

    yields an error:

        could not access file "$libdir/liblwgeom": No such file or directory

    while

        CREATE FUNCTION st_box2d_in(cstring)
            RETURNS box2d
            AS '/usr/local/pgsql/lib/liblwgeom', 'BOX2DFLOAT4_in'
            LANGUAGE c IMMUTABLE STRICT;

    works correctly. The output of

        % pg_config --pkglibdir
        /usr/local/pgsql/lib

    appears to be correct.
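    A hedged diagnosis: $libdir is substituted by the server from its own compile-time pkglibdir, not from whatever pg_config happens to be first in the shell's PATH. When several PostgreSQL builds coexist, the running server may have been compiled with a different pkglibdir than /usr/local/pgsql/lib. From inside the server you can at least confirm which build you are talking to:

        SELECT version();           -- which server build am I actually connected to?
        SHOW dynamic_library_path;  -- normally '$libdir', substituted at load time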


  • How to provide a date variable with an interval built from an integer variable in PostgreSQL

    - by Lucius Rutilius Lupus
    I am trying to subtract a number of years from a specific date. For a literal, the correct syntax is:

        <date> - interval '5 years';

    But I don't want to subtract a fixed number of years; the user will provide the number as a parameter. I have tried the following (the variable name is years):

        date + interval '% years', years;

    I am getting an error, and it doesn't let me do it that way. What would be the right way to do it?
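    Two standard spellings, sketched with assumed names (some_date, years); both build the interval from the variable instead of trying to parameterize the literal:

        -- Multiply a unit interval by the integer variable:
        SELECT some_date - years * interval '1 year';

        -- Or build the interval text and cast it:
        SELECT some_date - (years || ' years')::interval;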


  • Slow insert speed in PostgreSQL memory tablespace

    - by Prashant
    Hi, I have a requirement to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). The number of columns in one record is 25. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM; with that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the Postgres config:

        Indexes: no
        fsync:   false
        logging: disabled

    Other information:

        Tablespace: RAM
        Number of columns in one row: 25 (mostly integers)
        CPU: 4 core, 2.5 GHz
        RAM: 48 GB

    I am wondering why a single insert query is taking around 0.2 ms on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
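    One hedged observation: once storage is out of the picture, per-statement overhead (parse, plan, one round trip per row) dominates, and ~0.2 ms per INSERT is roughly that cost. COPY, or multi-row VALUES lists, amortizes it across many rows; a sketch with invented table and column names:

        -- COPY streams all rows over a single command:
        COPY measurements (sensor_id, ts, v1, v2) FROM STDIN;

        -- Or batch many rows per INSERT statement:
        INSERT INTO measurements (sensor_id, ts, v1, v2)
        VALUES (1, now(), 0.1, 0.2),
               (2, now(), 0.3, 0.4);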


  • PostgreSQL error - ERROR: input is out of range

    - by CaffeineIV
    The function below keeps returning this error message. I thought that maybe the double precision field type was what was causing this, and I tried to use CAST, but either that's not it, or I didn't do it right... Help? Here's the error:

        ERROR: input is out of range
        CONTEXT: PL/pgSQL function "calculate_distance" line 7 at RETURN
        ********** Error **********
        ERROR: input is out of range
        SQL state: 22003
        Context: PL/pgSQL function "calculate_distance" line 7 at RETURN

    And here's the function:

        CREATE OR REPLACE FUNCTION calculate_distance(character varying, double precision, double precision, double precision, double precision)
          RETURNS double precision AS
        $BODY$
        DECLARE
            earth_radius double precision;
        BEGIN
            earth_radius := 3959.0;
            RETURN earth_radius * acos(sin($2 / 57.2958) * sin($4 / 57.2958)
                 + cos($2 / 57.2958) * cos($4 / 57.2958) * cos(($5 / 57.2958) - ($3 / 57.2958)));
        END;
        $BODY$
          LANGUAGE plpgsql VOLATILE COST 100;

        ALTER FUNCTION calculate_distance(character varying, double precision, double precision, double precision, double precision) OWNER TO postgres;

    I tried changing (unsuccessfully) that RETURN line to:

        RETURN CAST( (earth_radius * acos(sin($2 / 57.2958) * sin($4 / 57.2958)
             + cos($2 / 57.2958) * cos($4 / 57.2958) * cos(($5 / 57.2958) - ($3 / 57.2958))) ) AS text);
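    A hedged diagnosis: acos() only accepts arguments in [-1, 1], and floating-point rounding pushes this expression fractionally outside that range whenever the two points are identical or nearly so. The usual fix is clamping the cosine before taking the arc cosine, e.g.:

        -- Clamp into acos()'s legal domain before calling it:
        RETURN earth_radius * acos(
            LEAST(1.0, GREATEST(-1.0,
                  sin($2 / 57.2958) * sin($4 / 57.2958)
                + cos($2 / 57.2958) * cos($4 / 57.2958)
                  * cos(($5 / 57.2958) - ($3 / 57.2958)))));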


  • PostgreSQL cross server query?

    - by AlexRednic
    Is there a way that I might query a database located on "Server 2" and get the data on "Server 1"? That is, return a set of records from a remote server to my local one. PS: this is not about a cross-database query on the same server; I know how to do that with dblink.
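    dblink itself covers this case: its first argument is a full libpq connection string, which may point at another host. A sketch (hostname, credentials, and columns are placeholders, and the dblink module must be installed locally):

        -- The AS clause must declare the remote result's column types:
        SELECT *
        FROM dblink('host=server2 port=5432 dbname=remotedb user=me password=secret',
                    'SELECT id, name FROM remote_table')
             AS t(id integer, name text);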


  • PostgreSQL 8.4: reading OID-style BLOBs with Hibernate

    - by peter
    I am getting a weird case when querying Postgres 8.4 for some records with BLOBs (of type OID) with Hibernate. The query returns all right, but when my code wants to read the content of the BLOB with the simple code below, it gets 0 bytes back:

        public static byte[] readBlob(Blob blob) throws Exception {
            InputStream is = null;
            try {
                is = blob.getBinaryStream();
                return org.apache.commons.io.IOUtils.toByteArray(is);
            } finally {
                if (is != null)
                    try { is.close(); } catch (Exception e) {}
            }
        }

    The funny thing is that I've only been seeing this behavior since I started adding more than one such record to the table. The underlying JDBC library is type 3 (postgresql 8.4-701). Can someone give me a hint as to how to solve this issue? Thanks, Peter
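    One hedged guess worth checking: PostgreSQL large objects are only readable within the transaction that opened them, so if the Blob is streamed with auto-commit on (or after the session's transaction has already ended), the stream can come back empty. The server-side API shows the same constraint (the OID below is made up; 262144 is INV_READ):

        BEGIN;
        -- A large-object descriptor is valid only until this transaction ends:
        SELECT lo_open(16435, 262144);
        COMMIT;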


  • Login script for PostgreSQL and PHP not working =[

    - by MrEnder
    OK, I'm quite new at logins and whatnot, so bear with me here, lol, but I've got to learn, so don't discourage me. This is what I tried so far:

        <?php
        $error = "";
        $conn = pg_connect("host=localhost dbname=brittains_db user=brittains password=XXXX");

        $sql = "SELECT * FROM logins";
        $result = pg_query($conn, $sql);

        if ($_SERVER["REQUEST_METHOD"] == "GET")
        {
            $userName = "";
            $password = "";
        }
        else if ($_SERVER["REQUEST_METHOD"] == "POST")
        {
            $userName = trim($_POST["userNameLogin"]);
            $password = trim($_POST["passwordLogin"]);

            if (pg_fetch_result($results, $userName, "userName") == true
                && pg_fetch_result($results, $password, "userName") == true)
            {
                setcookie("userIDforDV", $userName, time() + 43200);
            }
            else
            {
                $error = "Your username and or password is incorrect";
            }
        }

        $userName = $_COOKIE['userIDforDV'];
        if (isset($userName) && $userName != "")
        {
            echo "Welcome " . $userName;
        }
        echo $error;
        ?>

        <form>
            <table>
                <tr>
                    <td class="signupTd">User Name:&nbsp;</td>
                    <td><input type="text" name="userNameLogin" value="" size="20" /></td>
                </tr>
                <tr>
                    <td class="signupTd">Password:&nbsp;</td>
                    <td><input type="password" name="passwordLogin" value="" size="20" /></td>
                </tr>
                <tr>
                    <td class="signupTd" colspan="2"><input type="submit" name="submit" value="Submit"/></td>
                </tr>
            </table>
        </form>

    That was the idea I came up with... but it's probably a really bad idea and it doesn't work... How might I go about this properly? I need really detailed descriptions, please. Thanks a ton, Shelby
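    Two things jump out, hedged as observations: the code fetches into $result but reads from $results, and pg_fetch_result() takes a row number, not a value to search for, so it cannot look a user up this way. A sketch of the lookup itself, meant to be executed through pg_query_params() so form input never lands inside the SQL string (column names assumed from the post):

        -- Matches only when both credentials agree; store md5 of the password,
        -- not the plaintext, and compare against md5($2):
        SELECT userName
        FROM logins
        WHERE userName = $1 AND password = md5($2);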


  • Recording SELECT statements in PostgreSQL 8.4

    - by David Anniwell
    Hi all. I've got a table which contains sensitive data, and according to our data-protection policy we have to keep a record of every read/write of the data, including a row identifier and the user who accessed the table. The writing is no issue, using triggers, but obviously triggers aren't supported for SELECT statements. What's the best method of doing this? I've looked at rules, but I can't get them to INSERT into a table, and I've tried logging every query, but that doesn't seem to log SELECT statements. Ideally, for security, I'd like to keep the log within a table in the database, but logging to a file is fine too. Thanks, David
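    One approach that keeps the log in a table, sketched with assumed table and column names: revoke direct SELECT on the sensitive table and route reads through a SECURITY DEFINER function that records the access before returning the row:

        CREATE OR REPLACE FUNCTION read_sensitive(p_id integer)
        RETURNS SETOF sensitive_data AS $$
        BEGIN
            -- Audit first, so even an abandoned fetch leaves a trace:
            INSERT INTO sensitive_audit (row_id, reader, read_at)
            VALUES (p_id, current_user, now());
            RETURN QUERY SELECT * FROM sensitive_data WHERE id = p_id;
        END;
        $$ LANGUAGE plpgsql SECURITY DEFINER;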


  • Return pre-UPDATE column values in PostgreSQL without using triggers, functions or other "magic"

    - by Python Larry
    I have a related question, but this is another part of MY puzzle. I would like to get the OLD VALUE of a column from a row that was UPDATEd, WITHOUT using triggers (nor stored procedures, nor any other extra, non-SQL/query entities). The query I have is like this:

        UPDATE my_table
        SET processing_by = our_id_info  -- unique to this instance
        WHERE trans_nbr IN (
            SELECT trans_nbr
            FROM my_table
            GROUP BY trans_nbr
            HAVING COUNT(trans_nbr) > 1
            LIMIT our_limit_to_have_single_process_grab
        )
        RETURNING row_id

    If I could do "FOR UPDATE ON my_table" at the end of the subquery, that'd be divine (and would fix my other question/problem). But that won't work: it can't coexist with a "GROUP BY" (which is necessary for figuring out the COUNT of trans_nbr's). Then I could just take those trans_nbr's and run a query first to get the (soon-to-be-) former processing_by values. I've tried doing it like this:

        UPDATE my_table
        SET processing_by = our_id_info  -- unique to this instance
        FROM my_table old_my_table
        JOIN (
            SELECT trans_nbr
            FROM my_table
            GROUP BY trans_nbr
            HAVING COUNT(trans_nbr) > 1
            LIMIT our_limit_to_have_single_process_grab
        ) sub_my_table
          ON old_my_table.trans_nbr = sub_my_table.trans_nbr
        WHERE my_table.trans_nbr = sub_my_table.trans_nbr
          AND my_table.processing_by = old_my_table.processing_by
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by

    But that can't work: "old_my_table" is not visible outside of the join, and the RETURNING clause is blind to it. I've long since lost count of all the attempts I've made; I have been researching this for literally hours. If I could just find a bullet-proof way to lock the rows in my subquery - and ONLY those rows, and WHEN the subquery happens - all the concurrency issues I'm trying to avoid would disappear.

    UPDATE: [WIPES EGG OFF FACE] Okay, so I had a typo in the non-generic version of the code above that I said "doesn't work"; it does... thanks to Erwin Brandstetter, below, who stated it would. I re-did it (after a night's sleep, refreshed eyes, and a banana for bfast). Since it took me so long/hard to find this sort of solution, perhaps my embarrassment is worth it? At least this is on SO for posterity now... What I now have (and that works) is like this:

        UPDATE my_table
        SET processing_by = our_id_info  -- unique to this instance
        FROM my_table AS old_my_table
        WHERE trans_nbr IN (
            SELECT trans_nbr
            FROM my_table
            GROUP BY trans_nbr
            HAVING COUNT(*) > 1
            LIMIT our_limit_to_have_single_process_grab
        )
          AND my_table.row_id = old_my_table.row_id
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by AS old_processing_by

    The COUNT(*) is per a suggestion from Flimzy in a comment on my other (linked above) question. (I was more specific than necessary. [In this instance.])

