Search Results

Search found 1505 results on 61 pages for 'postgresql 9 0'.

Page 37/61 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • INSERT and transaction serialization in PostgreSQL

    - by Alexander
    I have a question. The transaction isolation level is set to serializable. When one user opens a transaction and INSERTs or UPDATEs data in "table1", and then another user opens a transaction and tries to INSERT data into the same table, does the second user need to wait until the first user commits the transaction?
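
    For context, a minimal sketch of the two-session scenario described; the column names are assumptions, and the behavior shown is PostgreSQL's documented one (plain INSERTs take no conflicting row locks, so they do not block each other even at the serializable level unless they collide on a unique constraint):

        -- Session 1
        BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
        INSERT INTO table1 (id, val) VALUES (1, 'a');  -- not yet committed

        -- Session 2, running concurrently
        BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
        INSERT INTO table1 (id, val) VALUES (2, 'b');  -- proceeds without waiting
        COMMIT;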

    Read the article

  • Is there anything as good as TOAD for Postgres (Windows)?

    - by misc090912
    Hi guys, I'm just looking for a management tool like TOAD for Postgres. Anyone used a good one? Edit - I work mostly within the data itself and the database already has a mature model/design. I use the edit windows the most (well, in TOAD for Oracle anyway.) As far as I know, Toad only exists naturally for: Oracle, MS SQL, DB2 and MySQL... --JS

    Read the article

  • Database query optimization

    - by hdx
    Ok my Giant friends, once again I seek a little space on your shoulders :P Here is the issue: I have a Python script that is fixing some database issues, but it is taking way too long. The main update statement is this:

        cursor.execute("UPDATE jiveuser SET username = '%s' WHERE userid = %d" % (newName, userId))

    That is getting called about 9500 times with different newName and userId pairs... Any suggestions on how to speed up the process? Maybe somehow a way where I can do all the updates with just one query? Any help will be much appreciated! PS: Postgres is the db being used.
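
    For context, one common way to collapse thousands of single-row statements into one is an UPDATE ... FROM over a VALUES list; a hedged sketch against the table from the question (the pairs shown are placeholders):

        UPDATE jiveuser AS u
        SET username = v.newname
        FROM (VALUES
            ('alice', 101),
            ('bob',   102)  -- ...one row per (newName, userId) pair
        ) AS v(newname, userid)
        WHERE u.userid = v.userid;

    Generating this statement once and executing it a single time avoids 9500 round trips and 9500 separate query plans.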

    Read the article

  • Escaping colons in hibernate createSQLQuery

    - by Stratosgear
    I am confused about how I can create an SQL statement containing colons. I am trying to create a view using (notice the double colons):

        create view MyView as (
          SELECT tableA.colA as colA,
                 tableB.colB as colB,
                 round((tableB.colD / 1024)::numeric, 2) as calcValue
          FROM tableA, tableB
          WHERE tableA.colC = 'someValue'
        );

    This is a Postgres query and I am forced to use the double colons (::) in order to correctly run the statement. I then pass the above statement through:

        s.createSQLQuery(myQuery).executeUpdate();

    and I get:

        Exception in thread "main" org.hibernate.exception.DataException: \
            could not execute native bulk manipulation query
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:102)
        ... more stacktrace ...

    with the output of my above statement changed to (notice the question mark):

        create view MyView as (
          SELECT tableA.colA as colA,
                 tableB.colB as colB,
                 round((tableB.colD / 1024)?, 2) as calcValue
          FROM tableA, tableB
          WHERE tableA.colC = 'someValue'
        );

    Obviously, Hibernate confuses my colons with named parameters. Is there a way to escape the colons (a Google suggestion that mentions that a single colon is escaped as a double colon does NOT work) or another way of running this statement? Thanks.
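
    One commonly suggested workaround is to avoid the :: operator altogether, since the SQL-standard cast() function expresses the same conversion without any colons for Hibernate to misread; a sketch against the view from the question:

        create view MyView as (
          SELECT tableA.colA as colA,
                 tableB.colB as colB,
                 round(cast(tableB.colD / 1024 as numeric), 2) as calcValue
          FROM tableA, tableB
          WHERE tableA.colC = 'someValue'
        );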

    Read the article

  • Possible to rank partial matches in Postgres full text search?

    - by Joe
    I'm trying to calculate a ts_rank for a full-text match where some of the terms in the query may not be in the ts_vector against which it is being matched. I would like the rank to be higher when more of the words match. Seems pretty simple? Because not all of the terms have to match, I have to | the operands, giving a query such as to_tsquery('one|two|three') (if it were &, all would have to match). The problem is, the rank value seems to be the same no matter how many words match. In other words, it's maxing rather than multiplying the clauses.

        select ts_rank('one two three'::tsvector, to_tsquery('one'));

    gives 0.0607927.

        select ts_rank('one two three'::tsvector, to_tsquery('one|two|three|four'));

    gives the expected lower value of 0.0455945, because 'four' is not in the vector. But

        select ts_rank('one two three'::tsvector, to_tsquery('one|two'));

    gives 0.0607927, and likewise

        select ts_rank('one two three'::tsvector, to_tsquery('one|two|three'));

    gives 0.0607927. I would like the result of ts_rank to be higher if more terms match. Possible? To counter one possible response: I cannot calculate all possible subsequences of the search query as intersections and then union them all in a query, because I am going to be working with large queries. I'm sure there are plenty of arguments against this anyway! Edit: I'm aware of ts_rank_cd, but it does not solve the above problem.
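
    One workaround is to rank each term separately and add the results, so every matching term contributes to the total; a hedged sketch using the toy vector from the question:

        SELECT ts_rank('one two three'::tsvector, to_tsquery('one'))
             + ts_rank('one two three'::tsvector, to_tsquery('two'))
             + ts_rank('one two three'::tsvector, to_tsquery('three')) AS summed_rank;
        -- more matching terms now yield a strictly higher total;
        -- terms absent from the vector contribute 0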

    Read the article

  • Ruby on Rails ActiveRecord-generated SQL on Postgres

    - by jpartogi
    Dear all, why does Ruby on Rails generate more queries in the background on Postgres than on MySQL? I haven't tried deploying Rails to production with Postgres yet, but I am just afraid these generated queries would affect performance. Do you find Rails with Postgres slower than with MySQL, knowing that it produces more queries in the background? Or is it relatively the same?

    Read the article

  • Fast find of nearby users using PostGIS

    - by opedge
    I have 5 tables:
    - users: information about a user, with a current location_id (FK to geo_location_data)
    - geo_location_data: information about a location, with a PostGIS geography(POINT, 4326) column
    - user_friends: relationships between users.
    I want to find nearby friends for the current user, but it takes a lot of time: I execute a select query to know whether a user is a friend, and after that execute a select using ST_DWithin. Maybe something is wrong in the domain model or in the queries?
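
    For context, a sketch of folding the friendship lookup and the distance test into a single query; the geography column name (geog), the friend-table columns, and the user id 42 are assumptions, not names from the question:

        SELECT u.*
        FROM users u
        JOIN user_friends f       ON f.friend_id = u.id AND f.user_id = 42
        JOIN geo_location_data g  ON g.id = u.location_id
        JOIN geo_location_data me ON me.id = (SELECT location_id FROM users WHERE id = 42)
        WHERE ST_DWithin(g.geog, me.geog, 5000);  -- friends within 5 km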

    Read the article

  • postgres counting one record twice if it meets certain criteria

    - by Dashiell0415
    I thought that the query below would naturally do what I explain, but apparently not... My table looks like this:

        id | name  | g | partner | g2
        ---+-------+---+---------+---
        1  | John  | M | Sam     | M
        2  | Devon | M | Mike    | M
        3  | Kurt  | M | Susan   | F
        4  | Stacy | F | Bob     | M
        5  | Rosa  | F | Rita    | F

    I'm trying to get the id where either the g or g2 value equals 'M'. But a record where both the g and g2 values are 'M' should return two lines, not one. So, with the above sample data and this query:

        $q = pg_query("SELECT id FROM mytable WHERE ( g = 'M' OR g2 = 'M' )");

    I'm trying to return:

        1
        1
        2
        2
        3
        4

    But it always returns:

        1
        2
        3
        4
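
    A WHERE clause can emit each row at most once, so one common rewrite is a UNION ALL of the two conditions, which produces one output row per matching column:

        SELECT id FROM mytable WHERE g = 'M'
        UNION ALL
        SELECT id FROM mytable WHERE g2 = 'M'
        ORDER BY id;
        -- rows where both g and g2 are 'M' now appear twice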

    Read the article

  • Postgres: clear entire database before re-creating / re-populating from bash script

    - by Hoff
    hi folks, I'm writing a shell script (will become a cronjob) that will:
    1: dump my production database
    2: import the dump into my development database
    Between step 1 and 2, I need to clear the development database (drop all tables?). How is this best accomplished from a shell script? So far, it looks like this:

        #!/bin/bash
        time=`date '+%Y'-'%m'-'%d'`

        # 1. export(dump) the current production database
        pg_dump -U production_db_name > /backup/dir/backup-${time}.sql

        # missing step: drop all tables from development database so it can be re-populated

        # 2. load the backup into the development database
        psql -U development_db_name < backup/dir/backup-${time}.sql

    Many thanks in advance! Martin
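
    For the missing step, one common approach is to drop and recreate the public schema, which removes every table in it; a hedged sketch assuming everything lives in the default schema (it could be run from the script via psql -c):

        DROP SCHEMA public CASCADE;
        CREATE SCHEMA public;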

    Read the article

  • Funny characters in my db

    - by hdx
    My web app is breaking when I try to edit a certain content type, and I'm pretty sure it is because of some weird characters in my database. So when I do:

        SELECT body FROM message WHERE id = 666

    it returns:

        <p>⢠<span></span></p><p><br /></p><p><em><strong>NOTE:</strong> Please remember to use your to participate in the discussion.</em></p>

    However, when I try to count how many documents have those characters, Postgres complains:

        foo_450_prod=# SELECT COUNT(*) FROM message WHERE body LIKE '%â¢%';
        ERROR:  invalid byte sequence for encoding "UTF8": 0xe2a225
        HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".

    Does anybody know what the issue is and how I can query for those funny characters? Thanks in advance!
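
    One way to search for the character without pasting mojibake into the query is to build it from its code point; a hedged sketch assuming the stored character is U+2022 (the bullet, whose UTF-8 bytes E2 80 A2 fit both the garbled output and the error message):

        SELECT COUNT(*) FROM message
        WHERE body LIKE '%' || chr(8226) || '%';  -- chr(8226) is U+2022 in a UTF8 database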

    Read the article

  • Postgres: Convert varchar to text

    - by williamjones
    I screwed up and created a column as a varchar(255) where that is no longer sufficient. I've read that varchar has no performance benefits over text on Postgres, and so would like to convert the varchar to a text column in a safe way that preserves the data. What's the best way for me to do this?
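
    For context, the usual in-place conversion is a single ALTER (table and column names are placeholders); varchar and text share the same representation in Postgres, so the existing data is preserved:

        ALTER TABLE my_table
            ALTER COLUMN my_column TYPE text;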

    Read the article

  • Foreign/accented characters in sql query

    - by FromCanada
    I'm using Java and Spring's JdbcTemplate class to build an SQL query in Java that queries a Postgres database. However, I'm having trouble executing queries that contain foreign/accented characters. For example, the (trimmed) code:

        JdbcTemplate select = new JdbcTemplate( postgresDatabase );
        String query = "SELECT id FROM province WHERE name = 'Ontario';";
        Integer id = select.queryForObject( query, Integer.class );

    will retrieve the province id, but if instead I use name = 'Québec' then the query fails to return any results (this value is in the database, so the problem isn't that it's missing). I believe the source of the problem is that the database I am required to use has the default client encoding set to SQL_ASCII, which according to this prevents automatic character set conversions. (The Java environment's encoding is set to 'UTF-8', while I'm told the database uses 'LATIN1' / 'ISO-8859-1'.) I was able to manually indicate the encoding when the resultSets contained values with foreign characters, as a solution to a previous problem of a similar nature. Ex:

        String provinceName = new String( resultSet.getBytes( "name" ), "ISO-8859-1" );

    But now that the foreign characters are part of the query itself, this approach hasn't been successful. (I suppose since the query has to be saved in a String before being executed anyway, breaking it down into bytes and then changing the encoding only muddles the characters further.) Is there a way around this without having to change the properties of the database or reconstruct it? PostScript: I found a function on StackOverflow when making up a title, but it didn't seem to work (I might not have used it correctly, but even if it did work it doesn't seem like it could be the best solution).
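
    One angle worth checking is the connection's client_encoding, which tells the server how to interpret the bytes of incoming queries; a hedged sketch of inspecting and pinning it for a session (note that a server created with SQL_ASCII performs no conversion regardless, as the question suspects):

        SHOW client_encoding;           -- what the server currently assumes
        SET client_encoding TO 'UTF8';  -- match the Java side's UTF-8
        SELECT id FROM province WHERE name = 'Québec';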

    Read the article

  • Optimum size of transaction in Postgres?

    - by Joe
    I'm running a process that does a lot of updates (100,000) to a table. I have the choice between putting all the updates in a single transaction or committing transactions every 1000 or so. Ignore for the moment the case where a transaction fails and is aborted. I'm interested in the best size of transaction for memory and speed efficiency.
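
    For context, a sketch of the middle-ground pattern being weighed: commit in fixed-size batches so no single transaction has to hold 100,000 updates' worth of state:

        BEGIN;
        -- ...roughly 1000 UPDATE statements...
        COMMIT;
        -- repeat until all ~100,000 updates are applied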

    Read the article

  • How to optimize this SQL query for a rectangular region?

    - by Andrew B.
    I'm trying to optimize the following query, but it's not clear to me what index or indexes would be best. I'm storing tiles in a two-dimensional plane and querying for rectangular regions of that plane. The table has, for the purposes of this question, the following columns:
    - id: a primary key integer
    - world_id: an integer foreign key which acts as a namespace for a subset of tiles
    - tileY: the Y-coordinate integer
    - tileX: the X-coordinate integer
    - value: the contents of this tile, a varchar if it matters.
    I have the following indexes:

        "ywot_tile_pkey" PRIMARY KEY, btree (id)
        "ywot_tile_world_id_key" UNIQUE, btree (world_id, "tileY", "tileX")
        "ywot_tile_world_id" btree (world_id)

    And this is the query I'm trying to optimize:

        ywot=> EXPLAIN ANALYZE SELECT * FROM "ywot_tile"
               WHERE ("world_id" = 27685 AND "tileY" <= 6 AND "tileX" <= 9 AND "tileX" >= -2 AND "tileY" >= -1);
                                                                 QUERY PLAN
        -------------------------------------------------------------------------------------------------------------------------------------
         Bitmap Heap Scan on ywot_tile  (cost=11384.13..149421.27 rows=65989 width=168) (actual time=79.646..80.075 rows=96 loops=1)
           Recheck Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
           ->  Bitmap Index Scan on ywot_tile_world_id_key  (cost=0.00..11367.63 rows=65989 width=0) (actual time=79.615..79.615 rows=125 loops=1)
                 Index Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
         Total runtime: 80.194 ms

    So the world is fixed, and we are querying for a rectangular region of tiles. Some more information that might be relevant:
    - All the tiles for a queried region may or may not be present.
    - The height and width of a queried rectangle are typically about 10x10-20x20.
    - For any given (world, X) or (world, Y) pair, there may be an unbounded number of matching tiles, but the worst case is currently around 10,000, and typically there are far fewer.
    - New tiles are created far less frequently than existing ones are updated (changing the 'value'), and that itself is far less frequent than just reading as in the query above.
    The only thing I can think of would be to index on (world, X) and (world, Y). My guess is that the database would be able to take those two sets and intersect them. The problem is that there is a potentially unbounded number of matches for either of those. Is there some other kind of index that would be more appropriate?
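
    One hedged idea: because the existing unique index leads with (world_id, "tileY"), only the tileY bounds can delimit the index scan and the tileX bounds are checked afterwards. A GiST index over a point expression lets both coordinates bound the scan at once; a sketch (the index name is a placeholder):

        CREATE INDEX ywot_tile_point_idx ON ywot_tile
            USING gist (point("tileX", "tileY"));

        -- query the rectangle with box containment, plus the world filter
        SELECT * FROM ywot_tile
        WHERE world_id = 27685
          AND point("tileX", "tileY") <@ box(point(-2, -1), point(9, 6));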

    Read the article

  • Restart of Master Postgres DB with unconsumed Wal files

    - by Douglas Sellers
    We have a situation where walmanager is being used to ship WAL files between a master and a slave Postgres database. The slave machine has failed and has had to be rebuilt. This has caused a lot of unconsumed WAL files to build up on the master. If a reboot is issued to the Postgres master while there are 24 hours' worth of unconsumed WAL files hanging around, will the master be affected at all, or will it start clean?

    Read the article

  • Postgres column casting...

    - by Simon
    I have a query:

        SELECT assetid, type_code, version, name, short_name, status, languages,
               charset, force_secure, created, created_userid, updated, updated_userid,
               published, published_userid, status_changed, status_changed_userid
        FROM sq_ast WHERE assetid = 7

    which doesn't work and throws:

        ERROR:  operator does not exist: character varying = integer
        LINE 4: FROM sq_ast WHERE assetid = 7

    I can get it to work by doing:

        SELECT assetid, type_code, version, name, short_name, status, languages,
               charset, force_secure, created, created_userid, updated, updated_userid,
               published, published_userid, status_changed, status_changed_userid
        FROM sq_ast WHERE assetid = '7'

    Please note the quoting of the 7 in the WHERE clause. I am deploying a huge application and I cannot rewrite the core... similarly, I don't want to risk changing the type of the column... I'm no Postgres expert... please help... Is there an option for strict casting of columns?
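
    For context, a sketch of making the comparison explicit on the query side, which sidesteps both a core rewrite and a column type change (PostgreSQL 8.3 removed many implicit casts to text, which is why the unquoted 7 fails):

        SELECT assetid FROM sq_ast WHERE assetid = CAST(7 AS varchar);
        -- equivalent to quoting the literal:
        SELECT assetid FROM sq_ast WHERE assetid = '7';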

    Read the article

  • RoR Heroku Postgres issue

    - by oelbrenner
    I'm getting this error:

        ActiveRecord::StatementInvalid (PGError: ERROR: argument of HAVING must be type boolean, not type timestamp without time zone)

    Controller code snippet:

        def inactive
          @number_days = params[:days].to_i || 90
          @clients = Client.find(:all,
            :include => :appointments,
            :conditions => ["clients.user_id = ? AND appointments.start_time <= ?",
                            current_user.id, @number_days.days.ago],
            :group => 'client_id',
            :having => 'MAX(appointments.start_time)')
        end
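
    For context, HAVING requires a boolean expression, so the aggregate has to be compared against something; a sketch of the SQL the :having option would need to produce (the cutoff value is a placeholder standing in for @number_days.days.ago):

        SELECT client_id
        FROM appointments
        GROUP BY client_id
        HAVING MAX(start_time) <= '2010-01-01';  -- e.g. the 90-days-ago timestamp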

    Read the article

  • postgres stored procedure problem

    - by easyrider
    Hi all, I have a problem with a Postgres function:

        CREATE OR REPLACE FUNCTION getVar(id bigint)
        RETURNS TABLE (repoid bigint, suf VARCHAR, nam VARCHAR) AS $$
        declare
          rec record;
        BEGIN
          FOR rec IN (
            WITH RECURSIVE children(repoobjectid, variant_of_object_fk, suffix, variantname) AS (
              SELECT repoobjectid, variant_of_object_fk, '' as suffix, variantname
              FROM b2m.repoobject_tab
              WHERE repoobjectid = id
              UNION ALL
              SELECT repo.repoobjectid, repo.variant_of_object_fk, suffix || '..', repo.variantname
              FROM b2m.repoobject_tab repo, children
              WHERE children.repoobjectid = repo.variant_of_object_fk)
            SELECT repoobjectid, suffix, variantname FROM children)
          LOOP
            RETURN next;
          END LOOP;
          RETURN;
        END;

    It can be compiled, but if I try to call it:

        select * from getVar(18)

    I get 8 empty rows with 3 columns. If I execute the following part of the procedure with a hard-coded id parameter:

        WITH RECURSIVE children(repoobjectid, variant_of_object_fk, suffix, variantname) AS (
          SELECT repoobjectid, variant_of_object_fk, '' as suffix, variantname
          FROM b2m.repoobject_tab
          WHERE repoobjectid = 18
          UNION ALL
          SELECT repo.repoobjectid, repo.variant_of_object_fk, suffix || '..', repo.variantname
          FROM b2m.repoobject_tab repo, children
          WHERE children.repoobjectid = repo.variant_of_object_fk)
        SELECT repoobjectid, suffix, variantname FROM children

    I get exactly what I need, 8 rows with data:

        repoobjectid | suffix | variantname
        18           |        |
        19           | ..     | for IPhone
        22           | ..     | for Nokia
        23           | ....   | OS 1.0

    and so on. What is going wrong? Please help. Thanx in advance
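
    One likely explanation: RETURN NEXT in a RETURNS TABLE function emits the current values of the output columns (repoid, suf, nam), which the loop never assigns, hence the empty rows. A hedged sketch of returning the CTE's rows directly with RETURN QUERY instead (column types are guessed from the question):

        CREATE OR REPLACE FUNCTION getVar(id bigint)
        RETURNS TABLE (repoid bigint, suf VARCHAR, nam VARCHAR) AS $$
        BEGIN
          RETURN QUERY
            WITH RECURSIVE children(repoobjectid, variant_of_object_fk, suffix, variantname) AS (
              SELECT r.repoobjectid, r.variant_of_object_fk, ''::varchar, r.variantname
              FROM b2m.repoobject_tab r
              WHERE r.repoobjectid = id
              UNION ALL
              SELECT repo.repoobjectid, repo.variant_of_object_fk,
                     (children.suffix || '..')::varchar, repo.variantname
              FROM b2m.repoobject_tab repo, children
              WHERE children.repoobjectid = repo.variant_of_object_fk)
            SELECT c.repoobjectid, c.suffix, c.variantname FROM children c;
        END;
        $$ LANGUAGE plpgsql;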

    Read the article

  • Use Django ORM as standalone [closed]

    - by KeyboardInterrupt
    Possible duplicates: "Use only some parts of Django?", "Using only the DB part of Django". I want to use the Django ORM as standalone. Despite an hour of searching Google, I'm still left with several questions:
    - Does it require me to set up my Python project with a settings.py, a /myApp/ directory, and a models.py file?
    - Can I create a new models.py and run syncdb to have it automatically set up the tables and relationships, or can I only use models from existing Django projects?
    - There seem to be a lot of questions regarding PYTHONPATH. If you're not calling existing models, is this needed?
    I guess the easiest thing would be for someone to just post a basic template or walkthrough of the process, clarifying the organization of the files, e.g.:

        db/
            __init__.py
            settings.py
            myScript.py
            orm/
                __init__.py
                models.py

    And the basic essentials:

        # settings.py
        from django.conf import settings
        settings.configure(
            DATABASE_ENGINE = "postgresql_psycopg2",
            DATABASE_HOST = "localhost",
            DATABASE_NAME = "dbName",
            DATABASE_USER = "user",
            DATABASE_PASSWORD = "pass",
            DATABASE_PORT = "5432"
        )

        # orm/models.py
        # ...

        # myScript.py
        # import models..

    And whether you need to run something like:

        django-admin.py inspectdb ...

    (Oh, I'm running Windows, if that changes anything regarding command-line arguments.)

    Read the article

  • SQL statement to split a table based on a join

    - by williamjones
    I have a primary table for Articles that is linked by a join table Info to a table Tags that has only a small number of entries. I want to split the Articles table, by either deleting rows or creating a new table with only the entries I want, based on the absence of a link to a certain tag. There are a few million articles. How can I do this? Not all of the articles have any tag at all, and some have many tags. Example:

        table Articles
            primary_key id
        table Info
            foreign_key article_id
            foreign_key tag_id
        table Tags
            primary_key id

    It was easy for me to segregate the articles that do have the match right off the bat, so I thought maybe I could do that and then use a NOT IN statement, but that is running so slowly it's unclear if it's ever going to finish. I did that with these commands:

        INSERT INTO matched_articles
        SELECT * FROM articles a LEFT JOIN info i ON a.id = i.article_id
        WHERE i.tag_id = 5;

        INSERT INTO unmatched_articles
        SELECT * FROM articles a
        WHERE a.id NOT IN (SELECT m.id FROM matched_articles m);

    If it makes a difference, I'm on Postgres.
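
    For the unmatched side, an anti-join usually scales far better than NOT IN over a multi-million-row subquery; a sketch using the tables from the question:

        INSERT INTO unmatched_articles
        SELECT a.*
        FROM articles a
        WHERE NOT EXISTS (
            SELECT 1 FROM info i
            WHERE i.article_id = a.id AND i.tag_id = 5
        );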

    Read the article
