Search Results

Search found 8664 results on 347 pages for 'rails postgresql'.

Page 221/347 | < Previous Page | 217 218 219 220 221 222 223 224 225 226 227 228  | Next Page >

  • Run SQL script from Ruby

    - by Chaker Nakhli
    Hi all, using DBI::DatabaseHandle#execute or DBI::DatabaseHandle#prepare it's not possible to run an SQL script (with multiple SQL statements). It fails with the following error: ERROR: cannot insert multiple commands into a prepared statement. I tried the "unprepared" way using DBI::DatabaseHandle#do (the doc says it "goes straight to the DBD's implementation"), but it keeps throwing the same error. Code snippet:

      require 'dbd/pg'
      require 'dbi'

      DBI.connect("dbi:pg:database=dbname", db_user, db_password, db_params) do |dbh|
        schema = IO::read(schema_file)
        dbh.do(schema)
      end

    I'm using ruby 1.8.6 (2007-09-24 patchlevel 111) [i386-mswin32], dbi-0.4.3, dbd-pg-0.3.9, pg-0.9.0-x86-mswin32. Thank you!

    Read the article

  • How to move a Postgres schema via file operations?

    - by Jerome WAGNER
    Hello, I have a schema schema1 in a Postgres database A. I want to have a duplicate of this schema (model + data) in database B under the name schema2. What are my options? I currently:

      * dump schema1 from database A
      * sed my way through schema renaming in the dump: schema1 becomes schema2
      * restore schema2 in database B

    but I am looking for a more efficient procedure, for instance via direct file operations on Postgres binary files. Thanks for your help, Jerome Wagner
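
    One way to skip the sed step entirely (a sketch, not from the original post; direct copies of Postgres data files are not portable between clusters and are best avoided): restore the dump into database B unchanged, then rename the schema in SQL:

      -- In database B, after restoring the dump of schema1 as-is:
      ALTER SCHEMA schema1 RENAME TO schema2;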

    Read the article

  • Postgres varchar field between

    - by Anton
    I have an addresses table with a ZIP code field of type VARCHAR. I need to select all addresses from this table using a ZIP code range. If I use the following code:

      select * from address where cast(zip as bigint) between 90210 and 90220

    I get an error on rows where the ZIP code can't be cast as bigint. How can I resolve this issue?
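
    A common workaround (a sketch, assuming the offending rows contain non-digit characters) is to screen values with a regex before casting. Because Postgres does not promise an evaluation order for ANDed WHERE conditions, folding both checks into one CASE guarantees the cast never sees a bad row:

      SELECT *
      FROM address
      WHERE CASE WHEN zip ~ '^[0-9]+$'
                 THEN zip::bigint BETWEEN 90210 AND 90220
                 ELSE false
            END;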

    Read the article

  • Postgres query optimization

    - by hdx
    Hey guys, trying to optimize this query to solve a duplicate user issue:

      SELECT userid, 'ismaster' AS name, 'false' AS propvalue
      FROM user
      WHERE userid NOT IN (SELECT userid FROM userprop WHERE name = 'ismaster');

    The problem is that the subquery after the NOT IN returns 120,000 records and it's taking forever. Any suggestion?
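
    NOT IN over a large subquery is a classic slow spot (and behaves surprisingly if the subquery can return NULLs). A standard rewrite (a sketch using the names from the question) is an anti-join, which Postgres can execute as a hash anti-join, ideally backed by an index on userprop (name, userid):

      SELECT u.userid, 'ismaster' AS name, 'false' AS propvalue
      FROM "user" u
      WHERE NOT EXISTS (
          SELECT 1
          FROM userprop p
          WHERE p.userid = u.userid
            AND p.name = 'ismaster'
      );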

    Read the article

  • Postgres update rule returning number of rows affected

    - by Lithium
    I have a view which is a union of two separate tables (say table _A and table _B). I need to be able to update a row in the view, and it seems the way to do this is through a 'view rule'. All entries in the view have separate ids, so an id that exists in table _A won't exist in table _B. I created the following rule:

      CREATE OR REPLACE RULE view_update AS
      ON UPDATE TO viewtable DO INSTEAD (
          UPDATE _A SET foo = false WHERE old.id = _A.id;
          UPDATE _B SET foo = false WHERE old.id = _B.id;
      );

    If I do an update on table _B it returns the correct number of rows affected (1). However if I update table _A it returns (0) rows affected even though the data was changed. If I swap the order of the updates then the same thing happens, but in reverse. How can I solve this problem so that it returns the correct number of rows affected? Thanks.
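
    The row count reported for a multi-statement rule comes from the last statement in the rule, which is why only updates that hit whichever table is listed last report 1. On Postgres 9.1 or later, an INSTEAD OF trigger sidesteps this because affected rows are counted per view row; a sketch:

      CREATE OR REPLACE FUNCTION viewtable_do_update() RETURNS trigger AS $$
      BEGIN
          UPDATE _A SET foo = false WHERE _A.id = OLD.id;
          IF NOT FOUND THEN
              UPDATE _B SET foo = false WHERE _B.id = OLD.id;
          END IF;
          RETURN NEW;  -- non-NULL return counts this row as affected
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER viewtable_update
          INSTEAD OF UPDATE ON viewtable
          FOR EACH ROW EXECUTE PROCEDURE viewtable_do_update();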

    Read the article

  • JPA - Setting entity class property from calculated column?

    - by growse
    I'm just getting to grips with JPA in a simple Java web app running on Glassfish 3 (persistence provider is EclipseLink). So far, I'm really liking it (bugs in NetBeans/Glassfish interaction aside) but there's a thing that I want to be able to do that I'm not sure how to do. I've got an entity class (Article) that's mapped to a database table (article). I'm trying to do a query on the database that returns a calculated column, but I can't figure out how to set up a property of the Article class so that the property gets filled by the column value when I call the query. If I do a regular "select id,title,body from article" query, I get a list of Article objects fine, with the id, title and body properties filled. This works fine. However, if I do the below:

      Query q = em.createNativeQuery(
          "select id,title,shorttitle,datestamp,body,true as published, " +
          "ts_headline(body,q,'ShortWord=0') as headline, type " +
          "from articles,to_tsquery('english',?) as q " +
          "where idxfti @@ q order by ts_rank(idxfti,q) desc", Article.class);

    (this is a fulltext search using tsearch2 on Postgres - it's a db-specific function, so I'm using a NativeQuery) You can see I'm fetching a calculated column, called headline. How do I add a headline property to my Article class so that it gets populated by this query? So far, I've tried setting it to be @Transient, but that just ends up with it being null all the time.

    Read the article

  • SQL query - how to count values in a row separately?

    - by n00b0101
    I have a table that looks something like this:

      id | firstperson  | secondperson
      ---+--------------+--------------
      1  | jane doe     |
      2  | bob smith    | margie smith
      3  | master shifu | madame shifu
      4  | max maxwell  |

    I'm trying to count all of the firstpersons + all of the secondpersons, if the secondperson field isn't blank... Is there a way to do that?
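
    A direct way (a sketch against the table above; the table name people is hypothetical, and empty strings are treated as blank): COUNT skips NULLs, and NULLIF turns '' into NULL, so the two column counts can simply be added:

      SELECT COUNT(firstperson)
           + COUNT(NULLIF(secondperson, '')) AS total_people
      FROM people;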

    Read the article

  • Transitive SQL query on same table

    - by MiKu
    Hey. Consider the following table and data...

      in_timestamp | out_timestamp | name  | in_id | out_id | in_server      | out_server     | status
      timestamp1   | timestamp2    | data1 | id1   | id2    | others-server1 | my-server1     | success
      timestamp2   | timestamp3    | data1 | id2   | id3    | my-server1     | my-server2     | success
      timestamp3   | timestamp4    | data1 | id3   | id4    | my-server2     | my-server3     | success
      timestamp4   | timestamp5    | data1 | id4   | id5    | my-server3     | others-server2 | success

    The above data represent the log of an execution flow of some data across servers, e.g. some data has flowed from 'others-server1' to a bunch of 'my-servers' and finally to the destined 'others-server2'. Questions:

    1) I need to give this log in representable form to a client who doesn't need to know anything about the bunch of 'my-servers'. All I am supposed to give is the timestamp at which the data entered my infrastructure and when it left, drilling down to the following info:

      in_timestamp (of 'others-server1' to 'my-server1')
      out_timestamp (of 'my-server3' to 'others-server2')
      name
      status

    I want to write SQL for the same! Can someone help? NOTE: there might not be 3 'my-servers' all the time; it differs from situation to situation. E.g. there might be 4 'my-servers' involved for, say, data2!

    2) Are there any other alternatives to SQL? I mean stored procs/etc?

    3) Optimizations? (The records are huge in number! As of now, it is around 5 million a day, and we are supposed to show records that are up to a week old.) In advance, THANKS FOR THE HELP! :)
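
    One aggregation-based approach (a sketch; the table name flow_log is hypothetical, and it assumes each name enters and leaves the infrastructure exactly once): the earliest in_timestamp and the latest out_timestamp per name collapse any number of internal hops, so the hop count never matters:

      SELECT name,
             MIN(in_timestamp)            AS entered_at,   -- arrival from outside
             MAX(out_timestamp)           AS left_at,      -- departure to outside
             BOOL_AND(status = 'success') AS all_hops_ok   -- true only if every hop succeeded
      FROM flow_log
      GROUP BY name;

    An index on (name) helps the grouping; at 5 million rows a day, partitioning the log by day is a common follow-up.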

    Read the article

  • Django + Postgres: How to specify sequence for a field

    - by Giovanni Di Milia
    I have this model in django:

      class JournalsGeneral(models.Model):
          jid = models.AutoField(primary_key=True)
          code = models.CharField("Code", max_length=50)
          name = models.CharField("Name", max_length=2000)
          url = models.URLField("Journal Web Site", max_length=2000, blank=True)
          online = models.BooleanField("Online?")
          active = models.BooleanField("Active?")

          class Meta:
              db_table = u'journals_general'
              verbose_name = "Journal General"
              ordering = ['code']

          def __unicode__(self):
              return self.name

    My problem is that in the DB (Postgres) the name of the sequence connected to jid is not journals_general_jid_seq as expected by Django but it has a different name. Is there a way to specify which sequence Django has to use for an AutoField? In the documentation I read I was not able to find an answer.
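
    AutoField has no option for naming the sequence, so the usual fix is on the Postgres side: rename the existing sequence to the <table>_<column>_seq name Django derives and re-point the column default at it. A sketch (actual_sequence_name stands in for whatever the sequence is really called):

      ALTER SEQUENCE actual_sequence_name RENAME TO journals_general_jid_seq;
      ALTER TABLE journals_general
          ALTER COLUMN jid SET DEFAULT nextval('journals_general_jid_seq');
      ALTER SEQUENCE journals_general_jid_seq OWNED BY journals_general.jid;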

    Read the article

  • Case insensitive duplicates SQL

    - by hdx
    So I have a users table where user.username has many duplicates like:

      username, Username and useRnAme
      john, John and jOhn

    That was a bug and these three records should have been only one. I'm trying to come up with a SQL query that lists all of these cases ordered by their creation date, so ideally the result should be something like this:

      username  jan01
      useRnAme  jan02
      Username  jan03
      john      feb01
      John      feb02
      jOhn      feb03

    Any suggestions will be much appreciated
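
    A sketch (assuming a created_at column, which the question implies but does not name): group case-insensitively to find the clashing names, then join back to list each variant in creation order:

      SELECT u.username, u.created_at
      FROM users u
      JOIN (
          SELECT lower(username) AS lname
          FROM users
          GROUP BY lower(username)
          HAVING COUNT(*) > 1
      ) dup ON lower(u.username) = dup.lname
      ORDER BY lower(u.username), u.created_at;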

    Read the article

  • Postgres: Using function variable names in pgsql function

    - by Peter
    Hi, I have written a pgsql function along the lines of what's shown below. How can I get rid of the $1, $2, etc. and replace them with the real argument names to make the function code more readable? Regards, Peter

      CREATE OR REPLACE FUNCTION InsertUser (
          UserID    UUID,
          FirstName CHAR(10),
          Surname   VARCHAR(75),
          Email     VARCHAR(75)
      ) RETURNS void AS $$
          INSERT INTO "User" (userid, firstname, surname, email)
          VALUES ($1, $2, $3, $4)
      $$ LANGUAGE SQL;
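
    Older Postgres releases only allow $1..$n inside LANGUAGE SQL bodies (named parameters in SQL functions arrived in 9.2), but a plpgsql rewrite can use the names directly. A sketch, with a p_ prefix on the parameters so they cannot collide with the column names during plpgsql's variable substitution:

      CREATE OR REPLACE FUNCTION InsertUser (
          p_userid    UUID,
          p_firstname CHAR(10),
          p_surname   VARCHAR(75),
          p_email     VARCHAR(75)
      ) RETURNS void AS $$
      BEGIN
          INSERT INTO "User" (userid, firstname, surname, email)
          VALUES (p_userid, p_firstname, p_surname, p_email);
      END;
      $$ LANGUAGE plpgsql;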

    Read the article

  • Polymorphism in SQL database tables?

    - by Patrick Daryll Glandien
    I currently have multiple tables in my database which share the same 'basic fields':

      name        character varying(100)
      description text
      url         character varying(255)

    But I have multiple specializations of that basic table: for example the tv_series table has the fields season, episode and airing, while the movies table has release_date, budget etc. Now at first this is not a problem, but I want to create a second table, called linkgroups, with a foreign key to these specialized tables. That means I would somehow have to normalize it within itself. One way of solving this I have heard of is to normalize it with a key-value-pair table, but I do not like that idea since it is kind of a 'database-within-a-database' scheme, I have no way to require certain keys/fields or a specific type, and it would be a huge pain to fetch and order the data later. So I am now looking for a way to 'share' a primary key between multiple tables or, even better, a way to normalize it by having a general table and multiple specialized tables.
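
    That last sentence is the textbook pattern (often called class-table inheritance): a general table owns the shared fields and the primary key, each specialized table's primary key is also a foreign key to it, and linkgroups references only the general table. A sketch with hypothetical names:

      CREATE TABLE items (
          item_id     serial PRIMARY KEY,
          name        varchar(100),
          description text,
          url         varchar(255)
      );

      CREATE TABLE tv_series (
          item_id integer PRIMARY KEY REFERENCES items(item_id),
          season  integer,
          episode integer,
          airing  date
      );

      CREATE TABLE movies (
          item_id      integer PRIMARY KEY REFERENCES items(item_id),
          release_date date,
          budget       numeric
      );

      CREATE TABLE linkgroups (
          linkgroup_id serial PRIMARY KEY,
          item_id      integer NOT NULL REFERENCES items(item_id)
      );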

    Read the article

  • Integer comparison as string

    - by J Pollack
    Hi, I have an integer column and I want to find numbers that start with specific digits. For example, if I look for '123', these match:

      1234567
      123456
      1234

    These do not match:

      23456
      112345
      0123445

    Is the only way to handle the task to convert the integers into strings before doing a string comparison? I am currently using Postgres regexp_replace(text, pattern, replacement) on the numbers, which is a very slow and inefficient way of doing it. The case is that I have a large amount of data to handle this way and I am looking for the most economical way of doing it.
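
    Casting to text (n::text LIKE '123%') is simpler than regexp_replace but still defeats a plain b-tree index. A numeric alternative (a sketch, assuming non-negative integers and a hypothetical table numbers(n)) expresses the prefix as one range per digit count, each of which is index-friendly:

      SELECT n
      FROM numbers
      WHERE n = 123
         OR n BETWEEN 1230    AND 1239
         OR n BETWEEN 12300   AND 12399
         OR n BETWEEN 123000  AND 123999
         OR n BETWEEN 1230000 AND 1239999;  -- extend up to the column's maximum digit count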

    Read the article

  • Django query: Count and Group BY

    - by Tyler Lane
    I have a query that I'm trying to figure out the "django" way of doing: I want to take the last 100 calls from Call, which is easy:

      calls = Call.objects.all().order_by('-call_time')[:100]

    However, for the next part I can't find the way to do it via Django's ORM. I want to get a list of the call_types and the number of calls each one has WITHIN that previous queryset I just did. Normally I would do a query like this:

      SELECT COUNT(id), calltype FROM call
      WHERE id IN (SELECT id FROM call ORDER BY call_time DESC LIMIT 100)
      GROUP BY calltype;

    I can't seem to find the django way of doing this particular query. Here are my 2 models:

      class Call(models.Model):
          call_time = models.DateTimeField("Call Time", auto_now=False, auto_now_add=False)
          description = models.CharField(max_length=150)
          response = models.CharField(max_length=50)
          event_num = models.CharField(max_length=20)
          report_num = models.CharField(max_length=20)
          address = models.CharField(max_length=150)
          zip_code = models.CharField(max_length=10)
          geom = models.PointField(srid=4326)
          calltype = models.ForeignKey(CallType)
          objects = models.GeoManager()

      class CallType(models.Model):
          name = models.CharField(max_length=50)
          description = models.CharField(max_length=150)
          active = models.BooleanField()
          time_init = models.DateTimeField("Date Added", auto_now=False, auto_now_add=True)
          objects = models.Manager()

    Read the article

  • Acquiring Table Lock in Database - Interview Question

    - by harigm
    One of my interview questions: if multiple users across the world are accessing an application that uses a table whose primary key is an auto-increment field, how can you prevent one user from getting the same primary key as another user who is executing at the same time? My answer was that I would obtain a lock on the table and make the other user wait until the first user is released with the primary key. But then the question was: how do you acquire the table lock programmatically and implement this? And if there are 1000 users coming to the application every minute and you explicitly hold the lock on the table, the application will become slower - how do you manage this? Please suggest possible answers for the above question.
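
    For reference (a sketch; accounts is a hypothetical table): auto-increment keys in most databases are served from sequences, which hand out distinct values without any table lock, so two users can never receive the same key and no explicit locking is needed. If an explicit lock is demanded anyway, in Postgres it looks like this:

      BEGIN;
      LOCK TABLE accounts IN EXCLUSIVE MODE;  -- blocks concurrent writers until commit
      INSERT INTO accounts (name) VALUES ('new user');
      COMMIT;  -- the lock is released here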

    Read the article

  • Ubuntu server very slow out of the blue sky (Rails, passenger, nginx)

    - by snitko
    I run Ubuntu server 8.04 on Linode with multiple Rails apps under Passenger + nginx. Today I've noticed it takes quite a lot of time to load a page (5-10 secs). And it's not only websites, ssh seems to be affected too. Having no clue why this may be happening, I started to check different things. I checked how the log files are rotated, I checked if there's enough free disk space and memory. I also checked the IO rate, here's the output:

      $ iostat
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 0.17    0.00    0.02    0.57    0.16   99.07

      Device:    tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
      xvda      2.25        39.50        16.08     147042      59856
      xvdb      0.00         0.05         0.00        192          0
      xvdc      2.20        25.93        24.93      96530      92808
      xvdd      0.01         0.12         0.00        434         16
      xvde      0.04         0.23         0.35        858       1304
      xvdf      0.37         0.31         4.12       1162      15352

    Rebooting didn't help either. Any ideas where I should be looking?

    Read the article

  • How to call Postgres function returning SETOF record?

    - by Peter
    I have written the following function:

      -- Gets stats for all markets
      CREATE OR REPLACE FUNCTION GetMarketStats()
      RETURNS SETOF record AS $$
      BEGIN
          SELECT 'R approved offer' AS Metric,
                 SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 24 THEN LO.Amount ELSE 0 END) AS MarketAPlus24,
                 SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 36 THEN LO.Amount ELSE 0 END) AS MarketAPlus36,
                 SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 24 THEN LO.Amount ELSE 0 END) AS MarketA24,
                 SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 36 THEN LO.Amount ELSE 0 END) AS MarketA36,
                 SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 24 THEN LO.Amount ELSE 0 END) AS MarketB24,
                 SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 36 THEN LO.Amount ELSE 0 END) AS MarketB36
          FROM "Market" M
          INNER JOIN "Listing" L ON L.MarketID = M.MarketID
          INNER JOIN "ListingOffer" LO ON L.ListingID = LO.ListingID;
      END
      $$ LANGUAGE plpgsql;

    And when trying to call it like this...

      SELECT * FROM GetMarketStats() AS (Metric VARCHAR(50), MarketAPlus24 INT, MarketAPlus36 INT,
          MarketA24 INT, MarketA36 INT, MarketB24 INT, MarketB36 INT);

    I get an error:

      ERROR: query has no destination for result data
      HINT: If you want to discard the results of a SELECT, use PERFORM instead.
      CONTEXT: PL/pgSQL function "getmarketstats" line 2 at SQL statement

    I don't understand this output. I've tried using PERFORM too, but I thought one only had to use that if the function doesn't return anything.
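
    In plpgsql a bare SELECT must have a destination; a set-returning function hands rows back with RETURN QUERY (Postgres 8.3+). A sketch that also declares the output columns up front via RETURNS TABLE (Postgres 8.4+; it assumes Amount is numeric - adjust the declared types otherwise), so no column definition list is needed at the call site:

      CREATE OR REPLACE FUNCTION GetMarketStats()
      RETURNS TABLE (Metric text, MarketAPlus24 numeric, MarketAPlus36 numeric,
                     MarketA24 numeric, MarketA36 numeric,
                     MarketB24 numeric, MarketB36 numeric) AS $$
      BEGIN
          RETURN QUERY
          SELECT 'R approved offer'::text,
                 SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 24 THEN LO.Amount ELSE 0 END),
                 SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 36 THEN LO.Amount ELSE 0 END),
                 SUM(CASE WHEN M.MarketName = 'A'  AND M.Term = 24 THEN LO.Amount ELSE 0 END),
                 SUM(CASE WHEN M.MarketName = 'A'  AND M.Term = 36 THEN LO.Amount ELSE 0 END),
                 SUM(CASE WHEN M.MarketName = 'B'  AND M.Term = 24 THEN LO.Amount ELSE 0 END),
                 SUM(CASE WHEN M.MarketName = 'B'  AND M.Term = 36 THEN LO.Amount ELSE 0 END)
          FROM "Market" M
          JOIN "Listing" L ON L.MarketID = M.MarketID
          JOIN "ListingOffer" LO ON L.ListingID = LO.ListingID;
      END;
      $$ LANGUAGE plpgsql;

      -- Called without a column definition list:
      SELECT * FROM GetMarketStats();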

    Read the article

  • Why would I get a duplicate key error when updating a row?

    - by hdx
    I'm using postgres and I'm getting the duplicate key error when updating a row:

      cursor.execute("UPDATE jiveuser SET userenabled = 0 WHERE userid = %s" % str(userId))

      psycopg2.IntegrityError: duplicate key value violates unique constraint "jiveuser_pk"

    I don't understand how updating a row can cause this error... any help will be much appreciated.
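
    An UPDATE that only flips userenabled cannot violate the table's primary key by itself, so something is usually rewriting the statement - typically a rule or trigger that turns the UPDATE into an INSERT somewhere. A quick way to check (a sketch of the catalog queries):

      -- Rules defined on the table:
      SELECT rulename, definition FROM pg_rules WHERE tablename = 'jiveuser';

      -- Triggers attached to the table:
      SELECT tgname FROM pg_trigger WHERE tgrelid = 'jiveuser'::regclass;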

    Read the article

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to test whether ZTable uses fetch techniques or not. We may in the future migrate our lesser system to PGSQL, and it currently uses "Table" components (as in the BDE, but with an SQL-like server). These tables use real cursors, a "window" of N records, so lookup is very fast: the Locate/Lookup is started on the server, and only these N records are refreshed, no matter how many records are in the lookup table. PGSQL uses fetch techniques as far as I know, and I tested it with a table (id int, name varchar(100)) and 1 million records. (I also tried this with MySQL.) The adapter is Zeos. Columns: ID, seconds to find, allocated memory in bytes on the client.

      MySQL:
      500000    2.761    113 196 344
      1000000   3.214    225 471 232
      313800    0.437    225 471 232
      328066    0.468    225 471 232
      276374    0.390    225 471 232
      905984    1.264    225 471 232
      260253    0.359    225 471 232

      PGSQL:
      500000    3.042    113 188 184
      1000000   3.744    225 463 064
      313800    0.436    225 463 064
      328066    0.452    225 463 064
      276374    0.375    225 463 064
      905984    1.295    225 463 064
      260253    0.359    225 463 064
      142023    0.203    225 463 064

    As you can see the records are fetched locally, which causes the 225 MB usage, and searches are a little slow depending on where the record we must find is. I want to ask a few more things:

    a.) Does PGDAC have some technique that lets us use lookups without paying for the fetch in memory and seconds?
    b.) Or can the PG ODBC driver help with this problem via ADO (as I know, ADO can use server-side cursors)?
    c.) Does anybody have experience with lookup tables and performance? Is this a critical question or not? (With client memory usage too.)
    d.) If there is no chance of avoiding fetch hell with lookups, what can we do? Server-side joins, and unique code for lookup field changing without a real lookup?

    Thanks for your help: dd
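
    On question b.): Postgres itself supports server-side cursors, so any access layer that exposes them can keep the N-record "window" behaviour without fetching the whole table to the client. A sketch of the underlying SQL (big_table is hypothetical):

      BEGIN;
      DECLARE lookup_cur SCROLL CURSOR FOR
          SELECT id, name FROM big_table ORDER BY name;
      FETCH 50 FROM lookup_cur;        -- first window of 50 rows
      MOVE ABSOLUTE 0 IN lookup_cur;   -- rewind to the start
      FETCH 50 FROM lookup_cur;        -- re-read the window
      CLOSE lookup_cur;
      COMMIT;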

    Read the article
