Search Results

Search found 1505 results on 61 pages for 'postgresql 9 2'.

  • what is the difference between $$ and $foo$?

    - by veilig
    I'm not even sure what this is called, but I'm trying to learn the difference between writing a function like this in plpgsql:

        CREATE OR REPLACE FUNCTION foo() RETURNS TRIGGER AS $$
        ....
        $$ LANGUAGE plpgsql;

    versus:

        CREATE OR REPLACE FUNCTION foo() RETURNS TRIGGER AS $foo$
        ....
        $foo$ LANGUAGE plpgsql;

    Is there a difference between using $$ and $foo$? Why would someone choose one over the other? Perhaps I've just missed some documentation explaining the difference. If someone could enlighten me, I'd really appreciate it.
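
    For context, the tag matters once the function body itself contains dollar quotes: an outer $$ would terminate at the first inner $$, while a distinct tag such as $foo$ nests safely. A minimal sketch (the function name is hypothetical):

        CREATE OR REPLACE FUNCTION quoted_demo() RETURNS text AS $foo$
        BEGIN
            -- The inner $$...$$ string literal would prematurely end the
            -- function body if the outer quoting were also $$.
            RETURN $$a 'quoted' string$$;
        END;
        $foo$ LANGUAGE plpgsql;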

  • With SQLAlchemy, how to dynamically bind to a database engine on a per-request basis

    - by Peter Hansen
    I have a Pylons-based web application which connects via SQLAlchemy (v0.5) to a Postgres database. For security, rather than follow the typical pattern of simple web apps (as seen in just about all tutorials), I'm not using a generic Postgres user (e.g. "webapp") but am requiring that users enter their own Postgres userid and password, and am using that to establish the connection. That means we get the full benefit of Postgres security. Complicating things still further, there are two separate databases to connect to. Although they're currently in the same Postgres cluster, they need to be able to move to separate hosts at a later date. We're using SQLAlchemy's declarative package, though I can't see that this has any bearing on the matter. Most examples of SQLAlchemy show trivial approaches such as setting up the Metadata once, at application startup, with a generic database userid and password, which is used throughout the web application. This is usually done with Metadata.bind = create_engine(), sometimes even at module level in the database model files. My question is: how can we defer establishing the connections until the user has logged in, and then (of course) re-use those connections, or re-establish them using the same credentials, for each subsequent request? We have this working -- we think -- but I'm not only uncertain of its safety, I also think it looks incredibly heavyweight for the situation. Inside the __call__ method of the BaseController we retrieve the userid and password from the web session, call create_engine() once for each database, then call a routine which calls Session.bind_mapper() repeatedly, once for each table that may be referenced on each of those connections, even though any given request usually references only one or two tables. It looks something like this:

        # in lib/base.py, on the BaseController class
        def __call__(self, environ, start_response):
            # note: web session contains {'username': XXX, 'password': YYY}
            url1 = 'postgres://%(username)s:%(password)s@server1/finance' % session
            url2 = 'postgres://%(username)s:%(password)s@server2/staff' % session
            finance = create_engine(url1)
            staff = create_engine(url2)
            db_configure(staff, finance)  # see below
            # ... etc

        # in another file
        Session = scoped_session(sessionmaker())

        def db_configure(staff, finance):
            s = Session()
            from db.finance import Employee, Customer, Invoice
            for c in [Employee, Customer, Invoice]:
                s.bind_mapper(c, finance)
            from db.staff import Project, Hour
            for c in [Project, Hour]:
                s.bind_mapper(c, staff)
            s.close()  # prevents leaking connections between sessions?

    So the create_engine() calls occur on every request... I can see that being needed, and the connection pool probably caches them and does things sensibly. But calling Session.bind_mapper() once for each table, on every request? It seems like there has to be a better way. Obviously, since a desire for strong security underlies all this, we don't want any chance that a connection established for a high-security user will inadvertently be used in a later request by a low-security user.

  • django: can't adapt error when importing data from postgres database

    - by Oleg Tarasenko
    I'm getting a strange error while installing a fixture from dumped data. I am using psycopg2 and django 1.1.1.

        silver:probsbox oleg$ python manage.py loaddata /Users/oleg/probs.json
        Installing json fixture '/Users/oleg/probs' from '/Users/oleg/probs'.
        Problem installing fixture '/Users/oleg/probs.json': Traceback (most recent call last):
          File "/opt/local/lib/python2.5/site-packages/django/core/management/commands/loaddata.py", line 153, in handle
            obj.save()
          File "/opt/local/lib/python2.5/site-packages/django/core/serializers/base.py", line 163, in save
            models.Model.save_base(self.object, raw=True)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/base.py", line 495, in save_base
            result = manager._insert(values, return_id=update_pk)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/manager.py", line 177, in _insert
            return insert_query(self.model, values, **kwargs)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/query.py", line 1087, in insert_query
            return query.execute_sql(return_id)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/sql/subqueries.py", line 320, in execute_sql
            cursor = super(InsertQuery, self).execute_sql(None)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/sql/query.py", line 2369, in execute_sql
            cursor.execute(sql, params)
          File "/opt/local/lib/python2.5/site-packages/django/db/backends/util.py", line 19, in execute
            return self.cursor.execute(sql, params)
        ProgrammingError: can't adapt

    First I checked similar issues on the internet. This ticket seemed very related, as my data has many non-ASCII symbols: http://code.djangoproject.com/ticket/5996. But I've checked my django installation and it's OK there. Could you advise what is wrong?

  • Regular expression replace in PL/pgSQL

    - by dreamlax
    If I have the following input (excluding quotes, and containing multiple adjacent spaces):

        "The ancestral  territorial   imperatives of the  trumpeter swan"

    how can I collapse all multiple spaces to a single space, so that the input is transformed to:

        "The ancestral territorial imperatives of the trumpeter swan"

    This is going to be used in a trigger function on insert/update (which already trims leading/trailing spaces). Currently it raises an exception if the input contains multiple adjacent spaces, but I would rather it simply transformed the input into something valid before inserting. What is the best approach? I can't seem to find a regular-expression replace function for PL/pgSQL. There is a text replace function, but that will only collapse at most two spaces down to one (meaning three consecutive spaces collapse to two), and calling it over and over is not ideal.
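
    PostgreSQL does ship a regular-expression replace function, regexp_replace() (available since 8.1), and it is callable from PL/pgSQL like any other SQL function. A sketch of the trigger-function core, with hypothetical function and column names:

        CREATE OR REPLACE FUNCTION collapse_spaces() RETURNS trigger AS $$
        BEGIN
            -- The 'g' flag makes the replacement global rather than
            -- first-match-only, so any run of spaces becomes one space.
            NEW.title := regexp_replace(NEW.title, ' +', ' ', 'g');
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;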

  • how to move a postgres schema via file operations?

    - by Jerome WAGNER
    Hello, I have a schema schema1 in a postgres database A. I want to have a duplicate of this schema (model + data) in database B under the name schema2. What are my options? I currently:

        * dump schema1 from database A
        * sed my way through schema renaming in the dump: schema1 becomes schema2
        * restore schema2 in database B

    but I am looking for a more efficient procedure, for instance via direct file operations on postgres binary files. Thanks for your help, Jerome Wagner
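
    Moving individual schemas by copying binary files out of the data directory is not safe: Postgres storage is tied to the whole cluster (transaction IDs, system catalogs, OIDs), so dump and restore remains the supported route. One way to drop the sed step (a sketch; it assumes database B has no schema already named schema1) is to restore the dump unchanged and rename the schema in place:

        -- after restoring the schema1 dump into database B:
        ALTER SCHEMA schema1 RENAME TO schema2;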

  • Run SQL script from Ruby

    - by Chaker Nakhli
    Hi all, using DBI::DatabaseHandle#execute or DBI::DatabaseHandle#prepare it's not possible to run an SQL script (with multiple SQL statements). It fails with the following error:

        ERROR: cannot insert multiple commands into a prepared statement

    I tried the "unprepared" way using DBI::DatabaseHandle#do (the doc says it "goes straight to the DBD's implementation"), but it keeps throwing the same error. Code snippet:

        require 'dbd/pg'
        require 'dbi'

        DBI.connect("dbi:pg:database=dbname", db_user, db_password, db_params) do |dbh|
          schema = IO::read(schema_file)
          dbh.do(schema)
        end

    I'm using ruby 1.8.6 (2007-09-24 patchlevel 111) [i386-mswin32], dbi-0.4.3, dbd-pg-0.3.9, pg-0.9.0-x86-mswin32. Thank you!

  • Postgres varchar field between

    - by Anton
    I have an addresses table with a ZIP code field of type VARCHAR. I need to select all addresses from this table using a ZIP code range. If I use the following code:

        select * from address where cast(zip as bigint) between 90210 and 90220

    I get an error on rows where the ZIP code can't be cast as bigint. How can I resolve this issue?
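
    One approach (a sketch) is to guard the cast behind a CASE expression, which, unlike a pair of AND-ed conditions, guarantees the numeric check runs before the cast is attempted:

        SELECT *
        FROM address
        WHERE CASE WHEN zip ~ '^[0-9]+$'
                   THEN zip::bigint BETWEEN 90210 AND 90220
                   ELSE false
              END;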

  • Postgres query optimization

    - by hdx
    Hey guys, trying to optimize this query to solve a duplicate-user issue:

        SELECT userid, 'ismaster' AS name, 'false' AS propvalue
        FROM user
        WHERE userid NOT IN (SELECT userid FROM userprop WHERE name = 'ismaster');

    The problem is that the select after the NOT IN returns 120,000 records and it's taking forever. Any suggestions?
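
    A common rewrite (a sketch; "user" is quoted because it is a reserved word) is NOT EXISTS, which the planner can execute as an anti-join and which, unlike NOT IN, is not derailed by NULL userids in the subquery:

        SELECT u.userid, 'ismaster' AS name, 'false' AS propvalue
        FROM "user" u
        WHERE NOT EXISTS (
            SELECT 1
            FROM userprop p
            WHERE p.userid = u.userid
              AND p.name = 'ismaster'
        );

    An index on userprop (userid, name) would let each outer row be checked with a single index probe.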

  • JPA - Setting entity class property from calculated column?

    - by growse
    I'm just getting to grips with JPA in a simple Java web app running on Glassfish 3 (persistence provider is EclipseLink). So far I'm really liking it (bugs in the NetBeans/Glassfish interaction aside), but there's something I want to do that I'm not sure how to do. I've got an entity class (Article) that's mapped to a database table (article). I'm trying to do a query on the database that returns a calculated column, but I can't figure out how to set up a property of the Article class so that the property gets filled by the column value when I call the query. If I do a regular "select id,title,body from article" query, I get a list of Article objects with the id, title and body properties filled. This works fine. However, consider the below (this is a fulltext search using tsearch2 on Postgres -- it's a db-specific function, so I'm using a NativeQuery):

        Query q = em.createNativeQuery(
            "select id,title,shorttitle,datestamp,body,true as published, " +
            "ts_headline(body,q,'ShortWord=0') as headline, type " +
            "from articles,to_tsquery('english',?) as q " +
            "where idxfti @@ q order by ts_rank(idxfti,q) desc",
            Article.class);

    You can see I'm fetching a calculated column, called headline. How do I add a headline property to my Article class so that it gets populated by this query? So far I've tried setting it to be @Transient, but that just ends up with it being null all the time.

  • Postgres update rule returning number of rows affected

    - by Lithium
    I have a view which is a union of two separate tables (say table _A and table _B). I need to be able to update a row in the view, and it seems the way to do this is through a 'view rule'. All entries in the view have distinct ids, so an id that exists in table _A won't exist in table _B. I created the following rule:

        CREATE OR REPLACE RULE view_update AS
        ON UPDATE TO viewtable DO INSTEAD (
            UPDATE _A SET foo = false WHERE old.id = _A.id;
            UPDATE _B SET foo = false WHERE old.id = _B.id;
        );

    If I update a row from table _B it returns the correct number of rows affected (1). However, if I update a row from table _A it returns 0 rows affected even though the data was changed. If I swap the order of the UPDATEs then the same thing happens, but in reverse. How can I solve this problem so that it returns the correct number of rows affected? Thanks.
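
    The reported count comes from the rule's substituted statements (effectively the last one), which is why it tracks the order of the UPDATEs. On PostgreSQL 9.1 or later, an INSTEAD OF trigger fires once per view row and reports the row count correctly; a sketch, assuming only foo needs updating through the view:

        CREATE OR REPLACE FUNCTION viewtable_update() RETURNS trigger AS $$
        BEGIN
            UPDATE _A SET foo = NEW.foo WHERE _A.id = OLD.id;
            IF NOT FOUND THEN
                -- the id lives in _B instead
                UPDATE _B SET foo = NEW.foo WHERE _B.id = OLD.id;
            END IF;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER viewtable_update
            INSTEAD OF UPDATE ON viewtable
            FOR EACH ROW EXECUTE PROCEDURE viewtable_update();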

  • Django + Postgres: How to specify sequence for a field

    - by Giovanni Di Milia
    I have this model in django:

        class JournalsGeneral(models.Model):
            jid = models.AutoField(primary_key=True)
            code = models.CharField("Code", max_length=50)
            name = models.CharField("Name", max_length=2000)
            url = models.URLField("Journal Web Site", max_length=2000, blank=True)
            online = models.BooleanField("Online?")
            active = models.BooleanField("Active?")

            class Meta:
                db_table = u'journals_general'
                verbose_name = "Journal General"
                ordering = ['code']

            def __unicode__(self):
                return self.name

    My problem is that in the DB (Postgres) the sequence connected to jid is not named journals_general_jid_seq as Django expects, but has a different name. Is there a way to specify which sequence Django should use for an AutoField? I was not able to find an answer in the documentation.
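
    Short of patching Django, one database-side workaround (a sketch; old_seq_name stands in for the sequence's actual current name) is to rename the sequence to the <table>_<column>_seq form and re-point the column default at it:

        ALTER SEQUENCE old_seq_name RENAME TO journals_general_jid_seq;
        ALTER TABLE journals_general
            ALTER COLUMN jid SET DEFAULT nextval('journals_general_jid_seq');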

  • sql query - how to count values in a row separately?

    - by n00b0101
    I have a table that looks something like this:

        id | firstperson  | secondperson
        1  | jane doe     |
        2  | bob smith    | margie smith
        3  | master shifu | madame shifu
        4  | max maxwell  |

    I'm trying to count all of the firstpersons plus all of the secondpersons, counting a secondperson only if that field isn't blank. Is there a way to do that?
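
    A sketch, with people as a stand-in for the real table name: COUNT(expr) only counts non-NULL values, and NULLIF(col, '') turns empty strings into NULL, so blanks drop out of the count too:

        SELECT COUNT(NULLIF(firstperson, ''))
             + COUNT(NULLIF(secondperson, '')) AS total_people
        FROM people;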

  • Transitive SQL query on the same table

    - by MiKu
    Hey. Consider the following table and data...

        in_timestamp | out_timestamp | name  | in_id | out_id | in_server      | out_server     | status
        timestamp1   | timestamp2    | data1 | id1   | id2    | others-server1 | my-server1     | success
        timestamp2   | timestamp3    | data1 | id2   | id3    | my-server1     | my-server2     | success
        timestamp3   | timestamp4    | data1 | id3   | id4    | my-server2     | my-server3     | success
        timestamp4   | timestamp5    | data1 | id4   | id5    | my-server3     | others-server2 | success

    The above data represents the log of an execution flow of some data across servers; e.g. some data flowed from 'others-server1' to a bunch of 'my-servers' and finally to the destined 'others-server2'. Questions:

    1) I need to present this log to a client who doesn't need to know anything about the bunch of 'my-servers'. All I am supposed to give is the timestamp at which the data entered my infrastructure and when it left, drilling down to the following info:

        in_timestamp (of 'others-server1' to 'my-server1')
        out_timestamp (of 'my-server3' to 'others-server2')
        name
        status

    I want to write SQL for this. Can someone help? NOTE: there might not be 3 'my-servers' all the time; it differs from situation to situation. E.g. there might be 4 'my-servers' involved for, say, data2!

    2) Are there any alternatives to plain SQL? I mean stored procs, etc.?

    3) Optimizations? (The records are huge in number -- as of now, around 5 million a day -- and we are supposed to show records that are up to a week old.) In advance, THANKS FOR THE HELP! :)
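
    A sketch of one shape the query could take, under loud assumptions: the table is named flow_log, a name identifies a single flow, and every server outside the infrastructure matches 'others-%'. It joins each flow's entry row (arriving from outside) to its exit row (leaving to outside):

        SELECT first_hop.in_timestamp,
               last_hop.out_timestamp,
               first_hop.name,
               last_hop.status
        FROM flow_log first_hop
        JOIN flow_log last_hop ON last_hop.name = first_hop.name
        WHERE first_hop.in_server LIKE 'others-%'
          AND last_hop.out_server LIKE 'others-%';

    If a name can recur across flows, the hops would instead have to be chained through in_id/out_id (e.g. with a recursive CTE). For the data volumes described, partial indexes matching the two LIKE predicates plus an index on name would keep the join cheap.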

  • Case insensitive duplicates SQL

    - by hdx
    So I have a users table where user.username has many duplicates, like:

        username and Username and useRnAme
        john and John and jOhn

    That was a bug, and each of these groups should have been only one record. I'm trying to come up with an SQL query that lists all of these cases ordered by their creation date, so ideally the result should be something like this:

        username jan01
        useRnAme jan02
        Username jan03
        john     feb01
        John     feb02
        jOhn     feb03

    Any suggestions will be much appreciated.
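
    A sketch, assuming a created_at column holds the creation date: group case-insensitively to find the duplicated names, then pull every row belonging to one of those groups:

        SELECT u.username, u.created_at
        FROM users u
        WHERE lower(u.username) IN (
            SELECT lower(username)
            FROM users
            GROUP BY lower(username)
            HAVING COUNT(*) > 1
        )
        ORDER BY lower(u.username), u.created_at;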

  • Postgres: Using function variable names in pgsql function

    - by Peter
    Hi, I have written a pgsql function along the lines of what's shown below. How can I get rid of the $1, $2, etc. and replace them with the real argument names to make the function code more readable? Regards, Peter

        CREATE OR REPLACE FUNCTION InsertUser (
            UserID    UUID,
            FirstName CHAR(10),
            Surname   VARCHAR(75),
            Email     VARCHAR(75)
        ) RETURNS void AS $$
            INSERT INTO "User" (userid, firstname, surname, email)
            VALUES ($1, $2, $3, $4)
        $$ LANGUAGE SQL;
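
    From PostgreSQL 9.2 on, SQL-language functions can refer to their parameters by name (earlier versions only allow $n in LANGUAGE SQL, though plpgsql has supported named parameters for longer). A sketch; the references are qualified with the function name because the parameter names here collide with column names:

        CREATE OR REPLACE FUNCTION InsertUser (
            UserID    UUID,
            FirstName CHAR(10),
            Surname   VARCHAR(75),
            Email     VARCHAR(75)
        ) RETURNS void AS $$
            INSERT INTO "User" (userid, firstname, surname, email)
            VALUES (InsertUser.UserID, InsertUser.FirstName,
                    InsertUser.Surname, InsertUser.Email)
        $$ LANGUAGE SQL;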

  • Polymorphism in SQL database tables?

    - by Patrick Daryll Glandien
    I currently have multiple tables in my database which share the same 'basic fields':

        name        character varying(100)
        description text
        url         character varying(255)

    But I have multiple specializations of that basic table: for example, tv_series has the fields season, episode and airing, while the movies table has release_date, budget, etc. Now at first this is not a problem, but I want to create a second table, called linkgroups, with a foreign key to these specialized tables. That means I would somehow have to normalize it within itself. One way of solving this I have heard of is to normalize it with a key-value-pair table, but I do not like that idea since it is kind of a 'database-within-a-database' scheme: I have no way to require certain keys/fields or a particular type, and it would be a huge pain to fetch and order the data later. So I am now looking for a way to 'share' a primary key between multiple tables, or even better: a way to normalize it by having a general table and multiple specialized tables.
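
    One common shape for this (a sketch with hypothetical names, following the question's fields): a parent items table owns the primary key, and each specialized table's primary key doubles as a foreign key to it, so linkgroups can reference items.id regardless of the concrete type:

        CREATE TABLE items (
            id          serial PRIMARY KEY,
            name        varchar(100),
            description text,
            url         varchar(255)
        );

        CREATE TABLE tv_series (
            id      integer PRIMARY KEY REFERENCES items(id),
            season  integer,
            episode integer,
            airing  timestamp
        );

        CREATE TABLE movies (
            id           integer PRIMARY KEY REFERENCES items(id),
            release_date date,
            budget       numeric
        );

        CREATE TABLE linkgroups (
            id      serial PRIMARY KEY,
            item_id integer NOT NULL REFERENCES items(id)
        );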

  • Django query: Count and Group BY

    - by Tyler Lane
    I have a query that I'm trying to figure out the "django" way of doing. I want to take the last 100 calls from Call, which is easy:

        calls = Call.objects.all().order_by('-call_time')[:100]

    However, for the next part I can't find a way to do it via django's ORM: I want to get a list of the call_types and the number of calls each one has WITHIN that previous queryset. Normally I would do a query like this:

        SELECT COUNT(id), calltype FROM call
        WHERE id IN (SELECT id FROM call ORDER BY call_time DESC LIMIT 100)
        GROUP BY calltype;

    I can't seem to find the django way of doing this particular query. Here are my two models:

        class Call(models.Model):
            call_time = models.DateTimeField("Call Time", auto_now=False, auto_now_add=False)
            description = models.CharField(max_length=150)
            response = models.CharField(max_length=50)
            event_num = models.CharField(max_length=20)
            report_num = models.CharField(max_length=20)
            address = models.CharField(max_length=150)
            zip_code = models.CharField(max_length=10)
            geom = models.PointField(srid=4326)
            calltype = models.ForeignKey(CallType)
            objects = models.GeoManager()

        class CallType(models.Model):
            name = models.CharField(max_length=50)
            description = models.CharField(max_length=150)
            active = models.BooleanField()
            time_init = models.DateTimeField("Date Added", auto_now=False, auto_now_add=True)
            objects = models.Manager()

  • Integer comparison as string

    - by J Pollack
    Hi, I have an integer column and I want to find numbers that start with specific digits. For example, if I look for '123', these match:

        1234567
        123456
        1234

    and these do not:

        23456
        112345
        0123445

    Is the only way to handle this task converting the integers into strings and doing string comparison? I am currently using Postgres regexp_replace(text, pattern, replacement) on the numbers, which is a very slow and inefficient way to do it. I have a large amount of data to handle this way, so I am looking for the most economical approach.
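
    A sketch of two approaches, assuming a table nums(n integer): the text cast is the simple one; the range version can use a plain index on n because the column is not wrapped in a cast or function:

        -- Simple: cast to text (a plain index on n cannot be used).
        SELECT n FROM nums WHERE n::text LIKE '123%';

        -- Index-friendly: one range per possible digit count.
        SELECT n FROM nums
        WHERE (n >= 123     AND n < 124)
           OR (n >= 1230    AND n < 1240)
           OR (n >= 12300   AND n < 12400)
           OR (n >= 123000  AND n < 124000)
           OR (n >= 1230000 AND n < 1240000);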

  • Acquiring Table Lock in Database - Interview Question

    - by harigm
    One of my interview questions: if multiple users across the world are accessing an application which uses a table with an auto-increment field as its primary key, how can you prevent two users from getting the same primary key value? My answer was that I would obtain a lock on the table and make the other users wait until the first user is done with the primary key. But then the questions become: how do you acquire the table lock programmatically and implement this? And if 1,000 users hit the application every minute and you explicitly hold a lock on the table, won't the application become slower? How do you manage that? Please suggest possible answers to the above question.
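
    For reference, explicit locking looks like the sketch below (accounts is a hypothetical table), but with a sequence-backed key it is unnecessary: nextval() hands out distinct values to concurrent sessions without blocking anyone, which is exactly why auto-increment columns scale:

        -- The heavy-handed way: serialize writers explicitly.
        BEGIN;
        LOCK TABLE accounts IN EXCLUSIVE MODE;
        INSERT INTO accounts (name) VALUES ('alice');
        COMMIT;

        -- The normal way: let the sequence generate the key; no lock needed.
        INSERT INTO accounts (name) VALUES ('bob');  -- id defaults to nextval()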

  • How to call Postgres function returning SETOF record?

    - by Peter
    I have written the following function:

        -- Gets stats for all markets
        CREATE OR REPLACE FUNCTION GetMarketStats()
        RETURNS SETOF record AS $$
        BEGIN
            SELECT 'R approved offer' AS Metric,
                SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 24 THEN LO.Amount ELSE 0 END) AS MarketAPlus24,
                SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 36 THEN LO.Amount ELSE 0 END) AS MarketAPlus36,
                SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 24 THEN LO.Amount ELSE 0 END) AS MarketA24,
                SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 36 THEN LO.Amount ELSE 0 END) AS MarketA36,
                SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 24 THEN LO.Amount ELSE 0 END) AS MarketB24,
                SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 36 THEN LO.Amount ELSE 0 END) AS MarketB36
            FROM "Market" M
            INNER JOIN "Listing" L ON L.MarketID = M.MarketID
            INNER JOIN "ListingOffer" LO ON L.ListingID = LO.ListingID;
        END
        $$ LANGUAGE plpgsql;

    And when trying to call it like this...

        select * from GetMarketStats() AS (Metric VARCHAR(50), MarketAPlus24 INT, MarketAPlus36 INT, MarketA24 INT, MarketA36 INT, MarketB24 INT, MarketB36 INT);

    I get an error:

        ERROR:  query has no destination for result data
        HINT:  If you want to discard the results of a SELECT, use PERFORM instead.
        CONTEXT:  PL/pgSQL function "getmarketstats" line 2 at SQL statement

    I don't understand this output. I've tried using PERFORM too, but I thought one only had to use that if the function doesn't return anything.
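
    In plpgsql a bare SELECT must deliver its result somewhere (e.g. SELECT ... INTO a variable); PERFORM is only for discarding results. For a SETOF function the usual fix is to hand the query's rows back with RETURN QUERY, available since PostgreSQL 8.3. A sketch of the change, with the unchanged columns elided:

        CREATE OR REPLACE FUNCTION GetMarketStats()
        RETURNS SETOF record AS $$
        BEGIN
            RETURN QUERY
            SELECT 'R approved offer'::VARCHAR(50) AS Metric,
                SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 24 THEN LO.Amount ELSE 0 END) AS MarketAPlus24
                -- ... the five remaining SUM(CASE ...) columns exactly as before ...
            FROM "Market" M
            INNER JOIN "Listing" L ON L.MarketID = M.MarketID
            INNER JOIN "ListingOffer" LO ON L.ListingID = LO.ListingID;
        END
        $$ LANGUAGE plpgsql;

    Each output column may need an explicit cast (as with the VARCHAR(50) above) so its type matches the column-definition list given at the call site.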
