Search Results

Search found 1505 results on 61 pages for 'postgresql 8 4'.

Page 40/61 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • Postgres vs Firebird

    - by Tedi
    I'm looking to use either Firebird or Postgres in my next development project ... largely because both are available under a BSD-like license. I found a great comparison of the two databases at http://www.amsoftwaredesign.com/pg_vs_fb But this comparison is a good 2+ years old, and both databases have come a long way since. Does anyone mind updating the comparison table to be relevant for the current versions of both Firebird and Postgres ... or have a link to a site that does a good recent comparison between the two databases?

    Read the article

  • Determining the popularity of a video with ratings and views

    - by user295825
    I am about to embark on a new project - a video website. Users will be able to register, and vote on videos by clicking "like" or "dislike", or something to that effect. In any event, it will be a 2-option voting system, not a 5-star system. Every X number of days, I will be generating a "chart" of the most popular videos. So my question is: how should I determine the popularity of a given video? If I went the route of tallying up the videos with the most views, this could have the effect of exceptionally bad videos making it to the top of the charts (just because they're so bad). If I go the route of a scoring system based on the amount of "like" and "dislike" votes (e.g. 100 like votes and 50 dislike votes equals a score of 2), videos with few views could appear at the top of the charts. So, what I need to do is a combination of the two. Barring, of course, spammy views and votes. What are your thoughts on the subject?
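
    One hedged way to blend votes and views on the SQL side (a sketch only; the video_stats table and its column names are assumptions, and the +1 terms guard against division by zero on brand-new videos):

        SELECT video_id,
               (likes - dislikes) / (views + 1.0)   -- vote quality, damped for low view counts
               * ln(views + 1)                      -- reward traffic, but only logarithmically
               AS popularity
        FROM video_stats
        ORDER BY popularity DESC
        LIMIT 20;

    The log term keeps a heavily viewed but disliked video from topping the chart, while the vote ratio keeps a barely viewed video from doing the same.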

    Read the article

  • Is this postgres function cost efficient or still have to clean

    - by kiranking
    There are two tables in a postgres db: english_all and english_glob. The first table contains words like international, confidential, booting, cooler ... etc. I have written a function to get the words from english_all, then perform a for loop for each word to get the word list which is not yet inserted in the english_glob table. The word list is like: I In Int Inte Inter .. b bo boo boot .. c co coo cool etc. For some reason zwnj (zero-width non-joiner) is added during insertion to the english_all table, but in the function I am removing that character with regexp_replace. The postgres function for_loop_test takes two parameters, min and max, based on which I am selecting words from the english_all table. The function code is like:

        DECLARE
          inMinLength ALIAS FOR $1;
          inMaxLength ALIAS FOR $2;
          mviews RECORD;
          outenglishListRow english_word_list; -- custom data type (eng_id, english_text)
        BEGIN
          FOR mviews IN SELECT id, english_all_text FROM english_all
                        WHERE wlength BETWEEN inMinLength AND inMaxLength
                        ORDER BY english_all_text LIMIT 30 LOOP
            FOR i IN 1..char_length(regexp_replace(mviews.english_all_text,'(?)$','')) LOOP
              FOR outenglishListRow IN
                SELECT DISTINCT ON (regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$',''))
                       mviews.id,
                       regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','')
                WHERE regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','')
                      NOT IN (SELECT english_glob.english_text FROM english_glob
                              WHERE i = english_glob.wlength)
                ORDER BY regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','') LOOP
                RETURN NEXT outenglishListRow;
              END LOOP;
            END LOOP;
          END LOOP;
        END;

    Once I get the word list I will insert it into the other table, english_glob. My question is: is there anything I can add to or remove from the function to make it more efficient?

    Edit: Let's assume the english_all table has words like footer, settle, question, overflow, database, kingdom. If inMinLength = 5 and inMaxLength = 7, then in the outer loop footer, settle and kingdom will be selected. For those 3 words the inner two loops apply, producing words like f, fo, foo, foot, foote, footer, s, se, set, sett, settl ... etc. In the final process, the words which are bold will be entered into english_glob with another parameter like 1, to denote a proper word, stored in another field of the english_glob table. The remaining words will be stored with parameter 0, because on the next call the words already saved in the database should not be fetched again.

    Edit 2: This is the complete code:

        CREATE TABLE english_all (
          id serial NOT NULL,
          english_all_text text NOT NULL,
          wlength integer NOT NULL,
          CONSTRAINT english_all PRIMARY KEY (id),
          CONSTRAINT english_all_kan_text_uq_id UNIQUE (english_all_text)
        );

        CREATE TABLE english_glob (
          id serial NOT NULL,
          english_text text NOT NULL,
          is_prop integer DEFAULT 1,
          CONSTRAINT english_glob PRIMARY KEY (id),
          CONSTRAINT english_glob_kan_text_uq_id UNIQUE (english_text)
        );

        INSERT INTO english_all (english_all_text, wlength)
        VALUES ('ant', 3), ('forget', 6), ('forgive', 7);

    On a function call with parameters 3 and 6 the following rows should be fetched: a an ant f fo for forg forge forget. Next is an insert into the other table based on the above rows:

        INSERT INTO english_glob (english_text, is_prop)
        VALUES ('a',1), ('an',1), ('ant',1), ('f',0), ('fo',0), ('for',1),
               ('forg',0), ('forge',1), ('forget',1);

    On the next function call with parameters 3 and 7 the following rows should be fetched (because f, fo, for and forg are all already entered in the english_glob table): forgi forgiv forgive
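
    A hedged, set-based alternative (a sketch only, assuming the schema above and a maximum word length of 100; the zwnj regexp_replace is left out for brevity). generate_series stands in for the two inner loops, and NOT EXISTS replaces the per-prefix NOT IN subquery, so the whole thing runs as one statement the planner can optimize:

        SELECT DISTINCT a.id,
               substring(a.english_all_text from 1 for s.i) AS prefix
        FROM english_all AS a
        JOIN generate_series(1, 100) AS s(i)
          ON s.i <= char_length(a.english_all_text)
        WHERE a.wlength BETWEEN 5 AND 7      -- inMinLength / inMaxLength
          AND NOT EXISTS (
                SELECT 1 FROM english_glob g
                WHERE g.english_text = substring(a.english_all_text from 1 for s.i))
        ORDER BY prefix;

    An index on english_glob.english_text (the UNIQUE constraint above already provides one) is what makes the anti-join cheap.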

    Read the article

  • expand a varchar column very slowly , why?

    - by francs
    Hi, we need to modify a column of a big product table. Usually normal DDL statements execute fast, but the DDL statement below takes about 10 minutes, and I would like to know the reason. I just want to expand a varchar column. The following are the details.

    Table size:

        wapreader_log=> select pg_size_pretty(pg_relation_size('log_foot_mark'));
         pg_size_pretty
        ----------------
         5441 MB
        (1 row)

    Table DDL:

        wapreader_log=> \d log_foot_mark
                 Table "wapreader_log.log_foot_mark"
           Column    |            Type             | Modifiers
        -------------+-----------------------------+-----------
         id          | integer                     | not null
         create_time | timestamp without time zone |
         sky_id      | integer                     |
         url         | character varying(1000)     |
         refer_url   | character varying(1000)     |
         source      | character varying(64)       |
         users       | character varying(64)       |
         userm       | character varying(64)       |
         usert       | character varying(64)       |
         ip          | character varying(32)       |
         module      | character varying(64)       |
         resource_id | character varying(100)      |
         user_agent  | character varying(128)      |
        Indexes:
            "pk_log_footmark" PRIMARY KEY, btree (id)

    Alter column:

        wapreader_log=> \timing
        Timing is on.
        wapreader_log=> ALTER TABLE wapreader_log.log_foot_mark
                          ALTER COLUMN user_agent TYPE character varying(256);
        ALTER TABLE
        Time: 603504.835 ms
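
    For what it's worth: before PostgreSQL 9.2, any ALTER COLUMN ... TYPE rewrites the entire table, even when the change merely widens a varchar, which on a 5 GB table explains the 10 minutes. A widely used workaround on old versions is to edit the stored type modifier directly (a hedged sketch; it bypasses normal DDL, requires superuser rights, and is safe only for pure widening):

        -- atttypmod for varchar is the declared length + 4 (the varlena header)
        UPDATE pg_attribute
        SET atttypmod = 256 + 4
        WHERE attrelid = 'wapreader_log.log_foot_mark'::regclass
          AND attname  = 'user_agent';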

    Read the article

  • INSERT data from Textbox to PostgreSQL

    - by user1479013
    I just learned how to connect C# and PostgreSQL. I want to INSERT data from tb1 (TextBox) and tb2 into the database, but I don't know how to write the code. My previous code does a SELECT from the database:

        private void button1_Click(object sender, EventArgs e)
        {
            bool blnfound = false;
            NpgsqlConnection conn = new NpgsqlConnection("Server=127.0.0.1;Port=5432;User Id=postgres;Password=admin123;Database=Login");
            conn.Open();
            NpgsqlCommand cmd = new NpgsqlCommand("SELECT * FROM login WHERE name='" + tb1.Text + "' and password = '" + tb2.Text + "'", conn);
            NpgsqlDataReader dr = cmd.ExecuteReader();
            if (dr.Read())
            {
                blnfound = true;
                Form2 f5 = new Form2();
                f5.Show();
                this.Hide();
            }
            if (blnfound == false)
            {
                MessageBox.Show("Name or password is incorrect", "Message Box",
                    MessageBoxButtons.OK, MessageBoxIcon.Exclamation,
                    MessageBoxDefaultButton.Button1);
                dr.Close();
                conn.Close();
            }
        }

    So please help me with the code.
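
    The statement shape the command needs (a sketch; @name and @password are Npgsql parameter placeholders — bind tb1.Text and tb2.Text to them rather than concatenating strings, which also closes the SQL-injection hole visible in the SELECT above):

        INSERT INTO login (name, password) VALUES (@name, @password);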

    Read the article

  • INSERT and transaction serialization in PostgreSQL

    - by Alexander
    Hello! I have a question. The transaction isolation level is set to serializable. When one user opens a transaction and INSERTs or UPDATEs data in "table1", and then another user opens a transaction and tries to INSERT data into the same table, does the second user need to wait until the first user commits the transaction?
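
    A quick way to see the behavior (a sketch; in PostgreSQL plain INSERTs take row-level locks only, so the second session does not wait — writers block each other only when they touch the same rows or conflict on a unique key):

        -- session 1
        BEGIN ISOLATION LEVEL SERIALIZABLE;
        INSERT INTO table1 VALUES (1, 'first');

        -- session 2, run concurrently: returns immediately, no waiting
        BEGIN ISOLATION LEVEL SERIALIZABLE;
        INSERT INTO table1 VALUES (2, 'second');
        COMMIT;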

    Read the article

  • Postgresql select from 2 tables. Joins?

    - by Daniel
    I have 2 tables that look like this:

                                      Table "public.phone_lists"
          Column  |       Type        |                     Modifiers
        ----------+-------------------+--------------------------------------------------------------------
         id       | integer           | not null default nextval(('"phone_lists_id_seq"'::text)::regclass)
         list_id  | integer           | not null
         sequence | integer           | not null
         phone    | character varying |
         name     | character varying |

    and

                                      Table "public.email_lists"
         Column  |       Type        |                     Modifiers
        ---------+-------------------+--------------------------------------------------------------------
         id      | integer           | not null default nextval(('"email_lists_id_seq"'::text)::regclass)
         list_id | integer           | not null
         email   | character varying |

    I'm trying to get the list_id, phone, and emails out of the tables in one result set. I'm looking for output like:

         list_id |    phone    | email
        ---------+-------------+--------------------------------
               0 |             | [email protected]
               0 |             | [email protected]
               0 |             | [email protected]
               0 |             | [email protected]
               0 |             | [email protected]
               1 | 15555555555 |
               1 | 15555551806 |
               1 | 15555555508 |
               1 | 15055555506 |
               1 | 15055555558 |
               1 |             | [email protected]
               1 |             | [email protected]

    I've come up with:

        select pl.list_id, pl.phone, el.email
        from phone_lists as pl
        left join email_lists as el using (list_id);

    but that's not quite right. Any suggestions?
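
    The shape of that output suggests appending the two tables rather than joining them, since each row comes from exactly one table. A hedged sketch:

        SELECT list_id, phone, NULL AS email FROM phone_lists
        UNION ALL
        SELECT list_id, NULL AS phone, email FROM email_lists
        ORDER BY list_id, phone;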

    Read the article

  • Check if row already exists, if so tell the referenced table the id

    - by flhe
    Let's assume I have a table magazine:

        CREATE TABLE magazine (
          magazine_id integer NOT NULL DEFAULT nextval(('public.magazine_magazine_id_seq'::text)::regclass),
          longname character varying(1000),
          shortname character varying(200),
          issn character varying(9),
          CONSTRAINT pk_magazine PRIMARY KEY (magazine_id)
        );

    And another table issue:

        CREATE TABLE issue (
          issue_id integer NOT NULL DEFAULT nextval(('public.issue_issue_id_seq'::text)::regclass),
          number integer,
          year integer,
          volume integer,
          fk_magazine_id integer,
          CONSTRAINT pk_issue PRIMARY KEY (issue_id),
          CONSTRAINT fk_magazine_id FOREIGN KEY (fk_magazine_id)
            REFERENCES magazine (magazine_id) MATCH SIMPLE
            ON UPDATE NO ACTION ON DELETE NO ACTION
        );

    Current INSERTs:

        INSERT INTO magazine (longname, shortname, issn)
        VALUES ('a long name', 'ee', '1111-2222');
        INSERT INTO issue (fk_magazine_id, number, year, volume)
        VALUES (currval('magazine_magazine_id_seq'), 8, 1982, 6);

    Now a row should only be inserted into magazine if it does not already exist. However, if it exists, the issue table needs the magazine_id of the row that already exists in order to establish the reference. How can I do this? Thanks in advance!
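
    One common pattern (a hedged sketch; without a UNIQUE constraint on issn plus retry logic it is not fully race-proof under concurrent writers):

        -- look the magazine up first
        SELECT magazine_id FROM magazine WHERE issn = '1111-2222';

        -- only if no row came back, insert and capture the new id
        INSERT INTO magazine (longname, shortname, issn)
        VALUES ('a long name', 'ee', '1111-2222')
        RETURNING magazine_id;

        -- then reference whichever id you obtained (:magazine_id is a placeholder)
        INSERT INTO issue (fk_magazine_id, number, year, volume)
        VALUES (:magazine_id, 8, 1982, 6);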

    Read the article

  • Postgres - run a query in batches?

    - by CaffeineIV
    Is it possible to loop through a query so that if (for example) 500,000 rows are found, it'll return results for the first 10,000 and then rerun the query again? So, what I want to do is run a query and build an array, like this:

        $result = pg_query("SELECT * FROM myTable");
        $i = 0;
        while ($row = pg_fetch_array($result)) {
            $myArray[$i]['id'] = $row['id'];
            $myArray[$i]['name'] = $row['name'];
            $i++;
        }

    But, I know that there will be several hundred thousand rows, so I wanted to do it in batches of 10,000 or so... rows 1-9,999, then 10,000-19,999, etc... The reason why is because I keep getting this error:

        Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 3 bytes)

    Which, incidentally, I don't understand how 3 bytes could exhaust 512M... So, if that's something that I can just change, that'd be great, although, still might be better to do this in batches?
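
    Besides LIMIT/OFFSET, a server-side cursor lets a single query be consumed in chunks so PHP never holds the whole result set (a sketch; the cursor name is arbitrary, and the DECLARE must run inside a transaction):

        BEGIN;
        DECLARE big_cur CURSOR FOR SELECT * FROM myTable;
        FETCH 10000 FROM big_cur;   -- repeat until it returns no rows
        CLOSE big_cur;
        COMMIT;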

    Read the article

  • How can I selectively override a django .count() method

    - by Tom Viner
    I'm using PostgreSQL and my main table has about 20,000 rows. Sometimes count() methods can take ages or even time out. Mod.manager.filter(...).count() I need to selectively override the count() method depending on what filter has been applied. Just having a cache of results would be a great gain, but I'd like to be able to say: if the filter query is just {'enabled': True} then return 20,000 without touching the db. Note: I can't prevent the call to .count() as it's inside django's pagination, which always does a count.
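
    If an approximate number is acceptable for the unfiltered case, the planner's own estimate can stand in for the expensive count (a hedged sketch; the table name is an assumption, and reltuples is only as fresh as the last ANALYZE):

        SELECT reltuples::bigint AS estimated_rows
        FROM pg_class
        WHERE relname = 'app_mod';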

    Read the article

  • Paginating data, has to be a better way

    - by John Tyler
    I've read 10 or so "tutorials", and they all involve the same thing: pull a count of the data set, then pull the relevant data set (LIMIT, OFFSET). I.e.:

        SELECT COUNT(*) FROM table WHERE something = ?
        SELECT * FROM table WHERE something = ? LIMIT ? OFFSET ?

    Two very similar queries, no? There has to be a better way to do this; my dataset is 600,000+ rows and already sluggish (results are determined by over 30 WHERE clauses, and vary from user to user, but are properly indexed of course).
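
    One round trip instead of two (a hedged sketch reusing the placeholder names from the question; needs PostgreSQL 8.4+ for window functions — the window count is computed over the filtered set before LIMIT applies, so every returned row carries the total):

        SELECT t.*, count(*) OVER () AS total_rows
        FROM table t
        WHERE something = ?
        LIMIT ? OFFSET ?;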

    Read the article

  • Error when pushing data to Heroku: time zone displacement out of range

    - by J. Pablo Fernández
    I run the following command to push the contents of my local database to Heroku: heroku db:push --app my-app and from my home computer it works flawlessly but from my work computer I get this error: Taps Server Error: PGError: ERROR: time zone displacement out of range: "2011-11-15 12:00:00.000000+5894114400" I'm not sure where that date is coming from, I can't find it in the data anywhere. Any ideas what's going on and/or how to fix it? Thanks.

    Read the article

  • PG::Error: ERROR: operator does not exist: integer ~~ unknown

    - by rsvmrk
    I'm making a search function in a Rails project with Postgres as the db. Here's my code:

        def self.search(search)
          if search
            find(:all, :conditions => ["LOWER(name) LIKE LOWER(?) OR LOWER(city) LIKE LOWER(?) OR LOWER(address) LIKE LOWER(?) OR (venue_type) LIKE (?)",
              "%#{search}%", "%#{search}%", "%#{search}%", "%#{search}%"])
          else
            find(:all)
          end
        end

    But my problem is that "venue_type" is an integer. I've made a case switch for venue_type:

        def venue_type_check
          case self.venue_type
          when 1
            "Pub"
          when 2
            "Nattklubb"
          end
        end

    Now to my question: how can I match against venue_type in my query when it is an int?
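
    On the SQL side, the error means LIKE (the ~~ operator) received an integer operand. It goes away once the integer is compared as text; mapping it to its label first lets the search term match the names from venue_type_check (a hedged sketch; the table name is an assumption):

        SELECT * FROM venues
        WHERE CASE venue_type WHEN 1 THEN 'Pub' WHEN 2 THEN 'Nattklubb' END
              ILIKE '%pub%';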

    Read the article

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered that using subqueries felt easier and quicker for me to write and maintain. Commonly, I'll do subqueries that may contain one or more subqueries from related tables. Consider this example:

        SELECT
          (SELECT username FROM users WHERE records.user_id = user_id) AS username,
          (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name,
          in_timestamp, out_timestamp
        FROM records
        ORDER BY in_timestamp

    Rarely, I'll do subqueries after the WHERE clause. Consider this example:

        SELECT user_id,
          (SELECT name FROM organizations
           WHERE (SELECT organization FROM locations
                  WHERE records.location = location_id) = organization_id) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I decided to rewrite the queries using a JOIN? As more of a blanket question, what are the advantages/disadvantages of using subqueries or a JOIN? Is one way more correct or accepted than the other?
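
    For comparison, the first example rewritten with a join (a sketch using the same table and column names as the question): one pass over users instead of two correlated subqueries per row, which the planner can usually execute as a single hash or merge join.

        SELECT u.username,
               u.last_name || ', ' || u.first_name AS name,
               r.in_timestamp,
               r.out_timestamp
        FROM records r
        JOIN users u ON u.user_id = r.user_id
        ORDER BY r.in_timestamp;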

    Read the article

  • Add all lines multiplied by another line in another table

    - by russell
    Hi, I hope I can explain this well enough. I have 3 tables: wo_parts, workorders and part2vendor. I am trying to get the cost price of all parts sold in a month. I have this script:

        $scoreCostQuery = "SELECT SUM(part2vendor.cost * wo_parts.qty) AS total_score
                           FROM part2vendor
                           INNER JOIN wo_parts ON (wo_parts.pn = part2vendor.pn)
                           WHERE workorder = $workorder";

    Each part is in wo_parts (under part number, pn). The cost of that item is in part2vendor (also under pn). I need each part price in part2vendor to be multiplied by the quantity sold in wo_parts. The way all 3 tie up is workorders.ident = wo_parts.workorder and part2vendor.pn = wo_parts.pn. I hope someone can assist. The above script does not give me the same total as when added by calculator.
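
    A hedged sketch with all three relationships written out as explicit joins, which makes the arithmetic easier to check against a hand calculation. Note that if part2vendor can hold more than one vendor row per part number, the join counts each quantity once per vendor, which would explain a total that disagrees with the calculator:

        SELECT SUM(p.cost * wp.qty) AS total_cost
        FROM workorders w
        JOIN wo_parts    wp ON wp.workorder = w.ident
        JOIN part2vendor p  ON p.pn = wp.pn
        WHERE w.ident = 123;   -- placeholder workorder id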

    Read the article

  • Escape SQL "LIKE" value for Postgres with psycopg2

    - by Evgeny
    Does psycopg2 have a function for escaping the value of a LIKE operand for Postgres? For example, I may want to match strings that start with the string "20% of all", so I want to write something like this:

        sql = '... WHERE ... LIKE %(myvalue)s'
        cursor.execute(sql, {'myvalue': escape_sql_like('20% of all') + '%'})

    Is there an existing escape_sql_like function that I could plug in here? (Similar question to "How to quote a string value explicitly (Python DB API/Psycopg2)", but I couldn't find an answer there.)
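
    On the SQL side, % and _ are the only LIKE metacharacters, and an explicit ESCAPE clause neutralizes them; using ! as the escape character sidesteps backslash-quoting headaches (a hedged sketch of what the final statement must look like; table and column names are placeholders):

        SELECT * FROM t
        WHERE col LIKE '20!% of all%' ESCAPE '!';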

    Read the article

  • Finding all areas that intersect with a point and vice-versa - PostGIS

    - by ForeignerBR
    I'm developing a project using PostGIS to hold spatial data where I have records that hold geometry point data and records that hold geometry area data. To solve my problem I'm looking for two queries that can take geographic shapes rather than geometric shapes as parameters. For query A I need it to return all points that intersect with a given area. For query B I need it to return all areas that intersect with a given point.
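
    Hedged sketches of both queries using ST_Intersects, casting to geography so the comparison is geodetic rather than planar (requires PostGIS 1.5+ for the geography type; table and column names are assumptions, and :area/:point are bound parameters):

        -- A: all points that intersect a given area
        SELECT p.* FROM points p
        WHERE ST_Intersects(p.geom::geography, :area::geography);

        -- B: all areas that intersect a given point
        SELECT a.* FROM areas a
        WHERE ST_Intersects(a.geom::geography, :point::geography);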

    Read the article

  • Backup of folder + database - Python

    - by RadiantHex
    Hi there, I feel like this is quite delicate. I have various folders with projects I would like to back up into a zip/tar file, but would like to avoid backing up files such as pyc files and temporary files. I also have a Postgres db I need to back up. Any tips for running this operation as a python script? Also, would there be any way to stop the process from hogging resources while it runs? Help would be very much appreciated.

    Read the article

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table providers that has three columns (it contains more columns, but they are not important in this case):

        starttime — the time from which you can contact the provider.
        endtime   — the final hour at which you can contact the provider.
        region_id — the region where the provider resides. In the USA: California, Texas, etc. In the UK: England, Scotland, etc.

    starttime and endtime are time without time zone columns but, "indirectly", their values are in the time zone of the region in which the provider resides. For example:

         starttime | endtime  | region_id (time zone of region) | "real" st | "real" et
        -----------+----------+---------------------------------+-----------+-----------
         03:00:00  | 17:00:00 | 1 (EGT => -1)                   | 02:00:00  | 16:00:00

    Often I need to get the list of providers whose time range contains the current server time (taking into account the time zone conversion). The problem is that the time zones aren't "constant", i.e., they may change during daylight saving time. However, this change is very specific to the region and not always carried out at the same moment: EGT <-> EGST, ART <-> ARST, etc. The questions are:

        1. Is it necessary to use a web service to update the time zones of the regions every so often? Does anyone know of a web service that would work?
        2. Is there a better approach to this problem?

    Thanks in advance.

    UPDATE: I will give an example to clarify what I'm trying to get. In the providers table I have these records:

         idproviders | starttime | endtime  | region_id
        -------------+-----------+----------+------------
                   1 | 03:00:00  | 17:00:00 | 23 (Texas)
                   2 | 04:00:00  | 18:00:00 | 23 (Texas)

    If I execute the query in January, with this information:

        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +1 hour
        Server time = 02:00:00

    I should get the following results: idproviders = 1.

    If I execute the query in June, with this information:

        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone has changed)
        Server time = 02:00:00

    I should get the following results: idproviders = 1 and 2.
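
    A hedged sketch of an approach that avoids the web service entirely: store an IANA/Olson zone name per region and let PostgreSQL's bundled time zone database apply the DST rules at query time (the regions table and its columns are assumptions; the zone name for Texas is illustrative):

        ALTER TABLE regions ADD COLUMN tz text;                  -- e.g. 'America/Chicago'
        UPDATE regions SET tz = 'America/Chicago' WHERE id = 23; -- Texas

        SELECT p.idproviders
        FROM providers p
        JOIN regions r ON r.id = p.region_id
        WHERE (now() AT TIME ZONE r.tz)::time BETWEEN p.starttime AND p.endtime;

    Because AT TIME ZONE consults the server's tz database, the January and June cases in the example resolve correctly without any application-side offset bookkeeping.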

    Read the article

  • PropelBundle database:create for postgres

    - by Karol85
    I've installed the Propel bundle for symfony2. My database configuration is:

        propel:
          dbal:
            driver: pgsql
            user: postgres
            password: postgres
            dsn: pgsql:host=localhost;port=5432;dbname=test_database
            options: {}
            attributes: {}

    When I want to create this database from the console (console propel:database:create) I get a strange error:

        Unable to open PDO connection [wrapped: SQLSTATE[08006] [7] FATAL: database "pgsql" does not exist.

    I created a database named pgsql on my localhost and everything worked; the database "test_database" was then successfully created. Can somebody explain why I got the previous error? On MySQL I've created databases without any problems.

    Read the article

  • How to get min/max of two integers in Postgres/SQL?

    - by HRJ
    How do I find the maximum (or minimum) of two integers in Postgres/SQL? One of the integers is not a column value. I will give an example scenario: I would like to subtract an integer from a column (in all rows), but the result should not be less than zero. So, to begin with, I have: UPDATE my_table SET my_column = my_column - 10; But this can make some of the values negative. What I would like (in pseudo code) is: UPDATE my_table SET my_column = MAXIMUM(my_column - 10, 0);
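
    PostgreSQL has this built in: GREATEST and LEAST accept any number of arguments, so the pseudo code maps directly to:

        UPDATE my_table SET my_column = GREATEST(my_column - 10, 0);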

    Read the article

  • Rails: three most recent records by unique belongs_to associated record

    - by Dennis Collective
    class User
      has_many :comments
    end

    class Comment
      belongs_to :user
      named_scope :recent, :order => 'comments.created_at DESC'
      named_scope :limit, lambda { |limit| { :limit => limit } }
      named_scope :by_unique_users
    end

    What would I put in :by_unique_users so that I can do Comment.recent.by_unique_users.limit(3) and only get one comment per user? On sqlite, named_scope :by_unique_user, :group => "user_id" works, but makes it freak out on Postgres, which is deployed in production:

        PGError: ERROR: column "comments.id" must appear in the GROUP BY clause or be used in an aggregate function
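
    On the PostgreSQL side, DISTINCT ON expresses "newest comment per user" without running into the GROUP BY restriction (a hedged sketch of the SQL the scope would need to generate; the outer query re-sorts the per-user winners by recency):

        SELECT * FROM (
          SELECT DISTINCT ON (user_id) *
          FROM comments
          ORDER BY user_id, created_at DESC
        ) AS newest_per_user
        ORDER BY created_at DESC
        LIMIT 3;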

    Read the article

  • PostgreSQL: Arrays Data Type with PHP

    - by ArchJ
    I'm working with PostgreSQL and PHP, and I know that PostgreSQL allows columns of a table to be defined as arrays. So let's say I have a table like this:

        CREATE TABLE sal_emp (
          a text ARRAY,
          b text ARRAY,
          c text ARRAY
        );

    These are my arrays:

        $a = array('aa', 'bb', 'cc');
        $b = array('dd', 'dd', 'aa');
        $c = array('bb', 'ff', 'ee');

    and I want to insert them into their respective columns like this:

             a      |     b      |     c
        ------------+------------+------------
         {aa,bb,cc} | {dd,dd,aa} | {bb,ff,ee}

    Can I insert them this way?

        $a = implode(',', $a);
        $b = implode(',', $b);
        $c = implode(',', $c);
        $a = array('a' => $a, 'b' => $b, 'c' => $c);
        pg_insert($dbconn, 'table', $a);

    Or is there a better way to achieve the same result?
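
    Whatever the PHP side looks like, the values that reach PostgreSQL must be array literals, i.e. wrapped in braces — so the implode result needs '{' and '}' around it. A sketch of the statement the insert has to produce:

        INSERT INTO sal_emp (a, b, c)
        VALUES ('{aa,bb,cc}', '{dd,dd,aa}', '{bb,ff,ee}');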

    Read the article

  • relating data stored in NoSQL DB to data stored in SQL DB

    - by seanbrant
    What's the best way to use a SQL DB alongside a NoSQL DB? I want to keep my users and other data in Postgres, but have some data that would be better suited to a NoSQL DB like Redis. I see a lot of talk about switching to NoSQL, but little talk about integrating it with existing systems. I think it would be foolish to throw the baby out with the bathwater and ditch SQL altogether, unless it makes things easier to maintain and develop. I'm wondering what the best approach is for relating data stored in SQL to my data in Redis. I was thinking of something along the lines of this: the User object stored in SQL; the Book object in Redis, key the SHA-1 hash of the value, value a JSON string; relations stored in Redis, key User.pk:books, value a Redis set of SHA-1s. Anyone have experience, tips, better ways?

    Read the article
