Search Results

Search found 1505 results on 61 pages for 'postgresql'.


  • Is it safe to set MySQL isolation to "Read Uncommitted" (dirty reads) for typical Web usage? Even with replication?

    - by Continuation
    I'm working on a website with a typical CRUD web usage pattern, similar to blogs or forums: users create or update content, and other users read it. It seems like it's OK to set the database's isolation level to "Read Uncommitted" (dirty reads) in this case. My understanding of the general drawback of "Read Uncommitted" is that a reader may read uncommitted data that will later be rolled back. In a CRUD blog/forum usage pattern, will there ever be any rollback? And even if there is, is there any major problem with reading uncommitted data? Right now I'm not using any replication, but if in the future I want to use replication (row-based, not statement-based), will a "Read Uncommitted" isolation level prevent me from doing so? What do you think? Has anyone tried using "Read Uncommitted" on their RDBMS?
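
    For reference, a minimal sketch of how the setting is applied in MySQL (session scope; the GLOBAL form affects new connections and requires the SUPER privilege):

      -- Subsequent transactions in this session may see uncommitted
      -- changes made by other transactions (dirty reads):
      SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;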

  • Postgres 8.3 fails to restart as a service on VMS and Server 2003

    - by Woot4Moo
    Currently I am experiencing an issue with a Postgres 8.3 install wherein, after a system restart, the service is unable to start. The error message is as follows:

      waiting for server to start...Access is denied.
      ............................................................could not start server

    The command being executed is

      pg_ctl.exe start -N "MyService" -D "C:\MyData"

    and I am logged in and executing it as an administrator. The issue originally appeared after uninstalling and reinstalling Postgres; the /data directory was removed as well.

  • Sed issue with numbers exceeding 9

    - by Imane Fateh
    Maybe my problem is kind of obvious to you, but I really need a solution. I need to generate a file.sql from a file.csv, so I use this command:

      cat file.csv |sed "s/\(.*\),\(.*\)/insert into table(value1, value2) values\('\1','\2'\);/g" > file.sql

    It works perfectly, but when the backreference numbers exceed 9 (for example \10, \11 and so on), sed takes only the first digit into consideration (\1 in this case) and ignores the rest. I want to know if I missed something or if there is another way to do it. Thank you! EDIT: The not-working example. My file.csv looks like:

      2013-04-01 04:00:52,2,37,74,40233964,3860,0,0,4878,174,3,0,0,3598,27.00,27

    What I get:

      insert into table val1,val2,val3,val4,val5,val6,val7,val8,val9,val10,val11,val12,val13,val14,val15,val16 values ('2013-04-01 07:39:43',2,37,74,36526530,3877,0,0,6080,2013-04-01 07:39:430,2013-04-01 07:39:431,2013-04-01 07:39:432,2013-04-01 07:39:433,2013-04-01 07:39:434,2013-04-01 07:39:435,2013-04-01 07:39:436);

    After the ninth element I get the first one instead of the 10th, 11th etc.
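
    (Editorial note: sed backreferences only go up to \9, which is why \10 is parsed as \1 followed by a literal 0; awk and perl don't share that limit.) If the end goal is simply to get the CSV into PostgreSQL, a hedged alternative is to let the server parse the file itself. Table and column names below are hypothetical, and server-side COPY needs superuser rights (psql's \copy is the client-side equivalent):

      -- COPY parses the CSV server-side; no backreference limits apply.
      COPY my_table (val1, val2, val3, val4, val5, val6, val7, val8,
                     val9, val10, val11, val12, val13, val14, val15, val16)
      FROM '/path/to/file.csv' WITH CSV;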

  • Is the community MySQL safe for production use?

    - by n_kips
    Or will I need to get the enterprise version? I ask because I found this on MySQL's site: "If you are running a MySQL production level system, we would like to direct your attention to the product description of MySQL Enterprise Edition at: http://mysql.com/products/enterprise/" When I check the features, it seems like the community edition does not support transactions, while the enterprise version does. If it is true that the community edition is not right for production, then it seems like PostgreSQL may be my way out, for it supports transactions and it is fully open source. Will the SQL syntax need to change (much) if I have to switch? Thank you.

  • Rails Resque workers fail with PGError: server closed the connection unexpectedly

    - by gc
    I have a site running a Rails application and Resque workers in production mode, on Ubuntu 9.10, Rails 2.3.4, ruby-ee 2010.01, PostgreSQL 8.4.2. The workers constantly raise errors: PGError: server closed the connection unexpectedly. My best guess is that the master Resque process establishes a connection to the db while loading the Rails app classes (e.g. Authlogic does that when you use User.acts_as_authentic), and that connection becomes corrupted in the fork()ed process (on exit?), so the next forked children get a kind of broken global ActiveRecord::Base.connection. I could reproduce very similar behaviour with sample code imitating the fork/processing in a Resque worker. (AFAIK, users of libpq are advised to recreate connections in the forked process anyway; otherwise it's not safe.) But the odd thing is that when I use pgbouncer or pgpool-II instead of a direct pgsql connection, such errors do not appear. So the question is: where and how should I dig to find out why it is broken for a plain connection but works with connection pools? Or is there a reasonable workaround?

  • OpenStreetMap and Hadoop

    - by portoalet
    Hi, I need some ideas for a weekend project involving Hadoop and OpenStreetMap. I have access to an AWS EC2 instance with an OpenStreetMap snapshot in my EBS volume. The OpenStreetMap data is in a PostgreSQL database. What kind of MapReduce function could be run on the OpenStreetMap data, assuming I can export it into XML format and then place it into HDFS? In other words, I am having a brain cramp at the moment and cannot think of a MapReduce operation that could extract valuable insight from the OpenStreetMap XML. (E.g. extract all the places designated as a park or golf course, but that needs to be done only once, not continuously.) Many thanks

  • How to set up an insert to a Grails-created table with the next sequence number?

    - by Jack BeNimble
    I'm using a JMS queue to read from and insert data into a Postgres table created by Grails. The problem is obtaining the next sequence value. I thought I had found the solution with the following statement (by putting DEFAULT where the ID should go), but it's no longer working. I must have changed something, because I needed to recreate the table. What's the best way to get around this problem?

      ps = c.prepareStatement("INSERT INTO xml_test (id, version, xml_text) VALUES (DEFAULT, 0, ?)");

    UPDATE: In response to the suggested solution, I did the following. I added this to the domain:

      class XmlTest {
          String xmlText
          static constraints = {
              id generator:'sequence', params:[name:'xmltest_sequence']
          }
      }

    And changed the insert statement to the following:

      ps = c.prepareStatement("INSERT INTO xml_test (id, version, xml_text) VALUES (nextval('xmltest_sequence'), 0, ?)");

    However, when I run the statement, I get the following error:

      [java] 1 org.postgresql.util.PSQLException: ERROR: relation "xmltest_sequence" does not exist

    Any thoughts?
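
    Two hedged sanity checks in plain SQL for an error like this: list the sequences that actually exist, and inspect the DEFAULT that Grails attached to the id column (Grails/Hibernate often creates a single hibernate_sequence rather than a per-table sequence):

      -- Every sequence visible in the current database:
      SELECT relname FROM pg_class WHERE relkind = 'S';

      -- The DEFAULT expression actually attached to xml_test.id:
      SELECT column_default
      FROM information_schema.columns
      WHERE table_name = 'xml_test' AND column_name = 'id';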

  • Handling database failover for Rails applications on FreeBSD

    - by bianster
    I'm working on implementing database (PostgreSQL) failover for a Rails app that runs with Passenger on FreeBSD. Due to certain constraints regarding the server OS, it's necessary to continue using FreeBSD (as opposed to, say, Ubuntu). I'm finding it quite a challenge to handle failover within the Rails application by way of a customised database adapter, because the application is load-balanced between several webservers, each running the multiple Rails processes that Passenger spawns. I previously looked at setting up Pacemaker/Corosync to manage database server failover on a common IP, but unfortunately I wasn't able to get past building the packages on FreeBSD. It does work rather well on Ubuntu 10.04, but I'm not likely to be able to use Ubuntu due to the OS constraints. I'm now considering a custom witness daemon that simply pings the primary DB server and switches all the webservers to the standby DB server when the primary becomes uncontactable (permanently or temporarily), to avoid split-brain. Though I would really like to know if there is a way to get Pacemaker (or something similar) to do the switch on FreeBSD.

  • Problem building Postgis 1.5.x for Pg 8.4 on Ubuntu 9.10

    - by znik
    Here is what's installed:

      $ sudo apt-get install postgresql-server-dev-8.4 libpq5 libpq-dev

    Here is a paste of my config.out: http://pastebin.com/8Nk6pr96 And here are some hints I got from IRC (names concealed):

      < foo> it's NOT failing to find libpq.
      < foo> libpq is present, but not compilable without adding a boatload of other -l flags
      < foo> and postgis' configure doesn't let you specify that via LIBS
      < foo> his paste contains the config.out, which shows this

    The configure run dies with:

      configure: error: could not find libpq

    I intend to install PostGIS for MapFish :)

  • Random Page Cost and Planning

    - by Dave Jarvis
    A query (see below) extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively:

      CREATE UNIQUE INDEX measurement_001_stc_idx
        ON climate.measurement_001
        USING btree (station_id, taken, category_id);

    Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 gave a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan:

      sc.taken_start >= '1900-01-01'::date AND
      sc.taken_end <= '1997-12-31'::date AND

    How do I persuade PostgreSQL to use the indexes regardless of the years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you!

    Query:

      SELECT
        extract(YEAR FROM m.taken) AS year,
        avg(m.amount) AS amount
      FROM
        climate.city c,
        climate.station s,
        climate.station_category sc,
        climate.measurement m
      WHERE
        c.id = 5182 AND
        earth_distance(
          ll_to_earth(c.latitude_decimal, c.longitude_decimal),
          ll_to_earth(s.latitude_decimal, s.longitude_decimal)) / 1000 <= 30 AND
        s.elevation BETWEEN 0 AND 3000 AND
        s.applicable = TRUE AND
        sc.station_id = s.id AND
        sc.category_id = 1 AND
        sc.taken_start >= '1900-01-01'::date AND
        sc.taken_end <= '1996-12-31'::date AND
        m.station_id = s.id AND
        m.taken BETWEEN sc.taken_start AND sc.taken_end AND
        m.category_id = sc.category_id
      GROUP BY
        extract(YEAR FROM m.taken)
      ORDER BY
        extract(YEAR FROM m.taken)

    1900 to 1996: Index

      Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)
        Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
        Sort Method: quicksort Memory: 32kB
        -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)
          -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)
            Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))
            -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)
              Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
              -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)
                Index Cond: (id = 5182)
              -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)
                -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)
                  Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))
                -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)
                  Index Cond: (s.id = sc.station_id)
                  Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))
            -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)
              -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)
                Filter: (m.category_id = 1)
              -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)
                Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
                -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)
                  Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
      Total runtime: 2269.264 ms

    1900 to 1997: Full Table Scan

      Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)
        Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
        Sort Method: quicksort Memory: 32kB
        -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)
          -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)
            Hash Cond: (m.station_id = sc.station_id)
            Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))
            -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)
              -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)
                Filter: (category_id = 1)
              -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)
                Filter: (category_id = 1)
            -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)
              -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)
                Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)
                  Index Cond: (id = 5182)
                -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)
                  Hash Cond: (s.id = sc.station_id)
                  -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)
                    Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))
                  -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)
                    -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)
                      Recheck Cond: (category_id = 1)
                      Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))
                      -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)
                        Index Cond: (category_id = 1)
      Total runtime: 86165.936 ms

  • strange SqlAlchemy update behaviour

    - by Max
    I'm new to SQLAlchemy and Elixir, so I started from the tutorial and tried to create a table, insert a record, and then update it as follows:

      # elixir_test.py
      from elixir import *

      metadata.bind = "postgresql://myuser:mypwd@localhost:5432/dbname"
      metadata.bind.echo = True

      class Movie(Entity):
          title = Field(Unicode(30))
          year = Field(Integer)
          description = Field(UnicodeText)

          def __repr__(self):
              return '<Movie "%s" (%d)>' % (self.title, self.year)

    and in another file in the same directory:

      from elixir_test import *

      setup_all()
      create_all()  # create table

      Movie(title=u"Blade Runner", year=1982)  # add record
      session.commit()

      Movie.query.all()  # get records

      # trying to update the record and commit changes, BUT...
      movie = Movie.query.first()
      movie.year = 1983
      session.commit()

      # now we have two records in our table, one
      # with year=1982 and one with year=1983
      Movie.query.all()

    What did I miss?

  • Sorting tree with a materialized path?

    - by Ovid
    I have a tree structure in a table and it uses materialized paths to allow me to find children quickly. However, I also need to sort the results depth-first, as one would expect with threaded forum replies.

      id | parent_id | matpath |          created
      ---+-----------+---------+----------------------------
       2 |         1 | 1       | 2010-05-08 15:18:37.987544
       3 |         1 | 1       | 2010-05-08 17:38:14.125377
       4 |         1 | 1       | 2010-05-08 17:38:57.26743
       5 |         1 | 1       | 2010-05-08 17:43:28.211708
       7 |         1 | 1       | 2010-05-08 18:18:11.849735
       6 |         2 | 1.2     | 2010-05-08 17:50:43.288759
       9 |         5 | 1.5     | 2010-05-09 14:02:43.818646
       8 |         6 | 1.2.6   | 2010-05-09 14:01:17.632695

    So the final results should actually be sorted like this:

      id | parent_id | matpath |          created
      ---+-----------+---------+----------------------------
       2 |         1 | 1       | 2010-05-08 15:18:37.987544
       6 |         2 | 1.2     | 2010-05-08 17:50:43.288759
       8 |         6 | 1.2.6   | 2010-05-09 14:01:17.632695
       3 |         1 | 1       | 2010-05-08 17:38:14.125377
       4 |         1 | 1       | 2010-05-08 17:38:57.26743
       5 |         1 | 1       | 2010-05-08 17:43:28.211708
       9 |         5 | 1.5     | 2010-05-09 14:02:43.818646
       7 |         1 | 1       | 2010-05-08 18:18:11.849735

    How would I work that out? Can I do that in straight SQL (this is PostgreSQL 8.4), or should additional information be added to this table?
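
    One commonly suggested approach, sketched here under the assumption that the table is named tree and that your PostgreSQL version can cast text[] to int[]: append each row's own id to its path and sort on the resulting integer array. Array comparison is element-wise, so children sort directly under their parents and '1.10' lands after '1.2' instead of after '1.1'.

      SELECT id, parent_id, matpath, created
      FROM tree
      ORDER BY string_to_array(matpath || '.' || id::text, '.')::int[];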

  • Why do Banks or Financial Companies prefer Oracle over other RDBMS for their "Core" systems?

    - by edwin.nathaniel
    I'd like to know why most banks or financial companies prefer Oracle over other RDBMSes for their core systems (the absolute minimum features that a bank must support). I found a few answers that didn't satisfy me. For example: Oracle has more features. But features for what? Couldn't you implement those at the application level if you were not using Oracle? Could someone please give a somewhat technical but still high-level overview of what a bank needs and how Oracle solves it while the others can't or don't have the features yet? I come from the web-app (web 2.0) crowd, which normally hears news about MySQL, PostgreSQL or even key-value/column-oriented storage solutions. I have almost zero knowledge of how banks or financial companies operate from a technical perspective. Thank you, Ed

  • PostGIS - can't create spatially-enabled database

    - by itgorilla
    I'm using Ubuntu 10.10, PostgreSQL 9.0 and PostGIS 1.5. I've installed PostGIS 1.5 from https://launchpad.net/~ubuntugis/+archive/ubuntugis-unstable: I added the PPA first, then ran sudo apt-get install postgis. I've been following these instructions to create a spatially-enabled database: http://postgis.refractions.net/docs/ch02.html#id2630100 I got to the point where it says: "Now load the PostGIS object and function definitions into your database by loading the postgis.sql definitions file (located in [prefix]/share/contrib as specified during the configuration step). psql -d [yourdatabase] -f postgis.sql" Well, there is no postgis.sql on my server after the installation. I ran sudo updatedb to make sure I could find postgis.sql, but it's not there. Any ideas? Thank you!

  • Popularity Algorithm - SQL / Django

    - by RadiantHex
    Hi folks, I've been looking into popularity algorithms used on sites such as Reddit, Digg and even Stack Overflow. The Reddit algorithm:

      t = (time of entry post) - (Dec 8, 2005)
      x = upvotes - downvotes
      y = {1 if x > 0, 0 if x = 0, -1 if x < 0}
      z = {1 if x < 0, otherwise x}

      score = log(z) + (y * t)/45000

    I have always performed simple ordering within SQL, and I'm wondering how I should deal with this kind of ordering. Should it be used to define a table, or could I build the ordering into the SQL via the formula (without hindering performance)? I am also wondering whether it is possible to use multiple ordering algorithms on different occasions without running into performance problems. I'm using Django and PostgreSQL. Help would be much appreciated! ^^
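
    Purely as an illustration, the formula can be computed straight in the ORDER BY; table and column names here are hypothetical, the date constant is Reddit's epoch, and greatest(abs(x), 1) stands in for z so that log() stays defined. Computing the score on every query prevents index use, so a precomputed score column refreshed on each vote is the usual compromise.

      SELECT id, title,
             log(greatest(abs(upvotes - downvotes), 1))
               + sign(upvotes - downvotes)
               * extract(epoch FROM created_at - timestamp '2005-12-08') / 45000.0
               AS score
      FROM entries
      ORDER BY score DESC;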

  • Hibernate schema parameter doesn't work in @SequenceGenerator annotation

    - by tabdulin
    I have the following code:

      @Entity
      @Table(name = "my_table", schema = "my_schema")
      @SequenceGenerator(name = "my_table_id_seq",
                         sequenceName = "my_table_id_seq",
                         schema = "my_schema")
      public class MyClass {
          @Id
          @GeneratedValue(generator = "my_table_id_seq",
                          strategy = GenerationType.SEQUENCE)
          private int id;
      }

    Database: PostgreSQL 8.4, Hibernate Annotations 3.5.0-Final. When saving an object of MyClass, it generates the following SQL query:

      select nextval('my_table_id_seq')

    So there is no schema prefix and therefore the sequence cannot be found. When I write the sequence name as sequenceName = "my_schema.my_table_id_seq", everything works. Am I misunderstanding the meaning of the schema parameter, or is it a bug? Any ideas how to make the schema parameter work?
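
    One hedged workaround that sidesteps the annotation entirely: put my_schema on the connecting role's search_path so the unqualified nextval('my_table_id_seq') resolves anyway (the role name here is hypothetical):

      ALTER ROLE myapp SET search_path = my_schema, public;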

  • 8 byte Integer with Doctrine and PHP

    - by Rufinus
    Hi, the players:

      - 64-bit Linux with PHP 5 (Zend Framework 1.10.2)
      - PostgreSQL 7.3
      - Doctrine 1.2

    Via a Flash/Flex client I get an 8-byte integer value. The field in the database is a BIGINT (8 bytes). PHP_INT_SIZE shows that the system supports 8-byte integers. Printing the value as-is and through intval() gives:

      Plain:  1269452776100
      intval: 1269452776099

    A float rounding failure? But what's really driving me nuts is

      ERROR: invalid input syntax for integer: "1269452776099.000000"

    when I try to use it in a query, like:

      Doctrine_Core::getTable('table')->findBy('external_id', $external_id);

    or

      Doctrine_Core::getTable('table')->findBy('external_id', intval($external_id));

    How am I supposed to handle this? Or how can I give Doctrine a floating-point number that it should use on a BIGINT field? Any help is much appreciated! TIA

  • FluentNHibernate and PostgreSQL, SchemaMetadataUpdater.QuoteTableAndColumns - System.NotSupportedException

    - by Vyacheslav
    Hello! I'm using FluentNHibernate (latest version) with PostgreSQL 8.4. My code for creating the ISessionFactory:

      public static ISessionFactory CreateSessionFactory()
      {
          string connectionString = ConfigurationManager
              .ConnectionStrings["PostgreConnectionString"].ConnectionString;

          IPersistenceConfigurer config = PostgreSQLConfiguration
              .PostgreSQL82.ConnectionString(connectionString);

          FluentConfiguration configuration = Fluently
              .Configure()
              .Database(config)
              .Mappings(m => m.FluentMappings.Add(typeof(ResourceMap))
                                             .Add(typeof(TaskMap))
                                             .Add(typeof(PluginMap)));

          var nhibConfig = configuration.BuildConfiguration();
          SchemaMetadataUpdater.QuoteTableAndColumns(nhibConfig);
          return configuration.BuildSessionFactory();
      }

    When execution reaches the line SchemaMetadataUpdater.QuoteTableAndColumns(nhibConfig); it throws: System.NotSupportedException: Specified method is not supported. Help me, please! I really need a solution. Best regards

  • Specifying distinct sequence per table in Hibernate on subclasses

    - by gutch
    Is there a way to specify distinct sequences for each table in Hibernate, if the ID is defined on a mapped superclass? All entities in our application extend a superclass called DataObject, like this:

      @MappedSuperclass
      public abstract class DataObject implements Serializable {
          @Id
          @GeneratedValue(strategy = GenerationType.SEQUENCE)
          @Column(name = "id")
          private int id;
      }

      @Entity
      @Table(name = "entity_a")
      public class EntityA extends DataObject { ... }

      @Entity
      @Table(name = "entity_b")
      public class EntityB extends DataObject { ... }

    This causes all entities to use a shared sequence, the default hibernate_sequence. What I would like to do is use a separate sequence for each entity, for example entity_a_sequence and entity_b_sequence in the example above. If the ID were specified on the subclasses, I could use the @SequenceGenerator annotation to specify a sequence for each entity, but in this case the ID is on the superclass. Given that the ID is in the superclass, is there a way to use a separate sequence for each entity, and if so, how? (We are using PostgreSQL 8.3, in case that's relevant.)

  • Database: storing data from user registration form

    - by teggy
    Let's say I have a user registration form. In this form, I have the option for the user to upload a photo. I have a User table and a Photo table. My User table has a "PathToPhoto" column. My question is: how do I fill in the "PathToPhoto" column if the photo is uploaded and inserted into the Photo table before the user is created? Another way to phrase the question: how do I associate the newly uploaded photo with a user that may or may not be created next? I'm using Python and PostgreSQL.
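
    A hedged sketch of one common pattern: do both inserts in a single transaction and use INSERT ... RETURNING (available since PostgreSQL 8.2) to capture the photo's generated values for the user row; all table and column names below are hypothetical. If the user is never created, rolling back the transaction discards the orphan photo row too.

      BEGIN;
      -- the application reads id/path back from RETURNING
      INSERT INTO photo (path) VALUES ('/uploads/abc123.jpg')
          RETURNING id, path;
      INSERT INTO users (name, path_to_photo)
          VALUES ('alice', '/uploads/abc123.jpg');
      COMMIT;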

  • How to show unread subforums?

    - by bilygates
    I have written a simple forum in PHP using PostgreSQL. The forum consists of a number of subforums (or categories, if you like) that contain topics. I have a table that stores when a user last visited a topic; it's something like user_id, topic_id, timestamp. I can easily determine which topics should be marked as unread by comparing the timestamp of the last topic reply with the timestamp of the last user visit. My question is: how do I efficiently determine which subforums (categories) should be marked as unread? All I've come up with is this: every time a user visits a topic, update the visit timestamp and check whether all the topics in the current subforum are read or unread. If they are all read, mark the subforum as read for the user; else, mark it as unread. But I think there must be another way. Thank you in advance.
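
    Instead of maintaining a per-subforum flag, the unread subforums can be derived in one query from the visit table described above; a hedged sketch with hypothetical topic (id, forum_id, last_post_at) and topic_visit (user_id, topic_id, visited_at) tables:

      SELECT DISTINCT t.forum_id
      FROM topic t
      LEFT JOIN topic_visit v
             ON v.topic_id = t.id AND v.user_id = 42  -- current user
      WHERE v.topic_id IS NULL                        -- never visited
         OR t.last_post_at > v.visited_at;            -- new replies since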

  • Should I specify both INDEX and UNIQUE INDEX?

    - by Matt Huggins
    On one of my PostgreSQL tables, I have a set of two fields that will be defined as being unique in the table, but will also both be used together when selecting data. Given this, do I only need to define a UNIQUE INDEX, or should I specify an INDEX in addition to the UNIQUE INDEX? This?

      CREATE UNIQUE INDEX mytable_col1_col2_idx ON mytable (col1, col2);

    Or this?

      CREATE UNIQUE INDEX mytable_col1_col2_uidx ON mytable (col1, col2);
      CREATE INDEX mytable_col1_col2_idx ON mytable (col1, col2);

  • Compressing large text data before storing into db?

    - by Steel Plume
    Hello, I have an application which retrieves many large log files from a system LAN. Currently I put all the log files into PostgreSQL; the table has a column of type TEXT and I don't plan any searches on this text column, because a separate external process retrieves all files nightly and scans them for sensitive patterns. So the column value could also be a BLOB or a CLOB. Now my question is this: the database already has its own compression system, but could I improve on that compression manually, as with common compressor utilities? And above all, what if I manually pre-compress the large file and then insert it as binary into the table: is that pointless, since the database system provides its own internal compression?
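
    One related knob, hedged (table and column names are hypothetical): if you do pre-compress in the application and store the result as bytea, PostgreSQL's TOAST compression becomes wasted effort, and it can be disabled per column:

      -- EXTERNAL = allow out-of-line storage, but skip TOAST compression:
      ALTER TABLE log_file ALTER COLUMN body SET STORAGE EXTERNAL;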

  • How to keep historic details of modifications in a database (audit trail)?

    - by mada
    I'm a J2EE developer and we are using Hibernate mappings with a PostgreSQL database. We have to keep track of any changes that occur in the database; in other words, all previous and current values of any field should be saved. Each field can be of any type (bytea, int, char...). With a simple table it is easy, but with a graph of objects things are more difficult. So, speaking from a UML point of view, we have a graph of objects to store in the database along with every change and the user who made it. Any idea or pattern for how to do that?
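
    For a single audited table, the classic PostgreSQL pattern is a shadow table filled by a row trigger; below is a minimal sketch for a hypothetical table my_table (it assumes the plpgsql language is installed, and a graph of objects would need one shadow table per mapped table):

      -- Shadow table: same columns as my_table, plus audit metadata.
      CREATE TABLE my_table_audit (LIKE my_table);
      ALTER TABLE my_table_audit
          ADD COLUMN audited_at timestamptz DEFAULT now(),
          ADD COLUMN audited_by text DEFAULT current_user;

      CREATE OR REPLACE FUNCTION my_table_audit_fn() RETURNS trigger AS $$
      BEGIN
          -- OLD.* expands to my_table's columns, matching the LIKE clause
          INSERT INTO my_table_audit
              SELECT OLD.*, now(), current_user;
          IF TG_OP = 'DELETE' THEN
              RETURN OLD;
          END IF;
          RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER my_table_audit_trg
          BEFORE UPDATE OR DELETE ON my_table
          FOR EACH ROW EXECUTE PROCEDURE my_table_audit_fn();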
