Search Results

Search found 1397 results on 56 pages for 'transactional replication'.

  • charsets in MySQL replication

    - by niklassaers
    Hi guys, what can I do to ensure that replication will use latin1 instead of utf-8? I'm migrating between a MySQL 5.1.22 server (master) on a Linux system and a MySQL 5.1.42 server (slave) on a FreeBSD system. My replication works well, but when non-ASCII characters are in my varchars, they turn "weird". The Linux/MySQL 5.1.22 master shows the following character set variables:

        character_set_client=latin1
        character_set_connection=latin1
        character_set_database=latin1
        character_set_filesystem=binary
        character_set_results=latin1
        character_set_server=latin1
        character_set_system=utf8
        character_sets_dir=/usr/share/mysql/charsets/
        collation_connection=latin1_swedish_ci
        collation_database=latin1_swedish_ci
        collation_server=latin1_swedish_ci

    while the FreeBSD slave shows:

        character_set_client=utf8
        character_set_connection=utf8
        character_set_database=utf8
        character_set_filesystem=binary
        character_set_results=utf8
        character_set_server=utf8
        character_set_system=utf8
        character_sets_dir=/usr/local/share/mysql/charsets/
        collation_connection=utf8_general_ci
        collation_database=utf8_general_ci
        collation_server=utf8_general_ci

    Setting any of these variables from the MySQL CLI has no effect, and setting them in my.cnf or on the command line makes the server not start. Of course, both servers have the tables in question created the same way, in this case with DEFAULT CHARSET=latin1. Let me give you an example:

        CREATE TABLE `test` (
          `test` varchar(5) DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    When, in a Latin1 terminal on the master, I do "INSERT INTO test VALUES ('æøå')", this is what I get on the slave when I select it from a Latin1-based terminal:

        +--------+
        | test   |
        +--------+
        | æøå |
        +--------+

    On a UTF-8-based terminal on the replication slave, test contains:

        +--------+
        | test   |
        +--------+
        | æøå    |
        +--------+

    So my conclusion is that the value is converted to utf8, even though the table definition is latin1. Is this a correct conclusion? Of course, on the master, in a latin1 terminal, it still says:

        +------+
        | test |
        +------+
        | æøå  |
        +------+

    Since both system character sets are utf-8, if I set both terminals to utf-8 and again do "INSERT INTO test VALUES ('æøå')" on the master, I get this on the slave with a utf-8 terminal:

        +------------+
        | test       |
        +------------+
        | æøà¥    |
        +------------+

    If my conclusion is correct, all my replicated data is converted to utf8 (and if it already is utf8, it is treated as latin1 and converted to utf8), while all the old data in the table is, as the CREATE TABLE suggests, latin1. I'd love to convert it all to utf-8 if it weren't for the fact that legacy applications rely on it being latin1, so I need to keep it in latin1 while they still exist. What can I do to ensure that replication reads latin1, treats it as latin1, and writes it on the slave as latin1? Cheers Nik
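
    A quick way to see which of the two servers is mis-negotiating is to compare the character-set variables side by side as a client connection (rather than the CLI) sees them. The sketch below does that with the PyMySQL driver; the hostnames and credentials are placeholders:

        # Sketch: compare character-set variables on master and slave as a
        # client sees them. Assumes the PyMySQL driver; hostnames and
        # credentials below are placeholders.
        import pymysql

        HOSTS = {
            "master (Linux, 5.1.22)": "master.example.com",
            "slave (FreeBSD, 5.1.42)": "slave.example.com",
        }

        def charset_vars(host):
            conn = pymysql.connect(host=host, user="root", password="secret")
            try:
                with conn.cursor() as cur:
                    cur.execute(
                        "SHOW VARIABLES WHERE Variable_name LIKE 'character_set%' "
                        "OR Variable_name LIKE 'collation%'"
                    )
                    return dict(cur.fetchall())
            finally:
                conn.close()

        all_vars = {name: charset_vars(h) for name, h in HOSTS.items()}
        names = sorted(set().union(*(v.keys() for v in all_vars.values())))
        for var in names:
            values = [all_vars[n].get(var) for n in HOSTS]
            marker = "   <-- mismatch" if len(set(values)) > 1 else ""
            print(f"{var:30} {values}{marker}")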

  • MongoDB slave replication lag

    - by Leonid Bugaev
    We're using a standard Mongo setup: 2 replicas + 1 arbiter. Both replica servers use the same AWS m1.medium with RAID10 EBS. We are experiencing constantly growing replication lag on the secondary replica. I tried a full resync (you can see it on the graph), but it helped only for some hours. Our Mongo usage is really low now, and frankly I can't understand why this happens.

    iostat 1 for the secondary:

        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                  80.39   0.00     2.94     0.00   16.67   0.00

        Device:  tps    kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
        xvdap1    0.00       0.00       0.00        0        0
        xvdb      0.00       0.00       0.00        0        0
        xvdfp4   12.75       0.00     189.22        0      193
        xvdfp3   12.75       0.00     189.22        0      193
        xvdfp2    7.84       0.00      40.20        0       41
        xvdfp1    7.84       0.00      40.20        0       41
        md127    19.61       0.00     219.61        0      224

    mongostat for the secondary (why ~100% locked? I guess that's the problem):

        insert query update delete getmore command flushes mapped vsize   res faults locked % idx miss % qr|qw ar|aw netIn netOut conn        set repl     time
           *10    *0    *16     *0       0     2|4       0  30.9g 62.4g 1.65g      0      107          0   0|0   0|0  198b     1k   16 replset-01  SEC 06:55:37
            *4    *0     *8     *0       0    12|0       0  30.9g 62.4g 1.65g      0     91.7          0   0|0   0|0  837b     5k   16 replset-01  SEC 06:55:38
            *4    *0     *7     *0       0     3|0       0  30.9g 62.4g 1.64g      0      110          0   0|0   0|0  342b     1k   16 replset-01  SEC 06:55:39
            *4    *0     *8     *0       0     1|0       0  30.9g 62.4g 1.64g      0     82.9          0   0|0   0|0   62b     1k   16 replset-01  SEC 06:55:40
            *3    *0     *7     *0       0     5|0       0  30.9g 62.4g  1.6g      0     75.2          0   0|0   0|0  466b     2k   16 replset-01  SEC 06:55:41
            *4    *0     *7     *0       0     1|0       0  30.9g 62.4g 1.64g      0      138          0   0|0   0|1   62b     1k   16 replset-01  SEC 06:55:42
            *7    *0    *15     *0       0     3|0       0  30.9g 62.4g 1.64g      0     95.4          0   0|0   0|0  342b     1k   16 replset-01  SEC 06:55:43
            *7    *0    *14     *0       0     1|0       0  30.9g 62.4g 1.64g      0       98          0   0|0   0|0   62b     1k   16 replset-01  SEC 06:55:44
            *8    *0    *17     *0       0     3|0       0  30.9g 62.4g 1.64g      0     96.3          0   0|0   0|0  342b     1k   16 replset-01  SEC 06:55:45
            *7    *0    *14     *0       0     3|0       0  30.9g 62.4g 1.64g      0     96.1          0   0|0   0|0  186b     2k   16 replset-01  SEC 06:55:46

    mongostat for the primary:

        insert query update delete getmore command flushes mapped vsize  res faults locked % idx miss % qr|qw ar|aw netIn netOut conn        set repl     time
            12    30     20      0       0       3       0  30.9g 62.6g 641m      0      0.9          0   0|0   0|0  212k   619k   48 replset-01    M 06:56:41
             5    17     10      0       0       2       0  30.9g 62.6g 641m      0      0.5          0   0|0   0|0  159k   429k   48 replset-01    M 06:56:42
             9    22     16      0       0       3       0  30.9g 62.6g 642m      0      0.7          0   0|0   0|0  158k   276k   48 replset-01    M 06:56:43
             6    18     12      0       0       2       0  30.9g 62.6g 640m      0      0.7          0   0|0   0|0   93k   231k   48 replset-01    M 06:56:44
             6    12      8      0       0       3       0  30.9g 62.6g 640m      0      0.3          0   0|0   0|0   80k   125k   48 replset-01    M 06:56:45
             8    21     14      0       0       9       0  30.9g 62.6g 641m      0      0.6          0   0|0   0|0  118k   419k   48 replset-01    M 06:56:46
            10    34     20      0       0       6       0  30.9g 62.6g 640m      0      1.3          0   0|0   0|0  164k   527k   48 replset-01    M 06:56:47
             6    21     13      0       0       2       0  30.9g 62.6g 641m      0      0.7          0   0|0   0|0  111k   477k   48 replset-01    M 06:56:48
             8    21     15      0       0       2       0  30.9g 62.6g 641m      0      0.7          0   0|0   0|0  204k   336k   48 replset-01    M 06:56:49
             4    12      8      0       0       8       0  30.9g 62.6g 641m      0      0.5          0   0|0   0|0  156k   530k   48 replset-01    M 06:56:50

    Mongo version: 2.0.6

  • How do you implement Software Transactional Memory?

    - by Joseph Garvin
    In terms of actual low-level atomic instructions and memory fences (I assume they're used), how do you implement STM? The part that's mysterious to me is that, given some arbitrary chunk of code, you need a way to go back afterward and determine whether the values used in each step were valid. How do you do that, and how do you do it efficiently?

    This would also seem to suggest that, just like any other 'locking' solution, you want to keep your critical sections as small as possible (to decrease the probability of a conflict); am I right?

    Also, can STM simply detect "another thread entered this area while the computation was executing, therefore the computation is invalid", or can it actually detect whether clobbered values were used (so that, by luck, two threads may sometimes execute the same critical section simultaneously without needing a rollback)?
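
    One way to make the mechanics concrete is a TL2-style design: a global version clock, per-transaction read and write sets, and commit-time validation of every read. Below is a minimal single-process Python sketch of that idea, illustrative only; all names are invented, and a real STM would use compare-and-swap and memory fences rather than one coarse lock. Because validation here is per-location, two threads touching disjoint data can commit the same critical section concurrently without a rollback, which bears on the last question for this kind of design.

        # Minimal TL2-style STM sketch: global version clock, read/write
        # sets, commit-time validation. Illustrative only; a real STM uses
        # CAS and fences instead of a coarse Python lock.
        import threading

        _commit_lock = threading.Lock()
        _global_clock = 0

        class TVar:
            def __init__(self, value):
                self.value = value
                self.version = 0   # version of the commit that last wrote it

        class Conflict(Exception):
            pass

        class Transaction:
            def __init__(self):
                self.read_version = _global_clock  # clock snapshot at start
                self.reads = {}    # TVar -> version observed at read time
                self.writes = {}   # TVar -> tentative new value

            def read(self, tvar):
                if tvar in self.writes:            # read-your-own-write
                    return self.writes[tvar]
                if tvar.version > self.read_version:
                    raise Conflict()               # written since we started
                self.reads[tvar] = tvar.version
                return tvar.value

            def write(self, tvar, value):
                self.writes[tvar] = value          # buffered until commit

            def commit(self):
                global _global_clock
                with _commit_lock:                 # stand-in for per-var locks
                    for tvar, seen in self.reads.items():
                        if tvar.version != seen:   # someone committed over us
                            raise Conflict()
                    _global_clock += 1
                    for tvar, value in self.writes.items():
                        tvar.value = value
                        tvar.version = _global_clock

        def atomically(fn):
            while True:                            # retry loop on conflict
                tx = Transaction()
                try:
                    result = fn(tx)
                    tx.commit()
                    return result
                except Conflict:
                    continue

        # Usage: transfer between two transactional variables.
        a, b = TVar(100), TVar(0)

        def transfer(tx):
            tx.write(a, tx.read(a) - 10)
            tx.write(b, tx.read(b) + 10)

        atomically(transfer)
        print(a.value, b.value)  # 90 10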

  • SQL Server replication algorithm

    - by reggie
    Does anyone know how the underlying replication model in SQL Server works? Does it essentially depend on UTC datetime values to determine whether something is new, or does it keep a table of all the changes (like a table of the tableID+rowID pairs that have changed)? I am building my own "replication" system and was planning on using dates to know what to replicate. Then I started wondering what would happen if the date got off on the computer for some reason. The obvious choice is to keep a log of the changes as you go, and once you replicate those changes, remove them from the log. But that's a lot of extra work compared with just checking dates. I figure if SQL Server replication works by just checking dates, then that should be good enough for me. Any wisdom here? Thanks
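
    For what it's worth, SQL Server's transactional replication reads committed changes from the transaction log (via the Log Reader Agent), and merge replication tracks changes in trigger-maintained metadata tables, so the change-log approach is closer to what the engine actually does than timestamp comparison is. Here is a small self-contained sketch of that trigger-maintained change-log pattern, using Python and SQLite; the table and column names are made up for the demo:

        # Sketch of log-based change tracking: triggers append
        # (table, rowid, op) to a change log; the "replicator" applies the
        # entries to the destination and then clears them.
        import sqlite3

        src = sqlite3.connect(":memory:")
        dst = sqlite3.connect(":memory:")
        for db in (src, dst):
            db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

        src.executescript("""
        CREATE TABLE change_log (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            tbl TEXT, row_id INTEGER, op TEXT
        );
        CREATE TRIGGER customers_ins AFTER INSERT ON customers
        BEGIN INSERT INTO change_log (tbl, row_id, op) VALUES ('customers', NEW.id, 'upsert'); END;
        CREATE TRIGGER customers_upd AFTER UPDATE ON customers
        BEGIN INSERT INTO change_log (tbl, row_id, op) VALUES ('customers', NEW.id, 'upsert'); END;
        CREATE TRIGGER customers_del AFTER DELETE ON customers
        BEGIN INSERT INTO change_log (tbl, row_id, op) VALUES ('customers', OLD.id, 'delete'); END;
        """)

        def replicate():
            changes = src.execute(
                "SELECT id, tbl, row_id, op FROM change_log ORDER BY id"
            ).fetchall()
            for log_id, tbl, row_id, op in changes:
                if op == "delete":
                    dst.execute(f"DELETE FROM {tbl} WHERE id = ?", (row_id,))
                else:
                    row = src.execute(
                        f"SELECT id, name FROM {tbl} WHERE id = ?", (row_id,)
                    ).fetchone()
                    if row:
                        dst.execute(
                            f"INSERT OR REPLACE INTO {tbl} (id, name) VALUES (?, ?)", row
                        )
                # remove only what was applied, so newer changes survive
                src.execute("DELETE FROM change_log WHERE id = ?", (log_id,))
            dst.commit(); src.commit()

        src.execute("INSERT INTO customers (name) VALUES ('alice')")
        src.execute("UPDATE customers SET name = 'alice2' WHERE id = 1")
        replicate()
        print(dst.execute("SELECT * FROM customers").fetchall())  # [(1, 'alice2')]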

  • Designing & Maintaining SQL Server Transactional Replication Environments

    Microsoft IT protects against unplanned Transactional Replication outages and issues by using best practices and proactive monitoring. This results in increased stability, simplified management and improved performance of transactional replication environments.

  • Steps to Rename a Subscriber Database for SQL Server Transactional Replication

    I have transactional replication configured in production. The business team has a requirement to rename the subscription database. Is it possible to rename the subscription database and ensure that transactional replication will continue to function as before? If so, how could we achieve this?

  • Why is it possible to save an entity but not delete it if the @Transactional annotation is set to readOnly=true?

    - by jakob
    Hello experts! My class is annotated with org.springframework.transaction.annotation.Transactional like this:

        @Transactional(readOnly = true)
        public class MyClass {

    I then have a DAO class:

        @Override
        public void delete(final E entity) {
            getSession().delete(entity);
        }

        @Override
        public void save(final E entity) {
            getSession().saveOrUpdate(entity);
        }

    Then I have two methods in MyClass:

        @Transactional(readOnly = false)
        public void doDelete(Entity entity) {
            daoImpl.delete(entity);
        }

        //@Transactional(readOnly = false)
        public void doSave() {
            daoImpl.save(entity);
        }

    Saving and deleting both work like a charm. But if I remove the @Transactional(readOnly = false) annotation from the doDelete method, deletion stops working, while saving works with and without the method annotation. So my question is: why?

  • lsyncd + csync2 : cluster of 3 or more nodes

    - by sbrattla
    I've got 3 (and potentially more) web servers hosting the same content (fronted by a load balancer), so I need to make sure that the files on these web servers are the same. It appears that csync2 in combination with lsyncd is able to synchronize a cluster of nodes, but according to this article there's a problem with cyclic events in such a setup. In other words, the author writes that a file change on one machine would trigger a replication event to the other machines, which would in turn trigger a replication event back to the original machine. This appears to be a consequence of the setup, which uses lsyncd (and inotify) to catch file modification events and from there triggers csync2 to replicate the file tree. Does anyone have experience with lsyncd in combination with csync2? Have you had trouble with cyclic events?
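
    One tool-independent way to break such a cycle is echo suppression: record a hash of every file content that replication itself writes, and have the local event handler swallow any event whose content matches that recorded hash. A small Python sketch of just that suppression logic (this is not lsyncd or csync2 code, merely the idea):

        # Echo-suppression sketch: skip propagating file events that were
        # caused by replication itself, breaking the cyclic-event loop.
        import hashlib
        from pathlib import Path

        _inbound = {}  # path -> sha256 of content written by replication

        def sha256(path):
            return hashlib.sha256(Path(path).read_bytes()).hexdigest()

        def apply_inbound(path, data):
            """Called when a change arrives from another node."""
            Path(path).write_bytes(data)
            _inbound[path] = hashlib.sha256(data).hexdigest()

        def on_local_event(path):
            """Called by the file watcher (e.g. inotify) on a local change."""
            if _inbound.get(path) == sha256(path):
                del _inbound[path]      # echo of our own write: swallow it
                return
            propagate(path)             # a genuine local change: replicate it

        def propagate(path):
            print(f"replicating {path} to the other nodes")

        # Demo: an inbound write is suppressed, a real local edit is not.
        apply_inbound("/tmp/demo.txt", b"from node B")
        on_local_event("/tmp/demo.txt")             # suppressed
        Path("/tmp/demo.txt").write_bytes(b"local edit")
        on_local_event("/tmp/demo.txt")             # replicated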

  • Replicate Oracle to MySQL

    - by Rosdi
    I am developing a web app that will use MySQL. Now I need to replicate my client's Oracle database into MySQL; only a few tables will be involved, but a table can have up to 2-3 million rows. I only have the SELECT privilege on this Oracle server, so don't ask me to install any kind of service on the Oracle machine. I have complete control on the MySQL side, however. The replication is one-way only (Oracle to MySQL). I could write a simple script to truncate each MySQL table and repopulate it every night, but I think this is very inefficient; there must be a better way. Are there any free tools I can use? An expensive database replication system is definitely out of the question.
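
    Absent replication tooling, one workable pattern with only SELECT access on Oracle is a nightly batched pull that upserts rows instead of truncating, optionally narrowed by a last-modified column if the tables have one. A rough sketch using the cx_Oracle and PyMySQL drivers; the connection details, table, and column names are all placeholders:

        # Sketch: one-way Oracle -> MySQL copy with batched upserts instead
        # of truncate-and-reload. All names and credentials are placeholders.
        import cx_Oracle
        import pymysql

        BATCH = 10_000

        ora = cx_Oracle.connect("readonly_user", "secret", "orahost/ORCL")
        my = pymysql.connect(host="mysqlhost", user="app", password="secret", db="app")

        ocur = ora.cursor()
        # If the table has a reliable last-modified column, filter on it here
        # to pull only recent rows instead of the full 2-3 million.
        ocur.execute("SELECT id, name, updated_at FROM customers")

        upsert = (
            "INSERT INTO customers (id, name, updated_at) VALUES (%s, %s, %s) "
            "ON DUPLICATE KEY UPDATE name = VALUES(name), updated_at = VALUES(updated_at)"
        )
        with my.cursor() as mcur:
            while True:
                rows = ocur.fetchmany(BATCH)
                if not rows:
                    break
                mcur.executemany(upsert, rows)
                my.commit()              # commit per batch to bound memory

        ora.close()
        my.close()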

  • GoldenGate 12c - MySQL Active-Active Replication Setup

    - by Jinyu Wang-Oracle
    Active-active (also called master-master or bi-directional) replication captures data changes from two or more systems and replicates those changes to keep the data synchronized. Active-active replication is often needed for high availability, load balancing, and scale-out purposes. Oracle GoldenGate is known as one of the first and best replication tools for handling active-active replication. As of Oracle GoldenGate 12c, it provides the following (refer to Oracle GoldenGate 12.1.2 Documentation - Configuring Oracle GoldenGate for Active-Active High Availability for more information): robust loop-back prevention; comprehensive conflict detection and resolution support; and heterogeneous support across different database versions and operating systems. Oracle GoldenGate supports active-active configurations for DB2 on z/OS, LUW, and IBM i, MySQL, Oracle, SQL/MX, SQL Server, Sybase, and Teradata. However, the setup differs from database to database. In this example, I will show you how to set up active-active data replication between two MySQL database instances, one MySQL 5.5 and one MySQL 5.6.

    MySQL 5.5 (Manager Port: 15105)

    Extract:

        EXTRACT demoex01
        SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
        DBOPTIONS CONNECTIONPORT 3305
        DBOPTIONS HOST oraclelinux6.localdomain
        SOURCEDB test USERID root, PASSWORD mysql
        EXTTRAIL ./dirdat/extract/de
        TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
        REPORTROLLOVER AT 05:30 ON saturday
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Pump:

        EXTRACT demopm01
        RMTHOST localhost, MGRPORT 15106, COMPRESS, TIMEOUT 30
        RMTTRAIL ./dirdat/replicat/ps
        PASSTHRU
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Replicat:

        replicat demorp01
        setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
        dboptions host oraclelinux6.localdomain, connectionport 3305
        targetdb test, userid root, password mysql
        sourcedefs ./dirdat/replicat/democust.def
        discardfile ./dirrpt/demprp01.dsc, purge
        REPERROR (DEFAULT, ABEND)
        REPERROR (1062, IGNORE)
        map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, region_code="region code");
        map test.TCUSTORD, target test.TCUSTORD;

    MySQL 5.6 (Manager Port: 15106)

    Replicat:

        replicat demorp01
        setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
        dboptions host oraclelinux6.localdomain, connectionport 3306
        targetdb test, userid root, password mysql
        --assumetargetdefs
        sourcedefs ./dirdat/replicat/democust.def
        discardfile ./dirrpt/demprp01.dsc, purge
        map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, "region code"=region_code);
        map test.TCUSTORD, target test.TCUSTORD;

    Extract:

        EXTRACT demoex01
        SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
        DBOPTIONS CONNECTIONPORT 3306
        DBOPTIONS HOST oraclelinux6.localdomain
        SOURCEDB test USERID root, PASSWORD mysql
        EXTTRAIL ./dirdat/extract/de
        TRANLOGOPTIONS ALTLOGDEST "/usr/local/mysql56/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Pump:

        EXTRACT demopm01
        RMTHOST localhost, MGRPORT 15105, COMPRESS, TIMEOUT 30
        RMTTRAIL ./dirdat/replicat/ps
        PASSTHRU
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    The setup parameters are quite self-explanatory. The key is to keep replicated data from looping: Oracle GoldenGate for MySQL uses the information in the replication checkpoint table to identify the transactions applied by Replicats, and thus avoids extracting those transactions again. The relevant settings from the Extract on the MySQL 5.5 instance are:

        TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl

    Setting up active-active replication is often more complicated than this and requires additional considerations; I will elaborate on those in follow-up discussions.

  • SQL Server 2005 replication issue

    - by Francesco
    Hi, I have peer-to-peer merge replication between two SQL Server 2005 instances. The first SQL Server is both the publisher and the distributor. All works fine, but if the VPN goes down for a couple of hours, the replication goes down too, and I need to manually restart SQL Server Agent. In the SQL Server Agent properties I have set the two options for agent failover, but nothing changed. How can I set up an automatic restart of SQL Server Agent when the VPN goes down? Thanks

  • Development Environment for Testing MySQL Replication

    - by Dave Morris
    Is there an easy way to set up an environment on one machine (or a VM) with MySQL replication? I would like to put together a proof of concept of MySQL replication with one master write instance and two slave instances for reads. I could see doing it across two or three VMs running on my computer, but that would really bog down my system; I'd rather have everything running in the same VM. What's the best way to prove out scalability solutions like this in a local dev environment? Thanks for your help, Dave

  • Cassandra replication system - how it works

    - by inquisitor
    Does Cassandra replicate only on writes (with the chosen consistency level)? Is there any auto-replicate option for absent nodes, if I want symmetric data on every node? If I plug a new node into the cluster there is no automatic replication: how do I sync data from the other nodes to the new one? If I want replication like MySQL's multi-master (2 nodes) with a slave backup (1 node), what is the proper way to set up and manage that on Cassandra (3 nodes)? How about with two nodes?

  • Switching MySQL Replication Format

    - by NNN
    I'm currently using statement-based replication. After upgrading to MySQL 5.1, I'm considering using row-based replication. After reading the docs, it seems that you can change the format on the master on the fly. Will the slave automatically adapt to whatever type of binary log it is sent? Do I have to make any changes to the slave or master to get ready for the switch, or can I simply modify the binlog_format variable on the master?

  • Exchange Server 2003 Replication

    - by Campo
    We have two Exchange 2003 servers. One is the master and is currently hosting all the mailboxes and public stores. I would like to set up a standby Exchange server that replicates all the mailboxes and public stores from the primary Exchange server. This way, if the primary goes down, we can fail over to the standby. I have searched all over for some guidance but have not been able to find anything detailing this for Exchange 2003. Any help is much appreciated.

  • database replication for new user signup

    - by Jeff Storey
    I have a database that stores the users of my application. When a new user signs up, a record is inserted into the database for that user. I have a replicated version (slave) of this database (using MySQL for now). What I'm concerned about is this scenario:

        Step 1: a user signs up, and the user record is inserted into the database.
        Step 2: the user then tries to log in, and the login process queries the database for the user. However, this query hits the slave, where the user record has not yet been replicated, and it returns an error that the user does not exist.

    This is a pretty trivial example, but I can see how it can apply to a lot of cases. Is there a strategy for configuring replicated databases to help prevent this situation?
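
    A common application-level strategy is read-your-writes routing: after a write on behalf of a user, pin that user's reads to the master for a short grace window (or until the slave is known to have caught up), and let all other reads go to the slaves. A small sketch of that routing logic; the connection objects and the 5-second window are assumptions:

        # Read-your-writes routing sketch: after writing for a user, send
        # that user's reads to the master for a grace window, so a fresh
        # signup is always visible to its own login attempt.
        import time

        GRACE_SECONDS = 5.0   # should comfortably exceed typical slave lag

        class RoutingPool:
            def __init__(self, master, slaves):
                self.master = master
                self.slaves = slaves
                self._last_write = {}   # user_key -> timestamp of last write

            def connection_for_write(self, user_key):
                self._last_write[user_key] = time.monotonic()
                return self.master

            def connection_for_read(self, user_key):
                wrote_at = self._last_write.get(user_key)
                if wrote_at is not None and time.monotonic() - wrote_at < GRACE_SECONDS:
                    return self.master   # user just wrote: read from master
                return self.slaves[hash(user_key) % len(self.slaves)]

        # Usage with placeholder connection objects:
        pool = RoutingPool(master="master-conn", slaves=["slave1-conn", "slave2-conn"])
        conn = pool.connection_for_write("user:42")  # INSERT new user on master
        conn = pool.connection_for_read("user:42")   # login right after -> master
        conn = pool.connection_for_read("user:7")    # other users -> a slave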

  • ZFS replication between 2 ZFS file systems

    - by XO01
    I initially replicated tank/storage1 -> usb1/storage1-slave (depicted below), and then (deliberately) destroyed the snapshot I replicated from. By doing this, did I lose the ability to incrementally replicate (zfs send -i) between these two file systems? What's the best way to approach syncing these file systems after destroying this snapshot?

        # zfs list
        NAME                  USED  AVAIL  REFER  MOUNTPOINT
        tank                  128G   100G    23K  /tank
        tank/storage1         128G   100G   128G  /tank/storage1
        usb1                  122G   563G    24K  /usb1
        usb1/storage1-slave   122G   563G   122G  /usb1/storage1-slave
        usb1/storage2          21K   563G    21K  /usb1/storage2

    And what if I had initially rsync'd tank/storage1 -> usb1/storage1-slave and then decided to replicate incrementally via zfs send -i?
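
    Yes: once the last snapshot the two file systems had in common is destroyed, zfs send -i has no base to diff against, so incremental replication is no longer possible (and an initial rsync would not have helped either, since zfs send -i needs a common snapshot, not merely identical contents). The slave has to be re-seeded from a full stream before incrementals can resume. A sketch of that re-seed, wrapping the usual commands in Python; the snapshot name is arbitrary, and note the zfs destroy is, of course, destructive:

        # Re-seeding after the common snapshot was destroyed: a full send is
        # the only way back to incremental (zfs send -i) replication.
        # Dataset names are from the question; this DESTROYS the slave copy.
        import subprocess

        SRC = "tank/storage1"
        DST = "usb1/storage1-slave"
        SNAP = f"{SRC}@reseed-base"   # arbitrary snapshot name

        def run(cmd):
            print("+", cmd)
            subprocess.run(cmd, shell=True, check=True)

        run(f"zfs snapshot {SNAP}")               # new base for future increments
        run(f"zfs destroy -r {DST}")              # drop the stale slave copy
        run(f"zfs send {SNAP} | zfs recv {DST}")  # full re-seed
        # From here on, keep the base snapshot on BOTH sides; later snapshots
        # can again be shipped with: zfs send -i <base> <new> | zfs recv <dst>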

  • MySQL slave replication: Seconds_Behind_Master increasing?

    - by geekmenot
    I started a MySQL slave using innobackupex, and Read_Master_Log_Pos and Relay_Log_Pos are updating; however, Seconds_Behind_Master keeps increasing (it is currently at 496637 and climbing). Any ideas on how to fix this?

        mysql> SHOW SLAVE STATUS\G
        *************************** 1. row ***************************
                       Slave_IO_State: Waiting for master to send event
                          Master_Host: 10.8.25.111
                          Master_User: repl
                          Master_Port: 3306
                        Connect_Retry: 60
                      Master_Log_File: mysql-bin.005021
                  Read_Master_Log_Pos: 279162266
                       Relay_Log_File: mysql-relay-bin.000004
                        Relay_Log_Pos: 378939436
                Relay_Master_Log_File: mysql-bin.004997
                     Slave_IO_Running: Yes
                    Slave_SQL_Running: Yes
                      Replicate_Do_DB:
                  Replicate_Ignore_DB:
                   Replicate_Do_Table:
               Replicate_Ignore_Table:
              Replicate_Wild_Do_Table:
          Replicate_Wild_Ignore_Table:
                           Last_Errno: 0
                           Last_Error:
                         Skip_Counter: 0
                  Exec_Master_Log_Pos: 378939290
                      Relay_Log_Space: 26048998487
                      Until_Condition: None
                       Until_Log_File:
                        Until_Log_Pos: 0
                   Master_SSL_Allowed: No
                   Master_SSL_CA_File:
                   Master_SSL_CA_Path:
                      Master_SSL_Cert:
                    Master_SSL_Cipher:
                       Master_SSL_Key:
                Seconds_Behind_Master: 497714
        Master_SSL_Verify_Server_Cert: No
                        Last_IO_Errno: 0
                        Last_IO_Error:
                       Last_SQL_Errno: 0
                       Last_SQL_Error:
          Replicate_Ignore_Server_Ids:
                     Master_Server_Id: 1
        1 row in set (0.00 sec)

  • Replication from OSX OpenDirectory to OpenLDAP

    - by natacado
    I have an OpenDirectory server running on an OS X Server machine, and I'd like to increase the reliability of the service by adding a slave server. The problem is, I only have one OS X Server but plenty of Linux servers available. I'm happy with Apple's tools that integrate with OpenDirectory, but given Apple's recent discontinuation of the Xserve, I'm not particularly interested in continuing with Apple hardware. I remember hearing that OpenDirectory is (now distantly) based on the OpenLDAP code base; is there any way I could replicate from OpenDirectory to OpenLDAP instead of having to purchase another OS X server?

  • Using TrueCrypt with File Replication on Windows Server

    - by neildeadman
    We have a few folders that are set to replicate to a DR file server off-site. One of these folders contains a file that is a TrueCrypt volume container. When this file is mounted in TrueCrypt, the file won't replicate (fair enough!). I'm looking at alternatives to improve this situation. One solution I currently have is a scheduled task to unmount the volume, and then every morning, as the volume is needed, have someone mount it. This is slightly painful because the password is known by only a few people (I'm not one of them, and neither are the colleagues who would perform the mounting), so we'd need to continually get them to come over and type it in. The other idea I had was to have one TrueCrypt container on each server and replicate the contents while they are mounted, but I wasn't able to get TrueCrypt to see the mounted volume, so I guess that's a no-go. Are there any other solutions I've missed, or a fix for the above?

  • MySQL replication: slow resyncing of a slave after an error

    - by James Hackett
    I have a slave that hit an error about a month or so ago and got way behind the master. I fixed the error, and the slave is now playing catch-up with the master, but it's going very slowly, at about 1.3x real time. I was using less than 10% of the DB's resources when these writes first happened, so the speed of the server shouldn't be an issue. Are there any settings I can change to help the slave catch up with the master?

  • LVM, Soft RAID1, and Replication?

    - by mtkoan
    Hi all, I am practicing putting together an HA file server. It is a Linux server with two 1.5 TB hard drives. My plan is to use LVM to manage the physical volumes into logical volumes for /, /home, and /var, then use md (soft RAID 1) to mirror the image onto the second HDD, and then use DRBD to mirror the entire setup to another server. Is this overkill? Would I be okay with just md and DRBD? The system will serve users' home directories (~100) and probably some groupware or other local intranet. On my own machines I've always separated the root and /home partitions, so that if I break something I can easily reinstall the OS. Should I follow that same approach here? If so, I need LVM, because I really can't predict where we'll need more space: /var or /home.

  • Data replication between two web nodes

    - by HTF
    I have a WordPress installation running on two web servers (Nginx). There is unidirectional synchronization from server A to server B, and I'm using lsyncd for this purpose. With this configuration I have to add blog posts from the first web server so that the data is replicated to the second one. How can I force access to the WordPress back end to go only through the first web server? Please note that both servers serve the same domain for WordPress. Regards

  • Replicating A Volume Of Large Data via Transactional Replication

    During weekend maintenance, members of the support team executed an UPDATE statement against the database on the OLTP server. This database was part of transactional replication, and once the UPDATE statement was executed, replication came to a halt with an error message. Satnam Singh decided to work on this case and to find an efficient way to rebuild replication without significant downtime.

  • Selective replication with CouchDB

    - by FRotthowe
    I'm currently evaluating possible solutions to the following problem: a set of data entries must be synchronized between multiple clients, where each client may only view (or even know about the existence of) a subset of the data. Each client "owns" some of the elements, and the decision about who else can read or modify those elements may only be made by the owner. To complicate this situation even more, each element (and each element revision) must have a unique identifier that is equal for all clients.

    While the latter sounds like a perfect task for CouchDB (and a document-based data model would fit my needs perfectly), I'm not sure if the authentication/authorization subsystem of CouchDB can handle these requirements: while it should be possible to restrict write access using validation functions, there doesn't seem to be a way to authorize read access. All the solutions I've found for this problem propose routing all CouchDB requests through a proxy (or an application layer) that handles authorization.

    So, the question is: is it possible to implement an authorization layer that filters requests to the database so that access is granted only to documents the requesting client has read access to, and still use the replication mechanism of CouchDB? Simplified, this would be some kind of "selective replication" where only some of the documents, and not the whole database, are replicated.

    I would also be thankful for directions to some detailed information about how replication works. The CouchDB wiki and even the "Definitive Guide" book are not too specific about that.
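
    For the "selective replication" part, CouchDB does support filtered replication: a JavaScript filter function stored in a design document decides, per document, whether it is replicated, and the function can be parameterized per client. Note this filters replication, not ad-hoc reads, so a proxy is still needed if clients can query the source database directly. A sketch of kicking off such a replication through the _replicate HTTP endpoint with Python's requests; the database names and the auth/by_owner filter are assumptions:

        # Sketch: trigger a filtered (selective) replication through
        # CouchDB's _replicate endpoint. Database names and the
        # 'auth/by_owner' filter are assumptions.
        import requests

        COUCH = "http://admin:secret@localhost:5984"

        # The filter lives in a design doc on the source database, e.g.:
        #   { "_id": "_design/auth",
        #     "filters": { "by_owner":
        #       "function(doc, req) { return doc.owner === req.query.owner; }" } }

        payload = {
            "source": "shared_data",
            "target": "client_42_replica",
            "create_target": True,
            "filter": "auth/by_owner",             # design-doc/filter-name
            "query_params": {"owner": "client42"}  # passed to the filter as req.query
        }
        resp = requests.post(f"{COUCH}/_replicate", json=payload, timeout=300)
        resp.raise_for_status()
        print(resp.json())   # e.g. {"ok": true, "history": [...]}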
