Search Results

Search found 3179 results on 128 pages for 'merge replication'.

  • What NAS setup for two-way syncing over the internet?

    - by Jamse
    I have family living a few hours away and have a lot of files that I would like to share - especially lots of folders of digital photos, but also documents etc. - partially so they can see them, partially so I can have access when I visit them, and partially for backup / redundancy purposes. My current hard drives on my main machine are getting pretty full anyway, and I have a MythTV box where my music is currently stored, so I was thinking of getting a NAS anyway. And at the other end my family have a few computers, so they would probably benefit from a NAS too. My general idea (though I'm willing to shift on this if there are any bright ideas about other ways of achieving my objectives) is to get a matching pair of NASs and have them sync over the internet. (To cut down on bandwidth use I would get them in sync locally to start with.) Having read around as best I can, it seems that syncing over the internet is generally only a feature on quite high-end units. However, I have seen that QNAP seem to offer this on their TS-110 and TS-210 units, which might work (they call it "remote replication"). They seem pretty reasonably priced for what they are, but of course with buying two of them and then adding the drives (say 1TB or 2TB each) I'd be looking at about £400 total. So, I'm looking for recommendations really. I don't want to spend more than the QNAPs would cost me, but any other ideas would be most appreciated. I am comfortable with technology and tinkering around, but I don't have as much time for that as I would like, so I guess I would favour solutions that require less tinkering rather than more (even though that's less fun!). Any thoughts would be welcome, as would any comments from people who have used the QNAP boxes for this. Thanks in advance.

    Some specifications:
    - Two-way syncing. Changes made at either end should be synced to the other. There shouldn't be one unit that is effectively a read-only mirror of the other.
    - Not real time. The syncing doesn't need to be real time - if it updated, say, daily overnight that would be fine.
    - Set and forget. I would prefer minimal user interaction once set up - it would be great if syncs were scheduled and automatic.
    - OS independence. I am running Windows XP plus an Ubuntu-based MythTV box. At the other end there are Windows 7 and Windows XP machines, plus a networked TV set-top box which I think can play files off the network.
    - Machine independence. I would favour a system that is self-contained, i.e. not reliant on any particular PC being switched on. If the system had enough else going for it I could perhaps work around that at this end, where I only have one PC in regular use, but it would be harder at the other end, where there are at least two PCs that might be accessing the files.
    - Notifications. I guess things like getting an email notification if the syncing fell over for any reason would be useful, though it's not a deal breaker.

    Update: I've been digging some more and it looks like QNAP's Remote Replication function is actually just rsync, so it is only really suitable for one-way syncing. I've posted on their forum to double-check, but I think that's the case. In which case, the focus of my question is now either: do any reasonably-priced NASs support bidirectional syncing over the internet? Or: has anyone had any luck installing third-party syncing software onto NASs for this purpose? (Also, I've updated the question to clarify that I'm after two-way syncing.)

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.) So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value. So here's the debate:

    1. Abandon Entity Framework and use another ORM: either NHibernate (and give up LINQ support) or linq2sql (and give up future support, not to mention getting bound to SQL Server).
    2. Abandon GUIDs and go with another PK strategy.
    3. Devise a method to generate sequential GUIDs (COMBs?) at the application layer.

    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and option 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
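    For option 3, here is a minimal, language-neutral sketch of the COMB idea (shown in Python; comb_guid is a hypothetical helper, and the byte layout assumes SQL Server's uniqueidentifier ordering, which compares the trailing six bytes first):

        import os
        import struct
        import time
        import uuid

        def comb_guid():
            # 10 random bytes followed by a 6-byte big-endian millisecond
            # timestamp. Because SQL Server sorts uniqueidentifiers by the
            # trailing byte group first, successive values sort nearly
            # sequentially, which keeps index fragmentation down.
            millis = int(time.time() * 1000)
            return uuid.UUID(bytes=os.urandom(10) + struct.pack('>Q', millis)[2:])

    The same byte layout should be straightforward to reproduce in the application layer of whichever ORM is chosen, using its native GUID and timestamp types.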

  • Snapshot agent obliterates conflicts

    - by mwolfe02
    We are using merge replication in SQL Server 2000. We have a snapshot agent that runs every night that updates the publication snapshot. About six months ago we updated from SQL Server 7.0 to 2000 (that's not a typo). We noticed a sharp decline in conflicts at that time but could not track down the reason. We finally found that the daily snapshot agent is recreating the conflict tables every night. This seems to be a change in functionality from SQL Server 7.0. We were running the snapshot agent before and the conflicts would accumulate. Is there some way to prevent the data in the conflict tables from being lost when the snapshot runs? Can anyone confirm a change in behavior between 7.0 and 2000? Our current plan is to simply stop automatically updating the publication snapshot. Is that a reasonable workaround? Here is the line from the script that is adding the snapshot:

        exec sp_addpublication_snapshot
            @publication = N'MyPub',
            @frequency_type = 4,
            @frequency_interval = 1,
            @frequency_relative_interval = 1,
            @frequency_recurrence_factor = 0,
            @frequency_subday = 1,
            @frequency_subday_interval = 5,
            @active_start_date = 0,
            @active_end_date = 0,
            @active_start_time_of_day = 500,
            @active_end_time_of_day = 235959

    Here is the step that runs in the agent job:

        Step Name: Run agent.
        Type: Replication Snapshot
        Command: -Publisher [WCDBS02] -PublisherDB [TaxDB] -Distributor [WCDBS02]
                 -Publication [TaxDB] -ReplicationType 2 -DistributorSecurityMode 1

    This appears to be running the Replication Snapshot Agent Utility. There is no mention on that link about dropping and recreating system conflict tables, nor is there any flag that can be set to alter this behavior.

  • Fogbugz Duplicate Cases

    - by LeeHull
    I am using FogBugz free hosting to manage my project bugs. I also have several customers I create custom software for, and I've been using FogBugz to keep everything organized. My question is: there are times when a customer sends me an email with a bug, I report it in my system, and they create a case for it as well. Instead of having two cases for the same bug, I would like to merge or link them together rather than just delete the duplicate. Is there a way to link them together, maybe as a cross-reference, or even merge them?

  • create a dict of lists from a string

    - by Chris Card
    I want to convert a string such as 'a=b,a=c,a=d,b=e' into a dict of lists {'a': ['b', 'c', 'd'], 'b': ['e']} in Python 2.6. My current solution is this:

        def merge(d1, d2):
            for k, v in d2.items():
                if k in d1:
                    if type(d1[k]) != type(list()):
                        d1[k] = list(d1[k])
                    d1[k].append(v)
                else:
                    d1[k] = list(v)
            return d1

        record = 'a=b,a=c,a=d,b=e'
        print reduce(merge, map(dict, [[x.split('=')] for x in record.split(',')]))

    which I'm sure is unnecessarily complicated. Any better solutions?
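    For comparison, a minimal sketch of the usual approach with collections.defaultdict (available in Python 2.6; parse is just an illustrative name), which avoids the merge/reduce machinery entirely:

        from collections import defaultdict

        def parse(record):
            # Append each value to the list for its key; defaultdict
            # creates the empty list on first access.
            result = defaultdict(list)
            for pair in record.split(','):
                key, value = pair.split('=')
                result[key].append(value)
            return dict(result)

        print parse('a=b,a=c,a=d,b=e')  # {'a': ['b', 'c', 'd'], 'b': ['e']}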

  • How do I copy a version of a single file from one git branch to another?

    - by madlep
    I've got two branches that are fully merged together. However, after the merge is done, I realise that one file has been messed up by the merge (someone else did an auto-format, gah), and it would just be easier to change to the new version in the other branch, and then re-insert my one line change after bringing it over into my branch. So what's the easiest way in git to do this?
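    For reference, a hedged sketch of the usual approach: git checkout accepts a branch and a path, and copies that branch's version of the file into the working tree and index (branch and file names below are placeholders):

        git checkout other-branch -- path/to/file.ext

    After that, the file can be committed on the current branch and the one-line change re-applied by hand.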

  • WIX Merge Module : Trying to use $(var.Project.TargetFileName)

    - by Stephen Bailey
    I have created a simple WiX 3 merge module in VS 2005 (.wxs):

        <?xml version="1.0" encoding="UTF-8"?>
        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
          <Module Id="TestMergeModule" Language="1033" Version="1.0.0.0">
            <Package Id="ef2a568e-a8db-4213-a211-9261c26031aa" Manufacturer="Me" InstallerVersion="200" />
            <Directory Id="TARGETDIR" Name="SourceDir">
              <Directory Id="MergeRedirectFolder">
                <Component Id="Test_ModuleComponent" Guid="{1081C5BC-106E-4b89-B14F-FFA71B0987E1}">
                  <File Id="Test" Name="$(var.Project.TargetFileName)" Source="$(var.Project.TargetPath)" DiskId="1" />
                </Component>
              </Directory>
            </Directory>
          </Module>
        </Wix>

    I have added the project "Project" as a reference to this merge module, but I continue to get this error:

        Error 7: Undefined preprocessor variable '$(var.Project.TargetFileName)'.

    Any suggestions? I am sure I am just missing the obvious here.

  • Tail-recursive merge sort in OCaml

    - by CFP
    Hello world! I'm trying to implement a tail-recursive list-sorting function in OCaml, and I've come up with the following code:

        let tailrec_merge_sort l =
          let split l =
            let rec _split source left right =
              match source with
              | [] -> (left, right)
              | head :: tail -> _split tail right (head :: left)
            in _split l [] []
          in
          let merge l1 l2 =
            let rec _merge l1 l2 result =
              match l1, l2 with
              | [], [] -> result
              | [], h :: t | h :: t, [] -> _merge [] t (h :: result)
              | h1 :: t1, h2 :: t2 ->
                  if h1 < h2 then _merge t1 l2 (h1 :: result)
                  else _merge l1 t2 (h2 :: result)
            in List.rev (_merge l1 l2 [])
          in
          let rec sort = function
            | [] -> []
            | [a] -> [a]
            | list ->
                let left, right = split list in
                merge (sort left) (sort right)
          in sort l
        ;;

    Yet it seems that it is not actually tail-recursive, since I encounter a "Stack overflow during evaluation (looping recursion?)" error. Could you please help me spot the non-tail-recursive call in this code? I've searched quite a lot without finding it. Could it be the let binding in the sort function? Thanks a lot, CFP.

  • Merge Word Documents (Office Interop & .NET), Keeping Formatting

    - by mbmccormick
    I'm having some difficulty merging multiple word documents together using Microsoft Office Interop Assemblies (Office 2007) and ASP.NET 3.5. I'm able to merge the documents, but some of my formatting is missing (namely the fonts and images). My current merge code is shown below.

        private void CombineDocuments()
        {
            object wdPageBreak = 7;
            object wdStory = 6;
            object oMissing = System.Reflection.Missing.Value;
            object oFalse = false;
            object oTrue = true;
            string fileDirectory = @"C:\documents\";

            Microsoft.Office.Interop.Word.Application WordApp = new Microsoft.Office.Interop.Word.Application();
            Microsoft.Office.Interop.Word.Document wDoc = WordApp.Documents.Add(ref oMissing, ref oMissing, ref oMissing, ref oMissing);

            string[] wordFiles = Directory.GetFiles(fileDirectory, "*.doc");
            for (int i = 0; i < wordFiles.Length; i++)
            {
                string file = wordFiles[i];
                wDoc.Application.Selection.Range.InsertFile(file, ref oMissing, ref oMissing, ref oMissing, ref oFalse);
                wDoc.Application.Selection.Range.InsertBreak(ref wdPageBreak);
                wDoc.Application.Selection.EndKey(ref wdStory, ref oMissing);
            }

            string combineDocName = Path.Combine(fileDirectory, "Merged Document.doc");
            if (File.Exists(combineDocName))
                File.Delete(combineDocName);

            object combineDocNameObj = combineDocName;
            wDoc.SaveAs(ref combineDocNameObj, ref m_WordDocumentType, ref oMissing, ref oMissing,
                ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing,
                ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing);
        }

    I don't care necessarily how this is accomplished. It could output via PDF if it had to. I just want the formatting to carry over. Any help or hints that you could provide me with would be appreciated! Thanks!

  • Improving I/O performance in C++ programs[external merge sort]

    - by Ajay
    I am currently working on a project involving external merge sort using replacement selection and k-way merge. I have implemented the project in C++ [runs on Linux]. It's very simple and right now deals only with fixed-size records. For reading and writing I use the (i/o)fstream classes. After executing the program for a few iterations, I noticed that I/O reads block for requests of size more than 4K (the typical block size). In fact, giving buffer sizes greater than 4K causes performance to decrease. The output operations do not seem to need buffering; Linux seems to take care of buffering output. So I issue a write(record) instead of maintaining a special buffer of writes and then flushing them out at once using write(records[]). But the performance of the application does not seem to be great. How could I improve the performance? Should I maintain special I/O threads to take care of reading blocks, or are there existing C++ classes providing this abstraction already? (Something like BufferedInputStream in Java.)
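    As a language-neutral illustration of the merge phase, here is a sketch in Python (kway_merge is a hypothetical helper, and it assumes newline-delimited records); the buffering argument plays the role a larger buffer on a C++ filebuf would, so each disk request covers many records instead of one 4K block:

        import heapq
        import io

        def kway_merge(run_paths, out_path, bufsize=1 << 20):
            # Open each sorted run and the output with a 1 MiB buffer.
            runs = [io.open(p, 'rb', buffering=bufsize) for p in run_paths]
            with io.open(out_path, 'wb', buffering=bufsize) as out:
                # heapq.merge consumes the runs lazily, holding only one
                # record per run in memory at a time.
                for record in heapq.merge(*runs):
                    out.write(record)
            for run in runs:
                run.close()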

  • JPA / Hibernate checks conditions in merge()

    - by bert
    Working with JPA / Hibernate in an OSIV web environment is driving me mad ;) Here's the scenario: I have an entity A that is loaded via JPA and has a collection of B entities. Those B entities have a required field. When the user adds a new B to A by pressing a link in the webapp, that required field is not set (since there is no sensible default value). Upon the next HTTP request, the OSIV filter tries to merge the A entity, but this fails because Hibernate complains that the new B has a required field that is not set: javax.persistence.PersistenceException: org.hibernate.PropertyValueException: not-null property references a null or transient value. Reading the JPA spec, I see no sign that those checks are required in the merge phase (I have no transaction active). I can't keep the collection of B's outside of A and only add them to A when the user presses 'save' (aka entitymanager.persist()), because the place where the save button lives does not know about the B's, only about A. Also, A and B are only examples; I have similar situations all over the place. Any ideas? Do other JPA implementations behave the same here? Thanks in advance.

  • Unreasonable errors in merge sort

    - by Alexxx
    I have the following errors - please help me find the cause:

        Error 9: IntelliSense: expected a '}' (line 70, column 4)

    It points at the end of the code - but there is no unmatched '{' anywhere, so why?

        Error 8: IntelliSense: expected a ';' (line 57, column 1)

    It points at the '{' after void main - but why put a ';' after the '{' of void main?

        Error 7: C1075: end of file found before the left brace '{' (line 70, column 1)

    It points at the beginning of the code - why?

        #include <stdio.h>
        #include <stdlib.h>

        void merge(int *a, int p, int q, int r)
        {
            int i = p, j = q + 1, k = 0;
            int* temp = (int*)calloc(r - p + 1, sizeof(int));
            while ((i <= q) && (j <= r))
                if (a[i] < a[j])
                    temp[k++] = a[i++];
                else
                    temp[k++] = a[j++];
            while (j <= r)   // if( i>q )
                temp[k++] = a[j++];
            while (i <= q)   // j>r
                temp[k++] = a[i++];
            for (i = p, k = 0; i <= r; i++, k++)  // copy temp[] to a[]
                a[i] = temp[k];
            free(temp);
        }

        void merge_sort(int *a, int first, int last)
        {
            int middle;
            if (first < last)
            {
                middle = (first + last) / 2;
                merge_sort(a, first, middle);
                merge_sort(a, middle + 1, last);
                merge(a, first, middle, last);
            {
        }

        void main()
        {
            int a[] = {9, 7, 2, 3, 5, 4, 1, 8, 6, 10};
            int i;
            merge_sort(a, 0, 9);
            for (i = 0; i < 10; i++)
                printf("%d ", a[i]);

  • How to merge arraylist element ?

    - by tiendv
    I have a string, example = "Can somebody provide an algorithm sample code in your reply". After tokenizing and doing some processing on the string, I have an ArrayList like:

        ArrayList<token> arl = [ "Can somebody provide ", "code in your ", "somebody provide an algorith", "in your reply" ]

    For each chunk I know its start and end positions in the string:

        "Can somebody provide "          start = 1,  end = 3
        "code in your "                  start = 7,  end = 10
        "somebody provide an algorith"   start = 7,  end = 10
        "in your reply"                  start = 11, end = 14

    As you can see, some elements in arl overlap: "Can somebody provide ", "code in your ", "somebody provide an algorith". The problem is how to merge the overlapping elements so that I receive an ArrayList like:

        ArrayList result = [ "Can somebody provide an algorithm sample code", "in your reply" ]

    Here is my code, but it only merges the first element when the check finds an overlap:

        public ArrayList<TextChunks> finalTextChunks(ArrayList<TextChunks> textchunkswithkeyword) {
            ArrayList<TextChunks> result = (ArrayList<TextChunks>) textchunkswithkeyword.clone();
            //System.out.print(result.size());
            int j;
            for (int i = 0; i < result.size(); i++) {
                int index = i;
                if (i + 1 >= result.size()) {
                    break;
                }
                j = i + 1;
                if (result.get(i).checkOverlapingTwoTextchunks(result.get(j)) == true) {
                    TextChunks temp = new TextChunks();
                    temp = handleOverlaping(textchunkswithkeyword.get(i), textchunkswithkeyword.get(j), resultSearchEngine);
                    result.set(i, temp);
                    result.remove(j);
                    i = index;
                    continue;
                }
            }
            return result;
        }

    Thanks in advance.
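    The underlying task looks like the standard interval merge: work on the (start, end) word positions rather than the strings, then rebuild each merged chunk's text by joining the original tokens from start to end. A minimal sketch in Python (merge_spans is a hypothetical helper; the positions are taken from the question, and whether touching spans should also merge depends on how the positions are counted):

        def merge_spans(spans):
            # Sort by start; each new span can then only overlap the most
            # recently merged one.
            merged = []
            for start, end in sorted(spans):
                if merged and start <= merged[-1][1] + 1:  # overlap or touch
                    merged[-1][1] = max(merged[-1][1], end)
                else:
                    merged.append([start, end])
            return merged

        print(merge_spans([(1, 3), (7, 10), (7, 10), (11, 14)]))
        # -> [[1, 3], [7, 14]]

    Rebuilding the text from the merged positions avoids having to stitch the overlapping strings together directly.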

  • Why would this query cause a Merge Cartesian Join in Oracle

    - by decompiled
    I have a query that was recently required to be modified. Here's the original:

        SELECT RTRIM(position) AS "POSITION",
               . // Other fields
               .
               .
        FROM schema.table x
        WHERE hours > 0
          AND pay = 'RGW'
          AND NOT EXISTS (SELECT position
                          FROM schema.table2 y
                          WHERE y.position = x.position)

    Here's the new version:

        SELECT RTRIM(position) AS "POSITION",
               . // Other fields
               .
               .
        FROM schema.table x
        WHERE hours > 0
          AND pay = 'RGW'
          AND NOT EXISTS (SELECT position
                          FROM schema.table2 y
                          WHERE y.date = get_fiscal_year_start_date(SYSDATE)
                            AND y.position = x.position)

    The UDF get_fiscal_year_start_date() returns the fiscal year start date of the date parameter. The first query runs fine, but the second creates a merge Cartesian join. I looked at the indexes on the tables and found that position and date are both indexed. My question for you stackoverflow is why would the addition of 'y.date = get_fiscal_year_start_date(SYSDATE)' cause a merge Cartesian join in Oracle 10g.

  • How to install specific version of MySQL?

    - by user85569
    I installed MySQL 5.0.77 from the repo, including setting up PowerDNS (and its MySQL backend). I tried setting up replication from my master (which is MySQL 5.1.53) but it didn't work: even though there were no errors, nothing got replicated. So the last resort is to try the same MySQL version on both the master and the slave (NB: only the slave has pdns installed). How would I go about installing MySQL 5.1.53? I tried downloading the rpm from MySQL (obviously the wrong one - it didn't even include the mysql command to shell into the databases), which in turn broke the dependencies for pdns' MySQL backend. I have the atomic repo, which will install MySQL 5.5 (on both my master server and slave), but I don't want to do a major upgrade on the master right now as it's in production. Would love some advice!

  • MySQL - Why would SHOW SLAVE HOSTS cause a binlog dump?

    - by Rory McCann
    We're getting loads of binlog files on our MySQL 5.0.x server. We have a normal master/slave replication setup with 1 master and 1 slave. Looking at /var/log/mysql.log, nearly 90% of the time the replicator connects, does a SHOW SLAVE HOSTS, and causes a binlog dump. For example:

        7020 Query        SHOW SLAVE HOSTS
        7020 Binlog Dump  Log: 'mysql-bin.029634' Pos: 13273

    However, when I do a SHOW SLAVE HOSTS on the MySQL server myself, I get no results. Occasionally when the replicator does a SHOW SLAVE HOSTS, MySQL will hang for hours, and I see nothing in /var/log/syslog at the same time. What's going on here? How can I debug this further? For the record, the MySQL master and slave servers are Ubuntu Dapper.

  • Best way to replicate / mirror 100s of databases in SQL 2005

    - by mrwayne
    Hi, I currently host around 400-500 SQL 2005 databases of varying sizes (1-10 GB each). I am aware of most of the different methods available and the general pros/cons of mirroring, log shipping, replication and clustering, but I am not aware of how well they tend to perform when employed at the scale I have specified (400-500 unique databases). Does anyone have any good advice on what is likely the best method for having the ability to fail over to another server with this sort of setup? Failover does not need to be immediate; I'm just looking for something better than taking backups every day and moving them to storage. I'm preferably looking for something that would also make it easy to manage the databases in bulk (as opposed to one at a time). Thanks for your input!

  • Ideas for scaling out database architecture

    - by andrew
    We're looking to scale out our existing database architecture and need some advice on which way to go. We currently have 2 web servers behind a load balancer that both read & write to a single master database which replicates to a slave. Ideally, I'd like each of the webservers to point to their own master DB and have the data between the 2 synchronised but from what I've read, using any kind of master-master or ring-replication is discouraged. I'm looking for a general "what do other people do" kind of answer - database vendor isn't a concern at the moment but we'd like to stay with MySQL or convert to MSSQL. Any ideas would be gratefully received. Many thanks, Andrew

  • sql server 2008 snapshot agent problem

    - by Dmitri
    Hi! I've just installed MSSQL 2008 SP1 x64 on Windows 7 RTM, and I have a problem creating snapshots. Whenever I try to launch the snapshot agent (i.e. to set up a transactional replication publication), it throws an error that 'the file is missing'. I have looked into c:\program files\microsoft sql server\100\com and there are no executable files at all, like snapshot.exe! I tried a crazy move: copying all the files over from my MSSQL 2005 com folder (without replacing, of course), and now it doesn't give an error, but it says 'starting' all the time and nothing happens. (I have since removed those files again.) I have all of the relevant features installed. So please help me figure out what to do now! Thanks! Dmitri.

  • Looking for a solution to track changes in a test DB and replicate them to another DB

    - by Pranav
    Please suggest a solution to track changes in a test DB and replicate them to another DB. My client needs a script or other solution. He has two databases: a test DB, in which he tries out data changes on a test portal, and a main DB that drives the live site. If he finds the test changes appropriate, he wants to apply them to the main DB. For this he needs a way to record or track all updates, deletions, and insertions, so that he can repeat them in the main DB if they are approved. NOTE: we have only one server, no separate server, so binary log replication does not seem to work for my case.

  • Can I run mysqld on top of glusterfs?

    - by Richard Holloway
    I have been playing with glusterfs recently. What I want to try is to run mysqld on top of the glusterfs in a similar way as it is possible to run MySQL on top of DRBD. I am familiar with MySQL replication and the advantages of using that instead of this approach and I am also aware of MongoDB and other NoSQL solutions. However, it would be an easy solution to a few specific projects I have coming up if I could leave MySQL as it is and replicate the underlying file system. Is this possible and if it is where can I find out how?

  • Migrating master-slave MySQL database servers to 2 new servers, any tips or suggestions?

    - by mmattax
    I'm setting up 2 new database servers that will be replacing a current master-slave setup. All boxes are running / will be running MySQL on RHEL. Our current naming conventions: db1 - master database db2 - slave (using MySQL replication) db01 - new master db02 - new slave We need to get db01 to be the new master with db02 as the new slave. What is the best way to migrate db1 and db2 to db01 and db02? db1 and db2 are running in a production setting and we need to minimize all downtime; db1 has roughly 30GB of data in the database. Any suggestions or tips on how to migrate to our new servers would be much appreciated.
