Search Results

Search found 3179 results on 128 pages for 'merge replication'.

Page 103 of 128

  • Better Version Control (Distributed) - Minimum impact on sources - always possible to update

    - by Olav
    I am f...fed up with Subversion. I need a version control system that: can be used without affecting the sources with embedded files (like the Subversion .svn directories), or having to check in and then check out (if you want to version control live web-site files, for example); always makes it possible to bring the repository quickly up to date whatever I have done (without resolving conflicts or adding files first, etc.); and ideally allows merging repositories that started out as separate. I think it should be a distributed one; Git seems to be the lingua franca, but there are also Mercurial and Bazaar, which should have some advantages since they exist :-)
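
    For illustration only, a hedged sketch (repository path, web root and alias name are made up) of how git can track a live directory without dropping metadata into it: keep the object store elsewhere and point git at the working tree explicitly.

      # keep the repository outside the tracked tree; nothing is written under /var/www/site
      git init --bare /srv/repos/site.git
      alias sitegit='git --git-dir=/srv/repos/site.git --work-tree=/var/www/site'

      # snapshot whatever is currently live; committing is always possible locally
      sitegit add -A
      sitegit commit -m "snapshot of live site"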

    Read the article

  • Violation of primary key constraint, multiple users

    - by MC.
    Let's say UserA and UserB both have the application open and are working with the same type of data. UserA inserts a record with value 10 (PrimaryKey='A'); UserB does not yet see the value UserA entered and attempts to insert a new value of 20 (PrimaryKey='A'). What I wanted in this situation was a DBConcurrencyException, but what I get instead is a primary key violation. I understand why, but I have no idea how to resolve it. What is good practice for dealing with such a circumstance? I do not want to merge before updating the database, because I want an error to inform the user that multiple users updated this data.
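
    For illustration, a hedged T-SQL sketch (the table and message text are invented) of one way to surface the collision as a clearer application-level error instead of a raw primary key violation:

      BEGIN TRY
          INSERT INTO dbo.Widgets (PrimaryKey, Value)   -- illustrative table
          VALUES ('A', 10);
      END TRY
      BEGIN CATCH
          -- 2627 = PRIMARY KEY violation, 2601 = unique index violation
          IF ERROR_NUMBER() IN (2627, 2601)
              RAISERROR ('Another user has already inserted this key.', 16, 1);
          ELSE
              RAISERROR ('Insert failed.', 16, 1);
      END CATCH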

    Read the article

  • Combining 2xMSI and 1xEXE into a single installer

    - by Netguy
    Hi guys, I have 3 files: Alky for Applications.msi (which makes Vista apps work on XP), Windows Vista sidebar.exe (which makes the Vista sidebar work on XP), and Gadget Extractor.msi (a part of number 2). The problem is that all three are installers and I want to merge them into one installer. So please tell me what I should do; I also want to remove some content (normal files) from number 2. Note: I do NOT want to bind the files so that three installers start at the same time; I want to make them into one. The person who is able to help me gets a VPS with cPanel with RL/TF allowed :D

    Read the article

  • Continuous integration with multiple branch development

    - by ryanprayogo
    In the project that I'm working on, we are using SVN with a 'Stable Trunk' strategy. What that means is that for each bug that is found, QA opens a bug ticket and assigns it to a developer. The developer fixes that bug and checks it in on a branch off trunk (let's call this the bug branch); that branch will only contain fixes for that particular bug ticket. When we decide to do a release, for each bug fix that we want to release to the customer, a developer merges the fixes from the relevant bug branches to trunk and proceeds with the normal QA cycle. The problem is that we use trunk as the codebase for our CI job (Hudson, specifically), and therefore all commits to the bug branches miss the daily build until they get merged to trunk when we decide to release the new version of the software. Obviously, that defeats the purpose of having CI. What is the proper way to fix this issue?

    Read the article

  • Can I compose a WCF callback contract out of multiple interfaces?

    - by mafutrct
    Follow-up question to http://stackoverflow.com/questions/2502930/how-can-i-compose-a-wcf-contract-out-of-multiple-interfaces. I tried to merge multiple callback interfaces into a single interface. This yields an InvalidOperationException claiming that the final interface contains no operations. Technically this is true; however, the inherited interfaces do contain operations. How can I fix this? Or is this a limitation of WCF? Edit:

      interface A { [OperationContract] void X(); }
      interface B { [OperationContract] void Y(); }
      interface C : A, B {}  // this is the public callback contract

    Read the article

  • Single SQL Server Result Set from Query

    - by JamesC
    Hi, please advise on how to merge two results into one using SQL Server 2005. I have a situation where an Account can have up to two Settlement Instructions, and this has been modeled like so (the slimmed-down schema):

      Account
      ---------------------
      Id
      AccountName
      PrimarySettlementId (nullable)
      AlternateSettlementId (nullable)

      SettlementInstruction
      ----------------------
      Id
      Name

    The output I want is a single result set, with a select statement along the lines of this, which will allow me to construct some Java objects in my Spring row mapper:

      select Account.Id as accountId, Account.AccountName as accountName,
             s1.Id as primarySettlementId, s1.Name as primarySettlementName,
             s2.Id as alternateSettlementId, s2.Name as alternateSettlementName

    I've tried various things but cannot find a way to get the result set merged into one where the primary and alternate FKs are not null. Finally, I have searched the forum, but nothing quite seems to fit with what I need.
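
    A hedged sketch of the kind of query that usually covers this: two LEFT JOINs against SettlementInstruction keep accounts even when either FK is null (names follow the question):

      SELECT a.Id          AS accountId,
             a.AccountName AS accountName,
             s1.Id         AS primarySettlementId,
             s1.Name       AS primarySettlementName,
             s2.Id         AS alternateSettlementId,
             s2.Name       AS alternateSettlementName
      FROM   Account a
             LEFT JOIN SettlementInstruction s1 ON s1.Id = a.PrimarySettlementId
             LEFT JOIN SettlementInstruction s2 ON s2.Id = a.AlternateSettlementId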

    Read the article

  • PHP: How to overwrite values in one array with values from another

    - by Svish
    I have an array with default settings and an array with user-specified settings. I want to merge these two arrays so that the default settings get overwritten with the user-specified ones. I have tried array_merge, which does the overwriting like I want, but it also adds new settings if the user has specified settings that don't exist in the defaults. Is there a better function I can use for this than array_merge? Or is there a function I can use to filter the user-specified array so that it only contains keys that also exist in the default settings array? (PHP version 5.3.0)
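
    A hedged sketch of one combination of built-ins that does this: merge for the overwrite, then intersect on keys so only settings present in the defaults survive (variable names are illustrative):

      <?php
      $defaults     = array('color' => 'blue', 'size' => 10);
      $userSettings = array('size' => 12, 'unknown' => 'dropped');

      // overwrite defaults with user values, then keep only keys that exist in the defaults
      $settings = array_intersect_key(array_merge($defaults, $userSettings), $defaults);
      // $settings === array('color' => 'blue', 'size' => 12)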

    Read the article

  • Would this roll back/stop all records from inserting?

    - by chobo2
    Hi, I have been going through this tutorial http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx and then made an SP like this:

      CREATE PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText)
      AS
      DECLARE @hDoc int
      EXEC sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData

      INSERT INTO TBL_TEST_TEST(NAME)
      SELECT XMLProdTable.NAME
      FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2)
           WITH (ID int, NAME varchar(100)) XMLProdTable

      EXEC sp_xml_removedocument @hDoc

    Now my requirements call for a mass insert and a mass update one after another. So first, can I merge those into one SP? I am not sure how it works with OPENXML, but I would think it is just a matter of making sure the XPath is right. Next, what happens if something goes wrong while this combined SP is running? Would it roll back all the records, or would it just stop, leaving the records inserted before the failure in the table?
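
    One point the question hinges on: without an explicit transaction each statement commits on its own, so rows inserted by a statement that finished before the failure would remain. A hedged sketch of wrapping both operations in one transaction so a failure rolls everything back (the UPDATE is a placeholder):

      BEGIN TRY
          BEGIN TRANSACTION;

          INSERT INTO TBL_TEST_TEST (NAME)
          SELECT XMLProdTable.NAME
          FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2)
               WITH (ID int, NAME varchar(100)) XMLProdTable;

          -- the mass UPDATE statement would go here

          COMMIT TRANSACTION;
      END TRY
      BEGIN CATCH
          IF @@TRANCOUNT > 0
              ROLLBACK TRANSACTION;
          DECLARE @msg nvarchar(2048);
          SET @msg = ERROR_MESSAGE();
          RAISERROR (@msg, 16, 1);   -- re-raise so the caller sees the original error
      END CATCH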

    Read the article

  • Why is Postgres doing a Hash in this query?

    - by Claudiu
    I have two tables: A and P. I want to get information out of all rows in A whose id is in a temporary table I created, tmp_ids. However, there is additional information about A in the P table, foo, and I want to get this info as well. I have the following query:

      SELECT A.H_id AS hid, A.id AS aid, P.foo, A.pos, A.size
      FROM tmp_ids, P, A
      WHERE tmp_ids.id = A.H_id
        AND P.id = A.P_id

    I noticed it going slowly, and when I asked Postgres to explain, I noticed that it combines tmp_ids with an index on A I created for H_id with a nested loop. However, it hashes all of P before doing a hash join with the result of the first merge. P is quite large and I think this is what's taking all the time. Why would it create a hash there? P.id is P's primary key, and A.P_id has an index of its own.
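
    A hedged sketch of how to inspect and nudge the planner; nothing is guaranteed to change, but a hash over all of P often just means missing or stale statistics, especially on the temporary table:

      -- refresh planner statistics
      ANALYZE tmp_ids;
      ANALYZE P;

      -- EXPLAIN ANALYZE shows the actual row counts and timings for each node
      EXPLAIN ANALYZE
      SELECT a.H_id AS hid, a.id AS aid, p.foo, a.pos, a.size
      FROM   tmp_ids t
      JOIN   A a ON a.H_id = t.id
      JOIN   P p ON p.id = a.P_id;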

    Read the article

  • Oracle 11g Data Guard over a WAN

    - by Dave LeJeune
    Hi - we are in the process of looking at using Oracle's Data Guard to replicate our 11g instance from a colo facility in Washington DC to Chicago. To give some basics, we have approximately 25 TB of storage and a healthy transaction rate in the 1-2K/sec range. Also, because we are processing data in real time, we have a 24x7x365 requirement for processing data. We don't have any respite as far as volume except for system upgrades (once every few months) where we take the system offline, but then of course experience a spike in transactions when we bring the system back online. Ideally we would want the second instance in the DG configuration semi-online in a read-only fashion for reports etc. We evaluated DG in 10g and were not overly impressed, and research seemed to show that earlier versions had issues with replication over a WAN, but I have heard good things about the changes the product has gone through with 11g. Can anyone confirm an instance of this size and transaction rate being replicated over a WAN, and if so, what is the general latency? Any information or experiences with a DG implementation of this size and scope would really be helpful (or larger - I also realize we are still relatively small compared to many others out there). Many thanks in advance.

    Read the article

  • Generate a collection of changed lines between two revisions of a file using Java

    - by mchr
    I am writing an Eclipse plugin which needs to be able to determine which lines of a file have changed compared to a different version of the same file. Is there an existing class or library which I can use for this task? The closest I have found is org.eclipse.compare.internal.merge.DocumentMerger. This can be used to find the information I need, but it is in an internal package, so it is not suitable for me to use. I could copy/paste the source of this class and adapt it to my requirements. However, I am hoping there is an existing library to handle textual comparisons.
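
    If no reusable library fits, a hedged, minimal sketch of computing the changed line numbers directly with a longest-common-subsequence table (plain Java, not the Eclipse compare API):

      import java.util.ArrayList;
      import java.util.List;

      public class ChangedLines {

          /** Returns the 0-based line numbers of 'revised' that are not part of the
           *  longest common subsequence with 'original', i.e. added or changed lines. */
          public static List<Integer> changedLines(List<String> original, List<String> revised) {
              int n = original.size(), m = revised.size();
              int[][] lcs = new int[n + 1][m + 1];
              for (int i = n - 1; i >= 0; i--) {
                  for (int j = m - 1; j >= 0; j--) {
                      lcs[i][j] = original.get(i).equals(revised.get(j))
                              ? lcs[i + 1][j + 1] + 1
                              : Math.max(lcs[i + 1][j], lcs[i][j + 1]);
                  }
              }
              List<Integer> changed = new ArrayList<Integer>();
              int i = 0, j = 0;
              while (i < n && j < m) {
                  if (original.get(i).equals(revised.get(j))) {
                      i++; j++;                        // unchanged line
                  } else if (lcs[i + 1][j] >= lcs[i][j + 1]) {
                      i++;                             // line removed from the original
                  } else {
                      changed.add(j++);                // line added or changed in the revision
                  }
              }
              while (j < m) changed.add(j++);          // trailing additions
              return changed;
          }
      }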

    Read the article

  • Can log4net appenders be defined in their own XML files?

    - by ladenedge
    I want to define a handful of (ADO.NET) appenders in my library, but allow users of my library to configure the use of those appenders. Something like this seems to be what I want:

      XmlConfigurator.Configure(appenderStream1);
      XmlConfigurator.Configure(appenderStream2);
      XmlConfigurator.Configure();

    But that doesn't seem to work, in spite of the debug output containing messages like this: Configuration update mode [Merge]. What is the right way to do this? Or is there an alternative to asking users to duplicate large chunks of XML configuration?

    Read the article

  • Finding MySQL duplicates, then merging data

    - by Michael Pasqualone
    I have a MySQL database with a tad under 2 million rows. The database is non-interactive, so efficiency isn't key. The (simplified) structure I have is:

      `id` int(11) NOT NULL auto_increment
      `category` varchar(64) NOT NULL
      `productListing` varchar(256) NOT NULL

    The problem I would like to solve is: I want to find duplicates on the productListing field, merge the data in the category field into a single result, and delete the duplicates. So given the following data:

      +----+-----------+----------------+
      | id | category  | productListing |
      +----+-----------+----------------+
      |  1 | Category1 | productGroup1  |
      |  2 | Category2 | productGroup1  |
      |  3 | Category3 | anotherGroup9  |
      +----+-----------+----------------+

    what I want to end up with is:

      +----+---------------------+----------------+
      | id | category            | productListing |
      +----+---------------------+----------------+
      |  1 | Category1,Category2 | productGroup1  |
      |  3 | Category3           | anotherGroup9  |
      +----+---------------------+----------------+

    What's the most efficient way to do this, either in a pure MySQL query or in PHP?
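
    A hedged sketch of collapsing the duplicates with GROUP_CONCAT into a new table and then swapping it in (the table name products is illustrative; raise group_concat_max_len if the category lists grow long):

      CREATE TABLE products_merged AS
      SELECT MIN(id) AS id,
             GROUP_CONCAT(category ORDER BY category SEPARATOR ',') AS category,
             productListing
      FROM   products
      GROUP  BY productListing;

      -- inspect products_merged, then swap it in:
      -- RENAME TABLE products TO products_old, products_merged TO products;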

    Read the article

  • Compare the beat/tempo of two audio files and give a rating on iPhone

    - by Senthil Kumar
    Hello, I want to develop an iPhone application that can count the number of phrases received when the user sings into the mic. The application should also be able to decipher whether the user's phrases are in or out of cadence with a preset beat. While the user sings into the mic, only instrumental music plays. So I have to merge the user's recorded voice with the instrumental music into one audio file. I already have the original song file. I have to compare both and give a rating to the user. [Note: the instrumental music is the original song file without the vocals.] Can you please help me? Thanks, Vadivelu

    Read the article

  • Velocity templates seem to fail with UTF-8

    - by steve
    Hi, I have been trying to use a Velocity template with the following content: Sübjäct $item. Everything works fine except the translation of the two Unicode characters. The resulting string printed on the command line looks like: Sübjäct foo. I searched the Velocity website and the web on this issue and came up with different encoding options, which I added to my code, but they don't help. This is the actual code:

      velocity.setProperty("file.resource.loader.path", absPath);
      velocity.setProperty("input.encoding", "UTF-8");
      velocity.setProperty("output.encoding", "UTF-8");
      Template t = velocity.getTemplate("subject.vm");
      t.setEncoding("UTF-8");
      StringWriter sw = new StringWriter();
      t.merge(null, sw);
      System.out.println(sw.getBuffer());

    Can anyone give me some hints on how to fix this issue?
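
    A hedged sketch of a common fix: the writer/console encoding is often the real culprit, so request the template with an explicit encoding and print through a UTF-8 stream (method names per Velocity 1.x):

      import java.io.PrintStream;
      import java.io.StringWriter;
      import org.apache.velocity.Template;
      import org.apache.velocity.VelocityContext;
      import org.apache.velocity.app.VelocityEngine;

      public class Utf8TemplateDemo {
          public static void main(String[] args) throws Exception {
              String absPath = args[0];                       // directory containing subject.vm

              VelocityEngine velocity = new VelocityEngine();
              velocity.setProperty("file.resource.loader.path", absPath);
              velocity.setProperty("input.encoding", "UTF-8");
              velocity.setProperty("output.encoding", "UTF-8");
              velocity.init();

              // ask for the template in UTF-8 explicitly
              Template t = velocity.getTemplate("subject.vm", "UTF-8");

              StringWriter sw = new StringWriter();
              t.merge(new VelocityContext(), sw);

              // System.out uses the platform default encoding; print through UTF-8 instead
              PrintStream out = new PrintStream(System.out, true, "UTF-8");
              out.println(sw.toString());
          }
      }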

    Read the article

  • Is there a "dual user check-in" source control system?

    - by Zubair
    Are there any source control systems that require another user to validate the source code "before" it can be checked in? I want to know, as this is one technique to make sure that code quality is high. Update: there has been talk of branches in the answers, and while I feel branches have their place, I think branches are something different, because when a developer's code is ready to go into the main branch it "should" be checked. Most often, though, I see that when this happens, the lead developer or whoever is responsible for the merge into the main branch/stream just puts the code into the main branch as long as it "compiles", and does no more checking than that. I want the idea of two people putting their names to the code at an early stage, so that it introduces some responsibility, and also because the code is cheaper to fix early on and is still fresh in the developer's mind.

    Read the article

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set (around 200 GB) of data that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs, which then run on differencing disks off those. The whole set is mostly read-only. In more detail: a file that IS there and IS in use will NEVER change. I may delete files (when absolutely not in use) and add new files, but a file that is there once gets read protection set, and that is how it stays until it is retired. Obviously, I need as much uptime as possible. SO FAR we handle that by having this directory local on every Hyper-V server. Now I am thinking of moving this into our storage fabric. Because of the "it HAS to be there" requirement, I pretty much want a share-nothing architecture. DFS would be perfect for this - a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers - we are trying to move into a scenario where storage is more centralized. Server 2012 supports transparent failover for ("continuously available") shares, but it seems that this only works with a clustered disk behind it. Is there any way around this for read-only file stores? All documentation points to stuff like a shared JBOD - but that would leave me open to file system corruption. I really plan to go quite separately here, vertically - 2 servers, both with SSDs only for this, both with their own 2000 W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G - that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is an edge case, obviously, as the files are read-only once in use.

    Read the article

  • How to create a splash activity, call a second activity via a button, and a simple data form with 5 fields and 2 buttons

    - by Mike
    New to Android; I need help with one solid build that I can refer to and study for future projects. The first activity is a background image with a button; when clicked, it takes you to the second activity, which is a form with 5 data fields and 2 buttons. One button calls an intent to take a picture within the app, and one button submits the data from the form, along with the picture, to a URL. Lastly, a third activity says "complete, thank you". I can make some of this, but I don't know how to wire up the buttons, or how to open the camera and merge its result with the app's data so it can be sent as a single package. I suppose I could also hook into GPS to acquire the location, as well as the camera call?
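
    A hedged, minimal sketch (activity names and the request code are invented, and the upload step is omitted) of the button-to-activity hop and the in-app camera call; these fragments live inside the respective Activity subclasses:

      // In SplashActivity.onCreate(): the button opens the form activity
      Button start = (Button) findViewById(R.id.start_button);
      start.setOnClickListener(new View.OnClickListener() {
          public void onClick(View v) {
              startActivity(new Intent(SplashActivity.this, FormActivity.class));
          }
      });

      // In FormActivity: one button grabs a picture via the built-in camera app
      static final int TAKE_PICTURE = 1;

      void onTakePictureClicked() {
          startActivityForResult(new Intent(MediaStore.ACTION_IMAGE_CAPTURE), TAKE_PICTURE);
      }

      @Override
      protected void onActivityResult(int requestCode, int resultCode, Intent data) {
          super.onActivityResult(requestCode, resultCode, data);
          if (requestCode == TAKE_PICTURE && resultCode == RESULT_OK) {
              Bitmap thumbnail = (Bitmap) data.getExtras().get("data");  // small preview bitmap
              // attach the bitmap (or a full-size file via EXTRA_OUTPUT) to the form POST
          }
      }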

    Read the article

  • How to configure Hibernate not to update @Version on each access to an entity

    - by radai
    I have a simple query that returns an entity, and when I look at Hibernate's SQL output I see that when I execute this query, Hibernate updates the @Version field (on each consecutive read, the @Version field is updated). I don't modify anything in the entity I fetch, and I don't pass it as an argument to either persist or merge. This effectively means every read I make turns into a read + write. I've tried setting the lock mode to both NONE (JPA 2) and READ (JPA 1), to no avail. Is there any way to achieve this? If so, is there any way to set this as the default behavior in persistence.xml in some way? I'm using JPA 2 over Hibernate 3.6.

    Read the article

  • How to line up columns using paste(1)? Or how to make an aligned table when merging lines in the shell?

    - by nn
    Hi, I want to merge lines such that the merged lines are aligned on the same boundary. UNIX paste(1) does this nicely when lines all meet at the same tab boundary, but when lines differ in size (in the file that lines are being merged into), the text comes out awkward. Example of paste(1) with the desired effect:

      $ echo -e "a\nb\nccc\nd" | paste - -
      a       b
      ccc     d

    Example of paste(1) with the undesired effect:

      $ echo -e "a\nb\ncccccccccccc\nd" | paste - -
      a       b
      cccccccccccc    d

    Note how the 2nd column doesn't line up. I want 'b' to line up with 'd', which requires an additional tab. Unfortunately I believe this is the limit of the paste utility, so if anyone has any idea how to get the desired effect above, I'd love to hear it.
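
    A hedged sketch of one workaround: let paste(1) do the merging and column(1) do the alignment, since column -t pads every field to the widest entry in its column (exact spacing may vary by platform):

      $ echo -e "a\nb\ncccccccccccc\nd" | paste - - | column -t
      a             b
      cccccccccccc  d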

    Read the article

  • DVCS - What's the downside of rewriting unpublished history?

    - by user1447278
    I was wondering what in particular is the downside of "losing history" in a development process. One famous example is of course git rebase -i / git merge --squash, but also what is described here: http://fourkitchens.com/blog/2009/04/20/alternatives-rebasing-bazaar under "I want to clean up my commit history prior to submitting my changes to the mainline." I can see that exporting patches and applying them to another branch would lose the "history" of the branch, but why would that branch and its commit history be useful after it has been merged? Can someone elaborate on why such techniques are considered "dirty"? Why does it matter in which order changes were originally committed in the first place as long as they can be applied to the main branch?

    Read the article

  • Yet another ON DUPLICATE KEY UPDATE query

    - by Andy Gee
    I've been reading all the questions on here, but I still don't get it. I have two identical tables of considerable size. I would like to update table packages_sorted with data from packages_sorted_temp without destroying the existing data in packages_sorted. Table packages_sorted_temp contains data in only 2 columns, db_id and quality_rank. Table packages_sorted contains data in all 35 columns, but quality_rank is 0. The primary key on each table is db_id, and this is what I want to trigger the ON DUPLICATE KEY UPDATE with. In essence, how do I merge these two tables, changing packages_sorted.quality_rank from 0 to the quality_rank stored in packages_sorted_temp under the same primary key? Here's what's not working:

      INSERT INTO `packages_sorted` (`db_id`, `quality_rank`)
      SELECT `db_id`, `quality_rank`
      FROM `packages_sorted_temp`
      ON DUPLICATE KEY UPDATE `packages_sorted`.`db_id` = `packages_sorted`.`db_id`
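
    A hedged sketch of the usual pattern (untested against the real schema): the UPDATE clause must assign the column that should actually change, and VALUES(col) refers to the value the INSERT attempted; since every db_id already exists, each row takes the UPDATE path:

      INSERT INTO packages_sorted (db_id, quality_rank)
      SELECT db_id, quality_rank
      FROM   packages_sorted_temp
      ON DUPLICATE KEY UPDATE quality_rank = VALUES(quality_rank);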

    Read the article

  • Merging Branch into Trunk...

    - by jimbo
    Now I know this has been asked a few times, but I'm still not getting it... I use Versions, which is doing a great job, and I am also using Beanstalk, and they are working nicely together. However, I have been working on a branch of the trunk (sandbox) which I now want to merge back into the trunk. Versions can't do this, so it's into Terminal. Sandbox is at revision 256 and the trunk (because of a few other amends) is at 255. Any help on this would be great; I have read a few bits, but am still a little lost...
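
    A hedged sketch of the usual Subversion steps from a clean, up-to-date trunk working copy (the repository URL is a placeholder; syntax per Subversion 1.5+):

      cd ~/work/trunk
      svn update

      # pull in everything done on the sandbox branch, then review and commit
      svn merge --reintegrate http://example.com/svn/repo/branches/sandbox .
      svn status
      svn commit -m "Merge sandbox branch (through r256) back into trunk"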

    Read the article

  • Need Insight - What is the best practice for syncing up a production database that will be used on a

    - by james
    I have a site set up using CakePHP and MySQL, and I want to work on a test database without disrupting my live site in case something goes wrong. I have another busy site, but my test site runs off the live database, which can be occasionally nerve-wracking. What do I do if I change a table name in the test DB and I want it changed in the live database? Or if I remove a record from the test database? Is there a way to diff the changes? How do I even merge those changes? How does this interfere with live user edits and things of that nature? Hopefully some of you working devs can share some insight!

    Read the article

  • Is JPA + EJB too slow (or heavy) for transactions over the Internet?

    - by Xavier Callejas
    Hi, I am developing a stand-alone Java client application that connects to a Glassfish v3 application server for JPA/EJB facade-style transactions. In other words, my client application does not connect directly to the database for CRUD; it transfers JPA objects using EJB stateless sessions. I have scenarios where this client application will be used on an external network connected by a VPN over the Internet with a 512 kbps DSL client connection, and a simple query takes a long time. Watching the traffic graph, when I merge an entity in the client application I see megabytes of traffic (I couldn't believe a purchase-order entity could weigh more than 1 MB). I have LAZY fetch on almost every many-to-many relationship, but I have a lot of many-to-one relationships between entities (but this is the great advantage of JPA!). Could I do something to speed up transactions between the JPA/EJB server and the remote Java client? Thank you in advance.

    Read the article
