Search Results

Search found 3179 results on 128 pages for 'merge replication'.


  • Mercurial to Mercurial to Subversion Workflow Problem

    - by Dalroth
    We're migrating from Subversion to Mercurial. To facilitate the migration, we're creating an intermediate Mercurial repository that is a clone of our Subversion repository. All developers will begin switching over to the Mercurial repository, and we'll periodically push changes from the intermediate Mercurial repository to the existing Subversion repository. After a period of time, we'll simply retire the Subversion repository and the intermediate Mercurial repository will become the new system of record.

        Dev 1 Local --+--> Mercurial --+--> Subversion
        Dev 2 Local --+                +
        Dev 3 Local --+                +
        Dev 4 -------------------------+

    I've been testing this out, but I keep running into a problem when I push changes from my local repository to the intermediate Mercurial repository, and then up into our Subversion repository. On my local machine, I have a changeset that is committed and ready to be pushed to our intermediate Mercurial repository. Here you can see it is revision #2263 with hash 625... I push only this changeset to the remote repository. So far, everything looks good. The changeset has been pushed.

    I now switch over to the remote repository and update the working directory:

        hg update
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved

    Next, I push the change up to Subversion, which works great:

        hg push
        pushing to svn://...
        searching for changes
        [r3834] bmurphy: database namespace
        pulled 1 revisions
        saving bundle to /srv/hg/repository/.hg/strip-backup/62539f8df3b2-temp
        adding branch
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files
        rebase completed

    At this point, the change is in the Subversion repository and I return attention back to my local client. I pull changes to my local machine. Huh? I've now got two changesets. My original changeset appears as a local branch now. The other changeset has a new revision number, 2264, and a new hash, 10c1... Anyway, I update my local repo to the new revision. I'm now switched over. So, I finally click "determine and mark outgoing changesets" and, as you can see, Mercurial still wants to push out my previous changeset even though it has already been pushed. Clearly, I'm doing something wrong.

    I also can't merge the two revisions. If I merge the two revisions on my local machine, I end up with a "merge" commit. When I push that merge commit out to the intermediate Mercurial repository, I can no longer push changes out to our Subversion repository. I end up with the following problem:

        hg update
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved
        hg push
        pushing to svn://...
        searching for changes
        abort: Sorry, can't find svn parent of a merge revision.

    and I have to roll back the merge to get back to a working state. What am I missing?
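
    A hedged sketch of the usual workaround, assuming the bridge to Subversion is what rebases each pushed changeset (the "rebase completed" line above suggests it is): the rewritten changeset comes back with a new hash, so the original local draft has to be discarded rather than merged. The hash below is the question's own; the commands are standard Mercurial, and strip requires the mq/strip extension.

        hg pull            # fetch the rewritten changeset (new hash, e.g. 10c1...)
        hg update tip      # move the working copy onto it
        hg strip 625...    # drop the superseded local changeset (mq/strip extension)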


  • Mercurial branching a branch doesn't display right in hg serve or hg view

    - by Mystic
    I've been doing some development on a branch and realized that before it could be complete, something else needed to be done first. I decided that I would branch my current branch, do the requisite changes in that branch, then merge them back together, and finally merge my working branch into default. Basically I expected this:

        | | + requisite work branch commit
        | |/
        | + working branch commit
        |/
        +   Default branch commit

    and in the end what I expect to do is this:

        +   Merge into default
        |\
        | + Merge requisite work into working branch
        | |\
        | | + requisite work branch commit
        | |/
        | + working branch commit
        |/
        +   Default branch commit

    What I'm getting in both hg view and hg serve is this:

        | + requisite work branch commit
        | |
        | + working branch commit
        |/
        +   Default branch commit

    However, when I look at the commit log, "requisite work branch commit" is marked as part of a different branch. Am I doing something wrong? Is this a bug in hg view and hg serve? Has anyone experienced this before?
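
    For reference, a minimal sketch of the intended topology with named branches (branch names are made up, and each commit assumes pending file changes); hg log -G (the graphlog extension on older Mercurial) is another way to check how the graph was actually recorded:

        hg branch working   && hg commit -m "working branch commit"
        hg branch requisite && hg commit -m "requisite work branch commit"
        hg update working   && hg merge requisite && hg commit -m "merge requisite into working"
        hg update default   && hg merge working   && hg commit -m "merge into default"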


  • O(log n) algorithm for computing rank of union of two sorted lists?

    - by Eternal Learner
    Given two sorted lists, each containing n real numbers, is there an O(log n) time algorithm to compute the element of rank i (where i corresponds to the index in increasing order) in the union of the two lists, assuming the elements of the two lists are distinct? I can think of using a Merge procedure to merge the 2 lists and then find the A[i] element in constant time. But the Merge would take O(n) time. How do we solve it in O(log n) time?
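
    For what it's worth, a minimal sketch of the standard answer (illustrative code, not from the question): binary-search how many of the k+1 smallest elements come from the first list, so only O(log n) comparisons are needed.

        def rank_k(a, b, k):
            """Element of 0-based rank k in the union of sorted lists a and b."""
            t = k + 1                                    # how many smallest elements to take
            lo, hi = max(0, t - len(b)), min(t, len(a))  # feasible counts taken from a
            while lo <= hi:
                i = (lo + hi) // 2                       # take i elements from a ...
                j = t - i                                # ... and j elements from b
                if i < len(a) and j > 0 and b[j - 1] > a[i]:
                    lo = i + 1                           # need more elements from a
                elif i > 0 and j < len(b) and a[i - 1] > b[j]:
                    hi = i - 1                           # need fewer elements from a
                else:                                    # valid split: answer is the largest taken
                    left_a = a[i - 1] if i > 0 else float("-inf")
                    left_b = b[j - 1] if j > 0 else float("-inf")
                    return max(left_a, left_b)

        # rank_k([1, 3, 5], [2, 4, 6], 2) -> 3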


  • O(log n) algorithm to find the element having rank i in union of pre-sorted lists

    - by Eternal Learner
    Given two sorted lists, each containing n real numbers, is there an O(log n) time algorithm to compute the element of rank i (where i corresponds to the index in increasing order) in the union of the two lists, assuming the elements of the two lists are distinct? I can think of using a Merge procedure to merge the 2 lists and then find the A[i] element in constant time. But the Merge would take O(n) time. How do we solve it in O(log n) time?


  • MS Word: Mail merge hyperlinks with a merge field in the URL query string

    - by oterrada
    Hi, I'm trying to send an email by merging one document (docx) with a contacts database (via OleDB). Using MS Word 2007, it seems easy (it works for simple things: name, address, ...) but I can't find how to fill the query string of a URL with a merge field inside a hyperlink. A hyperlink like "Click here", where "here" links to http://domain.com/[email protected], and [email protected] is the address of the contact, a merge field. I've tried the special/merge fields:

        {HYPERLINK "http://domain.com/test.php?contact={MERGEFIELD contactEmail}" \o "Here"}

    but it doesn't work: sometimes it merges only the first record and then sends all the mails with the same address. I saw this by toggling the field-code view with ALT+F9. This support doc doesn't work for me because I don't have the complete URL in my database for each contact, and I can't change the design of the table or add a view (it's from the CRM), and I'd rather not do it by exporting the table and adding the field. Any idea? Thanks in advance,


  • SVN supports historical merges so how is Mercurial better?

    - by radman
    Hi, I'm a long time SVN user and have been hearing a lot of brouhaha with regard to Mercurial and decentralised version control systems in general. The main touted feature that I am aware of is that merging in Mercurial is much easier because it records information for each merge, so each successive merge is aware of the previous ones. Now, as stated in the red book, in the section on merging, SVN already supports this with mergeinfo. I have not actually used this feature (although I wanted to, our repo version wasn't recent enough), but is this SVN feature particularly different from what Mercurial offers? For anyone who is not aware, the suggested workflow for historical merging in SVN is this (sketched in commands below):

    1. Branch from the development trunk to do your own thing.
    2. Regularly merge changes from trunk into your branch to stay up to date.
    3. Merge back when you're done, with the mergeinfo to smooth the process.

    Without historical merge data this is a nightmare, because the comparison is strictly on the differences in the files and does not take into account the steps taken along the way. So each change in the development trunk puts you further into possible conflict when you merge back. Now what I would like to know is: Does merging using Mercurial provide a significant advantage when compared with mergeinfo in SVN, or is this just a lot of hot air about nothing? Has anyone used the mergeinfo feature in SVN, and how good is it actually in practice?
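
    A minimal sketch of that mergeinfo workflow in newer SVN clients (repository paths are made up; ^/ is repository-relative syntax, and --reintegrate was the pre-1.8 way to merge a branch back):

        svn copy ^/trunk ^/branches/feature -m "branch"   # 1. branch
        # in a working copy of the branch:
        svn merge ^/trunk                                 # 2. sync merge; mergeinfo records what came over
        # in a working copy of trunk, when done:
        svn merge --reintegrate ^/branches/feature        # 3. merge back using the recorded mergeinfo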


  • Is there a way to get a "subtree" from hclust? (R)

    - by Tal Galili
    Hello all, I wish to create a "subtree" from an hclust object. For example, let's say I have the following object:

        a <- list()                       # initialize empty object
        a$merge <- matrix(c(-1, -2,
                            -3, -4,
                             1,  2,
                            -5, -6,
                             3,  4), nc = 2, byrow = TRUE)
        a$height <- c(1, 1.5, 3, 4, 4.5)  # define merge heights
        a$order <- 1:6                    # order of leaves (trivial if hand-entered)
        a$labels <- 1:6                   # LETTERS[1:4] # labels of leaves
        class(a) <- "hclust"              # make it an hclust object
        plot(a)                           # look at the result

    Now I wish to extract from it the following subtree:

        a <- list()                       # initialize empty object
        a$merge <- matrix(c(-1, -2,
                            -3, -4,
                             1,  2), nc = 2, byrow = TRUE)
        a$height <- c(1, 1.5, 3)          # define merge heights
        a$order <- 1:4                    # order of leaves (trivial if hand-entered)
        a$labels <- 1:4                   # LETTERS[1:4] # labels of leaves
        class(a) <- "hclust"              # make it an hclust object
        plot(a)                           # look at the result

    How could I access it? (I know that cutree could get me the members of the subtree, but not create an actual hclust object.) Thanks for any help, Tal
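
    A hedged sketch of one base-R route: convert to a dendrogram, cut, and convert back. The cut height 4 is chosen for this particular tree, and which $lower element holds the wanted branch depends on the tree, so inspect the pieces before picking one.

        d <- as.dendrogram(a)
        sub <- cut(d, h = 4)$lower[[1]]   # branches below height 4; pick the one you want
        a_sub <- as.hclust(sub)           # back to an hclust object
        plot(a_sub)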


  • Several ifstream vs. ifstream + constant seeking

    - by SpyBot
    I'm writing an external merge sort. It works like this: read k chunks from a big file, sort them in memory, perform a k-way merge, done. So I need to sequentially read from different portions of the file during the k-way merge phase. What's the best way to do that: several ifstreams, or one ifstream and seeking? Also, is there a library for easy async IO?
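
    For the several-ifstreams option, a minimal k-way merge sketch: one stream per sorted chunk plus a min-heap keyed on each stream's current head, so every read is sequential within its own chunk. File names and the whitespace-separated-int format are assumptions for illustration.

        #include <fstream>
        #include <functional>
        #include <queue>
        #include <string>
        #include <utility>
        #include <vector>

        int main() {
            std::vector<std::string> chunks = {"chunk0.txt", "chunk1.txt", "chunk2.txt"};
            std::vector<std::ifstream> in;
            for (const auto& name : chunks) in.emplace_back(name);

            using Entry = std::pair<int, std::size_t>;  // (value, index of source stream)
            std::priority_queue<Entry, std::vector<Entry>, std::greater<>> heap;

            int v;
            for (std::size_t i = 0; i < in.size(); ++i)
                if (in[i] >> v) heap.push({v, i});      // prime the heap with each chunk's head

            std::ofstream out("merged.txt");
            while (!heap.empty()) {
                auto [val, i] = heap.top();
                heap.pop();
                out << val << '\n';
                if (in[i] >> v) heap.push({v, i});      // refill from the stream just consumed
            }
        }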


  • Why does it throw an index out of bounds exception?

    - by Johanna
    Hi, I want to use merge sort for sorting my doubly linked list. I have created 3 classes (Node, DoublyLinkedList, MergeSort) but it will throw this exception from these lines:

    1. in the getNodes method of DoublyLinkedList: throw new IndexOutOfBoundsException();
    2. in the add method of DoublyLinkedList: Node cursor = getNodes(index);
    3. in the sort method of the MergeSort class: listTwo.add(x, localDoublyLinkedList.getValue(x));
    4. in the main method of DoublyLinkedList: merge.sort();

    This is my MergeSort class (I put the whole code for this class for better understanding):

        public class MergeSort {
            private DoublyLinkedList localDoublyLinkedList;

            public MergeSort(DoublyLinkedList list) {
                localDoublyLinkedList = list;
            }

            public void sort() {
                if (localDoublyLinkedList.size() <= 1) {
                    return;
                }
                DoublyLinkedList listOne = new DoublyLinkedList();
                DoublyLinkedList listTwo = new DoublyLinkedList();
                for (int x = 0; x < (localDoublyLinkedList.size() / 2); x++) {
                    listOne.add(x, localDoublyLinkedList.getValue(x));
                }
                for (int x = (localDoublyLinkedList.size() / 2) + 1; x < localDoublyLinkedList.size(); x++) {
                    listTwo.add(x, localDoublyLinkedList.getValue(x));
                }
                // Split the DoublyLinkedList again
                MergeSort sort1 = new MergeSort(listOne);
                MergeSort sort2 = new MergeSort(listTwo);
                sort1.sort();
                sort2.sort();
                merge(listOne, listTwo);
            }

            public void merge(DoublyLinkedList a, DoublyLinkedList b) {
                int x = 0;
                int y = 0;
                int z = 0;
                while (x < a.size() && y < b.size()) {
                    if (a.getValue(x) < b.getValue(y)) {
                        localDoublyLinkedList.add(z, a.getValue(x));
                        x++;
                    } else {
                        localDoublyLinkedList.add(z, b.getValue(y));
                        y++;
                    }
                    z++;
                }
                // copy remaining elements to the tail of a[];
                for (int i = x; i < a.size(); i++) {
                    localDoublyLinkedList.add(z, a.getValue(i));
                    z++;
                }
                for (int i = y; i < b.size(); i++) {
                    localDoublyLinkedList.add(z, b.getValue(i));
                    z++;
                }
            }
        }

    and just a part of my DoublyLinkedList:

        private Node getNodes(int index) throws IndexOutOfBoundsException {
            if (index < 0 || index > length) {
                throw new IndexOutOfBoundsException();
            } else {
                Node cursor = head;
                for (int i = 0; i < index; i++) {
                    cursor = cursor.getNext();
                }
                return cursor;
            }
        }

        public void add(int index, int value) throws IndexOutOfBoundsException {
            Node cursor = getNodes(index);
            Node temp = new Node(value);
            temp.setPrev(cursor);
            temp.setNext(cursor.getNext());
            cursor.getNext().setPrev(temp);
            cursor.setNext(temp);
            length++;
        }

        public static void main(String[] args) {
            int i = 0;
            i = getRandomNumber(10, 10000);
            DoublyLinkedList list = new DoublyLinkedList();
            for (int j = 0; j < i; j++) {
                list.add(j, getRandomNumber(10, 10000));
                MergeSort merge = new MergeSort(list);
                merge.sort();
                System.out.println(list.getValue(j));
            }
        }

    PLEASE help me. Thanks a lot.
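
    A hedged guess at the immediate cause, for illustration: when sort() fills listTwo, that list is still empty, yet the first add() call uses index (size / 2) + 1, so getNodes() sees index > length and throws. Starting the loop at size / 2 (the +1 also skips the middle element) and appending at the destination list's own length would avoid both problems. (Constructing and running MergeSort inside main's loop is a separate issue.)

        // hypothetical fix for the second split loop, using the question's own API
        for (int x = localDoublyLinkedList.size() / 2; x < localDoublyLinkedList.size(); x++) {
            listTwo.add(listTwo.size(), localDoublyLinkedList.getValue(x));  // append at the end
        }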


  • merging selected revisions from one branch on another in Mercurial

    - by Assaf Lavie
    Is it possible to merge a range of revisions from one branch to another in Mercurial? e.g.

        | r1
        | r2
        | r3
        |\___
        | | r5
        | | r6
        | | r7
        | | ...
        | | r40
        | r41

    If I want to merge revisions 6 & 7, but not 5, into the main branch, is this possible? What about multiple selected revision ranges from branch A to branch B? e.g. merge 4-7, 20-25 and 30-34? (This isn't a real case, just an illustration. I'm trying to understand if hg has this revision-range merge feature that I know svn has.)
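
    Strictly speaking this is cherry-picking rather than merging; a minimal sketch of how it is usually done (graft in modern Mercurial, or the older transplant extension):

        hg update default        # target branch
        hg graft 6 7             # copy just r6 and r7 onto it, leaving r5 behind
        # or, with the transplant extension enabled:
        hg transplant 6 7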


  • How to resolve merge conflicts in Mercurial (v1.0.2)?

    - by lajos
    I have a merge conflict, using Mercurial 1.0.2:

        merging test.h
        warning: conflicts during merge.
        merging test.h failed!
        6 files updated, 0 files merged, 0 files removed, 1 files unresolved
        There are unresolved merges, you can redo the full merge using:
          hg update -C 19
          hg merge 18

    I can't figure out how to resolve this. Google search results instruct me to use:

        hg resolve

    but for some reason my Mercurial (v1.0.2) doesn't have a resolve command:

        hg: unknown command 'resolve'

    How can I resolve this conflict?
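
    For context, hg resolve only appeared in Mercurial 1.1, so on 1.0.2 the usual route is to redo the merge with an interactive merge tool and commit once the conflicts are settled. A sketch, assuming a 3-way merge tool such as kdiff3 is installed (the tool name is just an example):

        hg update -C 19
        HGMERGE=kdiff3 hg merge 18   # any 3-way merge tool on $PATH should work
        hg commit -m "merged"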


  • Inheritance of jQuery's prototype partially fails

    - by user1065745
    I want to use CoffeeScript to create a UIObject class. This class should inherit from jQuery, so that instances of UIObject can be used as if they were created with jQuery.

        class UIObject
          isObject: (val) -> typeof val is "object"

          constructor: (tag, attributes) ->
            @merge jQuery(tag, attributes), this
            @UIObjectProperties = {}

          merge: (source, destination) ->
            for key of source
              if destination[key] is undefined
                destination[key] = source[key]
              else if @isObject(source[key])
                @merge(source[key], destination[key])
            return

    It partially works. Consider the Foobar class below:

        class Foobar extends UIObject
          constructor: -> super("<h1/>", html: "Foobar")

    $("body").append(new Foobar) works fine. BUT: (new Foobar).appendTo("body") places the tag, but also raises RangeError: Maximum call stack size exceeded. Was it just a bad idea to inherit from jQuery? Or is there a solution? For those who don't know CoffeeScript, the JavaScript source is:

        var Foobar, UIObject;
        var __hasProp = Object.prototype.hasOwnProperty,
          __extends = function(child, parent) {
            for (var key in parent) {
              if (__hasProp.call(parent, key)) child[key] = parent[key];
            }
            function ctor() { this.constructor = child; }
            ctor.prototype = parent.prototype;
            child.prototype = new ctor;
            child.__super__ = parent.prototype;
            return child;
          };

        UIObject = (function () {
          UIObject.prototype.isObject = function (val) {
            return typeof val === "object";
          };

          function UIObject(tag, attributes) {
            this.merge(jQuery(tag, attributes), this);
            this.UIObjectProperties = {};
          }

          UIObject.prototype.merge = function (source, destination) {
            var key;
            for (key in source) {
              if (destination[key] === void 0) {
                destination[key] = source[key];
              } else if (this.isObject(source[key])) {
                this.merge(source[key], destination[key]);
              }
            }
          };

          return UIObject;
        })();

        Foobar = (function () {
          __extends(Foobar, UIObject);

          function Foobar() {
            Foobar.__super__.constructor.call(this, "<h1/>", { html: "Foobar" });
          }

          return Foobar;
        })();


  • O(log n) algorithm for merging lists and computing rank?

    - by Eternal Learner
    Given two sorted lists, each containing n real numbers, is there an O(log n) time algorithm to compute the element of rank i (where i corresponds to the index in increasing order) in the union of the two lists, assuming the elements of the two lists are distinct? I can think of using a Merge procedure to merge the 2 lists and then find the A[i] element in constant time. But the Merge would take O(n) time. How do we solve it in O(log n) time?


  • GhostScript PDF Merging (Losing Editable Fields)

    - by Scott
    I'm using GhostScript to merge two PDFs into one PDF. One of the PDFs has textbox fields (editable fields) that I created in Adobe Acrobat Pro 9. When I merge these two PDFs with GhostScript I lose the textbox fields. Is there any way to merge these files (using GS or some other free Linux software) that keeps the textbox fields intact?
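
    For comparison, a hedged alternative: Ghostscript's pdfwrite device reinterprets and re-emits the pages, which is typically what drops AcroForm fields, whereas pdftk concatenates documents at the object level and usually keeps them. File names below are placeholders:

        pdftk form.pdf other.pdf cat output merged.pdf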


  • Merging all changesets associated with a WorkItem in Team Foundation Server

    - by Rowland Shaw
    We're trialling the use of the built-in bug tracking, and have written some integration into our helpdesk software that allows for escalation via work items. One thing I haven't found out how to do is to merge all changes associated with a work item (say, to go from the dev branch to main). I appreciate you can double-click on a changeset in the merge dialog to view whether it is associated with a work item, and also that I can select individual changesets and groups of adjacent changesets; but there doesn't appear to be any way to merge changes by work item?
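
    As far as I know there is no built-in merge-by-work-item, but the command line can approximate it: read the changeset IDs off the work item's links, then cherry-pick each one with tf merge. Paths and the changeset number below are made up:

        tf merge /version:C1234~C1234 $/Project/dev $/Project/main /recursive
        tf checkin /comment:"Merge changeset 1234 for work item 567"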


  • How to find the number of inversions in an array?

    - by Michael
    This is a phone interview question: "Find the number of inversions in an array." I guess they mean the O(N*log N) solution, since O(N^2) is trivial. I also guess it cannot be better than O(N*log N), since sorting is O(N*log N). I have checked a similar question from SO and can summarize the answers as follows:

    1. Calculate half the distance the elements should be moved to sort the array: copy the array and sort the copy. For each element of the original array a[i], find its position j in the sorted copy (binary search) and sum abs(i - j)/2.
    2. Modify merge sort: modify merge to count inversions between two sorted arrays (it takes O(N)) and run merge sort with the modified merge. (Sketched below.)

    Does it make sense? Are there other (maybe simpler) solutions? Isn't it too hard for a phone interview?
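
    A minimal sketch of the modified-merge-sort option (illustrative, not from the thread): whenever an element of the right half is emitted before remaining elements of the left half, each of those left-overs forms one inversion with it.

        def count_inversions(a):
            """Return (sorted copy of a, number of inversions), in O(n log n)."""
            if len(a) <= 1:
                return a, 0
            mid = len(a) // 2
            left, x = count_inversions(a[:mid])
            right, y = count_inversions(a[mid:])
            merged, z = [], 0
            i = j = 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i]); i += 1
                else:
                    merged.append(right[j]); j += 1
                    z += len(left) - i        # every remaining left element inverts with right[j]
            merged += left[i:] + right[j:]
            return merged, x + y + z

        # count_inversions([2, 4, 1, 3, 5])[1] -> 3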


  • How to prevent an automerge using git?

    - by marckassay
    I am trying to merge a local branch into the master branch without having Git do an automerge. I would like to "hand pick" what I would like to be merged into master. When I use Git's difftool command, I am able to diff and select what I want to be added into the master branch. But then when I do a merge, I lose what I selected before, because Git will do an automerge. I can commit the changes into master prior to the merge, but doing so seems unnatural. And Git's mergetool is only available when there are conflicts from a merge. But if Git does an automerge then usually there aren't conflicts, so I am unable to run the mergetool command.
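
    A hedged sketch of one common way to get this behaviour (the branch name is an example): stop the merge before it commits, then interactively throw away the hunks you don't want before concluding it.

        git merge --no-commit --no-ff feature   # automerge runs, but stops before committing
        git checkout -p HEAD -- .               # interactively revert unwanted hunks back to HEAD
        git commit                              # conclude the merge with what's left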


  • MySQL is hogging my server resources

    - by Reacen
    Does anyone have any idea of what can cause this weird behaviour and how to go about fixing it? This is all coming from MySQL only (both RAM and CPU usage), for about 10 minutes after I reboot my Java game server (which has a pool of 256 connections). There are not that many queries, and I think it may be more of a MySQL misconfiguration problem.

    My server: 3.20 GHz * 6 cores / 24 GB RAM / 64-bit Windows Server 2003.

    My game server: Java server, with a 256-connection MySQL pool (MyISAM engine), about 500,000 accounts, 9 million rows of game items in the database, and about 3,000 connected players.

    About 15 minutes after the game server reboot, the server resumes its stability: CPU usage drops to 1% ~ 5% and memory to 6 GB.

    Here is a copy of my MySQL configuration. Any advice about it will be appreciated; I really set it up almost at random.

        # Example MySQL config file for very large systems.
        #
        # This is for a large system with memory of 1G-2G where the system runs mainly
        # MySQL.
        #
        # You can copy this file to
        # /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options (in this
        # installation this directory is C:\mysql\data) or
        # ~/.my.cnf to set user-specific options.
        #
        # In this file, you can use all long options that a program supports.
        # If you want to know which options a program supports, run the program
        # with the "--help" option.

        # The following options will be passed to all MySQL clients
        [client]
        #password = your_password
        port = 3306
        socket = /tmp/mysql.sock

        # Here follows entries for some specific programs

        # The MySQL server
        [mysqld]
        #log=c:\mysql.log
        port = 3306
        socket = /tmp/mysql.sock
        skip-locking
        key_buffer_size = 2572M
        max_allowed_packet = 64M
        table_open_cache = 512
        sort_buffer_size = 128M
        read_buffer_size = 128M
        read_rnd_buffer_size = 128M
        myisam_sort_buffer_size = 500M
        thread_cache_size = 32
        query_cache_size = 1948M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 12
        max_connections = 5000

        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (via the "enable-named-pipe" option) will render mysqld useless!
        #
        #skip-networking

        # Replication Master Server (default)
        # binary logging is required for replication
        log-bin=mysql-bin

        # required unique id between 1 and 2^32 - 1
        # defaults to 1 if master-host is not set
        # but will not function as a master if omitted
        server-id = 1

        # Replication Slave (comment out master section to use this)
        #
        # To configure this host as a replication slave, you can choose between
        # two methods :
        #
        # 1) Use the CHANGE MASTER TO command (fully described in our manual) -
        #    the syntax is:
        #
        #    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
        #    MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
        #
        #    where you replace <host>, <user>, <password> by quoted strings and
        #    <port> by the master's port number (3306 by default).
        #
        #    Example:
        #
        #    CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
        #    MASTER_USER='joe', MASTER_PASSWORD='secret';
        #
        # OR
        #
        # 2) Set the variables below. However, in case you choose this method, then
        #    start replication for the first time (even unsuccessfully, for example
        #    if you mistyped the password in master-password and the slave fails to
        #    connect), the slave will create a master.info file, and any later
        #    change in this file to the variables' values below will be ignored and
        #    overridden by the content of the master.info file, unless you shutdown
        #    the slave server, delete master.info and restart the slaver server.
        #    For that reason, you may want to leave the lines below untouched
        #    (commented) and instead use CHANGE MASTER TO (see above)
        #
        # required unique id between 2 and 2^32 - 1
        # (and different from the master)
        # defaults to 2 if master-host is set
        # but will not function as a slave if omitted
        #server-id = 2
        #
        # The replication master for this slave - required
        #master-host = <hostname>
        #
        # The username the slave will use for authentication when connecting
        # to the master - required
        #master-user = <username>
        #
        # The password the slave will authenticate with when connecting to
        # the master - required
        #master-password = <password>
        #
        # The port the master is listening on.
        # optional - defaults to 3306
        #master-port = <port>
        #
        # binary logging - not required for slaves, but recommended
        #log-bin=mysql-bin
        #
        # binary logging format - mixed recommended
        #binlog_format=mixed

        # Point the following paths to different dedicated disks
        #tmpdir = /tmp/
        #log-update = /path-to-dedicated-directory/hostname

        # Uncomment the following if you are using InnoDB tables
        #innodb_data_home_dir = C:\mysql\data/
        #innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
        #innodb_log_group_home_dir = C:\mysql\data/
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        #innodb_buffer_pool_size = 384M
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 100M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 1
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 64M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [myisamchk]
        key_buffer_size = 256M
        sort_buffer_size = 256M
        read_buffer = 8M
        write_buffer = 8M

        [mysqlhotcopy]
        interactive-timeout


  • Discover the MySQL Connect Content Catalog!

    - by Bertrand Matthelié
    The MySQL Connect content catalog is now live! MySQL Connect offers you a unique opportunity to attend:

    • Keynotes, including "The State of the Dolphin" by Oracle's Chief Corporate Architect Edward Screven and VP of MySQL Engineering Tomas Ulin, and an exciting panel on "Current MySQL Usage Models and Future Developments" with Davi Arnaud from LinkedIn, Daniel Austin from PayPal, Mark Callaghan from Facebook and Calvin Sun from Twitter.
    • Over 65 conference sessions, enabling you to hear from: Oracle MySQL engineers on MySQL 5.6, InnoDB, replication, performance tuning, security, NoSQL, MySQL Cluster, Big Data... and more; MySQL customers including the US Census Bureau, Big Fish Games, Booking.com, Ticketmaster, and Tumblr; and internationally recognized MySQL community members and partners on topics such as performance, MySQL 5.6, backup, MySQL in the Cloud, OpenStack and Hadoop.
    • 6 Birds-of-a-Feather sessions about sharding, replication, backup, and other subjects.
    • 8 Hands-On Labs designed to give you hands-on experience with MySQL replication, the MySQL Performance Schema, MySQL Cluster... and more.
    • 6 tutorials providing in-depth knowledge about MySQL performance tuning best practices, enhancing productivity with MySQL 5.6 new features, or the essentials to get started with MySQL (tutorials are available as an add-on package to MySQL Connect registrants).
    • Demo pods and exhibitors, to learn more about partners' and Oracle's offerings.
    • Receptions on both Saturday and Sunday nights, enabling you to ask all your questions of Oracle's MySQL engineers and to network with some of the world's best MySQL professionals.

    Check out the MySQL Connect content catalog and find out about the amazing sessions you have the opportunity to attend. Reminder: the early bird discount runs until July 19; register now to save US$500! Plan to attend Oracle OpenWorld or JavaOne? Add the MySQL Connect event to your Oracle OpenWorld or JavaOne registration for only US$100. Exhibit/sponsorship opportunities are also available. We look forward to seeing you at MySQL Connect!


  • Is this table replicated?

    - by fatherjack
    Another in the potentially quite sporadic series of "I need to do ... but I can't find it on the internet". I have a table that I think might be involved in replication, but I don't know which publication it's in... We know the table name, 'MyTable'. We have replication running on our server and it's replicating our database, or part of it, 'MyDatabase'. We need to know if the table is replicated and, if so, which publication is going to need to be reviewed if we make changes to the table. How?

        USE MyDatabase
        GO

        /* Lots of info about our table but not much that's relevant
           to our current requirements */
        SELECT *
        FROM sysobjects
        WHERE NAME = 'MyTable'

        -- mmmm, getting there
        /* To quote BOL - "Contains one row for each merge article defined in the
           local database. This table is stored in the publication database."
           The interesting column is [pubid] */
        SELECT *
        FROM dbo.sysmergearticles AS s
        WHERE NAME = 'MyTable'

        -- really close now
        /* the sysmergepublications table - Contains one row for each merge
           publication defined in the database. This table is stored in the
           publication and subscription databases. So this would be where we
           get the publication details */
        SELECT *
        FROM dbo.sysmergepublications AS s
        WHERE s.pubid = '2876BBD8-3D4E-4ED8-88F3-581A659E8144'

        -- DONE IT.
        /* Combine the two tables above and we get the information we need */
        SELECT s.[name] AS [Publication name]
        FROM dbo.sysmergepublications AS s
        INNER JOIN dbo.sysmergearticles AS s2
            ON s.pubid = s2.pubid
        WHERE s2.NAME = 'MyTable'

    So I now know which


  • Voxel Face Crawling (Mesh simplification, possibly using greedy)

    - by Tim Winter
    This is in regards to a Minecraft-like terrain engine. I store blocks in chunks (16x256x16 blocks in a chunk). When I generate a chunk, I use multiple procedural techniques to set the terrain and to place objects. While generating, I keep one 1D array for the full chunk (solid or not) and a separate 1D array of solid blocks. After generation, I iterate through the solid blocks checking their neighbors, so I only generate block faces that don't have solid neighbors. I store which faces to generate in their own list (that's 6 lists, one per possible face). When rendering a chunk, I render all lists in the camera's current chunk and only the lists facing the camera in all other chunks.

    Using a 2D atlas with this little shader trick Andrew Russell suggested, I want to merge similar faces together completely. That is, if they are in the same list (same normal), are adjacent to each other, have the same light level, etc. My assumption would be to have each of the 6 lists sorted by the axis they rest on, then by the other two axes (the list for the top of a block would be sorted by its Y value, then X, then Z). With this alone, I could quite easily merge strips of faces, but I'm looking to merge more than just strips together when possible. I've read up on this greedy meshing algorithm, but I am having a lot of trouble understanding it. To even use it, I would think I'd need to perform a type of flood fill per sorted list to get the groups of merge-able faces, and then, per group, perform the greedy algorithm. It all sounds awfully expensive if I would ever want dynamic terrain/lighting after initial generation.

    So, my question: to perform merging of faces as described (ignoring whether it's a bad idea for dynamic terrain/lighting), is there perhaps an algorithm that is simpler to implement? I would also quite happily accept an answer that walks me through the greedy algorithm in a much simpler way (a link or explanation). I don't mind a slight performance decrease if it's easier to implement, or even if it's only a little better than just doing strips. I worry that most algorithms focus on triangles rather than quads, and using a 2D atlas the way I am, I don't know that I could implement something triangle-based with my current skills.

    PS: I already frustum cull per chunk and, as described, I also cull faces between solid blocks. I don't occlusion cull yet and may never.
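
    For what it's worth, a minimal sketch of the rectangle-growing core of greedy meshing over one 2D slice of a chunk (illustrative Python; a real mesher would key the mask on texture/light so only identical faces merge, and would run this once per axis and direction):

        def greedy_rects(mask):
            """Greedy rectangle merge over a 2D boolean mask (one chunk slice).
            Returns (x, y, w, h) rectangles covering every True cell exactly once."""
            h, w = len(mask), len(mask[0])
            used = [[False] * w for _ in range(h)]
            rects = []
            for y in range(h):
                for x in range(w):
                    if not mask[y][x] or used[y][x]:
                        continue
                    rw = 1                      # grow the strip to the right
                    while x + rw < w and mask[y][x + rw] and not used[y][x + rw]:
                        rw += 1
                    rh = 1                      # grow downward while whole rows qualify
                    while y + rh < h and all(
                        mask[y + rh][x + i] and not used[y + rh][x + i] for i in range(rw)
                    ):
                        rh += 1
                    for dy in range(rh):        # claim the cells this quad covers
                        for dx in range(rw):
                            used[y + dy][x + dx] = True
                    rects.append((x, y, rw, rh))
            return rects

        # greedy_rects([[True, True], [True, True]]) -> [(0, 0, 2, 2)]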


  • Git commit messages with nvie branching model

    - by eykanal
    This Git branching model recommends branching for all development efforts and merging when complete (branch, develop, merge when complete). I'm wondering how this works in practice, given that performing a merge under this model will simply add a commit to develop with whatever commit message happened to be the last one in line. Do people using this model do an interactive rebase on the feature branch before merging? If not, how do you ensure that the commits make sense on the main branch?
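
    A hedged sketch of the common convention (branch name and message are examples): the nvie model merges with --no-ff, so there is always a dedicated merge commit, and its message is where the feature gets summarized regardless of what the branch's last commit said.

        git checkout develop
        git merge --no-ff -m "Merge feature/login: add OAuth2 login flow" feature/login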


  • Is it OK to repeat code for unit tests?

    - by Pete
    I wrote some sorting algorithms for a class assignment and I also wrote a few tests to make sure the algorithms were implemented correctly. My tests are only about 10 lines long each, and there are 3 of them, but only 1 line changes between the 3, so there is a lot of repeated code. Is it better to refactor this code into another method that is then called from each test? Wouldn't I then need to write another test to test the refactoring? Some of the variables can even be moved up to the class level. Should testing classes and methods follow the same rules as regular classes/methods? Here's an example:

        [TestMethod]
        public void MergeSortAssertArrayIsSorted()
        {
            int[] a = new int[1000];
            Random rand = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < a.Length; i++)
            {
                a[i] = rand.Next(Int16.MaxValue);
            }
            int[] b = new int[1000];
            a.CopyTo(b, 0);
            List<int> temp = b.ToList();
            temp.Sort();
            b = temp.ToArray();
            MergeSort merge = new MergeSort();
            merge.mergeSort(a, 0, a.Length - 1);
            CollectionAssert.AreEqual(a, b);
        }

        [TestMethod]
        public void InsertionSortAssertArrayIsSorted()
        {
            int[] a = new int[1000];
            Random rand = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < a.Length; i++)
            {
                a[i] = rand.Next(Int16.MaxValue);
            }
            int[] b = new int[1000];
            a.CopyTo(b, 0);
            List<int> temp = b.ToList();
            temp.Sort();
            b = temp.ToArray();
            InsertionSort merge = new InsertionSort();
            merge.insertionSort(a);
            CollectionAssert.AreEqual(a, b);
        }
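
    A hedged sketch of the usual refactor, assuming the MergeSort/InsertionSort types from the question: pull the shared arrange/assert into a helper that takes the sort as a delegate, so each test shrinks to the one line that differs. The helper itself needs no separate test, because a bug in it would fail the existing tests. (Array.Sort replaces the List round-trip; the behaviour is the same.)

        private static void AssertSortsRandomArray(Action<int[]> sort)
        {
            int[] actual = new int[1000];
            Random rand = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < actual.Length; i++)
            {
                actual[i] = rand.Next(Int16.MaxValue);
            }
            int[] expected = (int[])actual.Clone();
            Array.Sort(expected);                      // trusted reference ordering

            sort(actual);                              // algorithm under test
            CollectionAssert.AreEqual(expected, actual);
        }

        [TestMethod]
        public void MergeSortAssertArrayIsSorted()
        {
            AssertSortsRandomArray(a => new MergeSort().mergeSort(a, 0, a.Length - 1));
        }

        [TestMethod]
        public void InsertionSortAssertArrayIsSorted()
        {
            AssertSortsRandomArray(a => new InsertionSort().insertionSort(a));
        }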


  • How to cleanly add after-the-fact commits from the same feature into git tree

    - by Dennis
    I am one of two developers on a system, and I make most of the commits at this time. My current git workflow is as such: there is a master branch only (no develop/release); I make a new branch when I want to do a feature, do lots of commits, and then when I'm done, I merge that branch back into master and usually push it to the remote. ...except I am usually not done. I often come back to alter one thing or another, and every time I think it is done, but it can be 3-4 commits before I am really done and move on to something else.

    Problem

    The problem I have now is that my feature branch tree is merged and pushed into master and remote master, and then I realize that I am not really done with that feature, as in I have finishing touches I want to add, where finishing touches may be cosmetic only, or may be significant, but they still belong to that one feature I just worked on.

    What I do now

    Currently, when I have extra after-the-fact commits like this, I solve the problem by rolling back my merge and re-merging my feature branch into master with my new commits, so that the git tree looks clean: one clean feature branch, branched out of master and merged back into it. I then push --force my changes to origin. Since my origin doesn't see much traffic at the moment, I can almost count on things being safe, or I can even talk to the other dev if I have to coordinate. But I know it is not a good way to do this in general, as it rewrites what others may have already pulled, causing potential issues. And that did happen even with my one other dev, where git had to do an extra weird merge when our trees diverged.

    Other ways to solve this, which I deem not so great

    The next best way is to just make those extra commits to the master branch directly, be it a fast-forward merge or not. It doesn't make the tree look as pretty as my current way of solving this, but then it's not rewriting history. Yet another way is to wait: maybe wait 24 hours and not push things to origin. That way I can rewrite things as I see fit. The con of this approach is time wasted waiting, when people may be waiting for a fix now. Yet another way is to make a "new" feature branch every time I realize I need to fix something extra. I may end up with things like feature-branch, feature-branch-html-fix, feature-branch-checkbox-fix, and so on, kind of polluting the git tree somewhat.

    Is there a way to manage what I am trying to do without the drawbacks I described? I'm going for clean-looking history here, but maybe I need to drop this goal if technically it is not a possibility.
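
    A hedged sketch of the history-safe middle ground (branch name is an example): keep the feature branch around after the first merge and simply merge it again when the finishing touches land. Git has no problem with repeated merges of the same branch, only the new commits come over, and nothing already pushed gets rewritten:

        git checkout feature-branch
        # ...finishing touches, one or more commits...
        git checkout master
        git merge --no-ff feature-branch    # second merge of the same branch
        git push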


  • Hyper-V Live Migration across Sites!

    - by Ryan Roussel
    One of the great sessions I sat in on at Tech Ed this week was on stretching a Windows 2008 R2 Hyper-V failover cluster across sites. With this ability, you could actually implement a Hyper-V cluster where you could migrate, or even live migrate, VMs across sites. With this area's propensity for hurricanes, this will be a very popular topic for me over the next few months.

    While this technology is possible today, it's also very complicated and can be very expensive to implement. First, your WAN connection has to support the ability to trunk your VLAN across both sites in order to live migrate. This means you can't use a layer 3 routed connection like MPLS. It has to be a Metro Ethernet connection or "dark fiber". Dark fiber is unused fiber already in the ground that can be leased from various providers. Both of these connections would allow you to trunk layer 2 across your WAN. Cisco does have the ability to trunk layer 2 across a routed connection by muxing the traffic, but this is only available in their Nexus product line, which has a very steep price tag.

    If you are stuck with MPLS or the like and Nexus switching is not a realistic possibility, you will have to implement a multi-subnet cluster, in which case live migration won't be possible. However, you can still fail over VMs to the remote site with some planning and manual intervention. The consideration here is that the VMs will be on a different subnet once migrated, so you will have to change the IP addressing of your VMs. This also has ramifications for DNS and name resolution to control your downtime. DHCP with reservations for your VMs is the preferred method to achieve the IP changes, as this will automate that part of the process.

    Secondly, you will have to have a mechanism to replicate your storage across both sites. Many SAN vendors natively support hardware-based synchronous and asynchronous replication. Some even support cluster shared volumes, which were introduced in 2008 R2. If your SANs do not support this natively, there are alternative file-based replication products, either software-based like Double Take or hardware appliances like EMC's. Be sure to check with your vendor on the support of disk majority if you're replicating your quorum disk between SANs.

    The last consideration is the ability to maintain quorum for your cluster. If your replication provider does not support disk majority through replication, you will have to explore node majority with file share witness. This will affect your design, as a 3-node cluster with 1 node at the remote site and the FSW at the production site would not have the ability to maintain quorum if the production site were lost. MS best practice for this would be to implement an even-node cluster with 2 nodes at each site and the FSW at a third site.

    And there you have it. While some considerations and research go into implementing this solution, even a multi-subnet solution would be invaluable to organizations in the implementation of "warm" DR sites.

