Search Results

Search found 4369 results on 175 pages for 'merge tracking'.


  • white-label collaborative open-source development (e.g. github/sourceforge/google-code in a box)?

    - by Justin Grant
    Does anyone have a recommendation for an open-source or paid (packaged or SaaS) solution for integrating collaborative development features into your own website? More details: we currently host an online plugin gallery for our product. Users can upload and download plugins, but they can't easily collaborate on a plugin's development, report and track bugs on a plugin, track a plugin's versions or roadmap, etc. Of course, contributors can host their plugin development on github, sourceforge, google code, codeplex, etc., but keeping users on our website has some advantages. For example:
    - We can use single sign-on and avoid requiring yet another username/password.
    - We can integrate end-user issue tracking into our existing online issue-tracking systems.
    - We can get integrated analytics, so we can better meet the needs of top contributors as well as downloaders.
    - We can award reputation points to committers, just like we do for people who answer lots of questions.
    Does anyone know a good white-label solution for open-source project developer collaboration?

    Read the article

  • Using Beyond Compare for Visual Diff in TortoiseHg

    - by geoff
    I am trying to use Beyond Compare for visual diff in TortoiseHg, e.g. right-click a modified file in Explorer and select Visual Diff from the TortoiseHg context menu. Beyond Compare opens but only shows the 'welcome' screen, not the file I want to diff. Am I missing something? I have set up the mercurial.ini file as follows:

        [extensions]
        extdiff =

        [extdiff]
        cmd.bcomp = C:\Program Files (x86)\Beyond Compare 3\BCompare.exe
        opts.bcomp = /ro

        [tortoisehg]
        vdiff = bcomp

        [merge-tools]
        bcomp.executable = C:\Program Files (x86)\Beyond Compare 3\BComp
        bcomp.args = $local $other $base $output
        bcomp.priority = 1
        bcomp.premerge = True
        bcomp.gui = True

        [ui]
        merge = bcomp
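
    A quick way to narrow this down (a suggestion, not a confirmed fix): with cmd.bcomp defined in [extdiff], the extdiff extension registers an hg bcomp command, so you can test the Mercurial side from a command prompt, outside TortoiseHg. The repository path and file name below are placeholders:

        cd C:\path\to\your\repo
        hg bcomp some-modified-file.txt

    If that launches Beyond Compare with the two file versions, the [extdiff] configuration is fine and the problem lies on the TortoiseHg side (the [tortoisehg] vdiff = bcomp mapping); if it shows the same welcome screen, the [extdiff] section is the place to look.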

    Read the article

  • SSIS: Update a RecordSet passed into a VB.NET ScriptTask

    - by Zambouras
    What I am trying to accomplish is using this script task to continually insert into a generated RecordSet. I know how to access it in the script; however, I do not know how to update it after my changes to the DataTable have been made. The code is below:

        Dim EmailsToSend As New OleDb.OleDbDataAdapter
        Dim EmailsToSendDt As New DataTable("EmailsToSend")
        Dim CurrentEmailsToSend As New DataTable
        Dim EmailsToSendRow As DataRow

        ' Note: "System.Integer" is not a valid CLR type name; "System.Int32" is.
        EmailsToSendDt.Columns.Add("SiteMgrUserId", System.Type.GetType("System.Int32"))
        EmailsToSendDt.Columns.Add("EmailAddress", System.Type.GetType("System.String"))
        EmailsToSendDt.Columns.Add("EmailMessage", System.Type.GetType("System.String"))

        EmailsToSendRow = EmailsToSendDt.NewRow()
        EmailsToSendRow.Item("SiteMgrUserId") = siteMgrUserId
        EmailsToSendRow.Item("EmailAddress") = siteMgrEmail
        EmailsToSendRow.Item("EmailMessage") = EmailMessage.ToString
        EmailsToSendDt.Rows.Add(EmailsToSendRow) ' the new row also has to be attached to the table

        EmailsToSend.Fill(CurrentEmailsToSend, Dts.Variables("EmailsToSend").Value)
        EmailsToSendDt.Merge(CurrentEmailsToSend, True)

    Basically my goal is to create a single row in a new data table, get the current record set, and merge the two so I have my result DataTable. Now I just need to update the ReadWriteVariable for my script. I don't know whether I have to do anything special or whether I can just assign the DataTable directly, i.e. Dts.Variables("EmailsToSend").Value = EmailsToSendDt. Thanks for the help in advance.

    Read the article

  • How to read a column in a tab-delimited Excel file in C#?

    - by janejanejane
    I have some data from an Excel file, shown below. At first, I figured that reading from the file could be done using the Excel library, then using an OLEDB connection. I managed to get the DocumentNo column data with the OLEDB approach. However, when the Excel file is closed, I am unable to do the operation because it gives the error "External table is not in the expected format." How can I read from the file even when it is closed?

        10/4/2010   Paid Documents for Document Tracking - Customer
        1           Paid Documents for Document Tracking - Customer
        CoCd  Customer  Trans.type  SG  Clearing   Clrng doc.  Assignment  Year  DocumentNo  Pstng Date  Doc. Date   Entry Dte   Crcy
        PLDT  5000007   4           4   1/15/2010  25003413    5000007     2010  408000139   1/7/2010    1/5/2010    1/12/2010   PHP
        PLDT  5000007   4           4   1/15/2010  25003634    5000007     2010  408000068   1/5/2010    12/22/2009  1/10/2010   PHP
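
    If the file really is tab-delimited text rather than a native .xls workbook (a guess, but it would explain the "External table is not in the expected format" error from the Excel OLEDB provider), it can be read with an ordinary text reader and no OLEDB connection at all. A minimal C# sketch; the file path and the "DocumentNo" header text are assumptions to adjust:

        using System;
        using System.Collections.Generic;
        using System.IO;

        class DocumentNoReader
        {
            static void Main()
            {
                var documentNumbers = new List<string>();

                using (var reader = new StreamReader(@"C:\data\PaidDocuments.xls"))
                {
                    int docNoIndex = -1;
                    string line;

                    while ((line = reader.ReadLine()) != null)
                    {
                        var fields = line.Split('\t');

                        if (docNoIndex < 0)
                        {
                            // Keep skipping the title rows until the header row is found.
                            docNoIndex = Array.IndexOf(fields, "DocumentNo");
                            continue;
                        }

                        if (docNoIndex < fields.Length)
                            documentNumbers.Add(fields[docNoIndex]);
                    }
                }

                foreach (var docNo in documentNumbers)
                    Console.WriteLine(docNo);
            }
        }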

    Read the article

  • Challenge: merging CSV files intelligently!

    - by Evenz495
    We are in the middle of changing web store platforms and need to import product data from different sources. We currently have several different CSV files from different IT systems/databases, because each system is missing some information. Fortunately the product IDs are the same, so it's possible to relate the data using the IDs. We need to merge this data into one big CSV file so we can import it into our new e-commerce site. My question: is there a general approach when you need to merge CSV files with related data into one CSV file? Are there any applications or tools that help you out?
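
    For a one-off job like this, a short script is often enough. Here is a minimal sketch in Python that joins rows from several CSV files on a shared id column and writes the combined rows out; the file names and the "product_id" column name are placeholders for whatever the real exports use:

        import csv

        sources = ["products_erp.csv", "products_pim.csv", "products_shop.csv"]  # placeholder file names
        key = "product_id"                                                       # placeholder id column

        merged = {}          # product id -> combined row (dict of column -> value)
        columns = [key]      # output column order, id first

        for path in sources:
            with open(path, newline="", encoding="utf-8") as f:
                for row in csv.DictReader(f):
                    pid = row[key]
                    merged.setdefault(pid, {})
                    for col, value in row.items():
                        if col not in columns:
                            columns.append(col)
                        # Keep the first non-empty value seen for each column.
                        if value and not merged[pid].get(col):
                            merged[pid][col] = value

        with open("products_merged.csv", "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=columns)
            writer.writeheader()
            for pid, row in merged.items():
                writer.writerow(row)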

    Read the article

  • Scanning FedEx labels into a visual basic program

    - by 0bfus
    I have a program that has tracking numbers entered into it and stores them in a database, and I would like to add a scanner to the mix. The scanner acts like a keyboard and scans just fine; the problem is that scanning a FedEx label gives a 32-digit number, and I am only interested in the tracking-number portion of it. I saw an example on the web for an Excel macro and tried to modify it, but it doesn't seem to work. Is there something I could do to just get the 12-digit number? This is what it looks like so far. Any help would be greatly appreciated.

        With BarcodeText
            If .Text < "1z99999999999999999" Then
                .Text = Mid(.Text, 17, 12)
            End If
        End With

    Read the article

  • Microsoft SQL Server 2005/2008 SSIS is oversized

    - by Ice
    In this case I'm old style and loved 'my father's DTS' from SQL 2000. In most cases I have to import a flat file into a table; in a second step I use some procedures (with the new MERGE statement) to process the imported content. For export, I define an export table and populate it with a stored procedure (containing a MERGE statement), and in a second step the content is exported to a flat file. In some cases there is no flat file, because the other end is another SQL Server or, in rare cases, an ODBC connection to a Sybase server or similar. What do you think? When it comes to complex ETL stuff SSIS may be the right tool... but I haven't seen such a case yet.
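
    For readers who haven't used it, the MERGE statement mentioned above (new in SQL Server 2008) upserts a staging table into a target in one statement. A minimal sketch with made-up table and column names:

        MERGE INTO dbo.Product AS target
        USING dbo.Product_Staging AS source
            ON target.ProductCode = source.ProductCode
        WHEN MATCHED THEN
            UPDATE SET target.Price       = source.Price,
                       target.Description = source.Description
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (ProductCode, Price, Description)
            VALUES (source.ProductCode, source.Price, source.Description);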

    Read the article

  • how to generate large image in compact framework

    - by Buthrakaur
    I need to generate large images (an A4 page at 200 DPI; PNG format would be fine) in my Compact Framework application. This is impossible to do in the standard way due to memory limitations (such a big image throws an OutOfMemoryException). Is there any library that offers file-backed, stream-based image generation? Alternatively, I could generate many smaller stripes (each stripe representing a band of rows of the large image) using the standard Bitmap approach, but I would need to merge them together afterwards - is there any way to merge many smaller images into one large one without instantiating a large Bitmap (which would again cause the OOM)?

    Read the article

  • d3 tree - parents having same children

    - by Larry Anderson
    I've been transitioning my code from JIT to D3 and working with the tree layout. I've replicated http://mbostock.github.com/d3/talk/20111018/tree.html with my tree data, but I want to do a little more. In my case I need to create child nodes that merge back to form a parent at a lower level, which I realize is more of a directed-graph structure, but I would like the tree layout to accommodate it (i.e. notice that common IDs between child nodes should merge). So basically a tree that divides as normal on the way from parents to children, but that can also bring those child nodes back together to become parents (sort of an incestuous relationship or something :)). This question asks something similar: How to layout a non-tree hierarchy with D3. It sounds like I might be able to use hierarchical edge bundling in conjunction with the tree hierarchy layout, but I haven't seen that done. I might be a little off with that, though.

    Read the article

  • Microsoft Team System Equivalent stack.

    - by Nix
    I am looking for a free alternative to Team System. What would be the best alternative stack (source control, bug tracking, project management/planning, wiki, automated builds/CI)? Keep in mind that it would be nice if they all integrated well. For example, it would be nice to be able to link bugs to source control, link those to a project plan, and then automate the builds. I have no issue with using Microsoft Project to manage project planning. I know I would like to use these:
    - SVN
    - TeamCity
    - NUnit
    But I am struggling to find a good wiki / project planning / bug tracking combination that would integrate well. Any questions, let me know.

    Read the article

  • How can I "interconnect" two sockets in Linux?

    - by Vi
    There are two connected sockets. How can I interconnect them? Data appearing on one socket should be written to the other, EOF/FIN should propagate properly, and if one side is half-closed, the other should also be half-closed.

        int client = get_connected_client_socket();
        int proxy  = get_connected_proxy_socket();
        negotiate_with_proxy(proxy);
        interconnect(client, proxy);
        // Now forget about both client and proxy.
        // The system should handle IO/shutdown/close,
        // ideally even without any support from the user-space process.

    Can Linux do it? Can it be done by tricking connection tracking into changing the tracking status of an existing connection?

    Related: http://stackoverflow.com/questions/2673975/determine-how-much-can-i-write-into-a-filehandle-copying-data-from-one-fh-to-the
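
    As far as I know there is no standard way to hand two established sockets to the kernel and have it forward between them with no user-space process at all (short of writing something kernel-side yourself). What you can do is keep the copying inside the kernel with splice(2), so the proxy process only shuttles descriptors around. A rough one-direction sketch with minimal error handling; a real proxy would run one of these per direction (or multiplex with poll/epoll) so half-closes propagate both ways:

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <sys/socket.h>
        #include <unistd.h>

        /* Copy everything from 'from' to 'to' until the peer sends FIN, without the
         * data ever entering user-space buffers. Returns 0 on clean EOF, -1 on error. */
        static int forward(int from, int to)
        {
            int pipefd[2];
            if (pipe(pipefd) < 0)
                return -1;

            for (;;) {
                /* socket -> pipe */
                ssize_t n = splice(from, NULL, pipefd[1], NULL, 65536, SPLICE_F_MOVE);
                if (n == 0)
                    break;              /* EOF: the peer half-closed its side */
                if (n < 0)
                    goto fail;

                /* pipe -> other socket */
                while (n > 0) {
                    ssize_t w = splice(pipefd[0], NULL, to, NULL, (size_t)n, SPLICE_F_MOVE);
                    if (w <= 0)
                        goto fail;
                    n -= w;
                }
            }

            shutdown(to, SHUT_WR);      /* propagate the half-close to the other peer */
            close(pipefd[0]);
            close(pipefd[1]);
            return 0;

        fail:
            close(pipefd[0]);
            close(pipefd[1]);
            return -1;
        }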

    Read the article

  • PHP: How to overwrite values in one array with values from another without adding new keys to the array

    - by Svish
    I have an array with default settings and an array with user-specified settings. I want to merge these two arrays so that the default settings get overwritten with the user-specified ones. I have tried array_merge, which does the overwriting like I want, but it also adds new settings if the user has specified settings that don't exist in the defaults. Is there a better function I can use for this than array_merge? Or is there a function I can use to filter the user-specified array so that it only contains keys that also exist in the default settings array? (PHP version 5.3.0.) Example of what I want:

        $default = array('a' => 1, 'b' => 2);
        $user    = array('b' => 3, 'c' => 4);

        // Somehow merge $user into $default so we end up with this:
        // Array ( [a] => 1 [b] => 3 )
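
    One way to get exactly that result (a suggestion based on the example above, not the only option) is to filter the user array down to the keys that already exist in the defaults with array_intersect_key, then let array_merge do the overwriting:

        $default = array('a' => 1, 'b' => 2);
        $user    = array('b' => 3, 'c' => 4);

        // Keep only the $user entries whose keys exist in $default,
        // then overwrite the defaults with them.
        $settings = array_merge($default, array_intersect_key($user, $default));

        print_r($settings);   // Array ( [a] => 1 [b] => 3 )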

    Read the article

  • Mercurial Branching Model for task features

    - by Stan
    My development environment: Windows 7, TortoiseHg, ASP.NET 4.0/MVC3.
    Test branch: code on the test server.
    Prod branch: code on the production server.
    This is my current branching model. The reason for branching out every task (feature) is that some features go live more slowly. So in the graph above, task 1 finished earlier (changeset #5) and was merged into the test branch for testing. However, due to a bug or a modification of the original request, changesets #10 and #12 have been made, while task 2 has finished testing (#8) and been pushed to live (#9) already. My problem is that every time the task branch is modified (like #10, #12), I have to do another merge to the test branch (#11, #13), which makes the graph very messy. Is there any way to solve this issue, or a better branching model?

    Read the article

  • Merging tables in MySQL - sum up columns

    - by Alan Williamson
    I have an interesting problem that I am sure has a simple answer, but I can't seem to find it in the docs. I have two separate database tables on different servers. They have identical table schemas with the same primary keys. I want to merge the tables together on one server, but if a row in Server1.Table1 exists in Server2.Table2, then sum up the totals in the columns I specify.

        Table1{ column_pk, counter };
        "test1", 3
        "test2", 4

        Table2{ column_pk, counter };
        "test1", 5
        "test2", 6

    So after I merge I want:

        "test1", 8
        "test2", 10

    Basically I need to do a mysqldump, but instead of it kicking out raw INSERT statements, I need it to produce INSERT .. ON DUPLICATE KEY UPDATE statements. What are my options? I appreciate any input, thank you.
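
    If both tables can first be brought onto the same server (for example by loading a dump of Server2.Table2 into a staging table alongside Table1, or via a FEDERATED table), something along these lines does the summing in a single statement. A sketch using the sample tables above:

        INSERT INTO Table1 (column_pk, counter)
        SELECT column_pk, counter
        FROM Table2
        ON DUPLICATE KEY UPDATE counter = Table1.counter + VALUES(counter);

        -- Table1 afterwards: ("test1", 8), ("test2", 10)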

    Read the article

  • How to catch up a named Mercurial branch from the default branch without merging the two into one?

    - by Dynite
    I have two branches in Mercurial:

        default    named
        |r1
        |r2
        |r3 -------- named branch created here.
        |          |r4
        |          |r5
        |r6        |
        |r7        |
        |  ----------->  r8      How do I achieve this catch-up?
        |          |

    I want to update the named branch from default, but I'm not ready to merge the branches yet. How do I achieve this?
    Edit: Additionally, what would the operation be using the GUI? Is it: right-click r6, merge with..., r8... then what? Commit to the named branch?
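
    From the command line, one common way to do this catch-up (the branch name "mybranch" below is a stand-in for the real one) is to merge default into the named branch; this pulls default's changes in without touching default itself, so the two branches stay separate:

        hg update mybranch
        hg merge default
        hg commit -m "merge default into mybranch"

    In a GUI the equivalent is to update the working copy to the named branch's head first and then merge with the default head; the merge commit stays on the named branch because a commit takes the branch of the working copy's parent.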

    Read the article

  • How to get rid of bogus changes in git?

    - by zaza
    I'm a happy user of PortableGit 1.7.0.2. Today I wanted to pull a project's changes from its GitHub.com repository, so I did git pull. It failed with the following message:

        error: Your local changes to 'main.rb' would be overwritten by merge. Aborting.

    I didn't care about the local changes, so I typed git reset --hard HEAD (git clean, which was suggested elsewhere, didn't help either), but it didn't work. When I asked for git status I could still see the file as modified. git diff showed me that every line of the file had been modified, while git diff -b showed no differences at all, so I guess this is a line-ending issue, which is strange because the code is only pushed from Windows machines. Anyway, the question is: how can I ignore the local, bogus changes and merge with the latest changes from the remote repository?
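
    Assuming the phantom diff really is line-ending translation (which the git diff -b result suggests), one thing worth trying is to stop Git from translating on checkout and then force a fresh checkout of the files; the right autocrlf value depends on how the files were originally committed, so treat this as a sketch rather than a recipe:

        git config core.autocrlf          # see what the current setting is
        git config core.autocrlf false    # or "input"; stop rewriting CRLF on this machine

        git rm --cached -r .              # empty the index (the working tree is kept) ...
        git reset --hard                  # ... and re-check everything out with the new setting

        git pull                          # the pull should now apply cleanly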

    Read the article

  • SQL Database application written in Visual Basic in Visual Studio 2010

    - by TID
    Hello, I am writing a program, and part of it is to store the keyed entry in a database on a SQL Server when the Enter key is hit. The database is small and simple, with only three columns in the table: one for the tracking number, one for the date entered, and one for the time entered. The only thing the user will see is the tracking-number text box and an Enter button; the date and time will be entered automatically when the Enter key is hit. I am relatively new to databases, so I cannot figure out how to send the data to the database. The database is already configured; I just need to get the program and the database talking to each other. Thanks.
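
    A common way to do this from VB.NET is ADO.NET: open a SqlConnection and run a parameterized INSERT. A minimal sketch; the connection string and the table/column names (TrackingEntries, TrackingNumber, EntryDate, EntryTime) are assumptions to replace with the real ones:

        ' Minimal sketch. The connection string and the table/column names are placeholders.
        Imports System.Data.SqlClient

        Public Class TrackingRepository
            Private Const ConnStr As String = "Server=MYSERVER;Database=Tracking;Integrated Security=True"

            Public Sub SaveTrackingNumber(ByVal trackingNumber As String)
                Using conn As New SqlConnection(ConnStr)
                    Dim sql As String = "INSERT INTO TrackingEntries (TrackingNumber, EntryDate, EntryTime) " & _
                                        "VALUES (@number, @entryDate, @entryTime)"
                    Using cmd As New SqlCommand(sql, conn)
                        cmd.Parameters.AddWithValue("@number", trackingNumber)
                        cmd.Parameters.AddWithValue("@entryDate", Date.Today)             ' assumes a date column
                        cmd.Parameters.AddWithValue("@entryTime", DateTime.Now.TimeOfDay) ' assumes a time column
                        conn.Open()
                        cmd.ExecuteNonQuery()
                    End Using
                End Using
            End Sub
        End Class

    In the form, the Enter key can be caught in the text box's KeyDown event (If e.KeyCode = Keys.Enter Then ...) and SaveTrackingNumber called from there.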

    Read the article

  • How do I search git history for a disappeared line?

    - by skiphoppy
    I need to search the history of a file in a git repository to find a line that is gone. The commit message will not have any relevant text to search on. What command do I use? Further details: this is the history of my todo list out of our non-stellar task tracking software. I've been keeping it for two years because there's just not enough information kept for me in the software. My commit messages usually have only the task ids, unfortunately, and what I need to do is find a closed task by subject, not by number. Yes, the real solution is better task tracking software, but that is completely out of my hands.
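
    The pickaxe options to git log are made for exactly this: they find commits that added or removed a given string (or, in more recent Git versions, a regular expression), regardless of what the commit message says. A sketch, with the file name and search text as placeholders:

        # commits in todo.txt's history that added or removed a line containing the phrase
        git log -S"name of the finished task" --oneline -- todo.txt

        # same idea with a regular expression, showing the patches as well
        git log -G"finished.*task" -p -- todo.txt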

    Read the article

  • Min-Ordered Binomial Heap Insertion in Java

    - by Charodd Richardson
    I'm writing Java code for a min-ordered binomial heap, and I have to implement Insert and Remove-min. I'm having a very big problem inserting into the heap. I have been stuck on this for a couple of days now and it is due tomorrow. Whenever I insert, it only prints out the item I inserted instead of the whole tree (which is printed in preorder). For example, if I insert 1 it prints (1), and then when I insert 2 it prints (2) instead of (1(2)). It keeps printing only the number I inserted last instead of the whole preorder tree. I would be very grateful if someone could help me with this problem. Thank you so much in advance. Here is my code:

        public class BHeap {
            int key;
            int degree; // the degree (number of children)
            BHeap parent, leftmostChild, rightmostChild, rightSibling, root, previous, next;

            public BHeap() {
                key = 0;
                degree = 0;
                parent = null;
                leftmostChild = null;
                rightmostChild = null;
                rightSibling = null;
                root = null;
                previous = null;
                next = null;
            }

            public BHeap merge(BHeap x, BHeap y) {
                BHeap newHeap = new BHeap();
                y.rightSibling = x.root;
                BHeap currentHeap = y;
                BHeap nextHeap = y.rightSibling;
                while (currentHeap.rightSibling != null) {
                    if (currentHeap.degree == nextHeap.degree) {
                        if (currentHeap.key < nextHeap.key) {
                            if (currentHeap.degree == 0) {
                                currentHeap.leftmostChild = nextHeap;
                                currentHeap.rightmostChild = nextHeap;
                                currentHeap.rightSibling = nextHeap.rightSibling;
                                nextHeap.rightSibling = null;
                                nextHeap.parent = currentHeap;
                                currentHeap.degree++;
                            } else {
                                newHeap = currentHeap;
                                newHeap.rightmostChild.rightSibling = nextHeap;
                                newHeap.rightmostChild = nextHeap;
                                nextHeap.parent = newHeap;
                                newHeap.degree++;
                                nextHeap.rightSibling = null;
                                nextHeap = newHeap.rightSibling;
                            }
                        } else {
                            if (currentHeap.degree == 0) {
                                nextHeap.rightmostChild = currentHeap;
                                nextHeap.rightmostChild.root = nextHeap.rightmostChild; // add
                                nextHeap.leftmostChild = currentHeap;
                                nextHeap.leftmostChild.root = nextHeap.leftmostChild;   // add
                                currentHeap.parent = nextHeap;
                                currentHeap.rightSibling = null;
                                currentHeap.root = currentHeap; // add
                                nextHeap.degree++;
                            } else {
                                newHeap = nextHeap;
                                newHeap.rightmostChild.rightSibling = currentHeap;
                                newHeap.rightmostChild = currentHeap;
                                currentHeap.parent = newHeap;
                                newHeap.degree++;
                                currentHeap = newHeap.rightSibling;
                                currentHeap.rightSibling = null;
                            }
                        }
                    } else {
                        currentHeap = currentHeap.rightSibling;
                        nextHeap = nextHeap.rightSibling;
                    }
                }
                return y;
            }

            public void Insert(int x) {
                /*
                BHeap newHeap = new BHeap();
                newHeap.key = x;
                if (this.root == null) {
                    this.root = newHeap;
                    return;
                } else {
                    this.root = merge(newHeap, this.root);
                }
                */
                BHeap newHeap = new BHeap();
                newHeap.key = x;
                if (this.root == null) {
                    this.root = newHeap;
                } else {
                    this.root = merge(this, newHeap);
                }
            }

            public void RemoveMin() {
                BHeap newHeap = new BHeap();
                BHeap child = new BHeap();
                newHeap = this;
                BHeap pos = newHeap.next;
                while (pos != null) {
                    if (pos.key < newHeap.key) {
                        newHeap = pos;
                    }
                    pos = pos.rightSibling;
                }
                pos = this;
                BHeap B1 = new BHeap();
                if (newHeap.previous != null) {
                    newHeap.previous.rightSibling = newHeap.rightSibling;
                    B1 = pos.leftmostChild;
                    B1.rightSibling = pos;
                    pos.leftmostChild = pos.rightmostChild.leftmostChild;
                } else {
                    newHeap = newHeap.rightSibling;
                    newHeap.previous.rightSibling = newHeap.rightSibling;
                    B1 = pos.leftmostChild;
                    B1.rightSibling = pos;
                    pos.leftmostChild = pos.rightmostChild.leftmostChild;
                }
                merge(newHeap, B1);
            }

            public void Display() {
                System.out.print("(");
                System.out.print(this.root.key);
                if (this.leftmostChild != null) {
                    this.leftmostChild.Display();
                }
                System.out.print(")");
                if (this.rightSibling != null) {
                    this.rightSibling.Display();
                }
            }
        }

    Read the article

  • The 35 Best Tips and Tricks for Maintaining Your Windows PC

    - by Lori Kaufman
    When working (or playing) on your computer, you probably don’t think much about how you are going to clean up your files, back up your data, keep your system virus free, etc. However, these are tasks that need attention. We’ve published useful articles about different aspects of maintaining your computer. Below is a list of our most useful articles about maintaining your computer, operating system, software, and data.

    HTG Explains: Learn How Websites Are Tracking You Online
    Here’s How to Download Windows 8 Release Preview Right Now
    HTG Explains: Why Linux Doesn’t Need Defragmenting

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting

    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post "Checking if a row exists and if it does, has it changed?" (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline.

    One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and it is why an SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips

    Here is a simple tick list you can follow in order to tune your lookups:
    - Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in "SELECT *... or select from a dropdown in an OLE DB Source component?")
    - Only pick the columns that you need; ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!

    Thinking outside the box

    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:
    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    - Taking the previous data partitioning idea further … a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
    - Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts

    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at "Lookup component gets a makeover". I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments.

    Hope this helps!
    @Jamiet
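
    To make the "use a SQL statement and think about your WHERE clause" advice from the tick list concrete, here is the kind of statement it describes (table and column names are illustrative, not from the original post): it selects only the join key and the single attribute the dataflow needs, casts the attribute as narrowly as possible, and filters out rows that can never match:

        SELECT  CustomerKey,
                CAST(CustomerCode AS VARCHAR(10)) AS CustomerCode
        FROM    dbo.DimCustomer
        WHERE   IsCurrent = 1;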

    Read the article
