Search Results

Search found 4474 results on 179 pages for 'git workflow'.


  • Can Gitosis enforce correct user name/email?

    - by koumes21
    Gitosis is able to authenticate users based on a public/private key pair, so it knows which user is currently committing. However, the committer name and email are taken from the client's Git configuration ('git config user.name' etc.), which can be set to arbitrary values. Is there any way to associate user names and emails with their public keys and then make Gitosis use those as the name and email of the committer? I do not care whether I end up using Gitosis, WebDAV, or some other alternative to share the repository; it just seems that none of the available methods can enforce some kind of "correct" user name and email. If such an alternative exists, please tell me about it.
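    One hedged approach, independent of Gitosis itself: a server-side update hook can reject pushes whose commits carry an unexpected committer email. The sketch below hard-codes the allowed address as a placeholder; mapping the authenticated key to that address is left to the hosting setup.

        #!/bin/sh
        # update hook: refuse commits whose committer email does not match
        # the address tied to the authenticated key (placeholder below)
        refname=$1; oldrev=$2; newrev=$3
        allowed="alice@example.com"
        # for a brand-new ref, oldrev is all zeros; check newrev alone
        if expr "$oldrev" : '0*$' >/dev/null; then
            range=$newrev
        else
            range="$oldrev..$newrev"
        fi
        for rev in $(git rev-list $range); do
            email=$(git log -1 --format='%ce' "$rev")
            if [ "$email" != "$allowed" ]; then
                echo "push rejected: commit $rev has committer <$email>" >&2
                exit 1
            fi
        done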

    Read the article

  • Version control for subtitle creation

    - by user3635
    We make subtitles for a TV series and I plan to use a VCS for it. The project directory is structured like this:

    series/
        episode1/nameofepisode1.str
        episode2/nameofepisode2.str
        episode3/nameofepisode3.str
        ...

    Question: when I finish the subtitle for an episode, I want to assign a release tag for that episode (episode1_v1). I wanted to use git for this, but in git a tag applies to the whole repository. What should I do so that I can view each episode's progress separately? Or is there a VCS more suitable for this?
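    For what it's worth, git tags are repository-wide but cheap, so one common pattern (a sketch, not the only option) is to encode the episode in the tag name and filter by pattern:

        git tag episode1_v1              # snapshot taken when episode 1 is released
        git tag -l 'episode1_*'          # list every release of episode 1
        git log --oneline -- episode1/   # history of that episode's directory alone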

    Read the article

  • Code reviews for larger ASP.NET MVC team using TFS

    - by Parrots
    I'm trying to find a good code review workflow for my team. Most similar questions on SO revolve around using shelved changes for the review, but I'm curious how this works for larger teams. We usually have 2-3 people working a story (UI person, Domain/Repository person, sometimes DB person). I've recommended the shelveset idea, but we're all concerned about how to manage that with multiple people working the same feature. How could you share a shelveset between multiple programmers at that point? We worry it would be clunky and that we might easily hit unintended consequences moving to this workflow. Of course, moving to a shelveset per feature avoids having 10 or so checkins per feature (as developers need to share code), which is what makes seeing the diffs at code review time painful. Has anyone else been able to deal with this successfully? Are there any tools people have found useful aside from shelvesets in TFS (preferably open-source)?
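    A hedged aside on the sharing question: TFS shelvesets can be fetched by name and owner from another machine, which is one way several people can pass work around on a single story. From a Visual Studio command prompt (the names below are placeholders):

        tf shelve Story123-UI /comment:"UI changes for review"
        tf unshelve "Story123-UI;DOMAIN\alice"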

    Read the article

  • DVCS - What's the downside of rewriting unpublished history?

    - by user1447278
    I was wondering what in particular is the downside of "losing history" in a development process. One famous example is of course git rebase -i / git merge --squash, but also what is described here: http://fourkitchens.com/blog/2009/04/20/alternatives-rebasing-bazaar under "I want to clean up my commit history prior to submitting my changes to the mainline." I can see that exporting patches and applying them to another branch would lose the "history" of the branch, but why would that branch and its commit history be useful after it has been merged? Can someone elaborate on why such techniques are considered "dirty"? Why does it matter in which order changes were originally committed in the first place as long as they can be applied to the main branch?
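    For reference, both commands the question names rewrite unpublished commits into a cleaner series before anyone else sees them; a typical cleanup on a private branch looks like this:

        git rebase -i origin/master   # opens a todo list; mark noisy commits as 'squash'
        # or collapse a whole feature branch into a single mainline commit:
        git checkout master
        git merge --squash feature
        git commit -m "Add feature X"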

    Read the article

  • changing the last commit message without committing newest changes

    - by Oleg2718281828
    My ideal workflow would consist of the following steps:

    1. edit the code
    2. compile
    3. git commit -a -m "commit message"
    4. start running the new binaries, tests, etc. (may take 10+ minutes)
    5. start new changes, while the binaries are still running
    6. when step 4 is finished, edit the commit message from step 3, without committing the changes introduced in step 5, by adding, say, "test FOO failed"

    I cannot use git commit -a --amend -m "new commit message", because this commits the new changes as well. I'm not sure that I want to bother with staging or branching. I wish I could just edit the commit message without committing any new changes. Is it possible?
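    It should be, as long as the newer edits are unstaged: --amend without -a folds in only what is staged, so with a clean index it rewrites nothing but the message. A minimal sketch:

        # rewrites only the message of the last commit; unstaged edits
        # from step 5 are left alone in the working tree
        git commit --amend -m "commit message; test FOO failed"
        # caveat: anything already staged with 'git add' would be folded in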

    Read the article

  • GitHub: searching through older versions of files

    - by normski
    I know that using GitHub I can search through all the current versions of my files in a repo. However, I would also like to search through older versions of those files. For example, say I used to have a function called get_info() in my code but deleted it several versions ago; is it possible to search for get_info and find the code? If it is not possible using GitHub, is it possible from the git command line? EDIT: Thanks to @Mark Longair for showing how this can be done from the git command line. If it's not possible in GitHub, it would be a great feature to have.
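    For the command-line side, git's "pickaxe" search was built for exactly this; two standard variants:

        git log -S get_info --oneline              # commits that added or removed 'get_info'
        git grep get_info $(git rev-list --all)    # occurrences in every reachable revision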

    Read the article

  • what to use for repetitive (daily, weekly, monthly) tasks? Workflows, Windows Services, something else?

    - by mare
    I've been writing Windows Services for a while and they always seem to work fine for things that need to run every day, a few times a week, once a month, etc., but lately I've been thinking about going with Windows Workflow Foundation. However, I am unsure how workflows would run on a server without some container application (for instance SharePoint). I worked with SharePoint workflows before and I always had huge problems, at first with bugs in the workflow architecture implementation (the problems with sleep and delay) and later, when they eventually started to work, because they were difficult to manage and change. On the other hand, Windows Services were always quite easy to implement, easy to create a setup for and install, and they were always quite resilient (they often worked for months without crashing or anything else going wrong). What do you recommend? Please bear in mind we are working in .NET (the version is no problem; if 4.0 brings something new on this subject, we can use it).
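    If the jobs are really just scheduled runs of ordinary executables, the built-in Windows Task Scheduler is a third option worth weighing against both; a one-line sketch (the path and task name are placeholders):

        schtasks /create /tn "NightlyImport" /tr "C:\jobs\import.exe" /sc daily /st 02:00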

    Read the article

  • My company is a Rackspace Cloud client (provided to us for free) and I'm trying to find some way to set up version control

    - by Nick S.
    As the title says, my (small) business is provided a free Rackspace Cloud client account. We receive a decent amount of traffic but I haven't been able to put together a business case to move to our own server yet. However, we are developing some complex apps and I'm frustrated with not having the ability to even ssh into the remote server. Ultimately, I'd like to set up some sort of version control (at this point, I'll take anything, git or otherwise). I have control over databases, can FTP, set up cron jobs, and perform a few other basic functions. I can't think of any way to set up git or something similar without ssh access. A thought went through my mind that maybe some sort of PHP version control exists that I might be able to set up, but I haven't had any luck finding it yet. Do you guys have any ideas, thoughts, or advice?
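    One hedged workaround when there is no ssh: keep the repository, with all its history, on local machines or any hosted remote, and treat the Cloud site purely as a deploy target. For example (git-ftp is a third-party tool, named here as one option rather than a recommendation):

        git archive --format=zip HEAD -o deploy.zip   # clean snapshot to upload over FTP
        # or let a tool upload only the files that changed:
        git ftp push                                  # from the git-ftp project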

    Read the article

  • Synchronizing/deploying scripts across several systems

    - by otto
    I have a few time-consuming tasks that I like to spread across several computers. These tasks require running an identical ruby or python script (or a series of scripts that call each other) on each machine. The machines have a separate config file telling the script what portion of the task to complete. I want to figure out the best way to synchronize the scripts on these machines prior to running them. Up until now, I have been making changes to a copy of the script on a network share and then copying a fresh copy to each machine when I want to run it. But this is cumbersome and leaves a chance for error (e.g. missing a file on the copy or not clicking "copy and replace"). Let's assume the systems are standard Windows machines that are not dedicated to this task, and that I don't need to run these scripts all the time (so I don't want a solution that runs 24/7 and always keeps them up to date; I'd prefer something that pushes/pulls on command). My thoughts on various options:

    1. Simple adaptation of my current workflow: keep the originals on the network drive, but write a batch file that copies over the latest version of the scripts so everything is a one-click operation. Requires action on each system, but that's not the end of the world (since each one usually needs its configuration file changed slightly too).
    2. Put everything in a Mercurial/Git repository and pull a fresh copy onto each node. Going straight to the repo from each machine would guarantee a current version (and would have the fringe benefit of allowing edits to the script from any machine). Cons: it requires a VCS to be installed on each machine, and there might be some pain dealing with authentication, since I wouldn't use a public repo.
    3. Open up write access on a shared folder and write a script that uses rsync (or similar) to push the changes out to all of the machines at once. This gets a current version onto every machine (though you would have to change the script to omit a machine or add a new one). The possible issue is that each computer has to allow write access.
    4. Dropbox is a reasonable suggestion (and could work well), but I don't want to use an external service, and I'd prefer not to have Dropbox running 24/7 on systems that would normally not need it.

    Is there something simple that I am missing? Some tool designed expressly for doing this kind of thing? Otherwise I am leaning toward tying all of the systems into Mercurial since, while it requires extra software, it is a little more robust than writing a batch file (e.g. if I split part of a script into a separate module, Mercurial will know what to do, whereas I would have to add a line to the batch file). A sketch of the pull-on-command option appears below.
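    For the Mercurial option, a clone can point straight at a network share, so no server process is needed; a minimal sketch in batch form (paths are placeholders):

        rem one-time setup on each node
        hg clone \\fileserver\share\scripts C:\scripts
        rem before each run: fetch and apply the latest scripts
        cd /d C:\scripts
        hg pull -u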

    Read the article

  • Mercurial to Mercurial to Subversion Workflow Problem

    - by Dalroth
    We're migrating from Subversion to Mercurial. To facilitate the migration, we're creating an intermediate Mercurial repository that is a clone of our Subversion repository. All developers will begin switching over to the Mercurial repository, and we'll periodically push changes from the intermediate Mercurial repository to the existing Subversion repository. After a period of time, we'll simply retire the Subversion repository and the intermediate Mercurial repository will become the new system of record.

    Dev 1 Local --+--> Mercurial --+--> Subversion
    Dev 2 Local --+                +
    Dev 3 Local --+                +
    Dev 4 -------------------------+

    I've been testing this out, but I keep running into a problem when I push changes from my local repository to the intermediate Mercurial repository and then up into our Subversion repository. On my local machine, I have a changeset that is committed and ready to be pushed to our intermediate Mercurial repository; here you can see it is revision #2263 with hash 625... I push only this changeset to the remote repository, and so far everything looks good: the changeset has been pushed. I now switch over to the remote repository and update the working directory:

    hg update
    1 files updated, 0 files merged, 0 files removed, 0 files unresolved

    Next, I push the change up to Subversion, which works great:

    hg push
    pushing to svn://...
    searching for changes
    [r3834] bmurphy: database namespace
    pulled 1 revisions
    saving bundle to /srv/hg/repository/.hg/strip-backup/62539f8df3b2-temp
    adding branch
    adding changesets
    adding manifests
    adding file changes
    added 1 changesets with 1 changes to 1 files
    rebase completed

    At this point, the change is in the Subversion repository, and I return to my local client and pull changes to my local machine. Huh? I've now got two changesets: my original changeset now appears as a local branch, and the other changeset has a new revision number, 2264, and a new hash, 10c1... Anyway, I update my local repo to the new revision, and I'm now switched over. Finally, I click "determine and mark outgoing changesets", and as you can see, Mercurial still wants to push out my previous changeset even though it has already been pushed. Clearly, I'm doing something wrong. I also can't merge the two revisions: if I merge them on my local machine, I end up with a "merge" commit, and when I push that merge commit out to the intermediate Mercurial repository, I can no longer push changes out to our Subversion repository. I end up with the following problem:

    hg update
    0 files updated, 0 files merged, 0 files removed, 0 files unresolved
    hg push
    pushing to svn://...
    searching for changes
    abort: Sorry, can't find svn parent of a merge revision.

    and I have to roll back the merge to get back to a working state. What am I missing?
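    A hedged reading of the failure: hgsubversion cannot push merge commits, and the push-to-Subversion step rewrites each changeset (hence the new hash), so local work has to be rebased onto the rewritten changeset rather than merged with it. Something along these lines, assuming the bundled rebase and strip extensions are enabled:

        hg pull            # brings the rewritten changeset (new hash) down
        hg rebase -d tip   # replay local work onto it instead of merging
        hg strip 2263      # if the superseded changeset lingers, discard it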

    Read the article

  • Windows Workflow and sql script in declarative config like InRule

    - by Satish
    We have been using InRule for our rules needs, but we have found that it does not scale well, so we are investigating Windows Workflow. Within InRule we could configure pretty much any task; for example, our SQL scripts and stored procedures were all part of a separate rule config file. I am wondering if there is similar functionality within Windows Workflow, where I could just call a declarative task and pass it a bunch of parameters. This task should contain the SQL script I would be executing, and we should be able to change the script at runtime without recompiling the WF code. Is this possible in Windows Workflow, and how can I accomplish it? Additionally, for SQL execution within a workflow, how does it get the connection string? Should it be passed from the calling program (is passing it as an input parameter from the calling app via the Dictionary object the best way), or can the workflow code see my calling program's app.config and get the connection string from there?

    Read the article

  • Linq Query Help Needed

    - by Randy Minder
    Say I have the following LINQ queries:

    var source = from workflow in sourceWorkflowList
                 select new { SubID = workflow.SubID,
                              ReadTime = workflow.ReadTime,
                              ProcessID = workflow.ProcessID,
                              LineID = workflow.LineID };

    var target = from workflow in targetWorkflowList
                 select new { SubID = workflow.SubID,
                              ReadTime = workflow.ReadTime,
                              ProcessID = workflow.ProcessID,
                              LineID = workflow.LineID };

    var difference = source.Except(target);

    sourceWorkflowList and targetWorkflowList have the exact same column definitions, but they both contain more columns of data than what is shown in the queries above; those are just the columns needed for this particular issue. difference contains all rows in sourceWorkflowList that are not contained in targetWorkflowList. Now what I would like to do is remove all rows from sourceWorkflowList that do not exist in difference. Could someone show me a query that would do this? Thanks very much - Randy

    Read the article

  • Can an arbitrary email address be used in a workflow send email activity?

    - by Greg McGuffey
    I'm wondering if there is any way to include an arbitrary email address in the To:, From:, CC: or BCC: fields of a send email activity? It appears that they must be contacts in the CRM. I ask because I have a requirement to cc a known group email with no actual user associated with it (something like [email protected]; it's not a queue at all). I'm concerned that if I create a CRM user for this email, then when I move to production I'll have to change all the workflows using this email to point to the CRM entity on the production box (assuming the GUID is saved with the activity). If an arbitrary email isn't possible, any other suggestions? Thanks!

    Read the article

  • Remmina 1.0 problems

    - by kamil
    I downloaded and compiled remmina and freerdp from the source repositories, but unluckily I am having trouble with RDP connections: when I initiate any RDP connection from Remmina, Remmina closes immediately. Running freerdp from a terminal worked like a charm, so I ran remmina from a terminal to check for errors; it reports "segmentation fault" after connecting to any RDP host. I got the sources from git:

    git clone git://github.com/FreeRDP/Remmina.git
    git clone git://github.com/FreeRDP/FreeRDP.git

    Compilation is successful with all dependencies. I tried to remove the old remmina 0.9.9.1 with no luck, and rebooting the machine and running ldconfig did not help either, so I have switched to other RDP clients for now. How can I fix this? The old remmina was working well with RDP but sometimes caused high memory consumption (about 1 GB of RAM). I am using Ubuntu 12.04.1 64-bit.
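    When a crash reports nothing but a bare segmentation fault, running the program under gdb is the quickest way to get a backtrace worth attaching to a bug report (assuming the gdb package is installed):

        gdb -ex run --args remmina
        # after the crash, type 'bt' to print the backtrace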

    Read the article

  • Need help with an AJAX workflow

    - by Anders
    Sorry I couldn't be more descriptive with the title; I will elaborate fully below. I have a web application that I want to implement some AJAX functionality into. Currently, it is running ASP.NET 3.5 with VB.NET codebehind. My current "problem" is that I want to be able to populate a DIV dynamically when a user clicks an item in a list. The list item currently contains an HttpUtility.UrlEncode() (ASP.NET) string of the content that should appear in the DIV. Example:

    <li onclick="setFAQ('The+maximum+number+of+digits+a+patient+account+number+can+contain+is+ten+(10).');">
    What is the maximum number of digits a patient account number can contain?</li>

    I can partially decode the string with the JavaScript function unescape(), but it does not fully decode it. I would much rather pass the JavaScript function the FAQ ID and then somehow pull the information from the database where it originates. I am 99% sure it is impossible to call an ASP function from within a JavaScript function, so I am kind of stumped. I am kind of new to AJAX/ASP.NET, so this is a learning experience for me.

    Read the article

  • asp.net mvc - reusing pages/controllers in workflow

    - by fregas
    I have 2 workflows:

    1. The user signs up for the first time. They see 3 different screens: their basic user information, their credit card, and some additional profile information. They complete these 3 steps in a wizard-like fashion, where each time they hit "submit" they leave the current screen and move on to the next.
    2. The user is already signed up. He has links in the navigation to these 3 separate pages and can update them in any order. When he hits save, he doesn't leave the page he's on; it just shows something at the top that says "Credit Card Info saved..." or whatever, possibly using ajax or maybe a full page refresh.

    I would like to reuse the code, not only the view but also the controller, for these 3 screens between the two workflows, but without a ton of if...then logic to determine where to go next depending on whether it's a first signup in the wizard or an update of individual parts of a profile. Any ideas?

    Read the article

  • Verify my form workflow

    - by Shackrock
    I have a form with some sensitive info (CC numbers). My workflow is:

    1. One page takes all form items.
    2. Upon submission, values are validated. If all is well, all data is stored in a session variable, and the page reloads and displays this info from the session variable.
    3. If everything is OK on the review page, the user clicks submit and the session variable is sent to another form for processing (sending payment).
    4. Upon success, the session is destroyed.
    5. Upon failure (a bad CC number, for example), the user is sent back to the form with all of the fields filled in just like before, so that they can check for errors and try again (the session is NOT destroyed).

    Does anyone see anything wrong with this, from a security or best-practices standpoint?

    UPDATE: I'm thinking I can get rid of a step by never storing the info in a session at all. Just have a one-page checkout, no review page... makes sense.

    Read the article

  • Workflow for statistical analysis and report writing

    - by ws
    Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use case is basically this:

    1. Client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district.
    2. The analyst downloads some data, munges the data, and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries).
    3. The analyst analyzes the data created in (2), gets close to her goal, but sees that she needs more data and so goes back to (1).
    4. Rinse and repeat until the tables and graphics meet QA/QC and satisfy the client.
    5. Write the report incorporating the tables and graphics.
    6. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data with a new download (e.g. get the building permits from the last year) and pressing a "RECALCULATE" button, unless specifications change.

    At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out. I use a mix of spreadsheets, SQL, ArcGIS, R, and Unix tools. Thanks!

    PS: Below is a basic Makefile that checks for dependencies on various intermediate datasets (".RData" suffix) and scripts (".R" suffix). Make uses timestamps to check dependencies, so if you 'touch ss07por.csv', it will see that this file is newer than all the files/targets that depend on it and execute the given scripts in order to update them accordingly. This is still a work in progress, including a step for loading into a SQL database and a step for a templating language like Sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback! http://www.gnu.org/software/make/manual/html_node/index.html#Top

    R=/home/wsprague/R-2.9.2/bin/R

    persondata.RData : ImportData.R ../../DATA/ss07por.csv Functions.R
            $R --slave -f ImportData.R

    persondata.Munged.RData : MungeData.R persondata.RData Functions.R
            $R --slave -f MungeData.R

    report.txt : TabulateAndGraph.R persondata.Munged.RData Functions.R
            $R --slave -f TabulateAndGraph.R > report.txt

    Read the article

  • Trying to determine best design for this workflow - c# - 3.0

    - by ltech
    Input server: files of type jpg, tif, raw, png, mov come in via FTP. Each file needs to be watermarked, if applicable, and have metadata added to it. Then each file needs to be moved to an orders directory, where an order file is generated, and the result is packaged as a zip file and moved to a processing server. The file names are of the form [orderid_userid_guid].[jpg|tif|mov|png...]. As I expect the volume to grow, I don't want to work on one file at a time and move it through the workflow; I would prefer something multithreaded/asynchronous if possible.

    Read the article

  • Lego-Style Cocoa Workflow Application

    - by Armin Ronacher
    Hi, I currently have to develop a system with a UI very similar to MIT's Scratch. In case you don't know it, here is a screenshot: http://kidconfidence.com/blogs/wp-content/uploads/2007/10/scratch1.png Basically you have bricks in the library on the left that you can drop into the window on the right side. The problem I have is that I'm new to Cocoa and not sure what the best way to accomplish this would be. Because you can sometimes nest these bricks and other times stick them together, I wonder if there is something that would help with implementing that. I recognize this is not a very common interface, so there are probably no existing implementations of it, but maybe there are helpers for parts of it. Regards, Armin

    Read the article

  • FFMPEG compilation errors

    - by Nitin Sagar
    First of all, I am a newbie to Ubuntu Linux and have been trying to install and compile FFmpeg on an Ubuntu machine, using the following reference: https://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide I have already installed the git packages that turned up in the software centre. Whatever I try to clone shows the error below, and please note that the network is wireless and connected with full bandwidth; I am able to browse websites, so I am not sure why it reports that the connection timed out.

    root@ubuntu:~# cd
    root@ubuntu:~# git clone --depth 1 git://github.com/mstorsjo/fdk-aac.git
    Cloning into 'fdk-aac'...
    fatal: unable to connect to github.com:
    github.com[0: 207.97.227.239]: errno=Connection timed out

    I tried these commands as well, to install the x264 lib:

    cd
    git clone --depth 1 git://git.videolan.org/x264
    cd x264

    I am doing all this as the root user. Any help and comments would be appreciated. Thanks, Nitin
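    A likely culprit, offered as a guess from the symptom: the git:// protocol runs on TCP port 9418, which many networks block while still allowing normal web traffic. GitHub serves the same repositories over HTTPS, so swapping the URL scheme is a quick test:

        git clone --depth 1 https://github.com/mstorsjo/fdk-aac.git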

    Read the article

  • Sencha touch 2/ app workflow with navigation view

    - by eplatonov
    I am trying to understand how I can implement the same functionality that the navigation view provides in Sencha Touch 2, but with one difference: each item of the 'Ext.NavigationView' component should have its own unique set of 'navigationBar' elements (a set of buttons, for example). I know that I can do something like this:

    this.getMain().getNavigationBar().rightBox.removeAll();
    this.getMain().getNavigationBar().rightBox.add(this.getSettingButton());
    // where 'getSettingButton' returns a button I defined earlier

    and perform this action each time a 'push' event happens (clear the 'navigationBar' and add the appropriate set of buttons). Alternatively, I could implement an 'Ext.Panel' with 'layout: card' and a set of 'Ext.Panel' elements in its 'items' property, each of which would have its own unique 'toolbar', and use the 'setActiveItem' method to control the behavior. But each of these approaches seems a bit weird, doesn't it? I expected there would be a much more natural approach; most likely I don't know what I need. Confirm my doubts: what is the best way to do this?

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The Background: my group has 4 SQL Server databases: Production, UAT, Test, and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs), I make a request of my manager, who promotes to Test. After testing, she submits a request to an Admin, who promotes to UAT. After successful user testing, the same Admin promotes to Production.

    The Problem: the entire process is awkward for a few reasons. Each person must manually track their changes: if I update, add, or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain, and it's a waste of the testers' time anyway. Many of the changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.

    The Question: people have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how their structures differ, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.

    Read the article
