Search Results

Search found 3767 results on 151 pages for 'workflow foundation'.


  • Configuring Team Foundation Server Basic on Home Server

    - by Enrique Lima
    For the installation I selected only the Team Foundation Server role. Then I opened the Team Foundation Server Administration Console (which I think is a great addition and an improvement over the way TFS was configured in the past) to proceed with the configuration of the pieces. Once I selected Configure Installed Features, the Configuration Center opened up. Now, the choices … In my implementation here I just want to take advantage of Source Control, primarily. I want to be able to store my code and projects. So, Basic it is! The Basic Configuration Wizard opens up. The options to configure are very limited, but we have to provide details for the SQL Server instance, and then select Install SQL Server Express. If you want to take advantage of another system in your environment to host your database, you could instead use an existing SQL Server instance. Once it has the details it needs, you get a Summary view to confirm your choices. Once you click Next or Verify, it runs readiness checks on your system to make sure the installation will have a successful pass. And we love GREEN! Now, since we got the green flag, our next stop is to let the wizard do its magic: click on Configure. And once again, we love GREEN! We click Next, and … we like a big green Success sign. We close the Configuration Center … first results … Web Access … nothing to show yet, but we are there! And all this running from a Microsoft Home Server installation.

    Read the article

  • SharePoint 2007 Hosting :: How to Move a Document from One Library to Another

    - by mbridge
    Moving a document using a SharePoint Designer workflow involves copying the document to the SharePoint document library you want to move it to, and then deleting it from the document library it is currently in. You can use the Copy List Item action to copy the document and the Delete item action to delete it. To create a SharePoint Designer workflow that can move a document from one document library to another:
    1. In SharePoint Designer 2007, open the SharePoint site on which the document library containing the documents to move is located.
    2. On the Define your new workflow screen of the Workflow Designer, enter a name for the workflow, select the document library you want to attach the workflow to (the library containing the documents to move), select Allow this workflow to be manually started from an item, and click Next.
    3. On the Step 1 screen of the Workflow Designer, click Actions, and then click More Actions from the drop-down menu.
    4. On the Workflow Actions dialog box, select List Actions from the category drop-down list box, select Copy List Item from the actions list, and click Add. The following text is added to the Workflow Designer: Copy item in this list to this list.
    5. On the Step 1 screen, click the first this list (the document library to copy the document from) in the text of the Copy List Item action.
    6. On the Choose List Item dialog box, leave Current Item selected, and click OK.
    7. On the Step 1 screen, click the second this list (the document library to copy the document to) in the text of the Copy List Item action, and select the target document library from the drop-down list box that appears.
    8. On the Step 1 screen, click Actions, and then click More Actions from the drop-down menu.
    9. On the Workflow Actions dialog box, select List Actions from the category drop-down list box, select Delete Item from the actions list, and click Add. The following text is added to the Workflow Designer: then Delete item in this list.
    10. On the Step 1 screen, click this list in the text of the Delete Item action.
    11. On the Choose List Item dialog box, leave Current Item selected and click OK. The final text for the workflow should now read: Copy item in DocLib1 to DocLib2, then Delete item in DocLib1, where DocLib1 is the SharePoint document library containing the document to move and DocLib2 is the document library to move it to.
    12. On the Step 1 screen of the Workflow Designer, click Finish.
    How to test the workflow:
    1. Go to the SharePoint document library to which you attached the workflow, click a document, and select Workflows from the drop-down menu.
    2. On the Workflows page, click the name of your SharePoint Designer workflow.
    3. On the workflow initiation page, click Start.
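    If you would rather do the move in code than in a declarative workflow, the same copy-then-delete pattern can be expressed with the SharePoint 2007 server object model. This is a minimal sketch only; the site URL and the DocLib1/DocLib2 paths are hypothetical placeholders:

        using Microsoft.SharePoint;

        class MoveDocumentSketch
        {
            static void Main()
            {
                // Open the site and web that contain both document libraries.
                using (SPSite site = new SPSite("http://server/sites/demo"))
                using (SPWeb web = site.OpenWeb())
                {
                    // Copy the document to the target library, then delete the
                    // original -- the same pattern the workflow above uses.
                    SPFile source = web.GetFile("DocLib1/Report.docx");
                    source.CopyTo("DocLib2/Report.docx", true); // true = overwrite
                    source.Delete();
                }
            }
        }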

    Read the article

  • BPM 11g and Human Workflow Shadow Rows by Adam Desjardin

    - by JuergenKress
    During the OFM Forum last week, there were a few discussions around the relationship between the Human Workflow (WF_TASK*) tables in the SOA_INFRA schema and BPMN processes. It is important to know how these are related because it can have a performance impact. We have seen this performance issue several times when BPMN processes are used to model high-volume system integrations without knowing all of the implications of using BPMN in this pattern. Most people assume that BPMN instances and their related data are stored in the CUBE_*, DLV_*, and AUDIT_* tables in the same way that BPEL instances are stored, with additional data in the BPM_* tables as well. The group of tables that is not usually considered, though, is the WF* tables that are used for Human Workflow. The WFTASK table is used by all BPMN processes in order to support features such as process-level comments and attachments, whether those features are currently used in the process or not. For a standard human task that is created from a BPMN process, the following data is stored in the WFTASK table:
    - One row per human task that is created
    - COMPONENTTYPE = "Workflow"
    - TASKDEFINITIONID = Human Task ID (partition/CompositeName!Version/TaskName)
    - ACCESSKEY = NULL
    Read the complete article here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
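    To gauge the impact on a running system, something like the following diagnostic query can help. This is a hedged sketch rather than anything from the article: it assumes read access to the SOA_INFRA schema and uses only the WFTASK table and column names quoted above.

        -- Hypothetical diagnostic query: count WFTASK shadow rows created by
        -- BPMN processes, grouped by the task definition that produced them.
        SELECT TASKDEFINITIONID, COUNT(*) AS TASK_ROWS
        FROM WFTASK
        WHERE COMPONENTTYPE = 'Workflow'
        GROUP BY TASKDEFINITIONID
        ORDER BY TASK_ROWS DESC;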

    Read the article

  • GitHub team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using Subversion, but soon we're making the switch to GitHub. I'm looking at different types of GitHub workflows, and we're not sure if the whole forking concept in GitHub, one fork per developer, is such a good idea for us. If we use forks, I understand each developer will have his own private remote and local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have two remotes: origin (the remote fork) and upstream (used to "sync" changes from the main repository). I'm not sure that's such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow If we don't use forks, we can probably get by fine using a central repo, creating a branch for each task we're working on, and merging them into the development branch in the same repository. It means we won't be able to restrict merging of branches, and it might get a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?
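    For reference, the two-remote setup described above typically amounts to a one-time setup per developer; the repository URLs below are placeholders:

        # clone your personal fork; this becomes the "origin" remote
        git clone git@github.com:yourname/project.git
        cd project

        # add the shared main repository as the "upstream" remote
        git remote add upstream git@github.com:team/project.git

        # later, to sync changes from the main repository:
        git fetch upstream
        git merge upstream/master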

    Read the article

  • "Google Wave" becomes "Apache Wave": the Apache Software Foundation officially agrees to take over the project

    "Google Wave" becomes "Apache Wave". The Apache Software Foundation officially agrees to take over the project. Update of 09/12/10: The Apache Foundation and Google have just confirmed it: Wave officially joins the foundation's incubation program, now that the foundation has finally given its green light for the migration (see above). The project's steering committee has been modified accordingly, notably with the arrival of two members from Novell (which is also heavily involved ...

    Read the article

  • Mercurial Conversion from Team Foundation Server

    - by mhawley
    One of my many (almost) daily tasks working on the CodePlex platform, since we released Mercurial as a supported version control system, is converting projects from Team Foundation Server (TFS) to Mercurial. I'm happy to say that of all the conversions I have done since mid-January, the success rate of migrating full source history is about 95%. To get to this success point, I have had to learn and refine several techniques utilizing a few different tools… (read more)

    Read the article

  • The Linux Foundation Store: Linux gets silly

    Cyber Cynic: "...the Linux Foundation, the non-profit organization dedicated to growing Linux, has launched a new Linux merchandise store featuring a line of exclusive and original T-shirts, hats, mugs and other items that reflect "geek culture.""

    Read the article

  • Kauffman Foundation Selects Stackify to Present at Startup@Kauffman Demo Day

    - by Matt Watson
    Stackify will join fellow Kansas City startups to kick off Global Entrepreneurship Week.

    On Monday, November 12, Stackify, a provider of tools that improve developers’ ability to support, manage and monitor their enterprise applications, will pitch its technology at the Startup@Kauffman Demo Day in Kansas City, Mo. Hosted by the Ewing Marion Kauffman Foundation, the event will mark the start of Global Entrepreneurship Week, the world’s largest celebration of innovators and job creators who launch startups.

    Stackify was selected through a competitive process for a six-minute opportunity to pitch its new technology to investors at Demo Day. In his pitch, Stackify’s founder, Matt Watson, will discuss the current challenges DevOps teams face and reveal how Stackify is reinventing the way software developers provide application support.

    In October, Stackify had successful appearances at two similar startup events. At Tech Cocktail’s Kansas City Mixer, the company was named “Hottest Kansas City Startup,” and it won free hosting service after pitching its solution at St. Louis, Mo.’s Startup Connection.

    “With less than a month until our public launch, events like Demo Day are giving Stackify the support and positioning we need to change the development community,” said Watson. “As a serial technology entrepreneur, I appreciate the Kauffman Foundation’s support of startup companies like Stackify. We’re thrilled to participate in Demo Day and Global Entrepreneurship Week activities.”

    Scheduled to publicly launch in early December 2012, Stackify’s platform gives developers insights into their production applications, servers and databases. Stackify finally provides agile developers safe and secure remote access to look at log files, config files, server health and databases. This solution removes the bottleneck from managers and system administrators who, until now, are the only team members with access. Essentially, Stackify enables development teams to spend less time fixing bugs and more time creating products.

    Currently in beta, Stackify has already been named a “Company to Watch” by Software Development Times, which called the startup “the next big thing.” Developers can register for a free Stackify account on Stackify.com.

    About Stackify: Founded in 2012, Stackify is a Kansas City-based software service provider that helps development teams troubleshoot application problems. Currently in beta, Stackify will be publicly available in December 2012, when agile developers will finally be able to provide agile support. The startup has already been recognized by Tech Cocktail as “Hottest Kansas City Startup” and was named a “Company to Watch” by Software Development Times. To learn more, visit http://www.stackify.com and follow @stackify on Twitter.

    Read the article

  • Accenture Foundation Platform for Oracle

    - by Michelle Kimihira
    Accenture Foundation Platform for Oracle (AFPO) helps clients accelerate deployments on Oracle Fusion Middleware. Version 5, the latest release, adds exciting new capabilities, including integration with Oracle Application Development Framework (ADF) Mobile and Oracle WebCenter. For more information, visit Accenture's website. Additional product information is available on Oracle.com under Oracle Fusion Middleware.

    Read the article

  • Facebook C# SDK submitted to the Outercurve Foundation

    - by The Official Microsoft IIS Site
    I am pleased to announce another open source milestone as we continue to deliver on our commitment to interoperability: today, the Facebook C# SDK was submitted to the Outercurve Foundation’s Data, Languages, and Systems Interoperability gallery. This project is a set of libraries that enables developers on all Microsoft platforms, as well as Mono, to build applications that integrate with Facebook. The project contains core libraries for authentication and for calling Facebook APIs. Additionally...(read more)

    Read the article

  • Can somebody help me install this jBPM-based workflow management suite?

    - by Eternal Saint
    1. It's the Book Workflow Interface software available at SourceForge: http://bookworkflowint.sourceforge.net/ Any instructions on installing and configuring it would be great, especially on Windows; however, I can try Linux-specific ones as well. I could not find any installation instructions. I posted this on Stack Overflow by mistake and was directed here.
    2. Can you guys suggest any good scanning/digitization workflow software (document imaging) that I can adapt to my scanner software? In fact, even a simple one would do, maybe based on hot folders. I just want to be able to track the unique ID/barcode of the scanned book and its status, so that it is not scanned again. The books or manuscripts could run to millions of pages. I thought of using some kind of generic bug-tracking tool to just track a few fields, but I don't know if it's the right choice. Thank you very much.

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the compliance regulations to which they are bound (e.g. 21 CFR Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it does not come for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.
    Figure: The only thing missing is Build

    In order to build the relationship model below, we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has a role to play in knitting this together.
    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this, to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.
    Figure: How are things progressing

    Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and can be augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…

    Requirements – The root of all knowledge
    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:
    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.
    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.
    Figure: Some magic required, but result still achieved

    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS.
    select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]
    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file on your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
    Figure: Saving Queries locally can be useful
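    For example, a historical query might look like the following. This is a sketch using standard TFS field reference names; the date is a placeholder.

        select [System.Id], [System.Title], [System.State]
        from WorkItems
        where [System.WorkItemType] = 'Requirement'
        order by [System.Id]
        asof '2010-01-01'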
    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done
    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.
    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a child of a Requirement Work Item Type.
    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.
    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.
    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap
    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.
    Figure: Changesets are linked by Tasks and Builds
    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.
    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and which Changesets they belong to.
    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.
    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this, you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them
    Branches are a really powerful tool for development and release management, but they are most important for audits.
    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build is created, as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between that time and the branch's creation. Another, better method is to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together
    Builds are the glue that enables the next level of traceability by tying everything together.
    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.
    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also records the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the check-ins that have occurred since the last Build.
    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets, and update them accordingly.
    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.
    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original requirements?

    Test Cases – How we know we are done
    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first is the ability to track and validate acceptance criteria in the form of a Test Case. If you agree with the business on a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.
    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the business signs off on the Test Case moving from the "Design" to "Ready" states. The second is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test, and its results, for a unit test designed to prove the existence of a Bug and then guard against its reoccurrence.
    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every unit test in your system; there are, however, many integration and regression tests that may be automated and that it would make sense to track in this way.

    Bugs – Let's not have regressions
    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.
    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.
    Figure: Bugs have many associations

    This allows you to track Bugs/Defects in your system effectively and report on them.

    Change Requests – I am not a feature
    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.
    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion
    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:
    - Business Analysts manage Requirements and Change Requests
    - Programmers manage Tasks and check in against Change Requests and Bugs
    - Testers manage Bugs and Test Cases
    - Build Masters manage Builds
    Although there is some crossover, there are still roles, or "hats", that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Objective-C: Getting the True Class of Classes in Class Clusters

    - by TechZen
    Recently, while trying to answer a question here, I ran some test code to see how Xcode/gdb reported the class of instances in class clusters (see below). In the past, I've expected to see something like:

    PrivateClusterClass:PublicSuperClass:NSObject

    Such as this (which still returns as expected):

    NSPathStore2:NSString:NSObject

    ... for a string created with +[NSString pathWithComponents:]. However, with NSSet and its subclasses, the following code:

    - (void)applicationDidFinishLaunching:(UIApplication *)application {
        NSSet *s = [NSSet setWithObject:@"setWithObject"];
        NSMutableSet *m = [NSMutableSet setWithCapacity:1];
        [m addObject:@"Added String"];
        NSMutableSet *n = [[NSMutableSet alloc] initWithCapacity:1];
        [self showSuperClasses:s];
        [self showSuperClasses:m];
        [self showSuperClasses:n];
        [self showSuperClasses:@"Steve"];
    }

    - (void)showSuperClasses:(id)anObject {
        Class cl = [anObject class];
        NSString *classDescription = [cl description];
        while ([cl superclass]) {
            cl = [cl superclass];
            classDescription = [classDescription stringByAppendingFormat:@":%@", [cl description]];
        }
        NSLog(@"%@ classes=%@", [anObject class], classDescription);
    }

    ... outputs:

    // NSSet *s
    NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject
    // NSMutableSet *m
    NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject
    // NSMutableSet *n
    NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject
    // NSString @"Steve"
    NSCFString classes=NSCFString:NSMutableString:NSString:NSObject

    The debugger shows the same class for all the set instances. I know that in the past the set class cluster did not return like this. What has changed? (I suspect it is a change in the bridge from Core Foundation.) Which class clusters report just a generic class (e.g. NSCFSet) and which report an actual subclass (e.g. NSPathStore2)? Most importantly, when debugging, how do you determine the actual class of an NSSet cluster instance?

    Read the article

  • How do I create a reusable WF sequential workflow?

    - by djgiard
    I have two customers that have the same workflow (create file – transport file – wait for response – send response to internal team); however, the implementation of each step is different for each customer. For example, one customer requires a flat file to be sent via SFTP, while the other customer requires an XML file to be sent via FTP. I'd like to create a sequential workflow using Windows Workflow Foundation (WF) and reuse this workflow for multiple vendors. Each action's call to an external module can use the same interface, but a different concrete implementation. However, I'm unfamiliar with WF and I'm not sure how to implement this. Can someone point me to the proper way to use this pattern? Will it make a difference whether I choose WF 3.5 or WF 4.0? Thank you.
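    One way to get the reuse described above is a minimal WF 4 sketch along these lines; the interface and class names here are hypothetical illustrations, not part of WF. The shared sequence is modelled once, and the customer-specific behaviour is passed in through an interface-typed argument:

        using System.Activities;

        // Each customer supplies its own transport implementation.
        public interface IFileTransport
        {
            void Send(string filePath);
        }

        public class SftpFlatFileTransport : IFileTransport
        {
            public void Send(string filePath) { /* SFTP flat-file logic here */ }
        }

        public class FtpXmlTransport : IFileTransport
        {
            public void Send(string filePath) { /* FTP XML logic here */ }
        }

        // Reusable activity: the sequential workflow stays the same for every
        // customer; only the transport supplied at runtime differs.
        public sealed class TransportFile : CodeActivity
        {
            public InArgument<IFileTransport> Transport { get; set; }
            public InArgument<string> FilePath { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                context.GetValue(Transport).Send(context.GetValue(FilePath));
            }
        }

    The concrete transport can then be supplied when the workflow is invoked, for example through the WorkflowInvoker input dictionary, so the same workflow definition serves both customers. The same idea is possible in WF 3.5 with custom activities and DependencyProperty-backed properties, but the WF 4 argument model makes the pattern simpler, which may be a point in favour of 4.0 for a new project.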

    Read the article

  • TFS 2010: Is MSBuild going to be dead because of Windows Workflow?

    - by afsharm
    MSBuild in TFS 2010 has been replaced by Windows Workflow 4.0. This means that when you create a Build Definition, you won't have a TFSBuild.proj to edit; instead, you must edit a workflow to customize your build. By the way, am I correct in saying that Microsoft is not supporting MSBuild in TFS 2010, and that learning MSBuild as a TFS 2010 Team Build administrator isn't worth it? One more question: is Microsoft going to replace the language of Visual Studio projects, MSBuild, with something like Windows Workflow? Many thanks

    Read the article
