Search Results

Search found 4276 results on 172 pages for 'integration certificati'.

Page 23/172 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Configure Jenkins and Tomcat using Puppet

    - by ex3v
    I'm trying to set up a Spring dev environment (Jenkins, Tomcat) on Vagrant. What I really want to achieve is to limit the config to Puppet scripts only, so I can share it with my colleagues and we can work together on the same environment. So far I've managed to set up simple scripts to install Jenkins, Tomcat and so on, and they work fine. What about Jenkins configuration, though? I'm pretty green at Jenkins usage and configuration, and not sure if I'm doing it the right way... I found this article and I want to migrate the whole setup described in it to Puppet. Any ideas? Thanks.

    Read the article

  • Paypal PDT and IPN , how does it work?

    - by slow diver
    PDT (Payment Data Transfer) means getting the transaction data for a purchase that was made on the PayPal site, so that you can fetch it on your own site, display it to the user, and perhaps store it in your database for archiving and tracking purposes. But I cannot exactly follow the documentation here. What I am not getting is this:

    Once you have activated PDT, every time a buyer makes a website payment and is redirected to your return URL, a transaction token will be passed along as a "GET" variable to this return URL. In order to properly use PDT and display transaction details to your customer, you should fetch the transaction token, variable name "tx", and retrieve transaction details from PayPal by constructing an HTTP POST to PayPal. Your POST should be sent to https://www.paypal.com/cgi-bin/webscr. You must post the transaction token using the variable "tx" and the value of the transaction token previously received (e.g. "tx=transaction_token"), and the special identity token using the variable "at" and the value of your PDT identity token (e.g. "at=identity_token"). You will also need to append a variable named "cmd" with the value "_notify-synch", for example "cmd=_notify-synch", to the POST string.

    IPN: I have set up Instant Payment Notification according to this documentation. That basically means logging into your PayPal account and enabling IPN while specifying a URL where the notification will be sent. This is used to complete an order so that the product can be shipped. What I did was set up a PHP page. I created a table, and whenever that page is called (or hit) it registers an entry in the table, so I know a notification came from PayPal. But it does not work either.

    What am I really doing wrong? The first thing I want to troubleshoot, though, is that when the buyer pays the amount, he should be automatically redirected to my site. I have enabled this, but the automatic redirection just does not work. Instead the URL is shown to him as an option after the payment confirmation is shown. Can someone guide me through how the PDT process goes? Where do I make the request for PDT: is it along with the very first request (the Buy Now button), or is it sent later?

    Addition: I found some good sample code for how everything should work, but it still does not work. I use this code for IPN: http://officetrio.com/modules/free-php-paypal-ipn-script.php. I am using this one for PDT; it uses SSL, and I changed SSL to regular HTTP (copied the PayPal version), but it still does not work: http://ykyuen.wordpress.com/2010/02/17/paypal-payment-data-transfer-sample-code/
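    For what it's worth, the PDT round-trip the quoted documentation describes boils down to one extra server-side POST from your return-URL handler. Here is a minimal sketch, assuming Python with the requests library (the original question uses PHP, but the flow is the same); the identity token value is a placeholder:

        import requests
        from urllib.parse import parse_qsl

        PDT_IDENTITY_TOKEN = "identity_token"  # placeholder; comes from your PayPal profile
        PAYPAL_URL = "https://www.paypal.com/cgi-bin/webscr"

        def fetch_pdt_details(tx_token):
            """POST the 'tx' token back to PayPal and return the transaction details."""
            resp = requests.post(PAYPAL_URL, data={
                "cmd": "_notify-synch",    # tells PayPal this is a PDT lookup
                "tx": tx_token,            # the token passed to your return URL as ?tx=...
                "at": PDT_IDENTITY_TOKEN,  # your PDT identity token
            })
            lines = resp.text.strip().splitlines()
            if not lines or lines[0] != "SUCCESS":
                return None                # "FAIL" or an unexpected response
            # The remaining lines are URL-encoded key=value pairs describing the transaction.
            return dict(parse_qsl("&".join(lines[1:])))

        # Inside your return-URL handler you would call something like
        # details = fetch_pdt_details(request.GET["tx"]) and then display/store them.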

    Read the article

  • In which cases will build artifacts be different in different environments?

    - by Sundeep
    We are working on automating deployment using Jenkins. We have different environments - DEV, UAT, PROD. In SVN, we are tagging each release and placing the same binaries in DEV, UAT and PROD. The artifacts already contain config files for each environment, but I do not understand why we are storing the binaries in an environment folder again. Are there any scenarios where deployment might be different for different environments?

    Read the article

  • The current state of a MERGE Destination for SSIS

    - by jamiet
    Hugo Tap asked me on Twitter earlier today whether or not there exists an SSIS Dataflow Destination component that enables one to MERGE data into a table rather than INSERT it. It's a common request, so I thought it might be useful to summarise the current state of play as regards a MERGE destination for SSIS. Firstly, there is no MERGE destination component in the box; that is, when you install SSIS no MERGE Destination will be available. That being said, the SSIS team has made a MERGE destination component available via CodePlex, which you can get from http://sqlsrvintegrationsrv.codeplex.com/releases/view/19048. I have never used it so cannot vouch for its usefulness, although judging by some of the reviews you might not want to set your expectations too high. Your mileage may vary.   In the past it has occurred to me that a built-in way to provide MERGE from the SSIS pipeline would be highly valuable. I assume that this would have to be provided by the database into which you were merging, hence in March 2010 I submitted the following two requests to Connect: BULK MERGE (111 votes at the time of writing) and [SSIS] BULK MERGE Destination (15 votes). If you think these would be useful, feel free to vote them up and add a comment. Lastly, this one is nothing to do with SSIS, but if you want to perform a minimally logged MERGE using T-SQL, Sunil Agarwal has explained how at Minimal logging and MERGE statement. @Jamiet

    Read the article

  • Reminder: ATG Live Webcast Feb. 24th: Using the R12 EBS Adapter

    - by Bill Sawyer
    Reminder: Our next ATG Live Webcast is happening on 24-Feb. The event is titled: E-Business Suite R12.x SOA Using the E-Business Suite Adapter. This live one-hour webcast will offer a review of the Service Oriented Architecture (SOA) capabilities within E-Business Suite R12, focusing on the E-Business Suite Adapter. While primarily focused on integrators and developers, understanding SOA capabilities is important for all E-Business Suite technologists and superusers.

    Read the article

  • What is the best Video Conference integrated solution for Us? [closed]

    - by Andrei B
    We are trying to integrate a simple (open source) video conferencing solution into our existing application, which is written in C++ and runs on Linux. I am currently looking at using Ekiga (formerly known as GnomeMeeting) or Homer Conferencing (Homer for short). My plan is to "integrate" an existing video conferencing client into our existing software. Please give me a recommendation on which 3rd-party application or library to use to add a video conferencing feature to our application. PS: Please don't close this question. I asked it on StackOverflow and it got closed, so where am I supposed to ask this question? If not here, then what's the point of asking, lol.

    Read the article

  • sp_ssiscatalog v1.0.1.0 now available for download

    - by jamiet
    13 days ago I wrote a blog post entitled Introducing sp_ssiscatalog (v1.0.0.0) in which I first made mention of sp_ssiscatalog, an open source stored procedure intended to make it easy to query the SSIS Catalog. I have been working on some enhancements since then and hence v1.0.1.0 is now available for download from Codeplex.

    What's new in this release? It includes the following enhancements:
    - [execution_id] now gets returned in a call to EXEC [dbo].[sp_ssiscatalog] @operation_type='exec';
    - Events can be filtered by specifying packages to ignore: EXEC [dbo].[sp_ssiscatalog] @operation_type='exec',@exec_events_packagesexcluded='SomePackage.dtsx,AnotherPackage.dtsx';
    - [event_message_id] is now returned in a list of events
    - The list of executions can now be filtered via a minimum and maximum execution_id: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_minimum_execution_id=198,@execs_maximum_execution_id=201
    - Events resultsets now have a field, [event_message_context_xml], that contains an XML document holding all [event_message_context] info (if any exists)

    Installation instructions:
    1. Download the zip file at DB v1.0.1.0. It contains two files, SsisReportingPack.dacpac & SSISDB.dacpac.
    2. Unzip to a folder of your choosing.
    3. Open a command prompt and change to the directory into which you unzipped the files.
    4. Execute: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local) (/tsn specifies the target server. Change as appropriate.)

    If everything works OK you'll see output confirming the publish (it looks slightly different depending on whether the target database already exists or not). This will create a database called [SsisReportingPack] which contains [dbo].[sp_ssiscatalog]. Feedback is welcomed! @Jamiet

    Read the article

  • How to host a site in another site - with little or no coding

    - by tunmise fasipe
    SUMMARY (all of this happens on Site A):
    - User visits Site A
    - User enters username and password
    - User clicks on the Login button
    - User is authenticated on Site B behind the scenes
    - User is shown a page on Site A that contains his/her profile from Site B, laid out/styled by Site B
    - User can click links in the profile page that link to other areas of Site B
    - Meaning: the session has to be maintained somehow

    I have a web application where I store users' passwords and usernames. If you log on to this site, you can log in with the password and username to have access to your profile. There is another option that requires you to log in to my site from your site and have your profile displayed within your site. This is because you might already have a site that your clients know you by. This link is close to what I want to do: http://aspmessageboard.com/showthread.php?t=235069

    A user on Site A logs in to Site B and has the information on Site B showing in Site A. He should not know whether Site B exists. It should be as if everything is happening in Site A. This latter part is what I don't know how to implement. I have these ideas:
    - Have a fixed IFrame within your site to contain my site: but I am concerned about size/layout, since different clients have different layouts/sizes for their content section. I am also thinking about how to maintain the session.
    - A webservice: I don't know how feasible this is, since the password and ID are on my server and you may have to send them back and forth. It means the client would have to code against my API. But I am not just returning data; I have to show them a page that contains the profile details.
    - OpenID, single sign-on: just guessing, but the authentication and data reside on my server; there is nothing to access on your side in this case.

    Examples: like logging into Facebook within my site and still being able to post updates and receive notifications. Facebook implements some of this with IFrames, e.g. the Like button.

    NOTE: I have tested the IFrame option. It worked, but I still had to remove my site-specific content like my page banner, side navigation etc. I was able to log in normally as if I was actually on the site. This shows my GUI, but:
    - the style sheet was missing
    - content was not styled with CSS
    - any relative URL won't work; it would look for that resource relative to the current server, unless I change the links to absolute
    - clicking on the LogIn button produces this error: "The state information is invalid for this page and might be corrupted."

    UPDATE: I was reading about REST webservices a few days ago and I got this idea: what about returning XML from a webservice [REST or SOAP] and providing an XSLT (that I can provide) to display it? Then they won't have to do much coding.
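    On the XML+XSLT idea in the UPDATE, here is a rough illustration of how little coding the consuming site would need. This is a sketch only, assuming Python with lxml; the profile.xml and profile.xsl URLs are hypothetical placeholders for whatever Site B would actually publish:

        from lxml import etree
        import urllib.request

        # Hypothetical endpoints: the profile webservice returns XML, and Site B
        # also publishes an XSLT stylesheet that renders that XML as HTML.
        PROFILE_XML_URL = "https://site-b.example.com/profile.xml"   # placeholder
        PROFILE_XSL_URL = "https://site-b.example.com/profile.xsl"   # placeholder

        def render_profile_html():
            """Fetch the profile XML and transform it to HTML with the provided XSLT."""
            xml_doc = etree.parse(urllib.request.urlopen(PROFILE_XML_URL))
            xslt_doc = etree.parse(urllib.request.urlopen(PROFILE_XSL_URL))
            transform = etree.XSLT(xslt_doc)
            html_doc = transform(xml_doc)
            return etree.tostring(html_doc, pretty_print=True).decode("utf-8")

        # Site A would simply embed the returned HTML fragment inside its own layout.
        print(render_profile_html())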

    Read the article

  • Are there any useful tools to mirror a mailman mailing list as a forum?

    - by mar10
    We have a mailman mailing list; however, as we all know, this is not very user-friendly in terms of searching the archives. I am looking at a way to keep mailman's existing functionality while having a forum linked to it for a more friendly user-interface approach. Is there a forum application that lets you mirror the mailman service, so that posts to mailman are synced into the forum and posts to the forum are synced to mailman?

    Read the article

  • Anyone used Jenkins with Sparkle (the cocoa app update framework)?

    - by Amy Worrall
    Pushing new Sparkle releases of our internal apps is a pain. I have to make the build, make the release notes file, sign the .zip with the private key, and add a new entry to the appcast file tying everything together. I'd love it if Jenkins could help: use the commit messages for the release notes, and automatically do the rest of it. Should I be looking at writing a new Jenkins plugin, or using shell scripting, or is there something already that will do what I want? (A quick Google didn't find anything.) I'm new to Jenkins, and am still trying to get a feel for what it can do. Do any other maintainers of Cocoa apps have any tips?

    Read the article

  • Writing selenium tests, should I just get it done or get it right?

    - by Peter Smith
    I'm attempting to drive my user interface (heavy on JavaScript) through Selenium. I've already tested the rest of my Ajax interaction with Selenium successfully. However, this one particular method seems to be eluding me, because I can't seem to fake the correct click event. I could solve this problem by simply waiting in the test for the user to click a point and then continuing with the test, but this seems like a cop-out. Yet I'm really running out of time on my deadline to have this done and working. Should I just get this done and move on, or should I spend the extra (unknown) amount of time to fix this problem and be able to have my Selenium tests 100% automated?
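    As an aside on the "can't fake the correct click event" part: when a handler listens for a real mouse event at specific coordinates, two things that sometimes work are an offset click via ActionChains or dispatching a synthetic MouseEvent from JavaScript. A rough sketch, assuming the Python Selenium WebDriver bindings; the URL, element id and offsets are made up for illustration:

        from selenium import webdriver
        from selenium.webdriver.common.action_chains import ActionChains

        driver = webdriver.Firefox()
        driver.get("http://localhost:8080/app")          # placeholder URL

        target = driver.find_element_by_id("canvas")     # hypothetical element under test

        # Option 1: a native click at an offset inside the element.
        ActionChains(driver).move_to_element_with_offset(target, 40, 25).click().perform()

        # Option 2: dispatch a synthetic click event directly, useful when the JS
        # handler is bound to the element but native clicks are flaky.
        driver.execute_script("""
            var evt = new MouseEvent('click', {bubbles: true, cancelable: true,
                                               clientX: arguments[1], clientY: arguments[2]});
            arguments[0].dispatchEvent(evt);
        """, target, 40, 25)

        driver.quit()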

    Read the article

  • Automating release management and CI on python projects under mercurial VCS

    - by ms4py
    I have a set of Python projects which are under the Mercurial VCS. I would like to automate the following tasks:
    - Run the test suite for every commit (CI).
    - Make a source distribution for every commit that has a tag in Mercurial. This is regarded as a new release.
    - Copy the distribution to a special repository.
    Jenkins comes up as a suggestion in similar questions, but I'm not sure if it can handle the release management as intended.
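    One way a Jenkins build step (or a plain Mercurial hook) could implement the "release only tagged commits" part is a small script that checks whether the current changeset carries a tag and, if so, builds and copies the sdist. A sketch under assumptions: setup.py-based projects, and a placeholder /srv/releases directory standing in for the "special repository":

        import glob
        import shutil
        import subprocess

        RELEASE_REPO = "/srv/releases"   # placeholder for the special distribution repository

        def tags_of_current_changeset():
            """Return the tags on the working copy's changeset (excluding the floating 'tip')."""
            out = subprocess.check_output(["hg", "log", "-r", ".", "--template", "{tags}"])
            return [t for t in out.decode().split() if t != "tip"]

        def build_and_publish_if_tagged():
            tags = tags_of_current_changeset()
            if not tags:
                print("No tag on this changeset; skipping release build.")
                return
            # Build a source distribution (assumes a setup.py project layout).
            subprocess.check_call(["python", "setup.py", "sdist"])
            for artifact in glob.glob("dist/*"):
                shutil.copy(artifact, RELEASE_REPO)
            print("Published release for tag(s): %s" % ", ".join(tags))

        if __name__ == "__main__":
            build_and_publish_if_tagged()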

    Read the article

  • How to integrate a PHP CMS with paypal so that only users who completed a payment can register and authenticate?

    - by ibiza
    I am currently using a PHP CMS - cmsmadesimple - in order to create a website where services will be sold. I intend to use PayPal 'Buy Now' buttons in order to offer a few packages that are renewable every 1 month or every 3 months and that grant access to the secure content of the website for a given period of time. Everything is going well so far, but I am somewhat at a loss over the user registration process, as I have a few constraints I would like to apply and it would be nice to automate the process if possible. Here are the constraints:
    - Users should be able to register on my website and choose a password themselves
    - Only users that paid should be able to register
    - Access permissions should be disabled automatically after the service period if the package is not renewed

    And here is the process I am thinking of:
    - User clicks 'buy' on my website
    - User is redirected to PayPal and completes the payment
    - The PayPal email used to pay should be returned to my server and somehow stored
    - If it is a new email, the user needs to register on my website (if it is a returning customer, the deactivation flag for stopped payment should be removed to give back access)
    - If a user does not renew his subscription, a deactivation flag should automatically be set on the email used, in order to lock access until the next payment

    Ideally, no human intervention is needed. What is the best way to implement all this? I am a bit at a loss. I found this article that explained a few things and even has a nice code snippet, except that I'm not sure where to plug it in. Thanks all.
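    The "returned to my server and somehow stored" step in the list above is exactly what the IPN callback gives you: PayPal POSTs the payer's email and payment status to your listener URL, you echo the message back with cmd=_notify-validate, and on a VERIFIED response you activate (or reactivate) the account. A minimal sketch, written in Python with Flask and requests rather than PHP purely for brevity; activate_subscription() is a hypothetical stand-in for the CMS/database side:

        import requests
        from flask import Flask, request

        app = Flask(__name__)
        PAYPAL_URL = "https://www.paypal.com/cgi-bin/webscr"

        def activate_subscription(email):
            # Hypothetical helper: create the pending registration for a new email,
            # or clear the deactivation flag for a returning customer.
            print("activate/reactivate access for", email)

        @app.route("/ipn", methods=["POST"])
        def ipn_listener():
            # Echo the complete, unaltered message back to PayPal for verification.
            payload = {"cmd": "_notify-validate"}
            payload.update(request.form.to_dict())
            verdict = requests.post(PAYPAL_URL, data=payload).text

            if verdict == "VERIFIED" and request.form.get("payment_status") == "Completed":
                activate_subscription(request.form.get("payer_email"))
            # Always return 200 so PayPal stops retrying the notification.
            return ""

        if __name__ == "__main__":
            app.run(port=8000)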

    Read the article

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allowed for rapid-fire rounds of development and refactoring, while always retaining a "production-ready" state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, much of his frustration directed toward Oracle DBAs in particular.

    In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and "still favored tools I'd have been embarrassed to use in the '80s". DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the "mother ship".

    I sat blinking in astonishment at the speaker's vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent. Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else?

    If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don't conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL Standards, where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who've attempted to apply a standard query that works on MySQL, or some other database, but doesn't produce the expected results on SQL Server, are advised to shun the Standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views, out of necessity. And yet, as database professionals, committed to supporting the best databases for the business, whatever they are now and in the future, surely our hearts should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause. And when we read messages in the Microsoft documentation informing us that we shouldn't rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will be mainly about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world.

    Facts:
    - there will be 2-3 developers,
    - at least one of the developers uses Windows, the rest use Linux,
    - there is one remote Linux-based machine, which should handle the test and production instances,
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on),
    - client: internal, frequent business logic changes, scrum, daily deployments.

    What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is:
    - developers code locally,
    - there is a Vagrant instance on every development machine, managed by Puppet; it contains the same Linux, Jenkins and Tomcat versions as the production machine,
    - while coding, the developer deploys to the Vagrant machine,
    - after a local merge to the test branch, Jenkins on the Vagrant machine handles the tests,
    - when everything is fine, the developer pushes commits and merges,
    - Jenkins on the remote machine pulls the commit from the test branch, runs tests and so on,
    - if everything looks green, Jenkins deploys to the test Tomcat instance.
    Deployment to production is manual (although it can be done using helper scripts), once the business logic has been tested by other divisions and everything looks fine to the client.

    Now, the real question: does the above make any sense? Things that I'm not sure about:
    - Remote machine: won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat?
    - Using Vagrant to develop in a PHP environment is just wise. Isn't it overkill when using Tomcat? I mean, is there a higher probability that Tomcat will act the same on every machine?
    - Is there any sense in having a local Jenkins on Vagrant?

    Read the article

  • DTLoggedExec 1.0.0.2 Released

    - by Davide Mauri
    These last days have been full of work, and the next days, up until the end of July, will follow the same ultra-busy scheme. This makes the improvement of DTLoggedExec a little bit slower than I'd like, but nonetheless on Friday I was able to release an updated version of the tool that fixes a bug and adds a very convenient option that makes the creation of execution logs even more straightforward:
    - [bugfix] Fixed a bug that prevented loading packages from the SSIS Package Store
    - [new] Added support for the {filename} placeholder in both the Data Flow Profiling and CSV Log Providers
    The added feature allows you to generate Data Flow profile logs and CSV logs that have the same name as the package that generated them, e.g.: DTLoggedExec.exec /FILE:"MyPackage.dtsx" /LPA:"FILE=C:\Log\{filename}_{date}_{time}.dtsCSVLog"

    Read the article

  • CI tests to enforce specific development rules - good practice?

    - by KeithS
    The following is all purely hypothetical and any particular portion of it may or may not accurately describe real persons or situations, whether living, dead or just pretending.

    Let's say I'm a senior dev or architect in charge of a dev team working on a project. This project includes a security library for user authentication/authorization of the application under development. The library must be available for developers to edit; however, I wish to "trust but verify" that coders are not doing things that could compromise the security of the finished system, and because this isn't my only responsibility I want it to be done in an automated way.

    As one example, let's say I have an interface that represents a user which has been authenticated by the system's security library. The interface exposes basic user info and a list of things the user is authorized to do (so that the client app doesn't have to keep asking the server "can I do this?"), all in an immutable fashion of course. There is only one implementation of this interface in production code, and for the purposes of this post we can say that all appropriate measures have been taken to ensure that this implementation can only be used by the one part of our code that needs to be able to create concretions of the interface. The coders have been instructed that this interface and its implementation are sacrosanct and any changes must go through me.

    However, those are just words; the security library's source is open for editing by necessity. Any of my devs could decide that this secured, private, hash-checked implementation needs to be public so that they could do X, or alternately they could create their own implementation of this public interface in a different library, exposing the hashing algorithm that provides the secure checksum, in order to do Y. I may not be made aware of these changes so that I can beat the developer over the head for it. An attacker could then find these little nuggets in an unobfuscated library of the compiled product, and exploit it to provide fake users and/or falsely-elevated administrative permissions, bypassing the entire security system.

    This possibility keeps me awake for a couple of nights, and then I create an automated test that reflectively checks the codebase for types deriving from the interface, and fails if it finds any that are not exactly what and where I expect them to be. I compile this test into a project under a separate folder of the VCS that only I have rights to commit to, have CI compile it as an external library of the main project, and set it up to run as part of the CI test suite for user commits.

    Now, I have an automated test under my complete control that will tell me (and everyone else) if the number of implementations increases without my involvement, or an implementation that I did know about has anything new added or has its modifiers or those of its members changed. I can then investigate further, and regain the opportunity to beat developers over the head as necessary.

    Is this considered "reasonable" to want to do in situations like this? Am I going to be seen in a negative light for going behind my devs' backs to ensure they aren't doing something they shouldn't?
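    For concreteness, the reflective guard test described above can be sketched in a few lines. The original context sounds like .NET, but the shape of the test is the same in any language; this illustration uses Python, and the package, interface and expected implementation names are all hypothetical:

        import importlib
        import inspect
        import pkgutil
        import unittest

        import myapp.security                      # hypothetical package holding the security library
        from myapp.security import AuthenticatedUser  # the sacrosanct interface

        # The one implementation (and module) we expect; anything else fails the build.
        EXPECTED = {("myapp.security.internal", "DefaultAuthenticatedUser")}

        class AuthImplementationGuard(unittest.TestCase):
            def test_no_unexpected_implementations(self):
                found = set()
                # Walk every module in the security package and collect subclasses.
                for mod_info in pkgutil.walk_packages(myapp.security.__path__,
                                                      prefix="myapp.security."):
                    module = importlib.import_module(mod_info.name)
                    for _, cls in inspect.getmembers(module, inspect.isclass):
                        if issubclass(cls, AuthenticatedUser) and cls is not AuthenticatedUser:
                            found.add((cls.__module__, cls.__name__))
                self.assertEqual(found, EXPECTED,
                                 "Unexpected AuthenticatedUser implementations: %s" % (found - EXPECTED))

        if __name__ == "__main__":
            unittest.main()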

    Read the article

  • VCS strategy with TeamCity and CI

    - by Luke Puplett
    I'm planning a strategy which seeks to allow automated deployment of a website codebase into QA and production on check-in. We're using the fabulous TeamCity. We want to control release to live production; i.e. not have every check-in on Trunk go live. So my plan is to use Trunk as QA. Committing to Trunk triggers deployment to QA. I will then have a Production branch which also triggers deployment on commit, to the live site. The idea is simply that Trunk represents the mainline codebase but it hasn't gone live yet. We can branch features and do daily pulls from Trunk into those feature branches as per normal and merge/re-integrate into Trunk when we're happy for it to go to QA. When the BAs give the nod, we then smash a bottle of champagne and merge Trunk to Production and out she goes. I've never seen it done like this. Other greenfield CI strategies involve hiding features and code from production via config - this codebase can't cope with that - or just having CI on QA and taking cuts and manually pushing to live. Does my plan sound alright?

    Read the article

  • ODBC in SSIS 2012

    - by jamiet
    In August 2011 the SQL Server client team published a blog post entitled Microsoft is Aligning with ODBC for Native Relational Data Access in which they basically said "OLE DB is the past, ODBC is the future. Deal with it.". From that blog post:

    We encourage you to adopt ODBC in the development of your new and future versions of your application. You don't need to change your existing applications using OLE DB, as they will continue to be supported on Denali throughout its lifecycle. While this gives you a large window of opportunity for changing your applications before the deprecation goes into effect, you may want to consider migrating those applications to ODBC as a part of your future roadmap.

    I recently undertook a project using SSIS 2012 and heeded that advice by opting to use ODBC Connection Managers rather than OLE DB Connection Managers. Unfortunately my finding was that the ODBC Connection Manager is not yet ready for primetime use in SSIS 2012. The main issue I found was that you can't populate an Object variable with a recordset when using an Execute SQL Task connecting to an ODBC data source; any attempt to do so will result in an error: "Disconnected recordsets are not available from ODBC connections." I have filed a bug on Connect at ODBC Connection Manager does not have same functionality as OLE DB. For this reason I strongly recommend that you don't make the move to ODBC Connection Managers in SSIS just yet; best to wait for the next version of SSIS before doing that.

    I found another couple of issues with the ODBC Connection Manager that are worth keeping in mind:
    - It doesn't recognise System Data Source Names (DSNs), only User DSNs (bug filed at ODBC System DSNs are not available in the ODBC Connection Manager). UPDATE: According to a comment on that Connect item this may only be a problem on 64bit.
    - In the OLE DB Connection Manager parameter ordinals are 0-based; in the ODBC Connection Manager they are 1-based (oh I just can't wait for the upgrade mess that ensues from this one!!!)

    You have been warned!
    @jamiet

    Read the article

  • How should I expand Jenkins to help me release? [closed]

    - by Amy Worrall
    Pushing new Sparkle releases of our internal apps is a pain. I have to make the build, make the release notes file, sign the .zip with the private key, and add a new entry to the appcast file tying everything together. I'd love it if Jenkins could help: use the commit messages for the release notes, and automatically do the rest of it. Should I be looking at writing a new Jenkins plugin, or using shell scripting, or is there something already that will do what I want? (A quick Google didn't find anything.)

    Read the article

  • Where do the responsibilities of build tools end and those of CI tools start?

    - by BrandonV
    In the delivery of software, and within the context of the deployment pipeline, where do the responsibilities of build tools, like Maven, end, and where do the responsibilities of CI tools start? As a rough example of a problem that arises: should build tools have any responsibility for the configuration and execution of acceptance tests when those are further down the pipeline than actually building the artifact? I'd like an answer framed in terms of deployment lifecycle phases rather than in specifics like my example, although examples would help bolster the answer.

    Read the article

  • Extension to add button "Report to Bugzilla"?

    - by Alois Mahdal
    We have:
    - an internal MediaWiki installation for internal documents (we don't use it in a completely wiki-like style; only maintainers should normally make changes)
    - an internal Bugzilla installation for internal issues, including issues with the internal documents on the MediaWiki site

    Now only the icing on the cake is missing: an automatic button that would appear on each page and be able to open a Bugzilla page and pre-fill some fields with information about that page (basically, the name). What I imagine as the best solution would be a sibling to the ubiquitous "[edit]" button, probably sitting next to it, like in this mock-up.

    Read the article

  • Coping with build order requirements in automated builds

    - by Derecho
    I have three Scala packages being built as separate sbt projects in separate repos with a dependency graph like this:

        M---->D
        ^     ^
        |     |
        +--+--+
           ^
           |
           S

    S is a service. M is a set of message classes shared between S and another service. D is a DAL used by S and the other service, and some of its model appears in the shared messages.

    If I make a breaking change to all three, and push them up to my Git repo, a build of S will be kicked off in Jenkins. The build will only be successful if, when S is pushed, M and D have already been pushed. Otherwise, Jenkins will find it doesn't have the right dependent package versions available. Even pushing them simultaneously wouldn't be enough -- the dependencies would have to be built and published before the dependent job was even started. Making the jobs dependent in Jenkins isn't enough, because that would just cause the previous version to be built, resulting in an artifact that doesn't have the needed version.

    Is there a way to set things up so that I don't have to remember to push things in the right order? The only way I can see it working is if there was a way that a build could go into a pending state if its dependencies weren't available yet. I feel like there's a simple solution I'm missing. Surely people deal with this a lot?

    Read the article

  • Error: "Suite Integration Toolkit Executable has stopped working" during VS2010 installation

    - by Daniel
    After uninstalling VS2012 and installing Visual Studio 2010 C# Express back, I was getting a strange warning: "2008 is not a valid number", so I decided to reinstall VS2010, but I couldn't uninstall it. I was getting the error: "Suite Integration Toolkit Executable has stopped working". I managed to uninstall VS2010 with the VS2010 Uninstall Tool, but now I want to install Visual Studio 2010 C# Express back. I'm getting the Suite Integration Toolkit[...] error again when I try to install it using the web installer. Could you help me resolve my problem? I also have the Tablet PC Components feature disabled in Control Panel / Programs and Utilities.

    Read the article

  • TFS Integration with Rational ClearQuest and Requirement Manager

    - by Kangkan
    I am working on an approach for integrating Rational (IBM Jazz) Requirement Manager (RM) and ClearQuest (CQ) with TFS. As the teams are moving from ClearCase to TFS, what we are looking at is still being able to manage the requirements in RM and manage testing using CQ. The flow will be something like:
    - Requirements are planned and detailed in RM
    - Work items are created in TFS, connected to the requirements in RM
    - Design and code are created using VS2010, with version control managed in TFS
    - Test plans and test cases are created in CQ (connected to requirements in RM)
    - Tests are run against builds in TFS
    - Test results are published in CQ against builds in TFS
    - Reports are run in RM, CQ and TFS that link up the items across the platforms
    I have started looking at the TFS Integration Platform, but I would like your guidance for an early resolution and a better solution approach.

    Read the article
