Search Results

Search found 12658 results on 507 pages for 'tfs api'.


  • TFS: comparing changesets

    - by Ram
    In TFS we can "compare" a file between two changesets. Is it possible to compare two changesets themselves? Say, take changeset "r" as a reference and compare it with changeset "s" to find the files/folders which were added/removed/deleted/edited?
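    A minimal sketch of one way to do this with the TFS version control client API, comparing the item list of a folder at the two changeset versions (the collection URL, server path, and changeset numbers are placeholders; note that a rename shows up as an add plus a delete in this naive approach):

        using System;
        using System.Linq;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.VersionControl.Client;

        class ChangesetDiff
        {
            static void Main()
            {
                var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var vcs = tfs.GetService<VersionControlServer>();

                // Snapshot the folder as it stood at changeset r and at changeset s
                var atR = vcs.GetItems("$/MyProject", new ChangesetVersionSpec(100), RecursionType.Full)
                             .Items.ToDictionary(i => i.ServerItem);
                var atS = vcs.GetItems("$/MyProject", new ChangesetVersionSpec(110), RecursionType.Full)
                             .Items.ToDictionary(i => i.ServerItem);

                foreach (var path in atS.Keys.Except(atR.Keys))
                    Console.WriteLine("Added:   " + path);
                foreach (var path in atR.Keys.Except(atS.Keys))
                    Console.WriteLine("Deleted: " + path);
                // Present in both snapshots but last changed by different changesets
                foreach (var path in atR.Keys.Intersect(atS.Keys)
                                             .Where(p => atR[p].ChangesetId != atS[p].ChangesetId))
                    Console.WriteLine("Edited:  " + path);
            }
        }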

    Read the article

  • How do I manage the technical debate over WCF vs. Web API?

    - by Saeed Neamati
    I'm managing a team of about 15 developers now, and we are stuck choosing a technology: the team has broken into two completely opposed camps, debating the use of WCF vs. Web API.

    Team A, which supports the use of Web API, brings forward these reasons:

    - Web API is just the modern way of writing services (Wikipedia)
    - WCF is overhead for HTTP; it's a solution for TCP, named pipes, and other protocols
    - WCF models are not POCO, because of the [DataContract] and [DataMember] attributes
    - SOAP is not as readable and handy as JSON
    - SOAP is overhead for the network compared to JSON (transported over HTTP)
    - No method overloading

    Team B, which supports the use of WCF, says:

    - WCF supports multiple protocols (via configuration)
    - WCF supports distributed transactions
    - Many good examples and success stories exist for WCF (while Web API is still young)
    - Duplex is excellent for two-way communication

    This debate is continuing, and I don't know what to do now. Personally, I think that we should use a tool only in its right place. In other words, we'd better use Web API if we want to expose a service over HTTP, but use WCF when it comes to TCP and duplex. Searching the Internet doesn't yield a solid result: many posts exist supporting WCF, but on the contrary we also find people complaining about it. I know that the nature of this question might sound arguable, but we need some good hints to decide. We're stuck at a point where choosing a technology by chance might make us regret it later. We want to choose with open eyes. Our usage would be mostly for the web, and we would expose our services over HTTP. In some cases (say 5 to 10 percent), though, we might need distributed transactions. What should I do now? How do I manage this debate in a constructive way?

    Read the article

  • IIS7 binding to subdomain causing authentication errors (TFS 2010)

    - by Tommy Jakobsen
    I'm trying to bind an IIS web site (Team Foundation Server 2010) to a subdomain, which is causing authentication errors. First I'll explain what I've done to set it up. This is the first time I've done this, so please correct me if I'm wrong.

    The web server is a stand-alone Windows Server 2008 R2 x64, running IIS7 with .NET Framework 4. I have the following A records pointing to my server:

    server.mydomain.com
    *.server.mydomain.com

    So all subdomains of server.mydomain.com point to the server. In IIS7 I have a web site (TFS 2010) on port 8080, with a virtual directory (named tfs) that is using Windows Authentication. I have one binding on the web site, pointing to all unassigned IP addresses on port 8080 with a host name of tfs.server.mydomain.com.

    Now, shouldn't I be able to access the virtual directory through http://tfs.server.mydomain.com/tfs? That is not working. However, I can access it through http://tfs.server.mydomain.com:8080/tfs. But it won't let me authenticate using a Windows account (Server\Username), even though the same account authenticates fine when I access the site through http://localhost:8080/tfs. What am I missing here?
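    For future readers: a browser only defaults to port 80 when no port is given, so http://tfs.server.mydomain.com/tfs can only work if the site also has a binding on port 80. A hedged sketch, assuming the IIS site is named "Team Foundation Server" (substitute your actual site name):

        %windir%\system32\inetsrv\appcmd.exe set site /site.name:"Team Foundation Server" /+bindings.[protocol='http',bindingInformation='*:80:tfs.server.mydomain.com']

    Separately, if the authentication failure happens when testing from the server itself, it may be the NTLM loopback check; KB896861 describes adding the host name to the BackConnectionHostNames registry value under HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 as the supported workaround.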

    Read the article

  • What is the correct term for - server/client database sync via API?

    - by Daniel
    Forgive the vague question title. I've been programming mobile apps for 3 years now, and I've gotten further from web services and server-side code than I probably should have. Anyway, I'm doing a personal project now and I want to create a web API for it. One of my requirements is to check for updates from my app, so I would send a timestamp to the API. I've used many APIs that my clients prepared for me, and only now am I appreciating their work! What is the term or technique used to create an API backed by a database which tracks changes via dates/timestamps, basically an effective way for me to query changes occurring since a timestamp? Simply put, I want my app to be able to call my API in order to sync new and changed data from the server to the app. The app would only have a timestamp of the last time it synced with the server. Would I have a log table for each data table in my database, which adds a record for each change? Then I could query all changes with a timestamp greater than the one passed to the API. Can anyone point me in the right direction on this?
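    The usual name for this is delta sync (or incremental sync) keyed on a last-modified timestamp. One common shape is an updated_at column on every synced table plus a soft-delete flag, since a plain timestamp query cannot report rows that no longer exist; the per-table log you describe is the standard alternative when soft deletes are not an option. A minimal sketch in C# with a hypothetical posts table:

        using System;
        using System.Data.SqlClient;

        class DeltaSync
        {
            // Returns rows created, changed, or soft-deleted since the client's last sync.
            // Table and column names are illustrative, not from any real schema.
            static SqlDataReader GetChangesSince(SqlConnection conn, DateTime lastSyncUtc)
            {
                var cmd = new SqlCommand(
                    "SELECT id, title, body, is_deleted, updated_at " +
                    "FROM posts WHERE updated_at > @since ORDER BY updated_at", conn);
                cmd.Parameters.AddWithValue("@since", lastSyncUtc);
                return cmd.ExecuteReader();
            }
        }

    The app stores the largest updated_at it received and passes it back as the timestamp on the next call.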

    Read the article

  • Ubuntu One API Java - how to use REST and AccessToken?

    - by Michael
    I am writing a Java app in Eclipse that backs up data to several consumer cloud services, encrypted and redundant. So far, I have successfully implemented the authentication process as it is described in the documentation. At this point, I do not know how to proceed. The next step would be implementing auth with the stored AccessToken, and afterwards implementing upload/download/listing functionality through the REST API. I think I have to store the string oauth.getSerialized(). How do I authenticate with this string afterwards? This, for example, does not work:

    AuthenticateResponse oauth = api.authenticate(serialized);
    api.setAuthorizer(new OAuthAuthorizer(oauth));

    Can someone please tell me how I can use the REST API with Java? There is no explanation or link in the developers area as far as I saw. And by the way, I wasted at least an hour trying to fix errors, because some needed libraries are listed after the example code. :/

    Read the article

  • Will GMail's Greasemonkey API help me: composing and sending messages, adding a button to the compose view, listing and sending drafts [migrated]

    - by Kent
    I want to create a Greasemonkey script for GMail and I've browsed through the GMail Greasemonkey 1.0 API documentation. I haven't fully understood what the API actually provides, which leads me to ask a few concrete questions. How will the API help if I want to:

    1. Add a button to the Compose view which executes some of my code.
    2. Compose and send a new message from scratch.
    3. List the current drafts.
    4. Pick a draft and send it.

    From what I can see it'll help me with 1 above, but I don't see any real interaction APIs that would help me with 2-4.

    Read the article

  • Should static parameters in an API be part of each method?

    - by jschoen
    I am currently creating a library that is a wrapper for an online API. The obvious end goal is to make it as easy for others to use as possible. As such, I am trying to determine the best approach when it comes to common parameters for the API. In my current situation there are three (consumer key, consumer secret, and authorization token). They are essentially needed in every API call. My question is: should I make these three parameters required for each method, or is there a better way? I see my current options as being:

    1. Place the parameters in each method call:

       public ApiObject callMethod(String consumerKey, String consumerSecret, String token, ...)

       This one seems reasonable, but awfully repetitive to me.

    2. Create a singleton class that the user must initialize before calling any API methods. This seems wrong, and would essentially limit them to accessing one account at a time via the API (which may be reasonable, I dunno).

    3. Make them place the parameters in a properties file in their project. That way I can load the properties and store them. This seems similar to the singleton to me, but they would not have to explicitly call something to initialize these values.

    Is there another option I am not seeing, or a more common practice in this situation that I should be following?
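    A fourth option, sketched below, is a plain (non-singleton) client object constructed once with the credentials; it removes the repetition of option 1 while still allowing several accounts side by side, which the singleton cannot. The names are illustrative, and the sketch is in C#, though the idea is identical in Java:

        using System;

        public class ApiObject { /* response placeholder */ }

        public class ApiClient
        {
            private readonly string _consumerKey;
            private readonly string _consumerSecret;
            private readonly string _token;

            public ApiClient(string consumerKey, string consumerSecret, string token)
            {
                _consumerKey = consumerKey;
                _consumerSecret = consumerSecret;
                _token = token;
            }

            // Credentials no longer clutter every method signature
            public ApiObject CallMethod(string someArg)
            {
                // ...sign the request with _consumerKey/_consumerSecret/_token and send it...
                throw new NotImplementedException();
            }
        }

        // Two accounts at once:
        // var clientA = new ApiClient(keyA, secretA, tokenA);
        // var clientB = new ApiClient(keyB, secretB, tokenB);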

    Read the article

  • REST API Best practice: How to accept as input a list of parameter values

    - by whatupwilly
    Hi All, We are launching a new REST API and I wanted some community input on best practices around how we should have input parameters formatted.

    Right now, our API is very JSON-centric (it only returns JSON). Whether we want/need to return XML is a separate issue. Because our output is JSON-centric, we have been going down a path where our inputs are a bit JSON-centric too, and I've been thinking that may be convenient for some but weird in general. For example, to get a few product details where multiple products can be pulled at once, we currently have:

    http://our.api.com/Product?id=["101404","7267261"]

    Should we simplify this as:

    http://our.api.com/Product?id=101404,7267261

    Or is having JSON input handy? More of a pain? We may want to accept both styles, but does that flexibility actually cause more confusion and headaches (maintainability, documentation, etc.)?

    A more complex case is when we want to offer more complex inputs. For example, if we want to allow multiple filters on search:

    http://our.api.com/Search?term=pumas&filters={"productType":["Clothing","Bags"],"color":["Black","Red"]}

    We don't necessarily want to put the filter types (e.g. productType and color) as request parameter names, like this:

    http://our.api.com/Search?term=pumas&productType=["Clothing","Bags"]&color=["Black","Red"]

    because we wanted to group all filter input together.

    In the end, does this really matter? It may well be that there are so many JSON utils out there that the input format just doesn't matter that much. I know our JavaScript clients making AJAX calls to the API may appreciate the JSON inputs to make their lives easier. Thanks, Will
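    For the simple multi-id case, one pragmatic reading: the plain comma-separated form costs almost nothing to support on the server, so you can start there and treat JSON arrays as an optional superset later. A hedged C# sketch of parsing the plain form:

        using System;
        using System.Linq;

        class IdParsing
        {
            // Parses "101404,7267261" (stray quotes tolerated); malformed entries are skipped
            static long[] ParseIds(string raw)
            {
                return raw.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
                          .Select(s => { long id; return long.TryParse(s.Trim().Trim('"'), out id) ? id : (long?)null; })
                          .Where(id => id.HasValue)
                          .Select(id => id.Value)
                          .ToArray();
            }

            static void Main()
            {
                Console.WriteLine(string.Join(", ", ParseIds("101404,7267261")));
            }
        }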

    Read the article

  • retrieve events where uid is the creator and application id is the admin - Facebook API

    - by Anup Parekh
    I would like to know if there is a Facebook API call to retrieve the events (eids) for all the events a user has created using my Facebook Connect application. The events are created using the following REST API call:

    https://api.facebook.com/method/events.create?event_info=' . $e_i . '&access_token=' . $cookie['access_token']

    $e_i is the event info array, where the 'host' value is set to 'Me' as follows:

    $event_info['host'] = 'Me';

    On Facebook events, under the "Created by:" section it lists "My user name, Application Name". I presume this is because I am the creator and the application is the admin, as stated in the REST API documentation: http://developers.facebook.com/docs/reference/rest/events.create/

    Unfortunately I cannot seem to find out how (in either the REST or the GRAPH API) to return a list of events where I am the creator and the application is the admin, as in the above scenario. If this is possible I would really appreciate some assistance with how it is done. So far I have tried:

    - REST API: events.get using uid=application_id. This only returns events created by the application, not those including the user who created them.
    - GRAPH API: https://graph.facebook.com/me/events?fields=owner&access_token=... This returns all the events for 'me', but not where the application is also the admin.

    It seems strange that there's no reference to the linkage between the event creator and the event admin through the API, yet Facebook is able to pull both and display them on the event details.

    Read the article

  • Routing trouble for RESTful API - Rails

    - by aressidi
    I'm building out an API for a web app that I've been working on for some time. I've started with the User model. The user portion of the API will allow remote clients to a) retrieve user data, b) update user information and c) create new users. I've gotten all of this to work, but it doesn't seem like it's set up correctly. Here are my questions:

    1. Should the API endpoint be users or user? What's the best practice?
    2. I have to add the action to the end of the URL, which I would expect to be picked up instead by the request type, so I don't have to specify it explicitly. How do I get my routes set up properly so as not to have to include the action name for protected actions?

    Let me give some examples:

    GET request for show - want it to work without the "show":
    curl -u rmbruno:blah http://app.local/api/users/show

    PUT request for update - want it to work without the "update":
    curl -X put -F 'user[forum_notifications]=true' -u rmbruno:blah http://app.local/api/users/update

    Create - works with or without 'create', which is what I want for all these actions:
    curl -X post -F 'user[login]=mamafatta' -F 'user[email][email protected]' -F 'user[password]=12345678' http://twye.local/api/users/

    How do I structure routes so as not to require the action name? Isn't that the common way to do RESTful APIs? Here is my route for the API now:

    map.namespace :api do |route|
      route.resources :users
      route.resources :weight
    end

    I'm using restful_authentication, which is handling the HTTP auth in curl. Any guidance on the routes issue and best practice on singular versus plural would be really helpful. Thanks! -A
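    A sketch of the convention the routes above already give you: with route.resources :users inside the :api namespace, the action is selected by the HTTP verb plus the URL shape, so no action name ever appears in the path; the record's id does. Roughly (ids and credentials as in the question):

        # GET  /api/users/1  -> users#show
        curl -u rmbruno:blah http://app.local/api/users/1

        # PUT  /api/users/1  -> users#update
        curl -X PUT -F 'user[forum_notifications]=true' -u rmbruno:blah http://app.local/api/users/1

        # POST /api/users    -> users#create
        curl -X POST -F 'user[login]=mamafatta' http://app.local/api/users

    On singular versus plural: plural (users) is the REST convention for a collection. Rails also offers the singular map.resource :user for a "the current user" style resource, which yields /api/user with no id, so the controller resolves the record from the authenticated user instead.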

    Read the article

  • Parsing back to 'messy' API structure

    - by Eric Fail
    I'm fetching data from an online database (REDCap) via its API, and the data gets delivered as a comma-separated string like this:

    RAW.API <- structure("id,event_arm,name,dob,pushed_text,pushed_calc,complete\n\"01\",\"event_1_arm_1\",\"John\",\"1979-05-01\",\"\",\"\",2\n\"01\",\"event_2_arm_1\",\"John\",\"2012-09-02\",\"abc\",\"123\",1\n\"01\",\"event_3_arm_1\",\"John\",\"2012-09-10\",\"\",\"\",2\n\"02\",\"event_1_arm_1\",\"Mary\",\"1951-09-10\",\"def\",\"456\",2\n\"02\",\"event_2_arm_1\",\"Mary\",\"1978-09-12\",\"\",\"\",2\n", "`Content-Type`" = structure(c("text/html", "utf-8"), .Names = c("", "charset")))

    I have this script that nicely parses it into a data frame:

    (df <- read.table(file = textConnection(RAW.API), header = TRUE, sep = ",", na.strings = "", stringsAsFactors = FALSE))

      id     event_arm name        dob pushed_text pushed_calc complete
    1  1 event_1_arm_1 John 1979-05-01        <NA>          NA        2
    2  1 event_2_arm_1 John 2012-09-02         abc         123        1
    3  1 event_3_arm_1 John 2012-09-10        <NA>          NA        2
    4  2 event_1_arm_1 Mary 1951-09-10         def         456        2
    5  2 event_2_arm_1 Mary 1978-09-12        <NA>          NA        2

    I then do some calculations and write them to pushed_text and pushed_calc, after which I need to format the data back into the messy comma-separated structure it came in. I imagine something like this:

    API.back <- `some magic command`(df, ...)
    identical(RAW.API, API.back)
    [1] TRUE

    Some command that can format the data frame I made, df, back to the structure that the raw API object, RAW.API, came in. Any help would be very appreciated.

    Read the article

  • TFS API Change WorkItem CreatedDate And ChangedDate To Historic Dates

    - by Tarun Arora
    There may be times when you need to modify the value of the fields "System.CreatedDate" and "System.ChangedDate" on a work item. Richard Hundhausen has a great blog with ample reasons why you may or may not want to set the values of these fields to historic dates. In this blog post I'll show you how to:

    1. Create a PBI work item linked to a Task work item, pre-setting the value of the field 'System.ChangedDate' to a historic date
    2. Change the value of the field 'System.CreatedDate' to a historic date
    3. Simulate the historic burn down of a task type work item in a sprint
    4. Explain the impact of updating the values of the fields CreatedDate and ChangedDate on the sprint burn down chart

    Rules of Play

    1. You need to be a member of the Project Collection Service Accounts group.
    2. You need to use 'WorkItemStoreFlags.BypassRules' when you instantiate the WorkItemStore service:

        // Instantiate Work Item Store with the BypassRules flag
        _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

    3. You cannot set the ChangedDate
       - less than the changed date of the previous revision, or
       - greater than the current date.

    Walkthrough

    The walkthrough contains 5 parts:

    00 – Required References
    01 – Connect to TFS Programmatically
    02 – Create a Work Item Programmatically
    03 – Set the values of the fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates
    04 – Results of our experiment

    Let's get started...

    00 – Required References

    Microsoft.TeamFoundation.dll
    Microsoft.TeamFoundation.Client.dll
    Microsoft.TeamFoundation.Common.dll
    Microsoft.TeamFoundation.WorkItemTracking.Client.dll

    01 – Connect to TFS Programmatically

    I have an in-depth blog post on how to connect to TFS programmatically in case you are interested. However, the code snippet below will enable you to connect to TFS using the Team Project Picker.

        // Services I need access to globally
        private static TfsTeamProjectCollection _tfs;
        private static ProjectInfo _selectedTeamProject;
        private static WorkItemStore _wis;

        // Connect to TFS using the Team Project Picker
        public static bool ConnectToTfs()
        {
            var isSelected = false;
            // The user is allowed to select only one project
            var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
            tfsPp.ShowDialog();
            // The TFS project collection
            _tfs = tfsPp.SelectedTeamProjectCollection;
            if (tfsPp.SelectedProjects.Any())
            {
                // The selected team project
                _selectedTeamProject = tfsPp.SelectedProjects[0];
                isSelected = true;
            }
            return isSelected;
        }

    02 – Create a Work Item Programmatically

    In the code snippet below I create a Product Backlog Item and a Task type work item and then link them together as parent and child. Note – you will have to set the ChangedDate to a historic date when you create the work item. Remember, if you try and set the ChangedDate to a value earlier than the last assigned one, you will receive the following exception:

    TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator.

    If you notice below, I have added a few seconds each time I have modified the 'ChangedDate', just to avoid running into the exception listed above.
        // Create linked work items and return their ids
        private static List<int> CreateWorkItemsProgrammatically()
        {
            // Instantiate Work Item Store with the BypassRules flag
            _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

            // List of work items to return
            var listOfWorkItems = new List<int>();

            // Create a new Product Backlog Item
            var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]);
            p.Title = "This is a new PBI";
            p.Description = "Description";
            p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
            p.AreaPath = _selectedTeamProject.Name;
            p["Effort"] = 10;

            // Just double checking that BypassRules is set to true
            if (_wis.BypassRules)
            {
                p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
            }

            if (p.Validate().Count == 0)
            {
                p.Save();
                listOfWorkItems.Add(p.Id);
            }
            else
            {
                Console.WriteLine(">> Following exception(s) encountered during work item save: ");
                foreach (var e in p.Validate())
                {
                    Console.WriteLine(" - '{0}' ", e);
                }
            }

            // Create a new Task
            var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]);
            t.Title = "This is a task";
            t.Description = "Task Description";
            t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
            t.AreaPath = _selectedTeamProject.Name;
            t["Remaining Work"] = 10;

            if (_wis.BypassRules)
            {
                t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
            }

            if (t.Validate().Count == 0)
            {
                t.Save();
                listOfWorkItems.Add(t.Id);
            }
            else
            {
                Console.WriteLine(">> Following exception(s) encountered during work item save: ");
                foreach (var e in t.Validate())
                {
                    Console.WriteLine(" - '{0}' ", e);
                }
            }

            // Link the task to the PBI as its child
            var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"];
            p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) { ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20) });

            if (_wis.BypassRules)
            {
                p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20);
            }

            if (p.Validate().Count == 0)
            {
                p.Save();
            }
            else
            {
                Console.WriteLine(">> Following exception(s) encountered during work item save: ");
                foreach (var e in p.Validate())
                {
                    Console.WriteLine(" - '{0}' ", e);
                }
            }

            return listOfWorkItems;
        }

    03 – Set the Value of "Created Date" and Change the Value of "Changed Date" to Historic Dates

    The CreatedDate can only be changed after a work item has been created. If you try and set the CreatedDate to a historic date at the time of creation of a work item, it will not work.
        // Let's do a work item effort burn down simulation by updating the ChangedDate & CreatedDate to historic values
        private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems)
        {
            foreach (var id in listOfWorkItems)
            {
                var wi = _wis.GetWorkItem(id);
                switch (wi.Type.Name)
                {
                    case "Product Backlog Item":
                        if (wi.State.ToLower() == "new")
                            wi.State = "Approved";
                        // Advance the changed date by a few seconds
                        wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        // Set the CreatedDate just after the ChangedDate
                        wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        wi.Save();
                        break;
                    case "Task":
                        // Advance the changed date by a few seconds
                        wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        // Set the CreatedDate just after the ChangedDate
                        wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        wi.Save();
                        break;
                }
            }

            // A mock sprint start date
            var sprintStart = DateTime.Today.AddDays(-5);
            // A mock sprint end date
            var sprintEnd = DateTime.Today.AddDays(5);
            // The total sprint duration
            var totalSprintDuration = (sprintEnd - sprintStart).Days;
            // How much of the sprint we have already covered
            var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days;
            // Get the effort assigned to our tasks
            var totalEffortRemaining = QueryTaskTotalEfforRemaining(listOfWorkItems);
            // Define how much effort to burn every day
            decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1
                ? 1
                : totalEffortRemaining / totalSprintDuration;
            // We have just created one task
            var totalNoOfTasks = 1;

            var simulation = sprintStart;
            var currentDate = DateTime.Today.Date;

            // Carry on till effort has been burned down from sprint start to today
            while (simulation.Date != currentDate.Date)
            {
                var dailyBurnRate1 = dailyBurnRate;
                // A fixed amount needs to be burned down each day
                while (dailyBurnRate1 > 0)
                {
                    // Burn down bit by bit from all unfinished task type work items
                    foreach (var id in listOfWorkItems)
                    {
                        var wi = _wis.GetWorkItem(id);
                        var isDirty = false;
                        // Set the status to In Progress
                        if (wi.State.ToLower() == "to do")
                        {
                            wi.State = "In Progress";
                            isDirty = true;
                        }
                        // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate
                        if (QueryTaskTotalEfforRemaining(listOfWorkItems) > dailyBurnRate1)
                        {
                            // If there is less than 1 unit of effort left in the task, burn it all
                            if (Convert.ToDecimal(wi["Remaining Work"]) <= 1)
                            {
                                var remaining = Convert.ToDecimal(wi["Remaining Work"]);
                                wi["Remaining Work"] = 0;
                                dailyBurnRate1 = dailyBurnRate1 - remaining;
                                isDirty = true;
                            }
                            else
                            {
                                // How much to burn from each task?
                                var toBurn = (dailyBurnRate / totalNoOfTasks) < 1
                                    ? 1
                                    : (dailyBurnRate / totalNoOfTasks);
                                // Check that the task has enough effort to allow burning toBurn effort
                                if (Convert.ToDecimal(wi["Remaining Work"]) >= toBurn)
                                {
                                    wi["Remaining Work"] = Convert.ToDecimal(wi["Remaining Work"]) - toBurn;
                                    dailyBurnRate1 = dailyBurnRate1 - toBurn;
                                    isDirty = true;
                                }
                                else
                                {
                                    var remaining = Convert.ToDecimal(wi["Remaining Work"]);
                                    wi["Remaining Work"] = 0;
                                    dailyBurnRate1 = dailyBurnRate1 - remaining;
                                    isDirty = true;
                                }
                            }
                        }
                        else
                        {
                            dailyBurnRate1 = 0;
                        }
                        if (isDirty)
                        {
                            if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date)
                            {
                                wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20);
                            }
                            else
                            {
                                wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20);
                            }
                            wi.Save();
                        }
                    }
                }
                // Advance the date by 1 to perform the burn down day by day
                simulation = Convert.ToDateTime(simulation).AddDays(1);
            }
        }

        // Get the total effort remaining in the current sprint
        private static decimal QueryTaskTotalEfforRemaining(IEnumerable<int> listOfWorkItems)
        {
            var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition(
                new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value));
            var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } };
            var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters);
            var results = q.RunLinkQuery();
            var wis = new List<WorkItem>();
            foreach (var result in results)
            {
                var _wi = _wis.GetWorkItem(result.TargetId);
                if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id))
                    wis.Add(_wi);
            }
            return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"]));
        }

    04 – The Results

    If you are still reading, the results are beautiful!

    Image 1 – Create a work item with the Changed Date pre-set to a historic date
    Image 2 – Set the CreatedDate to a historic date (same as the ChangedDate)
    Image 3 – Simulation of effort burn down on a task via the TFS API
    Image 4 – The history of changes on the task; essentially this task has burned 1 hour per day

    Sprint Burn Down Chart – What's Not Possible?

    The sprint burn down chart is calculated from System.AuthorizedDate, not from System.ChangedDate or System.CreatedDate. So, though you can change System.ChangedDate and System.CreatedDate to historic dates, you will not be able to synthesize the sprint burn down chart.

    Image 1 – By changing the Created Date and Changed Date to '18/Oct/2012' you would have expected the burn down to have been impacted, but it won't be, because the sprint burn down chart uses the value of the field 'System.AuthorizedDate' to calculate the unfinished work points; the AsOf queries used for that calculation read 'System.AuthorizedDate'.

    Image 2 – Using the above code I burned down 1 hour of effort per day over 5 days from the task work item. I would have expected the sprint burn down to show a constant burn down; instead it shows the effort exhausted on the 24th itself, simply because the burn down is calculated using 'System.AuthorizedDate'.

    Now you would ask... "Can I change the value of the field System.AuthorizedDate to a historic date?" Unfortunately, that's not possible!
    You will run into a ValidationException – "TF26194: The value for field 'Authorized Date' cannot be changed."

    Conclusion

    - You need to be a member of the Project Collection Service Accounts group in order to set the fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates.
    - You need to instantiate the WorkItemStore using the flag WorkItemStoreFlags.BypassRules.
    - The System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate, and you cannot reset it to a date later than the current date and time.
    - The System.CreatedDate can only be reset after a work item has been created; you cannot set it at the time of work item creation. The CreatedDate cannot be greater than the current date, but you can reset it to a date earlier than the existing value.
    - You will not be able to synthesize the sprint burn down chart by changing the values of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points, and those queries internally use System.AuthorizedDate, NOT System.ChangedDate or System.CreatedDate.
    - System.AuthorizedDate cannot be set to a historic date using the TFS API.

    Read other posts on using the TFS API here… Enjoy!

    Read the article

  • Is Google Maps API V3 good enough to use now?

    - by Haroldo
    Firstly, only reply if you have experience using API V3 (I can speculate myself!). I had a little go with V3 and it looked great, but I would love to hear from someone who's given it a bit of use before I start working with it and deploy it on a live site. I'm only looking to do very basic things:

    - put markers on a map
    - custom markers
    - info bubbles

    It all looks very easy with V3: http://www.svennerberg.com/2009/06/google-maps-api-3-the-basics/ Is it stable enough?

    Read the article

  • Add a wall post to a page or application wall as a page or application with the Facebook Graph API

    - by blauesocke
    Hi, I want to create a new wall post on an application page or a "normal" page with the Facebook Graph API. Is there a way to "post as page"? With the old REST API it worked like this:

    $facebook->api_client->stream_publish($message, NULL, $links, $targetPageId, $asPageId);

    So, if I passed equal IDs for $targetPageId and $asPageId, I was able to post a "real" wall post not caused by my own Facebook account. Thanks!
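    The Graph API equivalent of that trick, as far as I can tell, is to publish with the page's own access token rather than your user token: an admin's user token can list pages and their tokens via /me/accounts, and a post made to the page's feed with the page token appears as the page. A hedged curl sketch with placeholder ids:

        curl -F 'access_token=PAGE_ACCESS_TOKEN' \
             -F 'message=Posted as the page, not as my user account' \
             https://graph.facebook.com/PAGE_ID/feed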

    Read the article

  • Caching API Proxy Server

    - by edc1591
    I need to have a server that caches API responses and then forwards them along to a desktop app. I don't really have much experience with this, so I have a few questions. First of all, what kind of server should I get? I already use Linode for my websites, so ideally I'd like to go with them. I expect to get anywhere from 30 million to 40 million requests to my proxy server each month. Will a 512 Linode be able to support that? Also, is there any software out there that does this already, or will I have to write my own? The API responses are roughly 10 KB each on average, so doing the math, that's a lot of data each month. Should I just add more transfer to whatever server I buy, or can I somehow compress the API responses before sending them off to the user? Thanks for any help.
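    The transfer arithmetic is worth writing out: 30 to 40 million responses at roughly 10 KB each is about 300 to 400 GB of outbound traffic a month before compression, so compare that against the plan's transfer quota first; gzip typically shrinks JSON or XML payloads severalfold. As for existing software, a reverse proxy cache is the usual off-the-shelf answer (Varnish, Squid, or nginx's built-in proxy cache). A minimal nginx sketch with placeholder names:

        # nginx.conf fragment: cache upstream API responses for 10 minutes
        http {
            proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=apicache:10m
                             max_size=1g inactive=60m;
            server {
                listen 80;
                gzip on;                       # compress responses to clients
                gzip_types application/json;   # JSON is not in the default gzip list

                location / {
                    proxy_cache apicache;
                    proxy_cache_valid 200 10m;                   # keep 200s for 10 minutes
                    proxy_pass http://api.upstream.example.com;  # the real API (placeholder)
                }
            }
        }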

    Read the article

  • Controlling access to my API using SSH public key (not SSL)

    - by tharrison
    I have the challenge of implementing an API to be consumed by relatively non-technical clients -- pasting some sample code into their WordPress or homegrown PHP site is probably as much as we can ask. Asking them to install SSL on their servers ain't happening. So I am seeking a simple yet secure way to authenticate API clients. OAuth is the obvious solution, but I don't think it passes the "simple" test. Adding a client id and hashed secret as a parameter to the requests is closer -- it's not hard to do md5($secret . $client_id) or whatever the PHP would be. It seems to me that if client requests could use the same approach as SSH public keys (the client gives us a key from their servers), there should be some existing magic to make all of the subsequent transactions work transparently, just like regular HTTP API requests. I am still working this out (obviously :-), so if I am being an idiot, it would be nice to know why. Thanks!
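    A sketch of the signing idea this is circling around: keep a per-client secret on both ends, and have the client send its id, a timestamp, and an HMAC of the two computed with the secret; the server recomputes the HMAC with its stored copy and rejects mismatches and stale timestamps (replay protection). No SSL is needed for authenticity, though the payload itself still travels in the clear. Shown in C# with illustrative names; the PHP client side is the same few lines with hash_hmac():

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class RequestSigner
        {
            // Returns the signature the client attaches to the request,
            // e.g. ?client_id=...&ts=...&sig=...  (parameter names illustrative)
            public static string Sign(string clientId, string timestamp, string secret)
            {
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
                {
                    var payload = clientId + "\n" + timestamp;
                    return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
                }
            }
        }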

    Read the article

  • Work Item Visualizer for TFS 2010 - New Extension

    - by MikeParks
    I released another new extension to the Visual Studio Gallery today, called Work Item Visualizer for TFS 2010. I've only heard positive things about it so far; hopefully it stays that way :) Basically, it creates a diagram of all work items linked to a work item ID which the user specifies in a search box. This extension was coded using DGML (the same graph rendering language used for the Visual Studio 2010 Architecture Tools). It was pretty cool getting a chance to create something using some of the newest technology out there. Well, I just wanted to throw a blog up to get the word out on it a little more. If you're using Visual Studio 2010 with Team Foundation Server 2010, feel free to check it out! Thanks everyone.

    Download Link: http://visualstudiogallery.msdn.microsoft.com/en-us/a35b6010-750b-47f6-a7a5-41f0fa7294d2

    What it does:
    - Creates a DGML graph to visualize linked TFS work items by entering a work item ID in the toolbar search box

    How it benefits you:
    - Allows you to easily analyze the hierarchy of your TFS work items
    - Gain the ability to perform basic risk/impact analysis when creating or editing work items
    - Great for meetings in case you need to discuss the entire scope of linked work items
    - Easier project planning
    - Eliminates the need to create TFS queries or reports to view a tree of work items
    - Easily lets you see the entire tree of work items linked to the one you're working on

    Navigation Tips:
    - Use Ctrl + Mouse Wheel Scroll to zoom in and out
    - Use Ctrl + Left Mouse click (and hold) to move the document around
    - Right click on the DGML area for more options (like copying the image or viewing in groups)
    - Clicking on each node highlights that node and the links connected to it
    - Colors in the legend can be changed
    - When work item nodes are deleted, the view is automatically updated
    - Double clicking on a work item node will open up the work item's URL

    Try it out on work items that have several links and let us know what you think. A big thanks goes out to everyone working on the http://visualization.codeplex.com/ project for publishing the source code on CodePlex, which really helped me learn how DGML (Directed Graph Markup Language, new to the Visual Studio 2010 Architecture Tools) works!

    - Mike
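    For the curious, DGML is just XML, which is why rendering linked work items mostly comes down to emitting nodes and links. A graph of one parent work item and one child looks roughly like this (ids and labels invented for illustration):

        <?xml version="1.0" encoding="utf-8"?>
        <DirectedGraph xmlns="http://schemas.microsoft.com/vs/2009/dgml">
          <Nodes>
            <Node Id="WI42" Label="42 - User Story: Checkout flow" />
            <Node Id="WI43" Label="43 - Task: Write payment tests" />
          </Nodes>
          <Links>
            <Link Source="WI42" Target="WI43" Label="Child" />
          </Links>
        </DirectedGraph>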

    Read the article

  • How to undelete files in TFS

    - by Tarun Arora
    Have you accidentally deleted files from TFS and are looking for a way to undelete them? You don't have to undo your previous check-in to get the files back; there is a simpler way.

    01 – View deleted items in Source Control Explorer

    Have you been wondering how you can view deleted items? Go to Tools, Options, Source Control. Under Visual Studio Team Foundation Server, check 'Show deleted items in the Source Control Explorer'.

    02 – Undelete files from TFS

    Simply right click the deleted file or folder and from the context menu select 'Undelete'. This will roll back the files to the version before the delete operation was committed on them. The undeleted changes now show up as pending changes in your workspace. You need to right click the folder and select 'Check In Pending Changes' from the context menu to restore the files. Add a comment and check the files back in to TFS to undelete them. Right click the folder and view history; you'll see both the check-in that deleted the file/folder and the check-in that restored it.

    So, that's how you restore deleted files in TFS... Nice and simple... Right?
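    The same operation is available from the command line via tf.exe, which can be quicker when the deleted tree is large; a sketch, with a placeholder server path, run from within a mapped workspace:

        tf undelete /recursive "$/MyTeamProject/src/DeletedFolder"
        tf checkin /comment:"Restoring accidentally deleted folder"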

    Read the article

  • Cannot start TFS Build service: Error 1227

    - by Joni
    When I try to start the TFS 2008 Build service on port 9191 I get the following error message:

    Windows could not start the Visual Studio Team Foundation Build service on Local Computer. Error 1227: The network transport endpoint already has an address associated with it.

    If I use another port it works, but I need it to be the default, 9191. I tried the following commands:

    wcfhttpconfig.exe free 9191
    wcfhttpconfig.exe reserve Domain\ServiceAccount 9191

    Both commands succeed, but the service does not start. I would appreciate any help!
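    Error 1227 suggests something already holds that endpoint, so before fighting the reservations it is worth finding out what owns port 9191. A hedged diagnostic sequence from an elevated command prompt:

        REM Which process is listening on 9191? (note the PID in the last column)
        netstat -ano | findstr :9191

        REM Name that process (replace 1234 with the PID found above)
        tasklist /fi "PID eq 1234"

        REM Check for lingering HTTP.SYS URL reservations mentioning 9191
        netsh http show urlacl | findstr 9191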

    Read the article

  • TFS webtest for SSRS reports issue

    - by durilai
    I am creating web tests in TFS and trying to test report execution in SSRS. When I record the initial process, it includes Reserved.ReportViewerWebControl.axd files. These files are what is causing the problem. When I remove the files, the report does not display; if I keep the AXD files in, it works fine. The problem with keeping the AXD files is the ReportSession querystring variable that is included: if I run the test again after a while, the ReportSession has obviously changed. Any help is appreciated.

    Read the article

  • TFS Build Problem

    - by kumar
    I am getting this error message when I build my MVC application:

    c:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(0,0): warning MSB3245: Could not resolve this reference. Could not locate the assembly "System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.

    Do I need to give an AdditionalReferencePath in the TFSBuild.proj file?

    <ItemGroup>
        <!-- ADDITIONAL REFERENCE PATH
         The list of additional reference paths to use while resolving references.
         For example:
             <AdditionalReferencePath Include="C:\MyFolder\" />
             <AdditionalReferencePath Include="C:\MyFolder2\" />
        -->
    </ItemGroup>

    Basically it's saying a .dll file is missing. Thanks
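    If System.Web.Mvc simply isn't resolvable on the build agent, then yes, the commented-out block is the intended hook. Assuming MVC 2 is installed in its default location on the build machine (verify the path on your agent), that would look like:

        <ItemGroup>
            <AdditionalReferencePath Include="C:\Program Files\Microsoft ASP.NET\ASP.NET MVC 2\Assemblies\" />
        </ItemGroup>

    The more portable fix is to check System.Web.Mvc.dll into source control next to the solution and reference it from there, so the build no longer depends on what happens to be installed on the agent.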

    Read the article

  • Programmatically find TFS changes since last good build

    - by abigblackman
    I have several branches in TFS (dev, test, stage), and when I merge changes into the test branch I want the automated build and deploy script to find all the updated SQL files and deploy them to the test database. I thought I could do this by finding all the changesets associated with the build since the last good build, finding all the SQL files in those changesets, and deploying them. However, the changesets don't seem to be associated with the build for some reason, so my question is twofold:

    1) How do I ensure that a changeset is associated with a particular build?
    2) How can I get a list of files that have changed in the branch since the last good build?

    I have the last successful build, but I'm unsure how to get the files without checking the changesets (which, as mentioned above, are not associated with the build!).
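    For the second question, a sketch against the TFS 2010 API: fetch the newest succeeded build for the definition, then walk version control history from its SourceGetVersion up to latest and keep the .sql files. The collection URL, project, definition, and branch path are placeholders:

        using System;
        using System.Linq;
        using Microsoft.TeamFoundation.Build.Client;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.VersionControl.Client;

        class ChangedSqlSinceLastGoodBuild
        {
            static void Main()
            {
                var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var buildServer = tfs.GetService<IBuildServer>();
                var vcs = tfs.GetService<VersionControlServer>();

                // Newest successful build of the test-branch definition
                var spec = buildServer.CreateBuildDetailSpec("MyTeamProject", "Test-CI");
                spec.Status = BuildStatus.Succeeded;
                spec.QueryOrder = BuildQueryOrder.FinishTimeDescending;
                spec.MaxBuildsPerDefinition = 1;
                var lastGood = buildServer.QueryBuilds(spec).Builds.First();

                // History from that build's source version (e.g. "C1234") up to latest
                var from = VersionSpec.ParseSingleSpec(lastGood.SourceGetVersion, null);
                var history = vcs.QueryHistory("$/MyTeamProject/test", VersionSpec.Latest, 0,
                    RecursionType.Full, null, from, VersionSpec.Latest, int.MaxValue, true, false);

                var sqlFiles = history.Cast<Changeset>()
                    .SelectMany(cs => cs.Changes)
                    .Select(c => c.Item.ServerItem)
                    .Where(p => p.EndsWith(".sql", StringComparison.OrdinalIgnoreCase))
                    .Distinct();

                foreach (var f in sqlFiles) Console.WriteLine(f);
            }
        }

    Note the range is inclusive of the build's own changeset, so you may want to skip the changeset whose id equals the one embedded in SourceGetVersion.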

    Read the article

  • Building a Java project with TFS

    - by Jaco Pretorius
    A very small portion of our codebase is some legacy Java code. I'm trying to add a new build that would invoke Ant to build this project. The first problem is that TFS doesn't allow you to create a build that doesn't build a .NET solution. I got around this by copying a previous build file and adding an EndToEndIteration target, which is the entry point for the build. The problem is that none of the usual build variables are populated: $(BuildDirectory), $(SolutionRoot), all blank. This pretty much means I can't invoke my Ant task without hardcoding the paths (which I definitely can't do). Any ideas?
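    One hedged way around the empty variables: keep the stock build flow (it is the standard EndToEndIteration pipeline that populates $(SolutionRoot) and friends) and hook Ant in at one of the documented extension points of TFSBuild.proj instead of replacing the whole pipeline. A solution containing an empty .NET project can satisfy the "must build a solution" requirement while the real work happens in the Exec; the project folder name below is a placeholder:

        <!-- TFSBuild.proj fragment: invoke Ant after the (possibly empty) compile step -->
        <Target Name="AfterCompile">
          <Exec Command="ant -f build.xml"
                WorkingDirectory="$(SolutionRoot)\LegacyJavaProject" />
        </Target>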

    Read the article
