Search Results

Search found 5775 results on 231 pages for 'tfs 2005'.


  • SQL Reporting Services 2005 - Date field based on a user-entered date?

    - by Pierce
    Hi, I have a report in Reporting Services 2005 that has two date fields. The problem is that if users run it for a large span of time, it uses too much of our server's resources. Is it possible to allow the end user to enter only the start date, and have the end date be auto-populated/derived from that field? (For example, they enter the 1st of a month and the end date automatically changes to the last day of that month.)
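    A minimal sketch of one common approach, assuming the report exposes the dates as parameters named StartDate and EndDate (names illustrative): mark EndDate as hidden and give it a default-value expression derived from StartDate, for example:

        =DateAdd(DateInterval.Day, -1, DateAdd(DateInterval.Month, 1, Parameters!StartDate.Value))

    This rolls forward one month from the entered start date and then steps back one day, yielding the last day of the start date's month; because the user can no longer choose the end date, the report's time span stays bounded.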

    Read the article

  • Installing Team Foundation Server

    - by vzczc
    What are the best practices in setting up a new instance of TFS 2008 Workgroup edition? Specifically, the constraints are as follows:

    - Must install on an existing Windows Server 2008 64-bit machine
    - The TFS application layer is 32-bit only

    Should I install SQL Server 2008, SharePoint, and the application layer in a virtual instance of Windows Server 2008 or 2003 (I am already running Hyper-V), or split the layers, with the database on the host OS and the application layer in a virtual machine?

    Edit: Apparently, splitting the layers is not recommended.

    Read the article

  • How should I proceed to upgrade from TFS2008 to TFS2010

    - by Stephane
    Hi, we have a TFS 2008 server with multiple team projects (about 20). What is the best way to migrate to TFS 2010 without losing the history? I believe there are two ways (correct me if I'm wrong): installing a fresh TFS 2010 instance and importing the databases from TFS 2008, or an in-place upgrade. What is your recommendation, and why? Are there any issues I should be prepared to face? Any advice is welcome.

    Read the article

  • VS 2010 Team Foundation Server Issue with SharePoint

    - by Brian
    Hello, I'm trying to set up a VS 2010 Team Foundation project in TFS 2010. When I go to create the SharePoint site for the project, it errors out saying I don't have permissions. What permissions do I need to be granted, and is it a Windows permission or one within the SharePoint application? Thanks.

    Read the article

  • Enums for Build Flavor and Build Platform in Custom TFS 2010 Build Activities

    - by Ben Hughes
    Are there enums available in the .NET Framework that have values for build flavor (Debug, Release) and build platform (Any CPU, x86, x64, etc.)? I haven't been able to find anything on MSDN or Google, and it seems unnecessarily cumbersome to create my own. For context: I'm creating a custom TFS 2010 workflow activity that requires flavor and platform info. Currently these are entered in the build definition as free-form strings. The default TFS build template has a dialog box (accessible in the build definition editor under Process\1.Required\Items to Build\Configurations to Build) that provides drop-down menus with this info pre-populated; I'd like to do something similar.
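    For reference, a minimal sketch of the hand-rolled route, since (as the poster found) no such enums ship in the framework. All names here are illustrative, not framework types; the mapping just converts back to the display strings a build definition expects:

        using System;

        // Illustrative stand-ins for the missing framework enums.
        public enum BuildFlavor { Debug, Release }
        public enum BuildPlatform { AnyCpu, X86, X64 }

        public static class BuildPlatformExtensions
        {
            // Convert the enum back to the string used in build definitions.
            public static string ToDisplayString(this BuildPlatform platform)
            {
                switch (platform)
                {
                    case BuildPlatform.AnyCpu: return "Any CPU";
                    case BuildPlatform.X86:    return "x86";
                    case BuildPlatform.X64:    return "x64";
                    default: throw new ArgumentOutOfRangeException("platform");
                }
            }
        }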

    Read the article

  • TFS 2008 Build Script

    - by pm_2
    In TFS 2008, I am trying to modify a build script (TFSBuild.proj). I get the following warning:

        The element 'PropertyGroup' in namespace 'http://schemas.microsoft.com/developer/msbuild/2003' has invalid child element 'TeamProject' in namespace 'http://schemas.microsoft.com/developer/msbuild/2003'.

    The warning is accurate in one sense: the PropertyGroup element does indeed have a child called TeamProject. I'm assuming the warning is caused by the following line:

        <Project DefaultTargets="DesktopBuild" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">

    The XML namespace doesn't appear to exist as far as I can tell, although it looks like a standard one. Can anyone tell me whether this is a standard XML namespace, how or where I can view its contents, and whether the warning I am seeing may be caused by it?

    Read the article

  • Connecting to TFS via ASP.NET, jQuery and Phonegap

    - by Tony D.
    I want to develop a mobile application that would allow the user to run a suite of automated test cases housed in TFS. This is something I thought of this morning, so it's all still very preliminary, but I'm curious as to what would be the best route for something like this, or whether it's even possible. Because users' mobile devices will vary from iPhones to Android phones, I would probably want to incorporate something like PhoneGap for its cross-platform capabilities. My initial thought was to develop the server side in ASP.NET/C# (stored on a remote server) and have jQuery make calls to that server. I'm not sure if that would be the most appropriate way of handling this. I'm not too familiar with JSON, but I have seen it suggested on various sites for handling the returned data. Any thoughts or suggestions are appreciated. Thanks.
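    A rough sketch of the server half of that idea, as one possible starting point. Everything here is an assumption, not a prescribed design: the handler name, the TFS URL, the project name, and the use of the TFS 2010 test-management client assemblies (Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.TestManagement.Client). An ASP.NET generic handler could return JSON that a jQuery/PhoneGap front end calls:

        using System;
        using System.Linq;
        using System.Web;
        using System.Web.Script.Serialization;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.TestManagement.Client;

        // Returns the team project's test plans as JSON for the mobile client.
        public class TestPlansHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                using (var tfs = new TfsTeamProjectCollection(new Uri("http://tfsserver:8080/tfs")))
                {
                    var tms = tfs.GetService<ITestManagementService>();
                    var project = tms.GetTeamProject("MyProject");

                    var plans = project.TestPlans.Query("Select * From TestPlan")
                                       .Select(p => new { p.Id, p.Name })
                                       .ToList();

                    context.Response.ContentType = "application/json";
                    context.Response.Write(new JavaScriptSerializer().Serialize(plans));
                }
            }

            public bool IsReusable { get { return true; } }
        }

    The jQuery side would then be a $.getJSON call against the handler's URL; kicking off an actual test run would need more plumbing than shown here.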

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in T-SQL, in late 2006 he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting.

    On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change, and learn. Through all of this, he has never lost his focus on helping the larger community. I am honored that he has agreed to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan:

    SQL Server troubleshooting is all about correlating related pieces of information to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, "So-and-so application is slow; what's wrong with the SQL Server?" One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call the problem isn't related to SQL Server at all, but every now and then my initial quick checks turn up something that makes me start looking further.

    The SQL Server is slow, and we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_os_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() shows a high write stall rate for tempdb, while our user databases show high read stall rates on their data files. Checking some performance counters, Page Life Expectancy is bouncing up and down in the 50-150 range, the Free Pages counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, yet Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

    In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs: it is a dual duo-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB, and we think that this should be enough to handle the workload; or is it? This is a unique scenario, because there are a couple of things happening inside this system, and they all relate to the root cause of the performance problem.
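    For reference, the quick checks described above map to queries along these lines (a sketch; exact column lists trimmed for brevity):

        -- Tasks currently waiting, filtered to the I/O wait types mentioned above.
        SELECT session_id, wait_type, wait_duration_ms
        FROM sys.dm_os_waiting_tasks
        WHERE wait_type LIKE 'PAGEIOLATCH%'
           OR wait_type IN ('ASYNC_IO_COMPLETION', 'IO_COMPLETION');

        -- Average read/write stalls per database file.
        SELECT DB_NAME(database_id) AS database_name,
               file_id,
               io_stall_read_ms  / NULLIF(num_of_reads,  0) AS avg_read_stall_ms,
               io_stall_write_ms / NULLIF(num_of_writes, 0) AS avg_write_stall_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL);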
    If we were to query sys.dm_exec_query_stats for the TOP 10 queries by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O, and possibly CPU, against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() to see the statement text, and CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. OK, quick check: the plans are pretty big, and I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks like it is optimized as well as it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly!

    If we were to look at how much memory the plan cache is taking, by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I'll give you a second to go back and read that again. Yes, you read it correctly: it says 75% of the memory under 8GB. You don't have to take my word for it; you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely ad hoc workload against SQL Server, and this leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to completely flush the buffer cache, not just once initially, but a second time during the query's execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember that the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well.

    The real clue here is the memory counters for the instance: Page Life Expectancy, Free Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is constantly fluctuating between 50 and 150 is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. Add to that the consistent bottoming out of Free Pages, along with Free List Stalls/sec consistently spiking over 10, and you have the perfect memory pressure scenario. All of a sudden it may be that our disk subsystem is not the problem, but is instead an innocent bystander and victim.

    Side note: the Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64-bit computing and the easy availability of larger amounts of memory in SQL Servers.
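    A sketch of the two checks just described, using the SQL Server 2005/2008 DMV column names:

        -- Worst single executions currently in cache, with text and plan.
        SELECT TOP 10
               qs.max_physical_reads, qs.max_logical_reads, qs.max_worker_time,
               st.text, qp.query_plan
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
        ORDER BY qs.max_physical_reads DESC;

        -- How much memory the plan cache clerks are consuming, in MB.
        SELECT type,
               SUM(single_pages_kb + multi_pages_kb) / 1024 AS cache_mb
        FROM sys.dm_os_memory_clerks
        WHERE type IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP')
        GROUP BY type;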
    As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were published. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade.

    Back to our problem and its investigation. There are two things really wrong with this server: first, the plan cache is excessively consuming memory and is bloated in size, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on, there were a lot of single-use plans found in sys.dm_exec_cached_plans (where usecounts = 1). Single-use plans waste space in the plan cache, especially when they are ad hoc plans for statements with concatenated filter criteria that are not likely to recur with any frequency. SQL Server 2005 doesn't natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan, by creating a plan guide for the statement and then dropping that plan guide, in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single-use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses properly parameterized calls to the database. You didn't write the app, and you can't change its design? OK, well, you could try to force parameterization to occur by creating and keeping plan guides in place, or by forcing parameterization at the database level using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these helps, we could periodically dump the plan cache for that database, though as Kalen's blog post referenced above discusses, that is not an ideal scenario.

    The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we now have SQL Server 2005 Service Packs 2, 3, and 4, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post.

    In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our server admin and SAN admin, and there wasn't much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn't until I had this same problem occur on another system that I figured out how to really troubleshoot this down to the root cause. When I finally learned about this behavior and went back to look, I couldn't believe the size of the plan cache on the server with 16GB of memory. SQL Server is constantly telling a story to anyone who will listen.
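    For reference, a sketch of how the single-use plan bloat described above can be measured, along with the forced-parameterization fallback (the database name is a placeholder):

        -- Count and total size of single-use ad hoc plans sitting in the cache.
        SELECT COUNT(*) AS single_use_plans,
               SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
        FROM sys.dm_exec_cached_plans
        WHERE usecounts = 1
          AND objtype = 'Adhoc';

        -- Last-resort option when the application cannot be parameterized.
        ALTER DATABASE MyUserDatabase SET PARAMETERIZATION FORCED;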
    As the DBA, you have to sit back and listen to all that it's telling you, and then evaluate the big picture: how all the data you can gather from SQL Server about performance relates together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn's scripts collect the majority of the information that SQL Server has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you.

    When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced checking memory-related performance counters, but there was no real explanation of why checking memory counters is so important when looking at an I/O-related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM to point this out, and offer a couple of other points I had noted, so that he could add the information to his blog post if he found it useful. Instead, he asked that I write a guest post on it. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what's really the problem.

    This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Use VersionControlExt.Explorer outside Visual Studio

    - by Ian
    Hi All, I'm developing a TFS tool to assist the developers in our company. This tool needs to be able to "browse" the TFS server like the Source Control Explorer does. I believe that by using VersionControlExt.Explorer.SelectedItems, a UI will pop up that will let the user browse the TFS server (please correct me if I'm wrong). However, VersionControlExt is only accessible when developing inside Visual Studio (i.e. as a plugin), and unfortunately I am developing a Windows application that won't run inside VS. So the question is: can I use VersionControlExt outside of Visual Studio? If yes, how? Here's an attempt at using the Changeset Details dialog outside of Visual Studio:

        string path = System.IO.Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);

        Assembly vcControls = Assembly.LoadFile(path + @"\Microsoft.TeamFoundation.VersionControl.Controls.dll");
        Assembly vcClient = Assembly.LoadFile(path + @"\Microsoft.TeamFoundation.VersionControl.Client.dll");

        // The dialog type is internal to the Controls assembly, so it has to be
        // located and constructed through reflection.
        Type dialogChangesetDetailsType = vcControls.GetType(
            "Microsoft.TeamFoundation.VersionControl.Controls.DialogChangesetDetails", true);

        Type[] ctorTypes = new Type[3]
        {
            vcClient.GetType("Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer"),
            vcClient.GetType("Microsoft.TeamFoundation.VersionControl.Client.Changeset"),
            typeof(System.Boolean)
        };
        ConstructorInfo ctorInfo = dialogChangesetDetailsType.GetConstructor(ctorTypes);

        Object[] ctorObjects = new Object[3]
        {
            VersionControlHelper.CurrentVersionControlServer,
            uc.ChangeSet,
            true
        };
        Object oDialog = ctorInfo.Invoke(ctorObjects);

        dialogChangesetDetailsType.InvokeMember("ShowDialog", BindingFlags.InvokeMethod, null, oDialog, null);

    Read the article

  • QueryHistory against a codeplex project hangs indefinitely

    - by Robaticus
    I'm working on a TFS utility that gets the changesets for a particular project in TFS. I've got a home TFS 2010 server which I primarily use for testing, but I decided to give it a try against a CodePlex project to which I contribute; that way, I can test functionality against a larger number of changesets than I have locally. While it works fine in my environment, heading out over the wire to CodePlex has left me stumped. My application queries the history, but then, when trying to iterate through it (which is when the IEnumerable lazy-loads), my application hangs. Looking at IntelliTrace, I see a couple of first-chance exceptions saying that the "item doesn't exist at the specified version", which is patently not true, as I'm trying to get history for "$/" at VersionSpec.Latest. I also see two or three consecutive server 500 errors being returned to me after forcing debugging to pause. Other operations (like GetItems()) work fine, so I'm pretty sure authentication isn't an issue. Any thoughts? Here's the code:

        IEnumerable items = vcs.QueryHistory("$/", VersionSpec.Latest, 1, RecursionType.None,
                                             null, null, null, 5, true, false);

        List<ChangesetItem> returnList = new List<ChangesetItem>();
        foreach (Changeset cs in items) // hangs here on the first iteration
        {
            ChangesetItem newItem = new ChangesetItem()
            {
                ChangesetId = cs.ChangesetId,
                //ChangesetNote = cs.CheckinNote.Values[0].Value,
                Comment = cs.Comment,
                Committer = cs.Committer,
                CreationDate = cs.CreationDate
            };
            returnList.Add(newItem);
        }

    Read the article

  • TFS Disk Structure - "Add new folder" vs "Add solution"

    - by NealWalters
    Our organization recently got TFS 2008 set up and ready for our use. I have a practice team project available to play with. To simplify slightly, we previously organized our code on disk like this:

        - EC
          - Main
            - Database
              - someScript1.sql
              - someScript2.sql
            - Documents
              - ReleaseNotes_V1.doc
            - Source
              - Common
                - Company.EC.Common.Biztalk.Artifacts [folder]
                - Company.EC.Common.BizTalk.Components [folder]
                - Company.EC.Common.Biztalk.Deployment [folder]
                - Company.EC.BookTransfer.BizTalk.sln
              - BookTransfer
                - Company.EC.BookTransfer.BizTalk.Artifacts [folder]
                - Company.EC.BookTransfer.BizTalk.Components [folder]
                - Company.EC.BookTransfer.BizTalk.Components.UnitTest [folder]
                - Company.EC.BookTransfer.BizTalk.Deployment [folder]
                - Company.EC.BookTransfer.BizTalk.sln

    I'm trying to decide: do I want to check in the entire C:\EC directory, or do I want to open each solution and check it in? What are the pros and cons of each? It seems like by using the "Add Files/Folder" option I could check in everything at once, and it would match the disk structure. It also looks like if I check in each solution separately, each one creates another working folder in my workspace. I think if I check in via "Add Files/Folder" I will have one workspace, and that would be better. But most of the books and samples I see talk about checking in projects and solutions. P.S. I know I need to add more to my disk structure in accordance with the branch/merge guidelines, but that is not the question I'm asking here. Thanks, Neal Walters

    Read the article

  • If a user is part of two TFS security groups, why do they (appear to) receive the lesser security of the two?

    - by Jedidja
    Given two TFS security groups:

    - Admins: contains a set of Windows users
    - Friends: contains a Windows security group (which is also used as a mailing list)

    However, the people listed as Admins are also members of that Windows security group. It appears that when I lock down the Friends group to certain directories in TFS, the people in Admins also lose their privileges. Is there any way for users to receive the maximum access allowed across the multiple groups they belong to? Or have I perhaps set up my TFS security groups incorrectly?

    Read the article

  • Plastic SCM vs. Mercurial? Need Source Control for Visual Studio 2005 on Windows 7

    - by Pete Alvin
    1) Has anyone used Plastic SCM? Is it reliable?
    2) How does it compare with Mercurial? (Mercurial seems like a good candidate for DVCS on Windows; I tried Git and really didn't like it.)
    3) I really like TortoiseSVN, and I like a central model because of the peace of mind that if it's in the repository it's "safe" and tracked.

    Here is the question: is the excitement over distributed version control (DVCS) worth the hype?

    My environment: Windows 7; Windows development (Visual Studio 2005, SQL Server 2003); integration would be nice; two developers sharing the same code, pushing code to production servers almost daily.

    Read the article

  • How to determine where a Mailitem is being opened from in Outlook 2007 using VSTO 2005 SE

    - by Adam
    I have an Outlook 2007 add-in, built with VSTO 2005 SE, that allows users to save e-mails into our document management system. From within our system, users are able to open e-mails they have previously saved; however, when they do so, I need to prevent them from saving the e-mail again. I am trying to figure out how to determine whether the MailItem being opened is coming from the Outlook e-mail client or from an external source. I know that normally the EntryID property of a MailItem is null or an empty string when the MailItem has not been previously saved in Outlook; however, it seems that when a MailItem is opened from within our system, the EntryID is not null.
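    For reference, a minimal sketch of the EntryID heuristic described above (mailItem is assumed to be the Outlook.MailItem being opened, e.g. from the Inspectors.NewInspector event; as the poster notes, the check is not reliable when the message is re-opened from the external system, so treat it only as a starting point):

        // Items that have never been saved to an Outlook store normally
        // report a null or empty EntryID.
        bool isInOutlookStore = !string.IsNullOrEmpty(mailItem.EntryID);

        if (!isInOutlookStore)
        {
            // Likely opened from an external file or stream rather than
            // an Outlook folder.
        }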

    Read the article

  • Recommended ways to install USB drivers with a Visual Studio 2005 Setup Project?

    - by tjmoore
    I need to install a USB driver with an application, and I'm using a Visual Studio 2005 Setup Project to create the installer. The driver only needs to be installed far enough that when the USB device is plugged in, Windows will go off and do its "installing device" routine and finish the job. It would also be okay for setup to finish and have the user connect the device when required, with the driver install completing then; however, the user shouldn't be prompted to find a driver location. The USB drivers I have are available either as plain .sys / .inf files, or as a full installer (an .msi with a setup.exe wrapper). The full installer deals with combinations of operating systems and languages, but the application is for internal use and I can limit the target OS to Windows XP. Would it be better just to run the available installer via a custom action, or to install via the .inf file somehow (I'm not sure how to do that)?

    Read the article

  • Can I add an existing TFS 2008 build server to a new TFS 2010 server?

    - by driis
    My scenario is this: I am currently testing a new Team Foundation Server 2010 installation, which we will be moving to shortly. Upgrading builds to work with TFS 2010 and the new MSBuild seems like a lot of work (it does not work out of the box, at least). So what I would like to do is repurpose our old TFS server as a build server for TFS 2010; it already has the build services installed. I cannot figure out how to add an existing TFS 2008 build server to my new TFS 2010 installation so I can use the old server to run old builds. Is this possible? How can I do it?

    Read the article

  • When building a web application project, TFS 2008 builds two separate projects in the _PublishedWebSites folder

    - by Steve Johnson
    Hi all, I am trying to perform build automation on one of our web application projects built using VS 2008. The _PublishedWebSites folder contains two folders: Web and Deploy. I just want TFS 2008 to generate only the Deploy folder, not the Web folder. Here is my TFSBuild.proj file:

        <Project ToolsVersion="3.5" DefaultTargets="Compile"
                 xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" />
          <Import Project="$(MSBuildExtensionsPath)\Microsoft\WebDeployment\v9.0\Microsoft.WebDeployment.targets" />

          <ItemGroup>
            <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Development/Main/MySoftware.sln">
              <Targets></Targets>
              <Properties></Properties>
            </SolutionToBuild>
          </ItemGroup>

          <ItemGroup>
            <ConfigurationToBuild Include="Release|AnyCPU">
              <FlavorToBuild>Release</FlavorToBuild>
              <PlatformToBuild>Any CPU</PlatformToBuild>
            </ConfigurationToBuild>
          </ItemGroup>

          <!--<ItemGroup>
            <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Development/Main/MySoftware.sln">
              <Targets></Targets>
              <Properties></Properties>
            </SolutionToBuild>
          </ItemGroup>
          <ItemGroup>
            <ConfigurationToBuild Include="Release|x64">
              <FlavorToBuild>Release</FlavorToBuild>
              <PlatformToBuild>x64</PlatformToBuild>
            </ConfigurationToBuild>
          </ItemGroup>-->

          <ItemGroup>
            <AdditionalReferencePath Include="C:\3PR" />
          </ItemGroup>

          <Target Name="GetCopyToOutputDirectoryItems"
                  Outputs="@(AllItemsFullPathWithTargetPath)"
                  DependsOnTargets="AssignTargetPaths;_SplitProjectReferencesByFileExistence">

            <!-- Get items from child projects first. -->
            <MSBuild Projects="@(_MSBuildProjectReferenceExistent)"
                     Targets="GetCopyToOutputDirectoryItems"
                     Properties="%(_MSBuildProjectReferenceExistent.SetConfiguration); %(_MSBuildProjectReferenceExistent.SetPlatform)"
                     Condition="'@(_MSBuildProjectReferenceExistent)'!=''">
              <Output TaskParameter="TargetOutputs" ItemName="_AllChildProjectItemsWithTargetPathNotFiltered"/>
            </MSBuild>

            <!-- Remove duplicates. -->
            <RemoveDuplicates Inputs="@(_AllChildProjectItemsWithTargetPathNotFiltered)">
              <Output TaskParameter="Filtered" ItemName="_AllChildProjectItemsWithTargetPath"/>
            </RemoveDuplicates>

            <!-- Target outputs must be full paths because they will be consumed by a different project. -->
            <CreateItem Include="@(_AllChildProjectItemsWithTargetPath->'%(FullPath)')"
                        Exclude="$(BuildProjectFolderPath)/../../Development/Main/Web/Bin\*.pdb;
                                 *.refresh;
                                 *.vshost.exe;
                                 *.manifest;
                                 *.compiled;
                                 $(BuildProjectFolderPath)/../../Development/Main/Web/Auth/MySoftware.dll;
                                 $(BuildProjectFolderPath)/../../Development/Main/Web/Bin\App_Web_*.dll;"
                        Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='Always' or '%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'">
              <Output TaskParameter="Include" ItemName="AllItemsFullPathWithTargetPath"/>
              <Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectoryAlways"
                      Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='Always'"/>
              <Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectory"
                      Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"/>
            </CreateItem>
          </Target>

          <!-- To modify your build process, add your task inside one of the targets below and uncomment it.
               Other similar extension points exist, see Microsoft.WebDeployment.targets.
          <Target Name="BeforeBuild"> </Target>
          <Target Name="BeforeMerge"> </Target>
          <Target Name="AfterMerge"> </Target>
          <Target Name="AfterBuild"> </Target>
          -->
        </Project>

    I want the build to do everything the built-in Deploy project is doing for me, but I don't want the generated Web project, as it contains App_Web_xxxx.dll assemblies instead of a single compiled assembly. Please help. Thanks

    Read the article

  • Merging deletes in a Team Foundation Server baseless merge

    - by Justin Dearing
    I have two TFS branches that do not have a direct parent/child relationship in TFS. In a certain revision, 94 in my example, several items were deleted. I have been tasked with applying those deletes to the main branch, and I'd like to do so through a baseless merge. I tried the following command:

        tf merge /baseless /recursive /version:94 .\programs\program1 ..\Release\programs\program1

    Most of the items in the tree were marked as "merge", and some were marked as "merge, edit". However, none of the items were deleted at the destination. On a whim I tried to merge over a single delete, like so:

        tf merge /baseless /recursive /version:94 .\programs\program1\source1.cs ..\Release\programs\program1\source1.cs

    I got the following error message:

        The item [TFS_PATH] does not exist at the specified version.

    How do I do this? Is there a way to avoid making all those deletes myself?

    Read the article
