Search Results

Search found 18433 results on 738 pages for 'tfs power tools'.


  • Home networking problem between power line communication and Ethernet

    - by pixeline
    My network runs through the electrical wiring of the house and is organised as follows:
    - Ground floor: an ADSL router + network switch, using DHCP (address: 172.19.3.1); (Mac) PCs each connected via a powerline adapter (model: D-Link DHP-200).
    - First floor: one 8-port switch connected via a powerline adapter (model: D-Link DHP-200) (address unknown); 2 Mac PCs connected (via RJ45 network cables) to that switch, using DHCP.
    The problem: on the first floor, file transfers between PCs are fast and perfect. But if I try to transfer files from or to a computer on the ground floor, the speed is slow and eventually the transfer dies out.
    The question: I suspect the first-floor switch is creating some kind of barrier (a firewall?) preventing external PCs from accessing the PCs it is connected to. Am I right, and if so, how could I disable that barrier?

    Read the article

  • Most basic, low power home surveillance system

    - by cbp
    I am thinking of setting up a simple but effective surveillance system for my house that is:
    - very low-powered (preferably no PCs left running out of standby mode)
    - cheap.
    When motion (or sound) is detected, I would like it to:
    - send an email/phone alert to me
    - record and upload video to the web (in case they steal the camera).
    So I imagine a system where I leave a netbook PC in standby mode and have it woken up by a motion detector. This initiates software that sends alerts and periodically uploads recorded video to the web. The software part is easy for me, but I'm not really a gadget man, so I'd like some advice on using a motion sensor of some sort to wake up the PC. Does anyone have some good advice? I know there are a couple of questions dealing with this topic already (see here: http://superuser.com/questions/3054/looking-for-a-moderately-priced-home-surveillance-setup, and here: http://superuser.com/questions/2929/can-you-suggest-a-great-home-security-setup-anti-burglars-e-t-c); I am seeking more specific information with this question.
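    For the software half the asker calls easy, the shape of it might look like this; a minimal sketch, assuming OpenCV (cv2) for the webcam and an SMTP account for alerts, with the threshold, host names and addresses as placeholders rather than tested values:

        import smtplib
        import time
        from email.message import EmailMessage

        import cv2

        MOTION_PIXELS = 5000  # changed pixels that count as "motion" (placeholder)

        def send_alert(smtp_host, sender, recipient):
            msg = EmailMessage()
            msg["Subject"] = "Motion detected at home"
            msg["From"], msg["To"] = sender, recipient
            msg.set_content("Motion detected; recording started.")
            with smtplib.SMTP(smtp_host) as smtp:
                smtp.send_message(msg)

        def watch(camera_index=0):
            cam = cv2.VideoCapture(camera_index)
            ok, frame = cam.read()
            previous = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            while True:
                ok, frame = cam.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                delta = cv2.absdiff(previous, gray)   # frame differencing
                _, mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)
                if cv2.countNonZero(mask) > MOTION_PIXELS:
                    send_alert("smtp.example.com", "cam@example.com", "me@example.com")
                    # recording and uploading the video would start here
                previous = gray
                time.sleep(0.5)

    The open question, waking the netbook from standby, remains a hardware problem that a script like this does not solve.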

    Read the article

  • XFS and loss of data when power goes down

    - by culebrón
    Each time the electricity goes down, my desktop (without a UPS) loses some recently written data. Opera can lose settings, history, cache, or mail accounts (thank heavens I was wise to use IMAP), partially or all together. A whole file (completed and saved) in Geany appeared empty (and I hadn't committed it to Git). Rhythmbox lost all its podcast subscription data. I'm afraid there are other losses I just haven't noticed. What's the reason? An in-memory file cache, a RAM disk? Or non-atomic file writes in XFS? I have Ubuntu 9.10 and XFS on both the / and /home partitions. Is ext4 safer in such circumstances? I've read that ext3 is faster; is it as safe as ext4? Given that the apartment I rent is connected to a common bus with one safety switch for several apartments, and the neighbours, alone or together, overload it at least once every week, the lights go down often enough for this to be an issue.
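    A large part of the answer is the in-memory page cache combined with applications that rewrite files in place: the new contents are still in RAM when the power cuts, and on XFS (with its delayed allocation) an interrupted rewrite often surfaces as an empty file. Careful applications defend themselves with the write-to-temp, fsync, rename pattern; a minimal sketch of it, valid on any POSIX filesystem:

        import os
        import tempfile

        def atomic_write(path, data):
            """Replace path with data so a power cut leaves old or new, never empty."""
            directory = os.path.dirname(os.path.abspath(path))
            fd, tmp = tempfile.mkstemp(dir=directory)  # temp file on the same filesystem
            try:
                with os.fdopen(fd, "wb") as f:
                    f.write(data)
                    f.flush()
                    os.fsync(f.fileno())   # push data to disk, not just the cache
                os.rename(tmp, path)       # atomic replace on POSIX
                dir_fd = os.open(directory, os.O_RDONLY)
                try:
                    os.fsync(dir_fd)       # persist the rename itself
                finally:
                    os.close(dir_fd)
            except Exception:
                if os.path.exists(tmp):
                    os.unlink(tmp)
                raise

    Applications that skip this pattern (or the fsync) can lose data on any filesystem; a UPS is the only fix you can apply without patching every program.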

    Read the article

  • Computer Won't Boot After New Power Supply

    - by Haxelgem
    I recently bought a Rosewill 750-M PSU to replace an Antec Earthwatts 500. My computer worked fine before making the switch. After installing the Rosewill PSU, I attempted to boot. Fans spun up for a second, then turned off. After a second, they spun up again and stayed on. However, I had no video output. I put the old Antec PSU back in, and still had the same problem. I put another GPU in the computer, trying both PSUs, and still nothing. I switched out the monitor also, which didn't work. What should I do?

    Read the article

  • UPS for hard drive protection

    - by dimi
    I am in a place where the electricity is not ideal (old house, no ground): it occasionally shuts down, and supposedly there are some spikes. I am considering a UPS with the goal of increasing the safety of my personal data. My first priority is the health of my internal and external USB hard drives, which could be damaged by the power instability. I don't care that much about possible losses of unsaved work; I just want to give my system a minimum of time to turn off without any risk of physical damage to my hard drives. Would a cheap offline UPS suit my needs, or do I need a better one with automatic voltage regulation (AVR)? How critical is AVR for the hard drives? The external drives require their own power supplies and will be plugged directly into the UPS.

    Read the article

  • Automatically disable devices to save power and mitigate DMA attack in Windows 7

    - by Martheen Cahya Paulo
    Some OEMs include energy-saving apps that can switch off certain devices, such as the webcam or optical drive. Is there any brand-agnostic app out there that can do this? If the list of disabled devices were customizable, it would also be useful for mitigating DMA attacks (by disabling FireWire, PCMCIA, SDIO, Thunderbolt, etc.). Even better if it could recognize lock/logoff events, to mimic the OS X behaviour in mitigating DMA attacks.
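    No single well-known app did all of this at the time, but the disabling half can be scripted. A minimal sketch, assuming Microsoft's devcon.exe (from the Windows Driver Kit) is on the PATH; the hardware-ID patterns are placeholders to replace with output from "devcon hwids *":

        import subprocess

        # Hardware-ID patterns for DMA-capable buses -- placeholders, list your own.
        DMA_CAPABLE = [
            r"PCI\CC_0C00*",   # FireWire (IEEE 1394) controllers by PCI class code
            # add PCMCIA / SDIO / Thunderbolt IDs for your machine here
        ]

        def set_devices(enabled):
            verb = "enable" if enabled else "disable"
            for hwid in DMA_CAPABLE:
                subprocess.run(["devcon", verb, hwid], check=False)

    Wiring set_devices(False) to a Task Scheduler task triggered on workstation lock (and the reverse on unlock) approximates the OS X behaviour the question mentions.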

    Read the article

  • Linux: disable USB without disabling power

    - by Ergot
    TL;DR: I want to toggle between the following usages of a USB port via the terminal:
    - use like a normal USB port
    - only supply energy, to charge.
    Story: I recently got something like a Magna Doodle that can save your drawings to PDF, which can be moved to your computer via USB afterwards. Now the thing is that you can't save anything while it's plugged in. Since USB is the only way to charge it, and out of laziness, I want to keep it plugged in and toggle the data connection to the computer only when needed; it bugs me that I can't find a software solution. I noticed that it charges and is usable when it is plugged in while the computer is shut down or suspended, so I guess there's a way to do it.
    Tech info:
    - Computer: ThinkPad X201
    - Linux kernel: 3.14.5-1-ARCH
    - "Magna Doodle": Boogie Board Sync
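    One software-only angle to test (a sketch, not a guarantee: whether the Boogie Board keeps charging while deauthorized depends on its charging circuit): Linux exposes an "authorized" flag per USB device in sysfs, and writing 0 to it makes the kernel drop the data connection without cutting the port's power. Find the device ID (e.g. "2-1.2") by watching dmesg or ls /sys/bus/usb/devices/ when you plug it in.

        import sys

        def set_usb_data(device_id, enabled):
            """Toggle the kernel's data-level access to one USB device (needs root)."""
            path = "/sys/bus/usb/devices/%s/authorized" % device_id
            with open(path, "w") as f:
                f.write("1" if enabled else "0")

        if __name__ == "__main__":
            # usage: sudo python usbtoggle.py 2-1.2 on|off
            set_usb_data(sys.argv[1], sys.argv[2] == "on")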

    Read the article

  • Using a 20V power block on a 19V notebook

    - by user4444
    Is that dangerous:
    - for the computer (without the battery)?
    - for the cells?
    If possible, explain why.
    Edit: here are some more assumptions. Without the battery included, there is no risk of overheating the cells or overcharging them, but there is still some DC-to-DC conversion taking place on the motherboard. I assume this DC-to-DC stage is quite tolerant. What kind of trouble can I run into when using 20 V instead of 19 V? Overheating?
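    As a sanity check on the numbers: the mismatch is small in relative terms, and this is the figure to weigh against the input tolerance of the notebook's DC-to-DC stage (manufacturer-specific, and an assumption to verify rather than a guarantee):

        # 20 V into a 19 V input, expressed as relative overvoltage
        overvoltage = (20 - 19) / 19
        print("%.1f%%" % (overvoltage * 100))   # -> 5.3%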

    Read the article

  • What are good CLI tools for JSON?

    - by jasonmp85
    General problem: though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs; in other words, files where each line has next to no context.
    XML analogues: I recently needed to trawl through gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this:

        <foo user="me">
            <baz key="zoidberg" value="squid" />
            <baz key="leela" value="cyclops" />
            <baz key="fry" value="rube" />
        </foo>

    And let's say I want to produce a mapping from user to the average number of <baz>s per <foo>. Processing line by line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one-liner that accomplishes this task is likely to be inscrutable. Fortunately, in XML-land we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet, which is installing as I type this and which I look forward to using in the future.
    JSON solutions? So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this:

        {
            "firstName": "Bender",
            "lastName": "Robot",
            "age": 200,
            "address": { "streetAddress": "123", "city": "New York", "state": "NY", "postalCode": "1729" },
            "phoneNumber": [
                { "type": "home", "number": "666 555-1234" },
                { "type": "fax", "number": "666 555-4567" }
            ]
        }

    and wanted to find the average number of phone numbers each person had, I could do something like this with XPath: fn:avg(/fn:count(phoneNumber)).
    Questions:
    - Are there any command-line tools that can "query" JSON files in this way?
    - If you have to process a bunch of JSON files on a Unix command line, what tools do you use?
    - Heck, is there even work being done to make a query language like this for JSON?
    - If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas?
    I'm noticing more and more data serialization being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong, and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data, shell tools are needed.
    Related questions:
    - Grep and Sed Equivalent for XML Command Line Processing
    - Is there a query language for JSON?
    - JSONPath or other XPath-like utility for JSON/JavaScript; or jQuery JSON
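    Absent such a tool, the fallback is exactly the kind of one-off script the question hopes to retire. A minimal sketch of the phone-number average, assuming one JSON document per input line shaped like the sample above (adjust the parsing for a real layout):

        import json
        import sys

        count = total = 0
        for line in sys.stdin:
            line = line.strip()
            if not line:
                continue
            person = json.loads(line)                     # one record per line
            total += len(person.get("phoneNumber", []))   # field name from the sample
            count += 1

        if count:
            print("average phone numbers per person: %.2f" % (total / count))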

    Read the article

  • Configure TFS portal afterwards

    Update #1, January 8th, 2010: there is an updated post on this topic for Beta 2: http://www.ewaldhofman.nl/post/2009/12/10/Configure-TFS-portal-afterwards-Beta-2.aspx
    Update #2, October 10th, 2010: in the new Team Foundation Server Power Tools September 2010, there is now a command to create a portal:

        tfpt addprojectportal    Add or move portal for an existing team project

        Usage: tfpt addprojectportal /collection:uri
                                     /teamproject:"project name"
                                     /processtemplate:"template name"
                                     [/webapplication:"webappname"]
                                     [/relativepath:"pathfromwebapp"]
                                     [/validate]
                                     [/verbose]

        /collection       Required. URL of the Team Project Collection.
        /teamproject      Required. Specifies the name of the team project.
        /processtemplate  Required. Specifies the name of the process template.
        /webapplication   The name of the SharePoint Web Application. Must also specify relativepath.
        /relativepath     The path for the site relative to the root URL for the SharePoint Web Application. Must also specify webapplication.
        /validate         Specifies that the user inputs are to be validated. If specified, only validation will be done and no portal setting will be changed.
        /verbose          Switches on verbose mode.

    I created a new Team Project in TFS 2010 Beta 1 and chose not to configure SharePoint during the creation of the Team Project. Of course I found out fairly quickly that a portal for TFS is very useful, especially the Iteration and Product Backlog workbooks and the dashboard reports. This blog describes how you can configure the SharePoint portal afterwards.
    Update, September 9th, 2009: adding the portal afterwards is much easier than described below. Here are the steps.
    Step 1: Create a new temporary project (with a SharePoint site for it).
    - Open the Team Explorer.
    - Right-click the root node in the Team Explorer (i.e. the project collection).
    - Select "New team project" from the menu.
    - Walk through the wizard and make sure you check the option to create the portal (which is checked by default).
    Step 2: Disable the site for the new project.
    - Open the Team Explorer and select the team project you created in step 1.
    - In the menu, click Team -> Show Project Portal.
    - In the menu, click Team -> Team Project Settings -> Portal Settings... and a dialog pops up.
    - Uncheck the option "Enable team project portal" and confirm the dialog with OK.
    Step 3: Enable the site for the original project and point it to the newly created site.
    - Open the Team Explorer and select the team project you want to add the portal to.
    - In the menu, open Team -> Team Project Settings -> Portal Settings... The same dialog as in step 2 pops up.
    - Check the option "Enable team project portal" and click the "Configure URL" button; another dialog pops up.
    - In that dialog, select the TFS server in the web application combobox.
    - Enter in the relative site path the text "sites/[Project Collection Name]/[Team Project Name created in step 1]".
    - Confirm the "Specify an existing SharePoint Site" dialog with OK.
    - Check the "Reports and dashboards refer to data for this team project" option and confirm the "Project Portal Settings" dialog with OK.
    Step 4: Delete the temporary project you created. In Beta 1, I have found no way to delete a team project. Maybe it will be available in TFS 2010 Beta 2.
    Original post
    Step 1: Create the new portal site.
    - Go to the SharePoint site of your project collection (http://<servername>/sites/<project_collection_name>/default.aspx).
    - Click on Site Actions at the left side of the screen and choose the option Site Settings.
    - In the site settings, choose the Sites and Workspaces option and create a new site.
    - Enter the values for the title, the description and the site address, and choose the TFS2010 Agile Dashboard as the template.
    - Create the site by clicking on the Create button.
    Step 2: Integrate the portal site with the team project.
    - Open Visual Studio and open the Team Explorer (View -> Team Explorer).
    - Select in the Team Explorer tool window the Team Project for which you are creating a new portal.
    - Open the Project Portal Settings (Team -> Team Project Settings -> Portal Settings...).
    - Check the "Enable team project portal" checkbox and click on Configure URL... You will get a new dialog.
    - Enter the URL of the TFS server in the web application combobox and specify the relative site path: sites/<project collection>/<site name>. Confirm with OK.
    - Check in the Project Portal Settings dialog the checkbox "Reports and dashboards refer to data for this team project".
    - Confirm the settings with OK (this takes a while...).
    When you now browse to the portal, you will see that the dashboards are showing up with the data for the current team project.
    Step 3: Download the process template. To get a copy of the documents that are default in a team project, we need a fresh set of files that are not attached to a team project yet:
    - Start the Process Template Manager (Team -> Team Project Collection Settings -> Process Template Manager...).
    - Choose the Agile process template, click on Download, and choose a folder to download to.
    Step 4: Add the Product and Iteration Backlog.
    - Go to the Team Explorer in Visual Studio, make sure the team project is in the list of team projects, and expand the team project.
    - Right-click the Documents node and choose New Document Library. Enter "Shared Documents" and click on Add.
    - Right-click the Shared Documents node and choose Upload Document.
    - Go to the file location where you stored the process template from step 3, navigate to the subdirectory "Agile Process Template 5.0\MSF for Agile Software Development v5.0\Windows SharePoint Services\Shared Documents\Project Management", select the files "Iteration Backlog" and "Product Backlog" in the Open dialog, and click Open.
    Step 5: Bind the Iteration Backlog workbook to the team project.
    - Right-click the "Iteration Backlog" file, select Edit, and confirm any warning messages.
    - Place your cursor in cell A1 of the Iteration Backlog worksheet.
    - Switch to the Team ribbon and click New List. Select your Team Project and click Connect.
    - From the New List dialog, select the Iteration Backlog query in the Workbook Queries folder.
    The final step is to add a set of document properties that allow the workbook to communicate with the TFS reporting warehouse. Before we create the properties we need to collect some information about your project; the first piece of information comes from the table created in the previous step. As you collect these properties, copy them into Notepad so they can be used in later steps. How to retrieve each value:
    - [Table name]: switch to the Design ribbon and select the Table Name value in the Properties portion of the ribbon.
    - [Project GUID]: in the Visual Studio Team Explorer, right-click your Team Project and select Properties. Select the URL value and copy the GUID (the long value with lots of characters) at the end of the URL.
    - [Team Project name]: in the Properties dialog, select the Name field and copy the value.
    - [TFS server name]: in the Properties dialog, select the Server Name field and copy the value. [UPDATE] I have found that this is not correct: you need to specify the instance of your SQL Server, since the value is used to create a connection to the TFS cube.
    Switch back to the Iteration Backlog workbook. Click the Office button and select Prepare – Properties. Click the Document Properties – Server drop-down and select Advanced Properties. Switch to the Custom tab and add the following properties using the values you collected above:
    - [Table name]_ASServerName = [TFS server name]
    - [Table name]_ASDatabase = tfs_warehouse
    - [Table name]_TeamProjectName = [Team Project name]
    - [Table name]_TeamProjectId = [Project GUID]
    Click OK to close the properties dialog.
    It is possible that the Estimated Work (Hours) cell shows the #REF! value. To resolve that, change the formula to:

        =SUMIFS([Table name][Original Estimate]; [Table name][Iteration Path];CurrentIteration&"*";[Table name][Area Path];AreaPath&"*";[Table name][Work Item Type]; "Task")

    For example:

        =SUMIFS(VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Original Estimate]; VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Iteration Path];CurrentIteration&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Area Path];AreaPath&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Work Item Type]; "Task")

    Also the Total Remaining Work in the Individual Capacity table may contain #REF! values. To resolve that, change the formula to:

        =SUMIFS([Table name][Remaining Work]; [Table name][Iteration Path];CurrentIteration&"*";[Table name][Area Path];AreaPath&"*";[Table name][Assigned To];[Team Member];[Table name][Work Item Type]; "Task")

    For example:

        =SUMIFS(VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Remaining Work]; VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Iteration Path];CurrentIteration&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Area Path];AreaPath&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Assigned To];[Team Member];VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Work Item Type]; "Task")

    Save and close the workbook.
    Step 6: Bind the Product Backlog workbook to the team project. Repeat the binding steps from step 5 for this workbook too. In the Capacity worksheet, the formula for the Story Points might be missing. You can resolve it with:

        =IF([Iteration]="";"";SUMIFS([Table name][Story Points];[Table name][Iteration Path];[Iteration]&"*"))

    For example:

        =IF([Iteration]="";"";SUMIFS(VSTS_487f1e4c_db30_4302_b5e8_bd80195bc2ec[Story Points];VSTS_487f1e4c_db30_4302_b5e8_bd80195bc2ec[Iteration Path];[Iteration]&"*"))

    Read the article

  • Adding Fake Build Information in TFS 2010

    - by Jakob Ehn
    We have been using TFS 2010 Build for distributing a build in parallel on several agents, where the actual compilation is done by a bunch of external tools and compilers, i.e. no MSBuild involved. We are using the ParallelTemplate.xaml template that Jim Lamb blogged about previously, which distributes each configuration to a different agent. We developed custom activities for running these external compilers and collecting the information and errors by reading standard out/error and pushing it back to the build log. But since we aren't using MSBuild, we don't get the nice configuration summary section on the build summary page that we are used to. We would like to show the result of each configuration with any errors/warnings as usual, together with a link to the log file.
    TFS 2010 API to the rescue! What we need to do is add information to the InformationNode structure that is associated with every TFS build. The log that you normally see in the Log view is built up as a tree structure of IBuildInformationNode objects. This structure can be accessed by using the InformationNodeConverters class. This class also contains some helper methods for creating BuildProjectNodes, which contain the information about each project that was built, for example which configuration, the number of errors and warnings, and a link to the log file. Here is a code snippet that first creates a "fake" build from scratch and then adds two BuildProjectNodes, one for Debug|x86 and one for Release|x86, with some release information:

        TfsTeamProjectCollection collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://lt-jakob2010:8080/tfs"));
        IBuildServer buildServer = collection.GetService<IBuildServer>();
        var buildDef = buildServer.GetBuildDefinition("TeamProject", "BuildDefinition");

        // Create fake build with random build number
        var detail = buildDef.CreateManualBuild(new Random().Next().ToString());

        // Create Debug|x86 project summary
        IBuildProjectNode buildProjectNode = detail.Information.AddBuildProjectNode(DateTime.Now, "Debug", "MySolution.sln", "x86", "$/project/MySolution.sln", DateTime.Now, "Default");
        buildProjectNode.CompilationErrors = 1;
        buildProjectNode.CompilationWarnings = 1;
        buildProjectNode.Node.Children.AddBuildError("Compilation", "File1.cs", 12, 5, "", "Syntax error", DateTime.Now);
        buildProjectNode.Node.Children.AddBuildWarning("File2.cs", 3, 1, "", "Some warning", DateTime.Now, "Compilation");
        buildProjectNode.Node.Children.AddExternalLink("Log File", new Uri(@"\\server\share\logfiledebug.txt"));
        buildProjectNode.Save();

        // Create Release|x86 project summary
        buildProjectNode = detail.Information.AddBuildProjectNode(DateTime.Now, "Release", "MySolution.sln", "x86", "$/project/MySolution.sln", DateTime.Now, "Default");
        buildProjectNode.CompilationErrors = 0;
        buildProjectNode.CompilationWarnings = 0;
        buildProjectNode.Node.Children.AddExternalLink("Log File", new Uri(@"\\server\share\logfilerelease.txt"));
        buildProjectNode.Save();

        detail.Information.Save();
        detail.FinalizeStatus(BuildStatus.Failed);

    When running this code, it will create a build that looks like this: as you can see, it created two configurations with error and warning information and a link to a log file, just like a regular MSBuild would have done. This is very useful when using TFS 2010 Build in heterogeneous environments. It would also be possible to do this when running compilations completely outside TFS Build, but then push the results into TFS for easy access.
    You can push all information, including the compilation summary, drop location, test results, etc., using the API.

    Read the article

  • How can I calculate power consumption of my PC in Watt?

    - by Jitendra vyas
    How can I calculate the power consumption of my PC in watts, to prove to my landlord (I live on rent) that my PC doesn't consume much power? He blames me for the huge power bills, even though he too uses the fridge, the A/C, etc., and his son watches TV all the time. We share one power meter, so we split the bill 50/50, but he says I use the PC all the time and even keep it on at night for downloading. I just want to calculate the power consumption of my PC and then work out the monthly expense using my city's per-unit electricity price. My system:
    - Windows: Microsoft Windows XP Professional 5.1.2600 Service Pack 3
    - Memory (RAM): 960 MB
    - CPU: AMD Sempron(tm) Processor 2500+ at 1399.0 MHz
    - Sound card: Vinyl AC'97 Audio (WAVE)
    - Display adapters: VIA/S3G UniChrome Pro IGP | NetMeeting driver | RDPDD Chained DD
    - Monitor: one 17-inch LCD (LG), resolution 1280 x 768, 32-bit
    - Network adapters: Bluetooth Device (Personal Area Network) #2 | WAN (PPP/SLIP) Interface
    - CD/DVD drives: I: ELBY CLONEDRIVE
    - COM ports: COM1 | COM2 | COM7 | COM8 | COM9 | COM10; LPT ports: LPT1
    - Mouse: 3-button wheel mouse
    - Hard disks: C: 29.3GB | D: 29.3GB | E: 97.7GB | F: 97.7GB | G: 211.9GB
    - USB controllers: 5 host controllers; FireWire (1394): 1 host controller
    - Manufacturer: Phoenix Technologies, LTD; product make: MS-7142
    - AC power status: online
    - BIOS info: AT/AT COMPATIBLE | 01/18/06 | VIAK8M - 42302e31
    - Motherboard: MICRO-STAR INTERNATIONAL CO., LTD MS-7142
    - Modem: ZTE USB Modem FFFE CDMA #2
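    A plug-in power meter between the PC and the wall is the authoritative answer (it reads true watts), but the bill arithmetic itself is simple. A sketch, where the 250 W figure is only a placeholder for a rig like this, not a measurement:

        def monthly_cost(watts, hours_per_day, price_per_unit):
            """One unit = 1 kWh; returns the monthly cost in the same currency as the price."""
            units_per_month = watts / 1000.0 * hours_per_day * 30
            return units_per_month * price_per_unit

        # e.g. 250 W running 10 h/day at 5 per unit -> 75 units -> 375 per month
        print(monthly_cost(250, 10, 5))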

    Read the article

  • TFS 2010–Bridging the gap between developers and testers

    - by guybarrette
    Last fall, the Montreal .NET Community presented a full day on ALM with a session called "Bridging the gap between developers and testers". It was a huge success. TFS experts Etienne Tremblay and Vincent Grondin presented this session again at the Ottawa user group in January, and this time the event was recorded by DevTeach in collaboration with Microsoft. This 7-hour training is broken into 13 videos that you can watch online for free on the DevTeach website. If you're interested in TFS, how to migrate from VSS, the TFS testing tools, how to set up the TFS testing lab, how to test a UI and how to automate the tests, this is a must-see series. Here's the segment list:
    - Intro
    - Migrating from VSS to TFS
    - Automating the build
    - Where's our backlog?
    - Adding a tester to the team
    - Tester at work
    - Bridging the gap
    - Stop, we have a problem!
    - Let's get back on track
    - Multi-environment testing
    - Testing in the lab
    - UI automation
    - Validating UI automation
    - Look boss, no hands!
    http://www.devteach.com/ALM-TFS2010-Bridgingthegap.aspx

    Read the article

  • TechEd 2012: Day 3 – Morning TFS

    - by Tim Murphy
    My morning sessions for day three were dominated by Team Foundation Server. This has been a hot topic for our clients lately, so the topic really struck a chord. The speaker for the first session was from Boeing. It was nice to hear how a company mixes both agile and waterfall project management. The approaches he presented were very pragmatic. For their needs, reporting was the crucial part of their decision to use TFS. This was interesting, since reporting is probably the last aspect most shops would think about. The challenge of getting users to adopt TFS was brought up by the audience. As with the other discussion points, he took a very level-headed stance: the approach he prescribed was to eat the elephant a bite at a time instead of all at once. If you try to convert your entire shop at once, the culture shock will most likely kill the effort. Another key point he reminded us of is that you need to make sure standards and compliance are taken into account when you set up TFS. If you don't implement the tool, and the processes around it, in a way that complies with the standards bodies that govern your business, you are in for a world of hurt. Ultimately, the reason they chose TFS was that it was the first tool that incorporated all the ALM features they needed, and it reduced licensing costs compared with all the different tools they would otherwise need to buy to complete the same tasks. They got to this point by doing an industry evaluation. Although TFS came out on top, he said that it still has a big gap in the Java area; of course, in this market there are vendors helping to close that gap.
    The second session was on how continuous feedback in agile is a new focus in VS2012. The problems they intended to address included cycle time, average time to repair, and root cause analysis. The speakers fired features at us as if they were firing a machine gun. I will just say that I am looking forward to digging into the product after seeing this presentation. Beyond that, I will simply list some of the key features that caught my attention:
    - ability to link documents into tasks as artifacts
    - web access portal
    - PowerPoint storyboards
    - exploratory testing
    - request feedback (allows users to record notes, screenshots and video/audio)
    See you after the second half.

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of lab environment called a Standard Environment. This allows you to set up a full Build Deploy Test (BDT) workflow that builds your application, deploys it to your target machine(s) and then runs a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps, so I thought it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.
    High-level steps:
    1. Install and configure Visual Studio 2012 Test Controller on the target server
    2. Create a Standard Environment
    3. Create a Test Plan with a Test Case
    4. Run the Test Case
    5. Create a Coded UI Test from the Test Case
    6. Associate the Coded UI Test with the Test Case
    7. Create a Build Definition using the LabDefaultTemplate
    1. Install and Configure Visual Studio 2012 Test Controller on the Target Server
    First of all, note that you do not have to have the test controller running on the target server. It can run on another server, as long as the test agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server, etc.), the test controller can be installed either on one of those machines or on a dedicated machine. To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server, since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool launches automatically (if it doesn't, you can start it from the Start menu). Here you supply the credentials of the account running the test controller service; note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration is done. You might get some warnings at the end that might or might not cause a problem later; be sure to read them carefully. For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.
    2. Create a Standard Environment
    Now you need to create a lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment.
    - Open MTM and go to Lab Center.
    - Click New to create a new environment.
    - Enter a name for the environment. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case).
    - On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com) and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app-tier server. You also need to supply an account that is a local administrator on the target server; this is needed in order to automatically install a test agent on the machine later.
    - On the next page, you can add tags to the machine. This is not needed in this scenario, so go to the next page.
    - Here you specify which test controller to use and that you want to run UI tests on this environment. This will result in a test agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample). If you can't see it, you might have selected a different TFS project collection.
    - Press Next twice and then Verify to verify all the settings, then press Finish.
    This will now create and prepare the environment, which means that it will remotely install a test agent on the machine. As part of this installation, the remote server will be restarted.
    3-5. Create Test Plan, Run Test Case, Create Coded UI Test
    I will not cover steps 3-5 here; there is plenty of information on how to create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application, and it contains, among other things, a test suite called Automated Tests, where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests.
    6. Associate the Coded UI Test with the Test Case
    OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your coded UI test is already automated, but the meaning of the term here is that you link your coded UI test to an existing test case, thereby making the test case automated. And the test case should be part of the test suite that we will run during the BDT.
    - Open the solution that contains the coded UI test method.
    - Open the Test Case work item that you want to automate.
    - Go to the Associated Automation tab and click on the "..." button.
    - Select the coded UI test that corresponds to the test case.
    - Press OK and save the test case.
    For more information about associating an automated test with a test case, see How to: Associate an Automated Test with a Test Case.
    7. Create a Build Definition using the LabDefaultTemplate
    Now we are ready to create a build definition that implements the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process runs on the target server, you will have fewer problems with permissions and firewalls than if you were to deploy remotely. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application.
    - Go to the Builds hub in Team Explorer and select New Build Definition.
    - Give the build definition a meaningful name; here I called it MyApplication.Deploy.
    - Set the trigger to Manual.
    - Define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build, but TFS doesn't allow you to save a build definition without adding at least one mapping.
    - On Build Defaults, select the build controller. Since this build won't actually produce any output, you can select the "This build does not copy output files to a drop folder" option.
    - On the Process tab, select the LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the "..." button on the Lab Process Settings property.
    - First, select the environment that you created before.
    - Select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it basically just runs through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition.
    - On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If you have databases, for example, you can use sqlpackage.exe to deploy them; if you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server; then you would just execute that deployment batch file here in one single step.
    The workflow defines some variables that are useful when running the deployments:
    - $(BuildLocation): the full path to where your build files are located
    - $(InternalComputerName_<VM Name>): the computer name for a virtual machine in an SCVMM environment
    - $(ComputerName_<VM Name>): the fully qualified domain name of the virtual machine
    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission on this drop location. You can find more information here on Building your Deployment Scripts.
    On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also select the automated test settings (called TargetServer in this case) that I have defined for my test plan; in there I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests.
    We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should look something like this:

    Read the article

  • How do I make ESXi 5.0 shut down virtual machines when the physical power button is pushed?

    - by Pawel Sawicki
    I have a home NAS/DLNA server built out of an HP MicroServer with the HP-branded VMware ESXi 5.0.0 build-623860 (free license) installed. Being a home media center, I'd like it to be "manageable" by all my household members. This requires that it can be powered on and off (including all the VMs inside) by anybody with physical access to the server, by simply pressing the power button on the chassis. The "startup" part is easy to achieve; all I had to do was configure the startup/shutdown policy so that once the server powers up, all VMs start as well, and that's exactly what I need. Well... it did work up until 5.0.0U1, but that's a different story: http://blogs.vmware.com/vsphere/2012/03/free-esxi-hypervisor-auto-start-breaks-with-50-update-1.html
    Unfortunately, pressing the power button doesn't gracefully shut down the guest machines; they are terminated instead. If I run the "shut down" command from the vSphere Client interface, the guests are powered off properly. I'd like to get the same end result when the physical power button is pressed. I've poked around a bit on the ESXi server. There's a /sbin/shutdown.sh script that seemed to do exactly what I need... but after trying it, it does exactly what the power-off button does. /etc/inittab contains an entry for the "shutdown" level, but I suppose it's not hooked to the power button. I can't find any ACPI-related configuration, nor do I know what exactly is executed when the power button is pressed. Does anybody have a clue how I can make the VMs shut down automatically when the physical power button is pressed to turn off the computer?
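    The graceful-shutdown half can at least be scripted on the host. A minimal sketch using vim-cmd (present in the ESXi shell; power.shutdown asks the guest OS to shut down and needs VMware Tools in the guest, power.off would be the hard variant). How to hook it to the ACPI power button is the unanswered part of the question, so treat wiring this into /sbin/shutdown.sh or an SSH-triggered job as an assumption to verify on your build:

        import re
        import subprocess
        import time

        def all_vm_ids():
            # 'vim-cmd vmsvc/getallvms' prints a table whose rows start with the Vmid
            out = subprocess.check_output(["vim-cmd", "vmsvc/getallvms"])
            return re.findall(r"^(\d+)\s", out.decode("utf-8", "replace"), re.M)

        for vmid in all_vm_ids():
            state = subprocess.check_output(["vim-cmd", "vmsvc/power.getstate", vmid])
            if b"Powered on" in state:
                subprocess.call(["vim-cmd", "vmsvc/power.shutdown", vmid])

        time.sleep(120)  # crude grace period before the host itself halts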

    Read the article

  • Get your TFS 2012 task board demo ready in under 1 minute

    - by Tarun Arora
    Release notes – http://tfsdemosetup.codeplex.com/ | Download | Source Code | Report a Bug | Ideas
    In this blog post, I'll show you how to use the 'TfsDemoSetup' application to configure and set up the TFS 2012 task board for a demo in well under 1 minute.
    Step 1 – Note what you get with a newly created Team Project
    1. Create a new Team Project on TFS Preview.
    2. Click Create Project.
    3. The project creation completes.
    4. Open the team web access and have a look at the home page. Note: since I created the project, I am the only team member; a default team by the name AdventureWorks Team has been created; a few sprints have been assigned to the default team, but no dates for sprint start and end have been specified; and a default area path for the team is missing.
    Step 2 – Download the TFS Demo Setup console application from CodePlex
    1. Navigate to the TFS Demo Setup project on CodePlex: https://tfsdemosetup.codeplex.com/
    2. Download "Instructions" and TFSDemo_<version>.
    3. Follow the steps in the Instructions.txt file.
    4. Unzip TFSDemo_<version> and open the target folder. There are two important files in this folder: DemoDictionary.xml, which contains the settings using which the demo environment will be set up, and SetupTfsDemo.exe, which runs the TFS demo environment setup application.
    Step 3 – Configure the setup (i.e. team name, members, sprint dates, etc.)
    1. Open up DemoDictionary.xml.
    2. Walk through DemoDictionary.xml (a sample file is sketched below):
    a. Basic team details: <Name> – the name of the team; <Description> – a description to go with the team; <SetAsDefaultTeam> – accepts "true/false"; when set to true, the newly created team will be set as the default team in the project; <BacklogIterationPath> – a backlog iteration path for the team.
    b. Iterations – the iterations you specify here will be set as the team's iterations: <Iterations> – accepts multiple <Iteration> nodes; <Iteration> – the most granular level of an iteration; <Path> – the path to the sprint, with sample values "Release 1\Sprint 1" or "Release 2\Sprint 2"; <StartDate> – the sprint start date, in the format yyyy-MM-dd; <FinishDate> – the sprint finish date, in the same format.
    c. Team members – team members that need to be added to the newly created team: <TeamMembers> – accepts multiple <TeamMember> nodes; <TeamMember> – the most granular level of a team member; <User> – accepts the username; if you are running this against TfsPreview, the Live ID of the user needs to be passed, and if you are running this against TFS Server, the user ID, i.e. Domain\UserName; <Team> – the name of the team that you want the user to be assigned to.
    d. Work items – this section allows you to add work items (product backlog items and linked tasks) to the current sprint of the team: <WorkItems> – accepts multiple <WorkItem> nodes; <WorkItem> – accepts one <ProductBacklogItem> and multiple <Task> nodes; <ProductBacklogItem> – used to create a Product Backlog Item type work item, with <Title>, <Description>, <AssignedTo> (the team member name or email address can be passed) and <Effort> (the total effort required to complete the product backlog item); <Task> – used to create a task linked to the product backlog item, with <Title>, <Description>, <AssignedTo> and <RemainingWork> (the remaining effort to complete the task).
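    An illustrative DemoDictionary.xml assembled from the node descriptions above; the element nesting, names, dates and users are made up for the example, so check the file shipped in the download for the exact schema:

        <Demo>
          <Name>Demo</Name>
          <Description>Demo team for the task board walkthrough</Description>
          <SetAsDefaultTeam>true</SetAsDefaultTeam>
          <BacklogIterationPath>AdventureWorks</BacklogIterationPath>
          <Iterations>
            <Iteration>
              <Path>Release 1\Sprint 1</Path>
              <StartDate>2012-10-01</StartDate>
              <FinishDate>2012-10-14</FinishDate>
            </Iteration>
          </Iterations>
          <TeamMembers>
            <TeamMember>
              <User>someone@example.com</User>
              <Team>Demo</Team>
            </TeamMember>
          </TeamMembers>
          <WorkItems>
            <WorkItem>
              <ProductBacklogItem>
                <Title>As a user I can sign in</Title>
                <Description>Illustrative backlog item</Description>
                <AssignedTo>someone@example.com</AssignedTo>
                <Effort>5</Effort>
              </ProductBacklogItem>
              <Task>
                <Title>Build the sign-in page</Title>
                <Description>Illustrative task</Description>
                <AssignedTo>someone@example.com</AssignedTo>
                <RemainingWork>4</RemainingWork>
              </Task>
            </WorkItem>
          </WorkItems>
        </Demo>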
    Step 4 – Set up the demo environment against the newly created Team Project
    1. Run SetupTfsDemo.exe.
    2. Enter Y or y at the prompt to continue setting up the TFS demo environment.
    3. Select the newly created Team Project. For this blog post I had created the Team Project AdventureWorks, so that is what I'll select in the "Connect to TFS Server" pop-up.
    4. Click Connect and follow the messages that are written to the console application.
    Step 5 – Validate that the demo environment is set up as per the configuration
    1. The team web access is all lit up: you have a sprint, a burn-down chart, team members...
    2. The team "Demo" has been added and has been set up as the default team.
    3. The sprint backlog iteration path, the sprints, and the sprint start and finish dates have been set.
    4. The default area path has been set up.
    5. Task board – backlog items view.
    6. Task board – team members view.
    Step 6 – Exception handling!
    1. This solution has been tested against TFS 2012 Service/Server with the Scrum 2.1 process template.
    2. You are likely to run into an exception if you mess up the config file.
    3. If the team already exists and you run the console app to set up a team with the same name, you will run into exceptions.
    Please remember this is just an alpha release; if you have any feedback, please leave a comment! Didn't I say that it would take under 1 minute? Enjoy!

    Read the article

  • What to do after a servicing fails on TFS 2010

    - by Martin Hinshelwood
    What do you do if you run a couple of hotfixes against your TFS 2010 server and you start to see some odd behaviour? A customer of mine encountered that very problem, but they could not just, or at least not easily, go back a version.
    You see, around the time of the TFS 2010 launch this company decided to upgrade their entire 250+ developer team from TFS 2008 to TFS 2010. They encountered a few problems, owing mainly to the size of their TFS deployment and the way they were using TFS. They were not doing anything wrong, but when you have the largest deployment of TFS outside of Microsoft you tend to run into problems that most people will never encounter. We are talking half a terabyte of source control in TFS, with over 80 proxy servers; it's certainly the largest deployment I have ever heard of. When they did their upgrade way back in April, they found two major flaws in the product that meant they had to back out of the upgrade and wait for a couple of hotfixes:
    - KB983504 – Hotfix
    - KB983578 – Patch
    - KB2401992 – Hotfix
    In the time since they got the hotfixes they have run six successful trial migrations, but we are not talking minutes or hours here. When you have 400+ GB of data, it takes time to copy it around, time to do the upgrade and time to do a backup. Well, last week it was crunch time: with their developers off for Christmas, they had a window of opportunity to complete the upgrade. Now these guys are good, but they wanted Northwest Cadence to be available "just in case". They did not expect any problems, as they already had six successful trial upgrades.
    The problems surfaced around 20 hours in, after the first set of hotfixes had been applied. The new Team Project Collection, the only thing of importance, had disappeared from the Team Foundation Server Administration console. The collection would not reattach either; it would not even list the new collection as attachable!
    Figure: We know there is a database there, but it does not show up.
    This was a dire situation, as repeating the 20+ hour process would leave the customer out of time, with 250+ developers sitting around doing nothing. We tried everything, and then we stumbled upon the command of last resort:

        TFSConfig Recover /ConfigurationDB:SQLServer\InstanceName;TFS_ConfigurationDBName /CollectionDB:SQLServer\instanceName;"Collection Name"

    (documented at http://msdn.microsoft.com/en-us/library/ff407077.aspx)
    WARNING: Never run this command!
    Now this command does something a little nasty. It assumes that there really should not be anything wrong and sets about fixing it. It ignores any servicing levels in the Team Project Collection database and forcibly applies the latest version of the schema. I am sure you can imagine the types of problems this may cause when the schema is updated, leaving the data behind. That said, as far as we could see this collection looked good, and we were even able to find and attach the team project collection to the configuration database.
    Figure: After attaching the TPC it enters a servicing mode.
    After reattaching the team project collection we found the message "Re-Attaching". Well, fair enough, that sounds like something that may need to happen, and after checking that there was disk I/O we left it to it. 14+ hours later it was still not done, so the customer raised a priority support call with MSFT and an engineer helped them out.
    Figure: Everything looks good, it is just offline.
    Tip: Did you know that these logs are not represented in the ~/Logs/* folder until they are opened once?
    The engineer dug around a bit and listened to our situation. He knew that we had run the dreaded "tfsconfig recover", but was not fazed.
    Figure: This message looks suspiciously like the wrong servicing version.
    As it turns out, the servicing version was slightly out of sync with the schema:
    - KB983504: schema 341, servicing successful? Yes
    - KB983578: schema 344, servicing successful? Sort of
    - KB2401992: schema 360, servicing successful? Nope
    (The schema version above represents the final end-of-run version for each hotfix or patch.)
    The only way forward: the problem was that the version was somewhere between 341 and 344. This is not a nice place to be, and the engineer gave us the only way forward: remove the servicing number from the database so that the reattach process would apply the latest schema. If this sounds a little like the "tfsconfig recover" command, then you are exactly right.
    Figure: Sneakily changing that 3 to a 1 should do the trick.
    Figure: Changing the status and dropping the version should do it.
    Now that we have done that, we should be able to safely reattach and enable the Team Project Collection.
    Figure: The TPC is now all attached and running.
    You may think that this is the end of the story, but it is not. After a while of mulling and seeking expert advice, we came to the opinion that the database was, for want of a better term, "hosed". There could well be orphaned data in there, and the likelihood that we would have problems later down the line is pretty high. We contacted the customer and made them aware that in all likelihood the repaired database was more like a "cut and shut" than anything else, and at the first sign of trouble later down the line it was likely to split in two. So, with 40+ hours invested in getting this new database ready, the customer threw it away and started again. What would you do? Would you take the "cut and shut" to production and hope for the best?

    Read the article

  • How To Change Attachment Size in WorkItems in TFS 2010

    - by Ravi
    Recently, I came across an issue where I had to change the size limit for work item attachments in TFS 2010. I searched all around the internet, only to find very little information about it, and what there was honestly wasn't clear. So after breaking my head for some time, I was successful in doing it. Here are my conclusions and the procedure:
    1. You DON'T have to change it programmatically. You can do it directly from the IIS web services.
    2. You CAN change it programmatically too, by making an entry in the TFS registry using a small piece of code.
    Let me show you how it is done from IIS. This changes the maximum attachment size for work items to your required value, for each TFS 2010 collection individually:
    1. You must be a TFS admin to do this (log in with the setup account).
    2. Browse to http://localhost:8080/tfs/<YOUR-COLLECTION-NAME>/WorkItemTracking/v1.0/ConfigurationSettingsService.asmx
    3. You'll see three web methods: GetMaxAttachmentSize, GetWorkItemTrackingVersion and SetMaxAttachmentSize.
    4. To find the current maximum attachment size for the collection, click on the first method and press the Invoke button; the value is in bytes.
    5. Now click on the SetMaxAttachmentSize web method and fill in the value of your choice.
    6. Reset IIS (not strictly required, honestly, but I did it just to be sure).
    7. Now try attaching a file greater than the size you've set. It'll fail, as intended; below is the error you'd see in such scenarios.
    Let me know if you see any issues and I'll be happy to help!

    Read the article

  • Why do manufacturers not show all hardware power usage?

    - by Drew
    I find it slightly more difficult to build a computer when I do not know how much power each component needs. When selecting a power supply for a computer, it is difficult to know how large a unit to get. You don't want to go too large, for cost and circuit reasons, but you don't want to go too low and be unable to properly use every component. For instance, a graphics card might say "Minimum of a 500 Watt power supply. (Minimum recommended power supply with +12 Volt current rating of 30 Amps.)" But it really needs 360 W (12 V * 30 A). So why don't they just say "Uses 360 W max and xxx W peak"? Processors, I have noticed, are good at reporting their power usage, but aside from processors and sometimes graphics cards, power usage is not easily found. What is the power consumed by the Blu-ray/DVD drives? By the HDDs/SSDs? By the motherboard? And so on. Why are these questions not easily answered when building a machine?

    Read the article

  • Do you have to recreate workspaces after upgrading a TFS 2008 server to TFS 2010?

    - by Clara Oscura
    I am just reposting this thread from an MSDN forum, since it seems to be unavailable. It was very useful when I was having trouble with my folder mappings after migrating to TFS 2010.
    Question: I opened VS2008 and connected it to the upgraded 2010 TFS server. Upon clicking any of our Team Projects in Source Control Explorer I get "Team Foundation Error - The workspace MYWORKSPACE;DOMAIN\MYUsername already exists on computer MYPCNAME."
    Answer: The same local paths on your machine are mapped to two different workspaces, one on the pre-upgrade server and one on the post-upgrade server. It's not safe to have multiple workspaces on different servers mapped to the same local paths, because you could pend some changes while connected to one server, and the other server would have no idea what you did. You should either delete your conflicting workspaces from one of the servers (if you don't need them on both), or test the new TFS instance from a new workspace (on a different machine). If you want to test an existing production workspace on both servers, then yes, you will have to mess around with the workspace cache. You don't have to delete the entire cache; you just need to run "tf workspaces /remove:* /server:<serverurl>" to clear the cached workspaces for a server (the command won't delete the workspaces themselves), and possibly "tf workspaces /server:<server>" to refresh the workspace cache for a given server. You will also have to back up and restore the workspace before switching servers, or your local files could be inconsistent.
    From the "Microsoft Visual Studio Team Foundation Server 2010 Beta 1" forum (not available anymore?)
    Technorati Tags: TFS 2010, TFS Workspaces, Team System, Team Foundation Server 2010

    Read the article
