Search Results

Search found 11957 results on 479 pages for 'martin day'.


  • HP Ambient Light Sensor Adjustment

    - by Robin Day
    I have an HP nc4400 running Windows 7 64-bit. With the ambient light sensor enabled the screen works well, but it's slightly too dim. If I turn the sensor off and turn the brightness up manually, it's more than bright enough. In the Windows brightness settings I can make the screen dimmer with the sensor enabled, but I cannot make it as bright as when the sensor is disabled. So my question is: is it possible to keep the light sensor enabled but configure it so that the screen is brighter for a given ambient light level? At the moment I have to turn the sensor off whenever I'm in the office or outside in sunlight, because I need the screen as bright as possible, and no matter how light it is, it never goes to full brightness while the sensor is enabled.
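    If there is no HP-specific setting for the sensor's response curve, the hidden Windows power settings may be worth a look. A minimal sketch, assuming the standard Windows 7 power-configuration aliases (SUB_VIDEO, ADAPTBRIGHT); this only toggles adaptive brightness per power plan, and whether the curve itself can be tuned is vendor-specific:

        :: list the setting aliases to confirm they exist on this machine
        powercfg -aliases
        :: toggle adaptive brightness on AC power (1 = on, 0 = off), then apply
        powercfg -setacvalueindex SCHEME_CURRENT SUB_VIDEO ADAPTBRIGHT 1
        powercfg -setactive SCHEME_CURRENT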

    Read the article

  • Is there any research about daily differences in productivity by the same programmer?

    - by Rice Flour Cookies
    There has been a flurry of activity on the internet discussing the huge difference between the productivity of the best programmers and the productivity of the worst. Here's a typical Google result when researching this topic: http://www.devtopics.com/programmer-productivity-the-tenfinity-factor/ I've been wondering whether there has been any research or serious discussion about differences in day-to-day productivity by the same programmer. Personally, I see a huge variance in how much I can get done from day to day, so I was wondering if anyone else feels the same way or has done any research.

    Read the article

  • HP/Lenovo alternative to Buffalo iSCSI TeraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure to allow for more resilience and future expandability. We have successfully virtualised on single servers with Direct Attached Storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP). Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TeraStation III is all I would need to set up an iSCSI SAN for VMware to use. However, I'm a little hesitant to go that way, as it's a bit too "entry level" for my liking; in particular I would be very keen on more redundancy, power, networking, etc. I'm also very aware that you "get what you pay for". Therefore, can anyone recommend equivalents from the big boys, HP or Lenovo? I have searched high and low on the HP site and seen many options, but I am struggling to work out whether any one of them is all the hardware I will need; some options appear to need separate controllers as well as disk enclosures, etc.

    Read the article

  • Is committing/checking in code every day a good practice?

    - by ArtB
    I've been reading Martin Fowler's note on Continuous Integration, and he lists as a must "Everyone Commits To the Mainline Every Day". I do not like to commit code unless the section I'm working on is complete, and in practice I commit my code every three days: one day to investigate/reproduce the task and make some preliminary changes, a second day to complete the changes, and a third day to write the tests and clean it up^ for submission. I would not feel comfortable submitting the code sooner. Now, I pull changes from the repository and integrate them locally usually twice a day, but I do not commit that often unless I can carve out a smaller piece of work. Question: is committing every day such a good practice that I should change my workflow to accommodate it, or is it not that advisable? Edit: I guess I should have clarified that I meant "commit" in the CVS meaning of it (aka "push"), since that is likely what Fowler meant in 2006 when he wrote this. ^ The order is somewhat arbitrary and depends on the task; my point was to illustrate the time span and activities, not the exact sequence.
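    For what it's worth, one way to square the two habits, sketched below assuming a modern DVCS (an assumption; Fowler was writing about centralized tools) and a hypothetical branch name: commit locally as often as you like, and integrate with the mainline at least once a day.

        git checkout -b task-123                      # local branch for the multi-day task
        git commit -am "WIP: reproduce the issue"     # small, frequent local commits
        git fetch origin && git rebase origin/master  # integrate others' work daily
        git push origin task-123                      # publish at least once a day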

    Read the article

  • Should I charge for travel time as a contractor?

    - by Keith
    Here's a question for all my fellow contractors. I'm paid quite handsomely for my normal contracted hours (any overtime is billed at the same rate), but do you think it fair to bill for travel time to the other end of the country (a regional office) when this takes place outside of the normal working day (or overlaps into the evening)? What about the time holed up in a hotel room when you get there, ready for a normal working day the next day, and the return journey? Petrol is claimed normally (at a nominal rate), and the hotel is covered by the company I contract for.

    Read the article

  • Scheduling VMware ESXi 4.1 VM Restart

    - by Robin Day
    We had a virtual machine running on a VMware Server host on Windows Server 2003. The machine is set up with non-persistent disks. We had a Windows scheduled task that ran a batch file to reset the machine each week so that it returned to its original state. The batch file we had running was:

        "C:\Program Files\VMware\VMware Server\vmware-cmd" "C:\Virtual Machines\VirtualMachineName\VirtualMachineName.vmx" stop hard
        "C:\Program Files\VMware\VMware Server\vmware-cmd" "C:\Virtual Machines\VirtualMachineName\VirtualMachineName1.vmx" start

    We have since migrated this machine to the free version of ESXi 4.1. Can anyone let me know if and how it is possible to schedule such a restart?
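    A sketch of one possible route, assuming local Tech Support Mode (the unsupported busybox shell) is enabled on the host; note that crontab edits on ESXi do not survive a host reboot without extra persistence steps, and the vmid shown is a placeholder:

        # find the VM's ID
        vim-cmd vmsvc/getallvms
        # power-cycle it (16 is a hypothetical vmid)
        vim-cmd vmsvc/power.off 16
        vim-cmd vmsvc/power.on 16
        # appended to /var/spool/cron/crontabs/root to run at 03:00 every Sunday
        0 3 * * 0 vim-cmd vmsvc/power.off 16 && vim-cmd vmsvc/power.on 16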

    Read the article

  • Using VLC to unicast a high-definition webcam over a local gigabit LAN with low/zero delay

    - by Robin Day
    We're setting up a webcam "window" between two offices in the same building. The two PCs are connected to the same gigabit switch. We're using VLC to stream the webcam over HTTP using the following commands:

        vlc dshow:// :dshow-caching="0" :dshow-size="640x480" :sout=#transcode{vcodec=h264,vb=0,scale=0}:http{mux=ffmpeg{mux=flv},dst=:8080/} :no-sout-rtp-sap :no-sout-standard-sap :ttl=1 :sout-keep
        vlc http://192.168.0.1:8080 :http-caching="0"

    Even with the caching set to zero, the delay in the image is a good 2-3 seconds, and the CPU usage of each PC is maxed. I'm guessing it's the transcoding that's causing much of the delay. Can anyone suggest changes to these command lines that will reduce the transcoding load, send the webcam over a different protocol, or otherwise reduce the delay of the cameras? Bandwidth is not an issue at all, as the PCs can be connected to a dedicated switch/VLAN if required.
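    One direction worth trying, sketched with a hypothetical receiver address; option names vary between VLC versions, so treat this as a starting point rather than a known-good recipe. It keeps H.264 but asks x264 for its low-latency settings, and swaps HTTP/FLV for RTP, which buffers far less:

        vlc dshow:// :dshow-caching=0 :dshow-size="640x480" :sout=#transcode{vcodec=h264,venc=x264{preset=ultrafast,tune=zerolatency},vb=800}:rtp{mux=ts,dst=192.168.0.2,port=5004} :no-sout-rtp-sap :no-sout-standard-sap :sout-keep
        vlc rtp://@:5004 :network-caching=0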

    Read the article

  • Drowning in documents - recommend doc management solutions?

    - by Martin Day
    I've been researching document management lately. I want to organise my docs at home and also at the office. Finding affordable solutions one can actually test drive is quite hard. Some that I've downloaded just don't seem to work (tested on a brand-new Vista PC). I've seen some software on Amazon, like PaperPort, but I'm not really sure what it's like. For home I'd like something to organise files, with full-text search, good scanner integration, a nice interface, etc. But for the office it seems harder. I need something that does proper workflow and keeps versions. It will have an audit trail. Documents can be approved, checked in/out, etc. I know a few clients who would like something similar. It would be great just to import thousands of documents from a shared drive and get them indexed, with dupes killed. I'd like to be super clear about how and where the documents are being stored, so that maintenance and backups are straightforward. My Google/Twitter searches lead back to the same tired and vague web pages pushing what look like expensive and custom-made solutions. Some might be very good, I suppose, but it's darn hard to tell. I don't mind a hosted package, but all in all I don't think something like Google Docs, as good as it is now, will work. There are too many quirks and missing features (as compared to Office). Being able to work directly with the common Office file formats is important. I've noted a similar-sounding question asked here back in August, but it didn't seem to turn up too many solutions that I could easily and quickly apply. Also, there could have been some changes since then, so I feel it's worth asking.

    Read the article

  • How to convert individual Excel cell values to percentage change values over time

    - by cgalloway
    I have two years of Excel data showing daily share prices of a particular stock. I want to change those values to show the percentage change (on a daily basis) from the zero date (i.e. the first day of the two-year period). I know that the formula for showing the daily percentage change would be (second day / first day - 1) and that I can click and drag that formula to extend it over the rest of the two-year period. The formula I want would be, basically, (each day / first day - 1). Is there an easy way to automate this so I don't have to type it out 730 times?
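    A sketch, assuming the prices sit in column B starting at B2: an absolute reference ($B$2) pins the divisor to the first day, so one formula can be filled down the whole range instead of being retyped 730 times.

        =B2/$B$2-1

    Enter it in C2, then fill down through C731 (double-clicking the fill handle copies it to the bottom of the adjacent data).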

    Read the article

  • How can I convert this C calendar code into Objective-C syntax and have it work with matrices?

    - by Alec Niemy
        #include <stdio.h>

        #define TRUE 1
        #define FALSE 0

        int days_in_month[] = {0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};

        char *months[] =
        {
            " ",
            "\n\n\nJanuary", "\n\n\nFebruary", "\n\n\nMarch", "\n\n\nApril",
            "\n\n\nMay", "\n\n\nJune", "\n\n\nJuly", "\n\n\nAugust",
            "\n\n\nSeptember", "\n\n\nOctober", "\n\n\nNovember", "\n\n\nDecember"
        };

        int inputyear(void)
        {
            int year;
            printf("Please enter a year (example: 1999) : ");
            scanf("%d", &year);
            return year;
        }

        int determinedaycode(int year)
        {
            int daycode;
            int d1, d2, d3;
            d1 = (year - 1.) / 4.0;
            d2 = (year - 1.) / 100.;
            d3 = (year - 1.) / 400.;
            daycode = (year + d1 - d2 + d3) % 7;
            return daycode;
        }

        int determineleapyear(int year)
        {
            if (year % 4 == FALSE && year % 100 != FALSE || year % 400 == FALSE)
            {
                days_in_month[2] = 29;
                return TRUE;
            }
            else
            {
                days_in_month[2] = 28;
                return FALSE;
            }
        }

        void calendar(int year, int daycode)
        {
            int month, day;
            for (month = 1; month <= 12; month++)
            {
                printf("%s", months[month]);
                printf("\n\nSun Mon Tue Wed Thu Fri Sat\n");
                /* Correct the position for the first date */
                for (day = 1; day <= 1 + daycode * 5; day++)
                    printf(" ");
                /* Print all the dates for one month */
                for (day = 1; day <= days_in_month[month]; day++)
                {
                    printf("%2d", day);
                    /* Is day before Sat? Else start next line Sun. */
                    if ((day + daycode) % 7 > 0)
                        printf(" ");
                    else
                        printf("\n ");
                }
                /* Set position for next month */
                daycode = (daycode + days_in_month[month]) % 7;
            }
        }

        int main(void)
        {
            int year, daycode;
            year = inputyear();
            daycode = determinedaycode(year);
            determineleapyear(year);
            calendar(year, daycode);
            printf("\n");
            return 0;
        }

    This code generates a calendar for the entered year in the terminal. My question is: how can I convert this into Objective-C syntax instead of this C syntax? I'm sure this is a simple process, but I'm quite a novice at Objective-C and I need it for a Cocoa project. This code outputs the calendar as a continuous series of strings until the last month is printed; instead of creating the calendar in the terminal, how can I put the calendar into a series of NSMatrixes depending on the entered year? I hope someone can help me with this, thanks for any help (you'll be in the credits of the finished program) :) (The calendar is just a small part of the program I'm making, and it is one of the important parts!)
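    As a hedged sketch of the Cocoa side (not drop-in code): the inner printing loop translates naturally into filling a 6x7 NSMatrix of NSTextFieldCells, one matrix per month. The method name and the source of matrix, daycode and days are assumptions; the arithmetic mirrors the C loop above.

        // fill one month's 6x7 NSMatrix; slot = day - 1 + daycode gives the
        // 0-based grid position, so row = slot / 7 and column = slot % 7
        - (void)fillMatrix:(NSMatrix *)matrix daycode:(int)daycode days:(int)days
        {
            // blank every cell first
            for (NSInteger row = 0; row < [matrix numberOfRows]; row++)
                for (NSInteger col = 0; col < [matrix numberOfColumns]; col++)
                    [[matrix cellAtRow:row column:col] setStringValue:@""];

            for (int day = 1; day <= days; day++) {
                int slot = day - 1 + daycode;
                NSCell *cell = [matrix cellAtRow:slot / 7 column:slot % 7];
                [cell setStringValue:[NSString stringWithFormat:@"%d", day]];
            }
        }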

    Read the article

  • How can I make this work with deep properties

    - by Martin Robins
    Given the following code...

        class Program
        {
            static void Main(string[] args)
            {
                Foo foo = new Foo { Bar = new Bar { Name = "Martin" }, Name = "Martin" };
                DoLambdaStuff(foo, f => f.Name);
                DoLambdaStuff(foo, f => f.Bar.Name);
            }

            static void DoLambdaStuff<TObject, TValue>(TObject obj, Expression<Func<TObject, TValue>> expression)
            {
                // Set up and test "getter"...
                Func<TObject, TValue> getValue = expression.Compile();
                TValue stuff = getValue(obj);

                // Set up and test "setter"...
                ParameterExpression objectParameterExpression = Expression.Parameter(typeof(TObject)),
                    valueParameterExpression = Expression.Parameter(typeof(TValue));
                Expression<Action<TObject, TValue>> setValueExpression = Expression.Lambda<Action<TObject, TValue>>(
                    Expression.Block(
                        Expression.Assign(
                            Expression.Property(objectParameterExpression, ((MemberExpression)expression.Body).Member.Name),
                            valueParameterExpression)
                    ),
                    objectParameterExpression, valueParameterExpression
                );
                Action<TObject, TValue> setValue = setValueExpression.Compile();
                setValue(obj, stuff);
            }
        }

        class Foo
        {
            public Bar Bar { get; set; }
            public string Name { get; set; }
        }

        class Bar
        {
            public string Name { get; set; }
        }

    The call to DoLambdaStuff(foo, f => f.Name) works OK because I am accessing a shallow property; however, the call to DoLambdaStuff(foo, f => f.Bar.Name) fails. Although the creation of the getValue function works fine, the creation of the setValueExpression fails because I am attempting to access a deep property of the object. Can anybody please help me modify this so that I can create the setValueExpression for deep properties as well as shallow ones? Thanks.
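    For what it's worth, a sketch of one way to handle deep paths (it needs System.Collections.Generic, System.Linq.Expressions and System.Reflection): walk the MemberExpression chain back to the parameter, rebuild the same chain on a fresh parameter, and assign to the final link.

        static Action<TObject, TValue> BuildDeepSetter<TObject, TValue>(Expression<Func<TObject, TValue>> expression)
        {
            ParameterExpression objParam = Expression.Parameter(typeof(TObject));
            ParameterExpression valueParam = Expression.Parameter(typeof(TValue));

            // Walk f.Bar.Name back to the parameter, collecting members outermost-first.
            var members = new Stack<MemberInfo>();
            for (var m = expression.Body as MemberExpression; m != null; m = m.Expression as MemberExpression)
                members.Push(m.Member);

            // Re-apply the chain to the new parameter: objParam.Bar.Name
            Expression target = objParam;
            foreach (var member in members)
                target = Expression.MakeMemberAccess(target, member);

            return Expression.Lambda<Action<TObject, TValue>>(
                Expression.Assign(target, valueParam), objParam, valueParam).Compile();
        }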

    Read the article

  • HTML Calendar form and input arrays

    - by Christopher Ickes
    Hello. Looking for the best practice here... I have a form that consists of a calendar. Each day of the calendar has two text input fields: customer and check-in. What would be the best and most efficient way to send this form to PHP for processing?

        <form method="post">
          <div class="day">
            Day 1<br />
            <label for="customer['.$current['date'].']">Customer</label>
            <input type="text" name="customer['.$current['date'].']" value="" size="20" />
            <label for="check-in['.$current['date'].']">Check-In</label>
            <input type="text" name="check-in['.$current['date'].']" value="" size="20" />
            <input type="submit" name="submit" value="Update" />
          </div>
          <div class="day">
            Day 2<br />
            <label for="customer['.$current['date'].']">Customer</label>
            <input type="text" name="customer['.$current['date'].']" value="" size="20" />
            <label for="check-in['.$current['date'].']">Check-In</label>
            <input type="text" name="check-in['.$current['date'].']" value="" size="20" />
            <input type="submit" name="submit" value="Update" />
          </div>
        </form>

    Is my current setup good? I feel there has to be a better option. My concern involves processing a whole year at once (which can happen) and adding additional text input fields.
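    A sketch of the matching PHP side, assuming the date-keyed field names from the form above: because each input is an array keyed by date, one loop covers a month or a whole year without any extra fields.

        <?php
        // each posted input is keyed by date, e.g. customer['2010-06-01']
        foreach ($_POST['customer'] as $date => $customer) {
            $checkIn = isset($_POST['check-in'][$date]) ? $_POST['check-in'][$date] : '';
            // validate $date, $customer and $checkIn here, then persist one row per day
        }
        ?>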

    Read the article

  • Optimize master-detail insert statements

    - by Dave Jarvis
    Quest: After a day of running (against nearly 1 GB of data), a set of statements has tumbled to 40 inserts per second. I am looking to increase that by an order of magnitude or two.

    SQL Code: The code to insert the information comes in two parts: a master record and detail records. The master record:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

    The detail records:

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0, 'T', 3);

    Proposed Solution:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);
        SET @month_ref_id := (SELECT LAST_INSERT_ID());
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, 'T', 3);

    Constraints: The MONTH_REF table has an AUTO_INCREMENT primary key and is indexed on it. The DAILY table has no index and no primary key. A primary key can be added to the DAILY table if it would help.

    Question: Is there a more efficient way to execute the (billion or so) insert statements than the proposed solution? Thank you!
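    A further sketch, assuming MySQL (suggested by LAST_INSERT_ID and the @variable syntax): batching the detail rows into one multi-row INSERT cuts a month's work to two statements instead of one per day.

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);
        SET @month_ref_id := LAST_INSERT_ID();
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES
          (@month_ref_id, 0,   ' ', 1),
          (@month_ref_id, 0.5, ' ', 2),
          (@month_ref_id, 0,   'T', 3);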

    Read the article

  • How to let MSN or Yahoo Messenger set you to be Invisible or Offline when you are idle for an hour?

    - by Jian Lin
    The short question is: how do we let MSN or Yahoo Messenger set us to Invisible or Offline when we are idle for half an hour or an hour? The reason is: if I am online 24 hours a day, some people see me as weird, and some see my value as low, because I am always there. There are ways to set my status to "Away" or "Busy" after 10 minutes, but there seems to be no way to set myself to Invisible or Offline after an hour. As I am a software developer, I am used to leaving the computer on 24 hours a day (for example, to check email for urgent fixes, fix the issue, and push to the web server). We usually don't turn the computer off even when we sleep, because we may check on it before we manage to fall asleep, or wake up in the morning and immediately need to check that everything is OK. But my MSN and Yahoo Messenger are always on, 24 hours a day, and I've found that some girls have started to ask me why I am always there (even though they see me as Away or Busy, their feeling is that I am always there). What's more, I've found that since I am always there, my value actually drops in their eyes, because hard to get = high value, and always there = low value. Some people see me as having nothing much to do, always in front of the computer, or wonder what I am doing in front of the computer so much. Now, since it is my job and I need to read email once in a while, I am in fact in front of the computer more than some other people: maybe 10 hours a day, far from 24 hours a day. Is there an easy and automatic solution to this?

    Read the article

  • Upgrading Visual Studio 2010

    - by Martin Hinshelwood
    I have been running Visual Studio 2010 as my main development studio on my development computer since the RC was released. I need to upgrade it to the RTM, but first I need to remove the RC. Microsoft have done a lot of work to make this easy, and it works: it's as easy as uninstalling from the control panel. I have had many previous versions of Visual Studio 2010 on this same computer with no need to rebuild to remove all the bits. Figure: Run the uninstall from the control panel to remove Visual Studio 2010 RC. Figure: The uninstall removes everything for you. Figure: A green tick means everything went OK. If you get a red cross, try installing the RTM anyway; it should warn you about what was not uninstalled properly, and you can remove it manually. Once you have the VS2010 RC uninstalled, installing should be a breeze. The install for 2010 is much faster than for 2008, which could take all day, and then some, on slower computers. This takes around 20 minutes, even on my small laptop. I always do a full install: although I have to do C#, I sometimes get to use a proper programming language, VB.NET. Seriously, there is nothing worse than trying to open a project when the other developer has used something you don't have. It's not their fault. It's yours! Save yourself the angst and install fully; it's only 5.9 GB. Figure: I always select all of the options. Now go forth and develop! Preferably in VB.NET… Technorati Tags: Visual Studio,VS2010,VS 2010

    Read the article

  • SSW Scrum Rule: Do you know to use clear task descriptions?

    - by Martin Hinshelwood
    When you create tasks in Scrum you are doing so within a time box, and you tend to add only the information you need to remember what the task is. The entire Team was at the meeting and was involved in the discussions around the task, so why do you need more? Once you have accepted a task you should then add as much information as possible so that anyone can pick up that task; what if your numbers come up? Will you be in to work the next day? Figure: What if your numbers come up in the lottery? What if the Team runs a syndicate and all your numbers come up? The point is that anything can happen, and you need to protect the integrity of the project, the company and the Customer. Add as much information to the task as you think is necessary for anyone to work on it. If you need to add rich text and images, you can do this by attaching an email to the task. Figure: Bad example; there is not enough information for a non-team-member to complete this task. Figure: Julie provided a lot more information, and another team member should be able to pick this up. This has been published as "Do you know to ensure that relevant emails are attached to tasks" in our Rules to Better Scrum using TFS. Technorati Tags: Scrum,SSW Rules,TFS 2010

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley).

    From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information is available, and more, in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this, to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and can be augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other.

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS:

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
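    A minimal sketch of such a historical query, assuming the field names of the standard process templates (the date is arbitrary):

        SELECT [System.Id], [System.Title]
        FROM WorkItems
        WHERE [System.WorkItemType] = 'Requirement'
        ASOF '2010-03-01'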
    Figure: Saving queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: The Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box, but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of the Changesets associated with the Build, and change the "Integrated In" field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" state.

    The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test and the test results of a Unit Test designed to test for the existence, and then re-existence, of a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; however, there are many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs/Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you, as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member: Business Analysts manage Requirements and Change Requests; Programmers manage Tasks and check in against Change Requests and Bugs; Testers manage Bugs and Test Cases; Build Masters manage Builds. Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Do you know the minimum builds to create on any branch?

    - by Martin Hinshelwood
    You should always have three builds on your team project. These should be set up and tested using an empty solution before you write any code at all. Figure: Three builds named in the format [TeamProject].[AreaPath]_[Branch].[Gate|CI|Nightly] for every branch. These builds should use the same XAML build workflow; however, you may set them up to run a different set of tests depending on the time it takes to run a full build. Gate – Only needs to run the smallest set of tests, but should run most if not all of the Unit Tests; this is run before developers are allowed to check in. CI – This should run all Unit Tests and all of the automated UI tests; it is run after a developer check-in. Nightly – The Nightly build should run all of the Unit Tests, all of the automated UI tests, and all of the Load and Performance tests. The nightly build is time-consuming and will run but once a night. Packaging of your product for testing the next day may be done at this stage as well. Figure: You can control what tests are run and what data is collected while they are running. Note: We do not run all the tests every time because of the time-consuming nature of running some tests, but ALL tests should be run overnight. Note: If you had a really large project with thousands of tests, including long-running Load tests, you may need to add a Weekly build to the mix. Figure: Bad example; you can't tell what these builds do if they are in a larger list. Figure: Good example; you know exactly what project, branch and type of build these are for. Technorati Tags: SSW,SSW Rules,VS2010,VS ALM,Team Build 2010,Team Build

    Read the article

  • My Linux server time and log files are not the same

    - by Martin
    Hi, I have installed NTP on my Linux server and I am getting my clock from a 6500 core switch; everything is working fine. When I SSH to a switch, I have the session output sent to a log file on the Linux server, but the timestamps in this log file do not match the time on the server. The output of date and hwclock are the same, but my log is exactly 6 hours behind the date on the server, which is CET. Has anyone had the same problem? Best regards, Martin
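    A minimal sketch of things worth checking, assuming a syslog-style daemon writes the log (the init script path varies by distribution); a constant offset often means the logging process started with a different TZ than the rest of the system:

        date                   # wall-clock time as the server sees it (CET)
        hwclock --show         # hardware clock, for comparison
        ls -l /etc/localtime   # which zone definition the system is using
        # if the daemon predates a timezone change, restart it so it re-reads TZ
        /etc/init.d/syslog restart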

    Read the article
