Search Results

Search found 1979 results on 80 pages for 'martin ch'.

Page 19/80

  • Combining Scrum, TFS2010 and Email to keep everyone in the loop

    - by Martin Hinshelwood
    Often you will receive rich information from your Product Owner (Customer) about tasks. That information can be in the form of Word documents, HTML emails and pictures, but you generally receive it in the context of an email. You need to keep these so your Team can refer to them later, and so you can send a “done” when the task has been completed. This preserves the “history” of the task and allows you to keep relevant parties included in any future conversation. At SSW we keep the original email so that we can reply “done” and delete the email. But keeping it in your email does not help other members of the team if they complete the task and need to send the “done”. Worse yet, the description field in Team Foundation Server 2010 (TFS 2010) does not support HTML and images, nor does the default task template support an “interested parties” or CC field. You can attach this content manually, but it can be time consuming.

    Figure: Description only supports plain text, and History supports HTML with no images

    What should we do? At SSW we always follow the rules, and it just so happens that we have rules both to achieve this and to make it easier. You should follow the existing Rules to Better Project Management and attach the email to your task so you can refer to and reply to it later when you close the task: Do you know what Outlook add-ins you need? Describe the work item request in an email. Use the Outlook add-in to move the email to a TFS Work Item. When replying to an email with “done” you should follow: Do you update the Team Companion template, so the email "subject" doesn't change? Do you update the Team Companion template, so you can generate a proper "done" mail? Following these simple rules will help your Product Owner understand you better, and allow your team to collaborate with each other more effectively. An added bonus is that, as we keep the email history in sync with TFS, when you “reply all” to the email all of the interested parties to the Task are also included. This notifies those who may be blocked by your task, keeping them up to date with its status. This has been published as "Do you know to ensure that relevant emails are attached to tasks" in our Rules to Better Scrum using TFS.

    What could we do better? I would like to see this process automated so that we capture the information correctly in the task without the need to use email. This would require a change to the process template in Team Foundation Server to add an “Interested Parties” field. Each reply to the email would need to be automatically processed into a Work Item (a rough sketch of that processing step is below). This could be done by adding a task identifier as the first item in the “Relates to” email header, and copying in an email address that you watch. This would then parse out the relevant information, add the new message to the history, update the “Interested Parties” field and attach the images. Upon reflection, it may even be possible, though more difficult, to do this using ONLY the History field and including some of the header information in there to build a “done” email with history. This would not currently deal with email “forks” well, but I think it would be adequate for our needs. It would be nice if we could find time to implement this, but currently it is but a pipe dream. Maybe Microsoft could implement something in the next version of Team Foundation Server, and in the meantime we have a process that works well.

    Technorati Tags: Scrum,SSW Rules,TFS 2010,TFS 2008
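
    As an illustration, here is a minimal sketch of what that email-processing step could look like against the TFS 2010 .NET client API. The “Interested Parties” custom field, the collection URL and the pre-parsed inputs (taskId, bodyHtml, ccList, imagePaths) are assumptions for the sake of the example, not part of any shipped template:

        ' Sketch: push a parsed email reply into an existing Work Item.
        ' Assumes a custom "Interested Parties" field has been added to the
        ' process template; parsing the email itself is left out.
        Imports System.Collections.Generic
        Imports Microsoft.TeamFoundation.Client
        Imports Microsoft.TeamFoundation.WorkItemTracking.Client

        Public Module EmailToWorkItem
            Public Sub AppendEmailToTask(ByVal taskId As Integer, ByVal bodyHtml As String,
                                         ByVal ccList As String, ByVal imagePaths As IEnumerable(Of String))
                Dim tpc As New TfsTeamProjectCollection(New Uri("http://tfs:8080/tfs/DefaultCollection")) ' assumed URL
                Dim store = tpc.GetService(Of WorkItemStore)()
                Dim task As WorkItem = store.GetWorkItem(taskId)

                task.History = bodyHtml                           ' History accepts HTML
                task.Fields("Interested Parties").Value = ccList  ' custom field (assumed)
                For Each imagePath In imagePaths
                    task.Attachments.Add(New Attachment(imagePath)) ' images go on as attachments
                Next
                task.Save()
            End Sub
        End Module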

    Read the article

  • Content Encryption Options in Oracle IRM 11g

    - by martin.abrahams
    Another of the innovations in Oracle IRM 11g is a wider choice of encryption algorithms for protecting content. The choice is now as illustrated below. As you see, three of the choices are marked as FIPS options, where FIPS refers to the Federal Information Processing Standard Publication 140-2, a U.S. government security standard for accreditation of cryptographic modules.

    Read the article

  • Granular Clipboard Control in Oracle IRM

    - by martin.abrahams
    One of the main leak prevention controls that customers are looking for is clipboard control. After all, there is little point in controlling access to a document if authorised users can simply make unprotected copies by use of the cut and paste mechanism. Oddly, for such a fundamental requirement, many solutions only offer very simplistic clipboard control - and require the customer to make an awkward choice between usability and security.

    In many cases, clipboard control is simply an ON-OFF option. By turning the clipboard OFF, you disable one of the most valuable edit functions known to man. Try working for any length of time without copying and pasting, and you'll soon appreciate how valuable that function is. Worse, some solutions disable the clipboard completely - not just for the protected document but for all of the various applications you have open at the time. Normal service is only resumed when you close the protected document. In this way, policy enforcement bleeds out of the particular assets you need to protect and interferes with the entire user experience. On the other hand, turning the clipboard ON satisfies a fundamental usability requirement - but also makes it really easy for users to create unprotected copies of sensitive information, maliciously or otherwise. All they need to do is paste into another document. If creating unprotected copies is this simple, you have to question how much you are really gaining by applying protection at all. You may not be allowed to edit, forward, or print the protected asset, but all you need to do is create a copy and work with that instead. And that activity would not be tracked in any way. So, a simple ON-OFF control creates a real tension between usability and security. If you are only using IRM on a small scale, perhaps security can outweigh usability - the business can put up with the restriction if it only applies to a handful of important documents. But try extending protection to large numbers of documents and large user communities, and the restriction rapidly becomes really unwelcome.

    I am aware of one solution that takes a different tack. Rather than disable the clipboard, pasting is always permitted, but protection is automatically applied to any document that you paste into. At first glance, this sounds great - protection travels with the content. However, at any scale this model may not be so appealing once you've had to deal with support calls from users who have accidentally applied protection to documents that really don't need it - which would be all too easily done. This may help control leakage, but it also pollutes the system with documents that have policies applied with no obvious rhyme or reason, and it can seriously inconvenience the business by making non-sensitive documents difficult to access. And what happens if you paste some protected content into an already protected document? Which policy applies? There are no prizes for guessing that Oracle IRM takes a rather different approach.

    The Oracle IRM Approach

    Oracle IRM offers a spectrum of clipboard controls between the extremes of ON and OFF, and it leverages the classification-based rights model to give granular control that satisfies both security and usability needs. Firstly, we take it for granted that if you have EDIT rights, of course you can use the clipboard within a given document. Why would we force you to retype a piece of content that you want to move from HERE... to HERE...? If the pasted content remains in the same document, it is equally well protected whether it be at the beginning, middle, or end - or all three. So, the first point is that Oracle IRM always enables the clipboard if you have the right to edit the file. Secondly, whether we enable or disable the clipboard, we only affect the protected document. That is, you can continue to use the clipboard in the usual way for unprotected documents and applications regardless of whether the clipboard is enabled or disabled for the protected document(s). And if you have multiple protected documents open, each may have the clipboard enabled or disabled independently, according to whether you have EDIT rights for each. So, even for the simplest cases - the ON-OFF cases - Oracle IRM adds value by containing the effect to the protected documents rather than the whole desktop environment.

    Now to the granular options between ON and OFF. Thanks to our classification model, we can define rights that enable pasting between documents in the same classification - i.e. between documents that are protected by the same policy. So, if you are working on this month's financial report and you want to pull some data from last month's report, you can simply cut and paste between the two documents. The two documents are classified the same way, subject to the same policy, so the content is equally safe in both documents. However, if you try to paste the same data into an unprotected document or a document in a different classification, you can be prevented. Thus, the control balances legitimate user requirements to allow pasting with legitimate information security concerns to keep data protected. We can take this further. You may have the right to paste between related classifications of document. So, the CFO might want to copy some financial data into a board document, where the two documents are sealed to different classifications. The CFO's rights may well allow this, as it is a reasonable thing for a CFO to want to do. But policy might prevent the CFO from copying the same data into a classification that is accessible to external parties. The above option, to copy between classifications, may be for specific classifications or open-ended. That is, your rights might enable you to go from A to B but not to C, or you might be allowed to paste to any classification subject to your EDIT rights. As for so many features of Oracle IRM, our classification-based rights model makes this type of granular control really easy to manage - you simply define that pasting is permitted between classifications A and B, but omit C. Or you might define that pasting is permitted between all classifications, but not to unprotected locations. The classification model enables millions of documents to be controlled by a few such rules (a toy model of such rules is sketched below). Finally, you MIGHT have the option to paste anywhere - such that unprotected copies may be created. This is rare, but a legitimate configuration for some users, some use cases, and some classifications - but not something that you have to permit simply because the alternative is too restrictive. As always, these rights are defined in user roles - so different users are subject to different clipboard controls as required in different classifications.

    So, where most solutions offer just two clipboard options - ON-OFF or ON-but-encrypt-everything-you-touch - Oracle IRM offers real granularity that leverages our classification model. Indeed, I believe it is the lack of a classification model that makes such granularity impractical for other IRM solutions, because the matrix of rules for controlling pasting would be impossible to manage - there are so many documents to consider, and more are being created all the time.
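
    To make the idea concrete, here is a toy model of classification-based paste rules. It is purely illustrative, with invented classification names; it is not Oracle IRM's actual API or policy format:

        ' Toy model: pasting is allowed within a classification, plus any
        ' explicitly permitted source->target pairs. Same-classification pastes
        ' are always fine; "FinanceMonthly->BoardPapers" is a CFO-style
        ' related-classification right. Note there is no rule allowing a paste
        ' to "ExternalPartners" and none to "Unprotected".
        Module PasteRules
            Private ReadOnly allowedPairs As New HashSet(Of String) From {
                "FinanceMonthly->FinanceMonthly",
                "FinanceMonthly->BoardPapers"
            }

            Public Function CanPaste(ByVal source As String, ByVal target As String) As Boolean
                Return allowedPairs.Contains(source & "->" & target)
            End Function
        End Module

        ' CanPaste("FinanceMonthly", "BoardPapers")      -> True
        ' CanPaste("FinanceMonthly", "ExternalPartners") -> False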

    Read the article

  • What are the options for simple Ajax calls for a Java webapp?

    - by Cedric Martin
    I've got a very simple need and I don't know what the options are. If I simplify, users see a web page like this, served by a Java webapp:

    [-] red
    [x] green
    [-] blue
    [-] yellow

    The selected color is green

    And then I want the user to be able to select the yellow color and have the part of the page containing the relevant text change to:

    [-] red
    [-] green
    [-] blue
    [x] yellow

    The selected color is yellow

    Basically I want something a bit more user friendly than simply using HTTP GET all the time. There shall be a lot of options the user can select from, and this shall affect an (HTML formatted) text displayed on the page. And I want the user to see his change as soon as possible, without the page fully reloading and without being redirected to another page. There shall be a client/server round-trip (the information to display depending on the options selected ain't available on the client side, so I cannot do it all in JavaScript in the browser). I'd like to use Ajax requests but I don't know which way to go: jQuery, GWT, or something else? What are my options and what would be the pros and cons of the various approaches?

    P.S: I'm very familiar with Java (SCP since the last century and basically being a Java programmer for the last 12 years or so) but not familiar at all with JavaScript (though I did hack a few Ajaxy calls years ago, way before great libraries existed).

    Read the article

  • What micro web-framework has the lowest overhead but includes templating

    - by Simon Martin
    I want to rewrite a simple, small (10 page) website, and besides a contact form it could be written in pure HTML. It is currently built with classic ASP and Dreamweaver templates. The reason I'm not simply writing 10 HTML pages is that I want to keep the layout all in one place, so I would need either includes or a master page. I don't want to use Dreamweaver templates, or batch processing (like org-mode), because I want to be able to edit using Notepad (or Visual Studio), since occasionally I might need to edit a file on the server (Go Daddy's IIS admin interface will let me edit text). I don't want to use ASP.NET MVC or WebForms (which I use in my day job) because I don't need all the overhead they bring with them when essentially I'm serving up 9 static files, 1 contact form and 1 list of clubs (that I aim to use jQuery to filter). The shared hosting package I have on Go Daddy seems to take a long time to spin up when serving aspx files.

    Currently the clubs page is driven from an MS SQL database that I try to keep up to date by manually checking the dojo locator on the main HQ pages and editing the entries myself, which is, again, way over the top. I aim to get a text file with the club details (probably in JSON or XML format) and use that as the source for the clubs page. There will need to be a bit of programming for this, as the HQ site is unable to provide an extract / feed, so something will have to scrape the site periodically to update my clubs persistence file. I'd like that to be automated - but I'm happy to have it triggered on a visit to the clubs page so I don't need to worry about scheduling a job. I would probably have a separate process that updates the persistence file and has nothing to do with the rest of the site. Ideally I'd like to use Mercurial (or Git) to publish. I know Bitbucket (and GitHub) both serve static-page sites, so they wouldn't work in this scenario (dynamic pages and a contact form), but that's the model I'd like to use if there is such a thing.

    My requirements are:
    - A simple templating system: one place to define headers, footers, menus etc. that can be edited using just Notepad.
    - A very minimal / lightweight framework. I don't need a monster for 10 pages.
    - Must run either on IIS7 (shared Go Daddy Windows hosting) or another free host.

    (A tiny pre-generation sketch of the templating idea follows below.)
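
    For what it's worth, here is a minimal sketch of the "keep the layout in one place" idea as a pre-generation step in VB.NET. It is one hypothetical approach among many, with invented file names and placeholders (layout.html, a pages folder, {{title}} and {{body}} markers), not a recommendation of a specific framework:

        ' Sketch: stamp each page body into a single shared layout, producing
        ' plain .html files that any host can serve. Run it before publishing.
        Imports System.IO

        Module TinyTemplater
            Sub Main()
                Dim layout = File.ReadAllText("layout.html") ' contains {{title}} and {{body}}
                Directory.CreateDirectory("site")
                For Each page In Directory.GetFiles("pages", "*.html")
                    Dim body = File.ReadAllText(page)
                    Dim title = Path.GetFileNameWithoutExtension(page)
                    Dim output = layout.Replace("{{title}}", title).Replace("{{body}}", body)
                    File.WriteAllText(Path.Combine("site", Path.GetFileName(page)), output)
                Next
            End Sub
        End Module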

    Read the article

  • Setting Up IRM Test Content

    - by martin.abrahams
    A feature of the 11g IRM Server that sometimes gets overlooked is the ability to set up some test content that any IRM user can access to verify that their IRM Desktop can reach the server, authenticate successfully, and render protected content successfully. Such test content is useful for new users, and in troubleshooting scenarios. Here's how to set up some test content...

    In the management console, go to IRM - Administration - Test Content, as shown. The console will display a list of test content - initially an empty list. Use the Add option to specify the URL of a document or image, and define one or more labels for the test content in whichever languages your users favour. Note that you do not need to seal the image or document in order to use it as test content. Nor do you need to set up any rights for the test content. The IRM Server will handle the sealing and rights assignment automatically such that all authenticated users are authorised to view the test content. Repeat this process for as many different types of content as you would like to offer for test purposes - perhaps a Word document, a PDF document, and an image. To keep things simple the first time I did this, I used the URL of one of the images in the IRM Server's UI - so there was no problem with the IRM Server being able to reach that image. Whatever content you want to use, the IRM Server needs to be able to reach it at the URL you specify.

    Using Test Content

    Open a browser and browse to the URL that the IRM Desktop normally uses to access the IRM Server, for example: http://irm11g.oracle.com/irm_desktop If you are not sure, you can find this URL in the Servers tab of the IRM Options dialog. Go to the Test tab, and you will see your test content listed. By opening one of the items, you can verify that your IRM Desktop is healthy and that you can authenticate to the IRM Server.

    Read the article

  • Ghost team foundation build controllers

    - by Martin Hinshelwood
    Quite often after an upgrade there are things left over. Most of the time they are easy to delete, but sometimes it takes a little effort. Even rarer are those times when something just will not go away no matter how hard you try. We have had a ghost team build controller hanging around for a while now, and it had defeated my best efforts to get rid of it. The build controller was from our old TFS server from before our TFS 2010 Beta 2 upgrade and was really starting to annoy me. Every time I tried to delete it I got the message:

    "Controller cannot be deleted because there are builds in progress" - Manage Build Controller dialog

    Figure: Deleting a ghost controller does not always work.

    I ended up checking all of our 172 Team Projects for the build that was queued, but did not find anything. Jim Lamb pointed me to the “tbl_BuildQueue” table in the Team Project Collection database and sure enough there was the nasty little beggar.

    Figure: The ghost build was easily spotted

    Adam Cogan asked me: “Why did you suspect this one?” Well, there are a number of things that led me to suspect it:
    - QueueId is very low: look at the other items, they are in the thousands, not single digits.
    - ControllerId: I know there is only one legitimate controller, and I am assuming that 6 relates to “zzUnicorn”.
    - DefinitionId: this is a very low number; I looked it up in “tbl_BuildDefinition” and it did not exist.
    - QueueTime: as we did not upgrade to TFS 2010 until late 2009, a date of 2008 for a queued build is very suspect.
    - Status: a status of 2 means that it is still queued.

    This build must have been queued long ago when we were using TFS 2008, probably a beta, and it never got cleaned up. As controllers are new in TFS 2010 it would have created the “zzUnicorn” controller to handle any build servers that already existed. I had previously deleted the agent, but leaving the controller just looks untidy. Now that the ghost build has been identified there are two options:
    - Delete the row: I would not recommend ever deleting anything from the database to achieve something in TFS. It is really not supported.
    - Set the Status to cancelled (recommended): this is the best option, as TFS will then clean it up itself (a sketch of the update follows below).

    So I set the Status of this build to 2 (cancelled) and sure enough it disappeared after a couple of minutes, and I was then able to delete the “zzUnicorn” controller.

    Figure: Almost completely clean

    Now all I have to do is get rid of that untidy “zzBunyip” agent, but that will require rewriting one of our build scripts, which will have to wait for now.

    Technorati Tags: ALM,TFBS,TFS 2010
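
    For reference, the cancellation boils down to a one-line update along these lines. This is a sketch only, with an assumed connection string and QueueId, and per the caveat above, poking the operational store directly is unsupported: back up the collection database and verify the correct "cancelled" status value for your TFS version first.

        ' Sketch: cancel a ghost queued build by flipping its Status column.
        Imports System.Data.SqlClient

        Module CancelGhostBuild
            Sub Main()
                Dim connStr = "Data Source=TFSSQL;Initial Catalog=Tfs_DefaultCollection;Integrated Security=True" ' assumed
                Using conn As New SqlConnection(connStr)
                    conn.Open()
                    Using cmd As New SqlCommand(
                        "UPDATE tbl_BuildQueue SET [Status] = @cancelled WHERE QueueId = @id", conn)
                        cmd.Parameters.AddWithValue("@cancelled", 2) ' value taken from the post; verify for your version
                        cmd.Parameters.AddWithValue("@id", 2)        ' the ghost build's QueueId (assumed)
                        cmd.ExecuteNonQuery()
                    End Using
                End Using
            End Sub
        End Module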

    Read the article

  • Hancon / Hanwang Graphics Tablet not recognised

    - by Martin Kyle
    I'm totally lost. I've just built a new system and installed Ubuntu 12.04. It's my first time with Linux, and getting into the terminal / command line for the first time since IBM DOS 5 and Windows 3.1 has been a steep learning curve. However, the interface works beautifully, apart from the fact that it doesn't recognise my Hanvon Artmaster AM1209. I have sent diagnostics to Digimend, and Favux was kind enough to advise that the tablet should be using the Wacom X driver, as the Hanvon is actually a Hanwang and these should be supported.

    lsusb reports:

    ID 0b57:8501 Beijing Hanwang Technology Co., Ltd

    xinput list reports:

    ⎡ Virtual core pointer                        id=2    [master pointer  (3)]
    ⎜   ↳ Virtual core XTEST pointer              id=4    [slave  pointer  (2)]
    ⎜   ↳ PS/2+USB Mouse                          id=8    [slave  pointer  (2)]
    ⎣ Virtual core keyboard                       id=3    [master keyboard (2)]
        ↳ Virtual core XTEST keyboard             id=5    [slave  keyboard (3)]
        ↳ Power Button                            id=6    [slave  keyboard (3)]
        ↳ Power Button                            id=7    [slave  keyboard (3)]
        ↳ Eee PC WMI hotkeys                      id=9    [slave  keyboard (3)]
        ↳ AT Translated Set 2 keyboard            id=10   [slave  keyboard (3)]

    Favux suggested inspecting /var/log/Xorg.0.log for the tablet, but I cannot see any mention of it, and that is as far as I have got. I've tried researching the problem, but I am struggling with all the new terminology and the fact that I want the PC to be a means to an end, not the end in itself, where I spend the rest of my days tweaking and testing rather than just using it. Hope there is some help out there.

    Read the article

  • Display a JSON-string as a table

    - by Martin Aleksander
    I'm totally new to JSON, and have a JSON string I need to display as a user-friendly table. I have this file, http://ish.tek.no/json_top_content.php?project_id=11&period=week, which shows ID numbers for products (title) and the number of views. The title ID should be connected to this file, http://api.prisguide.no/export/product.php?id=158200, so I can get a table like this:

    ID     | Product Name        | Views
    158200 | Samsung Galaxy SIII | 21049

    How can I do this? (One possible server-side approach is sketched below.)
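
    As one hedged illustration (assuming a .NET back end, which the question does not specify), the feed could be deserialized and rendered as table rows roughly like this. The field names ("title", "views") and the shape of the feed are guesses:

        ' Sketch: turn the top-content JSON feed into HTML table rows.
        Imports System.Net
        Imports System.Web.Script.Serialization

        Module JsonToTable
            Sub Main()
                Dim client As New WebClient()
                Dim json = client.DownloadString("http://ish.tek.no/json_top_content.php?project_id=11&period=week")
                Dim rows = New JavaScriptSerializer().Deserialize(Of List(Of Dictionary(Of String, Object)))(json)
                Console.WriteLine("<table><tr><th>ID</th><th>Views</th></tr>")
                For Each row In rows
                    ' row("title") holds the product ID; a second call to the
                    ' product API would resolve it to a product name.
                    Console.WriteLine("<tr><td>{0}</td><td>{1}</td></tr>", row("title"), row("views"))
                Next
                Console.WriteLine("</table>")
            End Sub
        End Module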

    Read the article

  • Are project managers useful in Scrum?

    - by Martin Wickman
    There are three roles defined in Scrum: Team, Product Owner and Scrum Master. There is no project manager; instead, the project manager's job is spread across the three roles. For instance:
    - The Scrum Master: responsible for the process. Removes impediments.
    - The Product Owner: manages and prioritizes the list of work to be done to maximize ROI. Represents all interested parties (customers, stakeholders).
    - The Team: self-manages its work by estimating and distributing it among themselves. Responsible for meeting their own commitments.

    So in Scrum, there is no longer a single person responsible for project success. There is no command-and-control structure in place. That seems to baffle a lot of people, specifically those not used to agile methods and, of course, PMs. I'm really interested in this and in what your experiences are, as I think this is one of the things that can make or break a Scrum implementation. Do you agree with Scrum that a project manager is not needed? Do you think such a role is still required? Why?

    Read the article

  • Windows expand over 2 monitors in quad-monitor setup

    - by Martin
    I just installed Ubuntu 11.10 with my previous hardware setup: 4 monitors and 2 identical NVIDIA graphics cards. Dragging windows around all 4 monitors works nicely, but when I maximize a window it always expands over 2 screens (2x TwinView). I had a workaround for this in 11.04 but can't remember what it was... Maybe one of you guys has quad monitors up and running with window maximizing on only one screen? My xorg.conf looks like this:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 280.13 (buildd@allspice) Thu Aug 11 20:54:45 UTC 2011
        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 280.13 ([email protected]) Wed Jul 27 17:15:58 PDT 2011

        Section "ServerLayout"
            # Removed Option "Xinerama" "1"
            # Removed Option "Xinerama" "0"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            Screen      1  "Screen1" 0 1080
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "1"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Samsung SMB2220N"
            HorizSync      31.0 - 80.0
            VertRefresh    56.0 - 75.0
            Option         "DPMS"
        EndSection

        Section "Monitor"
            Identifier     "Monitor1"
            VendorName     "Unknown"
            ModelName      "Samsung SMB2220N"
            HorizSync      31.0 - 80.0
            VertRefresh    56.0 - 75.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce GTX 550 Ti"
            BusID          "PCI:2:0:0"
        EndSection

        Section "Device"
            Identifier     "Device1"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce GTX 550 Ti"
            BusID          "PCI:3:0:0"
        EndSection

        Section "Screen"
            # Removed Option "TwinView" "True"
            # Removed Option "MetaModes" "nvidia-auto-select, nvidia-auto-select"
            # Removed Option "metamodes" "CRT-0: nvidia-auto-select 1920x1080 +0+0, CRT-1: nvidia-auto-select 1920x1080 +1920+0"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth   24
            Option         "TwinView" "1"
            Option         "TwinViewXineramaInfoOrder" "CRT-0"
            Option         "metamodes" "CRT-0: nvidia-auto-select +0+0, CRT-1: nvidia-auto-select +1920+0"
            SubSection     "Display"
                Depth      24
            EndSubSection
        EndSection

        Section "Screen"
            # Removed Option "TwinView" "True"
            # Removed Option "MetaModes" "nvidia-auto-select, nvidia-auto-select"
            # Removed Option "metamodes" "CRT-0: nvidia-auto-select 1920x1080 +0+0, CRT-1: nvidia-auto-select 1920x1080 +1920+0"
            Identifier     "Screen1"
            Device         "Device1"
            Monitor        "Monitor1"
            DefaultDepth   24
            Option         "TwinView" "1"
            Option         "TwinViewXineramaInfoOrder" "CRT-0"
            Option         "metamodes" "CRT-0: nvidia-auto-select +0+0, CRT-1: nvidia-auto-select +1920+0"
            SubSection     "Display"
                Depth      24
            EndSubSection
        EndSection

        Section "Extensions"
            Option         "Composite" "Disable"
        EndSection

    Read the article

  • How can architects work with self-organizing Scrum teams?

    - by Martin Wickman
    An organization with a number of agile Scrum teams also has a small group of people appointed as "enterprise architects". The EA group acts as control and gatekeeper for quality and adherence to decisions. This leads to overlaps between team decisions and EA decisions. For instance, the team might want to use library X, or want to use REST instead of SOAP, but the EA does not approve of that. Now, this can lead to frustration when team decisions are overruled. Taken far enough, it can potentially lead to a situation where the EA people "grab" all power and the team ends up feeling demotivated and not very agile at all. The Scrum Guide has this to say about it:

    Self-organizing: No one (not even the Scrum Master) tells the Development Team how to turn Product Backlog into Increments of potentially releasable functionality.

    Is that reasonable? Should the EA team be disbanded? Should the teams refuse, or simply comply?

    Read the article

  • Amazon SOA: database as a Service

    - by Martin Lee
    There is an interesting interview with Werner Vogels which is partly about how Amazon does Service-Oriented Architecture:

    "For us service orientation means encapsulating the data with the business logic that operates on the data, with the only access through a published service interface. No direct database access is allowed from outside the service, and there's no data sharing among the services."

    I do not understand that. Why do they need to 'wrap' a database in some layer if it can already be consumed as a service by other services through database adapters? Does Amazon do that just because they need to expose the database to third parties, or because of anything else? Why is "no direct database access allowed"? What are the advantages of such an architectural decision? (A tiny illustration of the two styles is below.)
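
    As a purely illustrative contrast (invented names, no claim about Amazon's actual code), the difference is whether consumers depend on tables or on a published interface:

        ' Toy contrast between the two integration styles.

        ' Style 1: shared database - every consumer couples to the schema.
        ' SELECT total FROM orders WHERE customer_id = 42   (run by anyone, anywhere)

        ' Style 2: service interface - the schema stays private to the service.
        Public Interface IOrderService
            ''' <summary>The only published way to read order data.</summary>
            Function GetOrderTotal(ByVal customerId As Integer) As Decimal
        End Interface

        ' Consumers program against IOrderService, so the team that owns the
        ' service can change storage, schema or caching without breaking anyone.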

    Read the article

  • Why you need to tag your build servers in TFS

    - by Martin Hinshelwood
    At SSW we use gated check-in for all of our projects. The benefits are based on the number of developers you have working on your project. Let's say you have 30 developers and each developer breaks the build once per month. That could mean that you have a broken build every day! Gated check-ins help, but they have a downside that manifests as queued builds and moaning developers. The way to combat this is to have more build servers, but with that comes complexity. Inevitably you will need to install components that you would expect to be installed on target computers, but how do you keep track of which build servers have which bits? What about a geographically diverse team? If you have a centrally controlled infrastructure you might have build servers in multiple regions, and you don't want teams in Sydney copying files from Beijing and vice versa on a regular basis. So, what is the answer? It's Tags. You can add a set of tags to your agents and then set which tags to look for in the build definition.

    Figure: Open up your Build Controller Manager

    Select “Build | Manage Build Controllers…” to get a list of all of your controllers and the build agents that are associated with them.

    Figure: The list of build agents and their controllers

    Each of these agents might be subtly different. For example, only one of these agents has FTP software installed. This software is required for only one of the many builds we have set up. My ethos for build servers is to keep them as clean as possible and not to install anything that is not absolutely necessary. For me that means anything that does not add a *.targets file is suspect, and should really be under version control and called via the command line from there. So, some of the things you may install are:
    - Silverlight 4 SDK
    - Visual Studio 2010
    - Visual Studio 2008
    - WIX
    - etc.

    You should not install things that will not end up on the target users' computers. For a website that means something different to a client than to a server, but I am sure you get the idea. One thing you can do to make things easier is to create a tag for each of the things that you install. That way developers can find the things they need. We may change to using a more generic tagging structure (like “Web Application” or “WinForms Application”) if this gets too unwieldy, but for now the list of tags is limited.

    Figure: Tags associated with one of our build agents

    Once you have your build agents all tagged up, ALL your builds will start to fail. This is because the default setting for a build is to look for an agent that exactly matches the tags for the build, and we have not added any yet. The quick way to fix this is to change the “Tag Comparison Operator” from “ExactMatch” to “MatchAtLeast” to get your build immediately working.

    Figure: Tag Comparison Operator changed to MatchAtLeast to get builds to run.

    The next thing to do is look for specific tags. You just select from the list of available tags and the controller will make sure you get to a build agent that uses them.

    Figure: I want Silverlight, VS2010 and WIX, but do not care about location.

    And there you go: you can now have build agents for different purposes and regions within the same environment. You can also use name filtering, so if you have a good agent naming convention you can filter by that for regions. For example, your agents might be “SYDVMAPTFSBP01” and “SYDVMAPTFSBP02”, so a name filter of “SYD*” would target all of the Sydney build agents. (A small API sketch for tagging agents follows below.)

    Figure: Agent names can be used for filtering as well

    This flexibility will allow you to build better software by reducing the likelihood of not having a certain dependency on the target machines.

    Figure: Setting the name filter based on server location

    Used in combination there is a lot of power here to coordinate tens of build servers for multiple projects across multiple regions, so your developers get the most out of your environment.

    Technorati Tags: ALM,TFBS,TFS 2010,TFS Admin
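
    If you prefer scripting this over clicking through the dialogs, agents can also be tagged with the TFS 2010 build API. This is a sketch, with an assumed collection URL and our Sydney naming convention used as the filter; try it on a spare agent first:

        ' Sketch: add a "Sydney" tag to every agent whose name starts with SYD.
        Imports Microsoft.TeamFoundation.Client
        Imports Microsoft.TeamFoundation.Build.Client

        Module TagAgents
            Sub Main()
                Dim tpc As New TfsTeamProjectCollection(New Uri("http://tfs:8080/tfs/DefaultCollection")) ' assumed URL
                Dim buildServer = tpc.GetService(Of IBuildServer)()
                For Each controller In buildServer.QueryBuildControllers(True) ' True = include agents
                    For Each agent In controller.Agents
                        If agent.Name.StartsWith("SYD") Then
                            agent.Tags.Add("Sydney")
                            agent.Save()
                        End If
                    Next
                Next
            End Sub
        End Module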

    Read the article

  • Silverlight 4, MVVM and Test-Driven Development

    - by Martin Hinshelwood
    As part of his UK tour, Microsoft's Jesse Liberty will be talking in Edinburgh for an evening on Silverlight 4. [Register Now, there are some places left]

    The Talk
    - MVVM and Silverlight to build test-driven programs
    - Understanding Refactoring and Dependency Injection
    - A walkthrough of a non-trivial application

    The Speaker
    Jesse Liberty, Silverlight Geek, is a Developer Community Program Manager for Microsoft (US). Lately he has been focused on component-based, test-driven, cross-platform line-of-business application development, and has led the development of the open source Silverlight HyperVideo Platform. Liberty is the author of over two dozen books, and his blog is a required resource for Silverlight programmers. His twenty years of programming experience include stints as a Distinguished Software Engineer at AT&T; Vice President of Human-Computer Interaction at Citibank; and Software Architect at PBS/Learning Link.

    The Venue
    We are meeting at Microsoft's offices in Edinburgh in Waterloo Place. This is the building on the corner of North Bridge at the east end of Princes Street. Parking can be found at the nearby Greenside Row car park, which is just off Leith Walk (used for the Omni Centre). The venue is approximately 2-3 minutes' walk from Edinburgh Waverley train station.

    The Agenda
    18:30 Doors open
    19:00 Welcome
    19:10 Part 1
    20:00 Break
    20:10 Part 2
    20:50 Feedback and Prizes
    21:00 End

    [Register Now, there are some places left]

    Technorati Tags: Silverlight,MVVM,TDD

    Read the article

  • Creating a branch for every Sprint

    - by Martin Hinshelwood
    There are a lot of developers using version control these days, but a feature of version control called branching is very poorly understood and remains unused by most developers in favour of labels. Most developers think that branching is hard and complicated. It's not! What is hard and complicated is a bad branching strategy. Just like a bad software architecture, a bad branch architecture, or one that is not adhered to, can prove fatal to a project. When I was at Aggreko we had a fairly successful feature branching strategy (although the developers hated it) that meant that we could have multiple feature teams working at the same time without impacting each other. Now, this had to be carefully orchestrated, as it was a Business Intelligence team and many of the BI artefacts do not lend themselves to merging.

    Today at SSW I am working on a Scrum team delivering a product that will be used by many hundreds of developers. SSW SQL Deploy takes much of the pain out of upgrading production databases when you are not using the Database projects in Visual Studio. With Scrum, each Scrum Team works for a fixed period of time on a single sprint. You can have one or more Scrum Teams involved in delivering a product, but all the work must be merged and tested, ready to be shown to the Product Owner at the Sprint Review meeting at the end of the current Sprint. So, what does this mean for a branching strategy? We have been using a “Main” (sometimes called “Trunk”) line and doing a branch for each sprint. It's like feature branching, but with only ONE feature in operation at any one time, so there are no conflicts. (The basic command sequence is sketched below.)

    Figure: DEV folder containing the Development branches.

    I know that some folks advocate applying a label at the start of each Sprint and then rolling back if you need to, but I have always preferred the security of a branch.

    Like:
    - Being able to create a release from Main that has Sprint 3 code even while Sprint 4 is being worked on.
    - Being sure I can always create a stable build on request.
    - Being able to guarantee a version (labels are not auditable).
    - Being able to abandon the sprint without having to delete the code (rare, I know, but it would be a mess if it happened).
    - Being able to see the flow of change sets through to a safe release.
    - It helps you find invalid dependencies when merging to Main, as there may be some file that is in everyone's Sprint branch but never got checked in. (We had this at the merge of Sprint 2.)
    - If you are always operating this way as a standard, it makes it easier to add more Scrum teams in the future.
    - Muscle memory of this way of working.

    Don't like:
    - Additional DB space for the branches.
    - Baseless merging between sprint branches when changes are directly ported. Note: I do not think we will ever attempt this!
    - Maybe a bit tougher to see the history between sprint branches, since the changes go up through Main and down to another sprint branch. Note: what you would have to do is see which Sprint the changes were made in and then check the history of the same file in that Sprint. A little added complexity that you would have to deal with anyway with multiple teams.
    - Over time, you can end up with a lot of old unused sprint branches. Perhaps destroy with /keephistory can help in this case. Note: we ALWAYS delete the Sprint branch after it has been merged into Main. That is the theory anyway, and as you can see from the images, Sprint2 has already been deleted.

    Why take the chance of having a problem rolling back or wanting to keep some of the code, when you can just abandon a branch and start a new one? It just seems easier and less painful to use a branch to me! What do you think?

    Technorati Tags: TFS,TFS2010,Software Development,ALM,Branching
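
    For the record, the per-sprint cycle boils down to a handful of tf.exe commands, along these lines (the $/SSW paths are examples standing in for the folder structure shown in the figure):

        rem Start the sprint: branch Main into a new sprint branch
        tf branch $/SSW/Main $/SSW/DEV/Sprint4 /checkin

        rem End the sprint: merge the finished work back to Main and check in
        tf merge $/SSW/DEV/Sprint4 $/SSW/Main /recursive
        tf checkin /comment:"Merge Sprint4 to Main"

        rem Tidy up: remove the old branch but keep its history
        tf destroy $/SSW/DEV/Sprint3 /keephistory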

    Read the article

  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route-finding routine in VB.NET for an online game I play, and I'm searching for the fastest route-finding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches - more of a matrix.

    What I need is a fast pathfinding algorithm. I have already implemented an A* routine and Dijkstra's; both find the best path but are too slow for my purposes - a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. This website claims to use D*, which I have looked into. That algorithm seems more appropriate for dynamic maps rather than one that does not change - unless I misunderstand its premise. So is there something faster I can use for a map that is not your typical tile/polygon base? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* - maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick euclidean calculation from the node to the goal.

    In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump.

        Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer
            'hCount += 1
            'If hCount Mod 0 = 0 Then
            '    Return hCache
            'End If

            'Initialize variables to be filled
            Dim x1, x2, y1, y2, z1, z2 As Integer

            'LINQ queries for solar system data
            Dim systemFromData = From result In jumpDataDB.coordinateDatas
                                 Where result.systemId = startSystem
                                 Select result.x, result.y, result.z
            Dim systemToData = From result In jumpDataDB.coordinateDatas
                               Where result.systemId = endSystem
                               Select result.x, result.y, result.z

            'LINQ execute: fill variables with solar system data for from and to system
            For Each solarSystem In systemFromData
                x1 = solarSystem.x
                y1 = solarSystem.y
                z1 = solarSystem.z
            Next
            For Each solarSystem In systemToData
                x2 = solarSystem.x
                y2 = solarSystem.y
                z2 = solarSystem.z
            Next

            Dim x3 = Math.Abs(x1 - x2)
            Dim y3 = Math.Abs(y1 - y2)
            Dim z3 = Math.Abs(z1 - z2)

            'Calculate distance and round
            'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2)))
            Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3)))
            'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2)

            'hCache = distance
            Return distance
        End Function

    And the main loop, the other 30%:

        'Begin search
        While openList.Count() <> 0
            'Set current system and move node to closed
            currentNode = lowestF()
            move(currentNode.id)

            For Each neighborNode In neighborNodes
                If Not onList(neighborNode.toSystem, 0) Then
                    If Not onList(neighborNode.toSystem, 1) Then
                        Dim newNode As New nodeData()
                        newNode.id = neighborNode.toSystem
                        newNode.parent = currentNode.id
                        newNode.g = currentNode.g + neighborNode.distance
                        newNode.h = findDistance(newNode.id, endSystem)
                        newNode.f = newNode.g + newNode.h
                        newNode.security = neighborNode.security
                        openList.Add(newNode)
                        shortOpenList(OLindex) = newNode.id
                        OLindex += 1
                    Else
                        Dim proposedG As Integer = currentNode.g + neighborNode.distance
                        If proposedG < gValue(neighborNode.toSystem) Then
                            changeParent(neighborNode.toSystem, currentNode.id, proposedG)
                        End If
                    End If
                End If
            Next

            'Check to see if done
            If currentNode.id = endSystem Then
                Exit While
            End If
        End While

    If clarification is needed on my spaghetti code, I'll try to explain.
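
    One observation, offered as a hedged suggestion rather than a definitive fix: the profile numbers point not at A* itself but at the two LINQ-to-database queries executed on every heuristic call. Preloading all system coordinates into a dictionary once, before the search, makes findDistance a pure in-memory calculation. A sketch, mirroring the structure and field names of the code above:

        ' Sketch: preload coordinates once so the heuristic never touches the DB.
        Public Structure Coord
            Public x As Integer
            Public y As Integer
            Public z As Integer
        End Structure

        Private coords As Dictionary(Of Integer, Coord)

        Private Sub loadCoordinates()
            coords = New Dictionary(Of Integer, Coord)()
            For Each row In jumpDataDB.coordinateDatas
                Dim c As Coord
                c.x = row.x : c.y = row.y : c.z = row.z
                coords(row.systemId) = c
            Next
        End Sub

        Private Function findDistanceCached(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer
            Dim a = coords(startSystem)
            Dim b = coords(endSystem)
            Dim x3 = Math.Abs(a.x - b.x)
            Dim y3 = Math.Abs(a.y - b.y)
            Dim z3 = Math.Abs(a.z - b.z)
            Return firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3)))
        End Function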

    Read the article

  • Scrum with Team Foundation Server 2010 Done

    - by Martin Hinshelwood
    Since I joined SSW as a Solution Architect, its Chief Architect, Adam Cogan, has been mentoring me and pushing me to do better. One of the things that I have been wanting to do since the first DDD Scotland was to present a session. For DDD Scotland 2010 Adam suggested that I submit a double session on “Better project management with Team Foundation Server 2010”. So, with some apprehension, I submitted two sessions as Part A and Part B.

    Download DDD Scotland - Scrum with Team Foundation Server 2010

    How surprised was I when, after the attendees had finished casting their votes, both sessions were in the top 20, one in the top 5. In an effort to promote diversity in sessions, the DDD committee try to make sure that each presenter has only one session, so I would have to compress SSW's presentation into one hour. Around this time SSW embarked on its continuing adventures with Scrum, and Microsoft started heavily investing in Scrum for its internal use. I decided to do a slightly different session, but one that would still meet the agenda and goal of the billed session: to provide “better project management with Team Foundation Server 2010”. And so Scrum with Team Foundation Server 2010 was born. At this stage I really have to thank Aaron Bjork, who provided me with many of the slides and animations, as I really can't work PowerPoint.

    On the 27th of April I presented the session for the Aberdeen Partner Group, and then on the 8th of May I presented at DDD Scotland.

    Figure: Some of the presenters and organisers of DDD Scotland

    I mentioned quite a few of SSW's Rules to Better Scrum Using TFS, and I have uploaded my presentation to SkyDrive.

    Download DDD Scotland - Scrum with Team Foundation Server 2010

    Technorati Tags: DDD Scot,Scrum,TFS 2010,SSW

    Read the article

  • AS3 Stage3D Mouse click problem?

    - by Martin K
    I have a problem with mouse interaction and Stage3D. The only way I have found to register to listen for mouse clicks and interact with Stage3D is to add a mouse eventListener directly to the .stage. However, this results in the mouse click firing any time I click anywhere in the Flash application, even if there is an overlaid 2D menu where the user intended to click. I.e. I have a 3D application running in the background which listens for clicks, and I have some floating user interface elements in the foreground; ideally, clicking a button in the foreground would NOT fire a click event that Stage3D would register. Any idea how to solve this problem?

    Read the article

  • What have we lost from computers 20 years ago

    - by Martin Beckett
    Following the theme of the "I can't believe computers used floppy disks" and "did you have debuggers 20 years ago" questions (and because even putting my age in hex I don't look young anymore): what have we lost from computers 20 years ago?

    - VMS's versioning file system. Make a mistake? No problem, myprog.cpp;5 is still there.
    - Home computers that powered up to give you a word processor or a BASIC prompt in <0.5 sec.
    - User ports, serial ports and ADCs that you could control things with - without an Arduino.
    - APIs that actually did what they said (even DOS interrupts), without having to guess and experiment your way through layers of frameworks.

    Read the article

  • FMW Workshops Programme, May and June 2010

    - by [email protected]
    FMW WORKSHOP PROGRAMME, May and June 2010

    Enterprise 2.0
    WORKSHOP | DATE | LOCATION
    Enterprise 2.0 and Enterprise Social Networks (WebCenter Spaces) | 03/05/10 | Madrid
    Digitisation (IP/M) | 10/05/10 | Madrid
    Document Management and Records Management (UCM/URM) | 11/05/10 | Barcelona
    Web Content Management and Portals (UCM + WC Suite) | 25/05/10 | Barcelona
    Document Management and Records Management (UCM/URM) | 19/05/10 | Madrid
    Web Content Management and Portals (UCM + WC Suite) | 31/05/10 | Madrid

    Service Oriented Architecture (SOA)
    WORKSHOP | DATE | LOCATION
    Building Business Models with BPEL 11g | 13/05/10 | Madrid
    Business Process Automation with Oracle BPM | 20/05/10 | Madrid
    Oracle WebLogic | 27/05/10 | Madrid
    SOA Lifecycle Management on an Enterprise Repository | 11/05/10 | Madrid
    High-Performance Application Development with Oracle Coherence | 18/05/10 | Madrid
    Data Integration Platform (ODI) | 25/05/10 | Madrid
    Business Activity Monitoring 11g (BAM) | 13/05/10 | Barcelona

    Register:

    Read the article

  • A talk about observer pattern

    - by Martin
    As part of a university course I'm taking, I have to give a 10-minute talk about the observer pattern. So far these are my points:

    - What is it? (definition)
    - Polling is an alternative
    - Polling vs. Observer: when Observer is better, and when Polling is better (taken from here)
    - A statement that the Mediator pattern is worth checking out. (I won't have time to cover it in 10 minutes.)

    I would be happy to hear about:
    - Any suggestions to add something
    - Another alternative (if one exists)
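
    If a concrete slide example helps, the pattern maps directly onto VB.NET's built-in events. A minimal, self-contained sketch (the sensor scenario and names are invented for the talk):

        ' Minimal observer: the subject raises an event; observers subscribe
        ' instead of polling the subject's state in a loop.
        Public Class TemperatureSensor
            Public Event ReadingChanged(ByVal celsius As Double)

            Public Sub Update(ByVal celsius As Double)
                RaiseEvent ReadingChanged(celsius)  ' notify all subscribers at once
            End Sub
        End Class

        Module Demo
            Sub Main()
                Dim sensor As New TemperatureSensor()
                ' Subscribe two observers; neither ever has to poll.
                AddHandler sensor.ReadingChanged, Sub(c) Console.WriteLine("Display: " & c.ToString() & " °C")
                AddHandler sensor.ReadingChanged, Sub(c) If c > 30 Then Console.WriteLine("Alarm!")
                sensor.Update(25.0)
                sensor.Update(31.5)
            End Sub
        End Module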

    Read the article

  • Sony Vaio Webcam

    - by Martin H
    I have an in-built webcam in my Sony Vaio VGN-FE21M. lsusb shows me the device:

    Bus 001 Device 003: ID 0ac8:c002 Z-Star Microelectronics Corp. Visual Communication Camera VGP-VCC1

    and it works within Skype most of the time. Sometimes, however, lsusb shows me the exact same output, but trying to test my cam in v4l2ucp I get the error:

    Unable to open file /dev/video0 No such file or directory

    A reboot fixes the problem, but I just can't pinpoint what the difference is between a working and a non-working webcam and the time/instance this occurs. It would probably be a fix if I could unmount and remount the cam, but how can I do this with in-built devices? Any other advice is welcome as well.

    Read the article

  • How to enable Unity 3D support in 12.04 using open-source drivers for RadeonHD cards?

    - by martin
    As the title says, I can't enable Unity 3D support when I'm using open-source drivers (xorg-edgers). I have an XFX Radeon HD 6950, by the way. If I install the proprietary 12.3 drivers from AMD it works, but I get poorer 2D performance than with the open-source drivers, and I also get some freezes and lock-ups at random. Because of this I'm trying the open-source drivers, and so far there have been no issues at all, except this one.

    Running this command

    $ /usr/lib/nux/unity_support_test -p

    shows this:

    OpenGL vendor string: VMware, Inc.
    OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x300)
    OpenGL version string: 2.1 Mesa 8.0.2
    Not software rendered: no
    Not blacklisted: yes
    GLX fbconfig: yes
    GLX texture from pixmap: yes
    GL npot or rect textures: yes
    GL vertex program: yes
    GL fragment program: yes
    GL vertex buffer object: yes
    GL framebuffer object: yes
    GL version is 1.4+: yes
    Unity 3D supported: no

    And this command

    $ lspci -nn | grep VGA

    shows:

    01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Cayman PRO [Radeon HD 6950] [1002:6719]

    So, is this normal? Do I need to go back to the proprietary drivers to enable Unity 3D? If anyone can give me help, I'll much appreciate it.

    Read the article
