Search Results

Search found 945 results on 38 pages for 'patrick burton'.

Page 5/38

  • rsnapshot intervals in configuration file…

    - by Patrick
    A simple question about rsnapshot. To perform daily backups I'm going to add lines to cron on my Ubuntu machine. Why, then, do I also have these lines in rsnapshot.conf?

        #########################################
        # BACKUP INTERVALS                      #
        # Must be unique and in ascending order #
        # i.e. hourly, daily, weekly, etc.      #
        #########################################
        interval hourly  6
        interval daily   7
        interval weekly  4
        #interval monthly 3

    If I use cron, should I disable them? Thanks. P.S. I've just realized that my crontab still has both "hourly" and "daily" entries. Should I uncomment only the intervals I actually use in the crontab? And what is the point of specifying hourly in rsnapshot.conf if it is already specified in cron? I'm a bit confused.

        # crontab -e
        0 */4 * * *   /usr/local/bin/rsnapshot hourly
        30 23 * * *   /usr/local/bin/rsnapshot daily
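    For context on why both files exist: the interval lines tell rsnapshot how many snapshots of each level to retain (its rotation policy), while cron only decides when each level runs, so rsnapshot needs both and the level names must match. Annotated against the question's own numbers (a sketch; note that rsnapshot requires tabs, not spaces, between fields):

        interval hourly 6    # keep hourly.0 .. hourly.5; cron runs this level every 4 hours
        interval daily  7    # keep daily.0 .. daily.6; the oldest hourly rotates into daily.0

    Commenting out an interval line disables that level even if cron still invokes it, so the two files should be kept in sync.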

    Read the article

  • DevExpress XAF, Behavior Driven Development (BDD), Domain Driven Development (DDD) and more – Introduction

    - by Patrick Liekhus
    OK. I admit it. I have been horrible at this blogging thing. However, I have made a commitment to get better at it, so here goes. I have many crazy ideas when it comes to coding and how to make my processes better, and now is the time to get them down on paper and get your feedback. These ideas might not be nearly as wild and crazy as Charlie Sheen, but at least they help me get through my coding assignments. So let's start by laying out the vision and objectives of this exercise. I have been trying to come up with the best set of tools, tips and practices so I can get a small team to be as productive as possible without burning out my resources. My thoughts tend to lean towards the coding practices first, as this is what I have been doing for years. However, when one looks at the process as a whole, we need to remember to keep the users in mind. If we don't have a user to accept our application, do we really have an application in the first place? I have been using a commercial framework from DevExpress called eXpress Application Framework (XAF), with their eXpress Persistent Objects (XPO) behind the scenes, for a few years. We have had tremendous success with it and even implemented a code generation layer to save us some time. Now we want more! My goal here is to create a technical stack that employs as many UIs as possible, while staying true to the layers and documenting the process along the way. I will continue this series of posts, walking through each step as I work on it. Right now, here is what I have planned:
    - Defining the solution
    - SCRUM/Agile story planning
    - Overview of the architectural plan
    - Feature Driven Development
    - Domain Driven Development
    - Persistence layer with XPO
    - Windows UI with XAF/XPO
    - Web UI with XAF/XPO
    - OData services layer
    - Windows Mobile UI
    - Android UI
    - iPhone UI
    - Blackberry UI
    - Excel UI
    - Outlook UI
    - Lessons learned
    I will explain the solution that I plan to implement in the next post. Thanks again, and let me know what you think.

    Read the article

  • Defining the Features we would like to see

    - by Patrick Liekhus
    OK, now that we have a very rough idea of what we are building, let's get a list of the top features this application needs to support. In this list we are not prioritizing yet, just getting the high-level backlog of items this system must do down on paper:
    - Add a new task to my work queue
    - Change the status of a task
    - Print a hard copy of the task list by day for my records
    - Log a phone conversation
    - A manager should be able to assign tasks to another user
    - How do we log in?
    - Change the Covey roles per user
    - Manage the statuses used
    - Manage the Covey quadrants
    Can we make this available on the following user interfaces?
    - Windows desktop
    - Web browser
    - Silverlight (WPF)
    - Excel add-in
    - Outlook add-in
    - Android devices
    - iPhone devices
    - Windows Mobile devices
    - Blackberry devices
    While this looks like a simple spreadsheet, it can get complex and busy quickly. Next time we will turn this into a product backlog and prioritize the features we would like to see.

    Read the article

  • XAF DSL Tool Needs a new Team Lead

    - by Patrick Liekhus
    I have enjoyed my time on this project and have used it in several production projects. However, with the enhancements in Visual Studio 2010 and the Entity Framework, the DSL tool no longer makes sense for me to support. With that said, I am looking for someone who has an interest in continuing the project. I have moved my attention to a new project, Entity Framework Extensions for XAF. We are converting the current DSL tool into the Entity Framework extensions; the same code generation and everything else still works, but the visual design surface is much easier to work with. If you have any questions, please let me know. Also, please take a moment to look at the new project, as this is where all my effort going forward will be focused. Thanks again for all the support on my vision thus far, and enjoy.

    Read the article

  • What installation size should I use?

    - by Patrick
    I am installing Ubuntu with the Windows installer, but I am using Windows 8. I need to know whether there will be problems installing it under Windows 8, and which installation size I should use (4 GB, 18 GB, etc.). Also, is it possible to install it on a USB drive? The option was available in the installer, but I was not sure whether it was safe. I would really appreciate some answers.

    Read the article

  • What are the benefits of closures, primarily for PHP?

    - by Patrick
    I am beginning the process of moving code over to PHP 5.3, and one of the most highly touted features of PHP 5.3 is the ability to use closures. My understanding of closures is that they allow anonymous functions, can be assigned to variable names, and have interesting scoping abilities. From my point of view, the only apparent benefit in real-world applications is the reduction of clutter in the namespace, because closures are anonymous. Am I wrong about this? Should I be trying to use closures wherever I code? EDIT: I have already read this post on JavaScript closures.
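    To make the trade-off concrete, the scoping ability goes beyond namespace tidiness: a PHP 5.3 closure can capture local state with use. A minimal sketch (the multiplier example is invented for illustration):

        <?php
        // Build a one-off callback that captures $factor from the enclosing scope.
        function makeMultiplier($factor) {
            return function ($x) use ($factor) {  // 'use' copies $factor into the closure
                return $x * $factor;
            };
        }

        $triple = makeMultiplier(3);
        echo $triple(7), "\n";   // prints 21

        // Also handy inline with array functions: no named helper in the namespace.
        $doubled = array_map(function ($n) { return $n * 2; }, array(1, 2, 3));
        print_r($doubled);       // the array 2, 4, 6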

    Read the article

  • Best books for C

    - by Patrick
    Hi, I really want to gain strong skills in C programming, and I know that the best and only way is hard work and lots of practice. However, I have found so many tutorials and books on the net about learning the C language. I'm just looking for one or two good books on C that I can learn from and build strong skills with. Does anyone know of such a great book (or books) for C programming, please? (Sorry for the duplication if this question already exists on the forum.) Regards!

    Read the article

  • NDepend 4 – First Steps

    - by Ricardo Peres
    Introduction

    Thanks to Patrick Smacchia I had the chance to test NDepend 4. I can only say: awesome! This will be the first of a series of posts on NDepend, where I will talk about my discoveries. Keep in mind that I am just starting to use it, so more experienced users may find these too basic; I just hope I don't say anything foolish! I must say that I am in no way affiliated with NDepend and I never actually met Patrick.

    Installation

    No installation program (a curious decision, and I'm not against it): just unzip the files to a folder and run the executable. It will optionally register itself with Visual Studio 2008, 2010 and 11 as well as Red Gate's Reflector; it also automatically looks for updates. NDepend can be used either as a stand-alone program (with or without a GUI) or from within Visual Studio or Reflector.

    Getting Started

    One thing that really pleases me is the Getting Started section of the stand-alone program, with links to pages on NDepend's web site featuring detailed explanations, which usually include screenshots and small videos (under 5 minutes). There's also a How do I section with hierarchical navigation that guides us through the major features so that we can easily find what we want.

    Usage

    There are two basic ways to use NDepend: analyze .NET solutions, projects or assemblies, or compare two versions of the same assembly. I have so far not used NDepend to compare assemblies, so I will first talk about the first option. After selecting a solution and some of its projects, it generates a single HTML page with a highly detailed report of the analysis it produced. This includes metrics such as the number of lines of code, IL instructions, comments, types, methods and properties, the calculation of cyclomatic complexity, coupling, and lots of other indicators, typically grouped by type, namespace and assembly. The HTML also includes some nice diagrams depicting assembly dependencies, type and method relative proportions (according to the number of IL instructions, I guess) and assembly analysis relating to abstractness and stability. Useful, I would say.

    Then there are the rules: NDepend tests the target assemblies against a set of more than 120 rules, grouped in the categories Code Quality, Object Oriented Design, Design, Architecture and Layering, Dead Code, Visibility, Naming Conventions, Source Files Organization and .NET Framework Usage. The full list can be configured in the application, and an explanation of each rule can be found on the web site. Rules can pass, be violated, or be violated in a critical manner, and the HTML report contains the violated rules, their queries (more on this later) and the results. The HTML uses some nice JavaScript effects, which allow paging and sorting of tables, so it's nice to use. Similar to the rules, there are queries that display results for a number of questions (about 200), grouped as Object Oriented Design, API Breaking Changes (for assembly version comparison), Code Diff Summary (also for version comparison) and Dead Code. The difference between queries and rules is that queries are not classified as passed, violated or critically violated; they just present results. Both queries and rules are expressed in CQLinq, which is a very powerful LINQ derivative specific to code analysis. All of the included rules and queries can be enabled or disabled, and new ones can be added, with IntelliSense to help.
    Besides the HTML report file, the NDepend application can be used to explore all analysis results, compare different versions of analysis reports, and run custom queries.

    Comparison to Other Analysis Tools

    Unlike StyleCop, NDepend only works with assemblies, not source code, so you can't expect it to be able to enforce bracket placement, for example. It is more similar to FxCop, but you don't have the option to analyze at the IL level; that is, it goes no deeper than the number of IL instructions and the complexity.

    What's Next

    In the next few days I'll continue my exploration with a real-life test case.

    References

    The NDepend web site is http://www.ndepend.com/. Patrick keeps an updated blog at http://codebetter.com/patricksmacchia/ and he regularly monitors StackOverflow for questions tagged NDepend, which you can find at http://stackoverflow.com/questions/tagged/ndepend. The default list of CQLinq rules, queries and statistics can be found at http://www.ndepend.com/DefaultRules/webframe.html. The syntax itself is described at http://www.ndepend.com/Doc_CQLinq_Syntax.aspx and its features at http://www.ndepend.com/Doc_CQLinq_Features.aspx.
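    To give a flavor of CQLinq, here is the shape a typical default rule takes (a small sketch in the documented CQLinq style; the complexity threshold is illustrative):

        // "warnif count > 0" turns the query into a rule that fails
        // whenever any method matches the condition.
        warnif count > 0
        from m in JustMyCode.Methods
        where m.CyclomaticComplexity > 20
        orderby m.CyclomaticComplexity descending
        select new { m, m.CyclomaticComplexity }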

    Read the article

  • Installing APC on lighttpd + PHP 5.2 on Ubuntu 10

    - by Patrick
    I've found the following tutorial for installing APC on servers running lighttpd + PHP 5.2 on Ubuntu 10: http://www.assembla.com/wiki/show/socialinguatribe/Integrating_APC_Into_PHP5_And_Lighttpd However, when I run "sudo pecl install apc" the package is only downloaded, not installed (i.e. I'm never asked the next question, and the apc.ini file is not created at all). If I run only "pecl install apc" I get a warning (no permission to write some files). Thanks.
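    For reference, a pecl build that completes normally still has to be wired into PHP by hand on most setups; the usual sequence looks roughly like this (package names and paths are the Ubuntu 10.x defaults and may differ on other systems):

        sudo apt-get install php5-dev php-pear make    # build prerequisites for pecl
        sudo pecl install apc                          # compiles and installs apc.so
        # tell PHP to load the extension, then restart the web server
        echo "extension=apc.so" | sudo tee /etc/php5/conf.d/apc.ini
        sudo /etc/init.d/lighttpd restart

    The permission warning from the non-sudo run suggests the install step needs root in order to write into the PHP extension directory.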

    Read the article

  • I Admit I Misspoke

    - by Patrick Liekhus
    OK. I admit it. In my last post I mentioned that we had moved the XAF DSL to the Entity Framework, and this has caused a lot of confusion. I meant to say that we have used the ADO.NET Entity Data Model extensions. This is the design surface that can be tailored to create Entity Framework models. We leveraged the code generation within the ADO.NET Entity Data Model (EDMX) file to generate XAF/XPO classes. This allows you to visually create the entity model, set a few XAF properties, and then generate the business objects from there. I am presenting all these topics at the Kansas City Developers Conference on June 19th and will post the presentation after the conference. I have a full presentation that demonstrates the power of the ADO.NET Entity Data Model extensions, creates a small project, and then adds an OData layer to XAF to connect to PowerPivot in Excel 2010. The latest code can be found at http://efxaf.codeplex.com. More details to come soon. Sorry for the confusion in the last post. Thanks again.

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun up a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDF, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);
            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors
                // to an in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL.
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }
            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
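    The "Worker" wrapper the poster describes might look something like this (a sketch; the interface names and shapes are hypothetical, not from the original post):

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Hypothetical seams so each ETL step can be swapped out or mocked in tests.
        public interface IDocumentQueue { IEnumerable<string> GetFiles(string pattern); }
        public interface IContentUploader { void Upload(byte[] data, IDictionary<string, string> metadata); }
        public interface IErrorReporter { void Report(IEnumerable<string> errors); }

        public class EtlWorker
        {
            private readonly IDocumentQueue _queue;
            private readonly IContentUploader _uploader;
            private readonly IErrorReporter _reporter;

            public EtlWorker(IDocumentQueue queue, IContentUploader uploader, IErrorReporter reporter)
            {
                _queue = queue;
                _uploader = uploader;
                _reporter = reporter;
            }

            public void Run(string pattern)
            {
                var errors = new List<string>();
                foreach (var file in _queue.GetFiles(pattern))
                {
                    var metadata = new Dictionary<string, string> { { "FileName", Path.GetFileName(file) } };
                    try
                    {
                        _uploader.Upload(File.ReadAllBytes(file), metadata);
                    }
                    catch (IOException ex)
                    {
                        // Bounce the file but keep processing the rest of the queue.
                        errors.Add(file + ": " + ex.Message);
                    }
                }
                if (errors.Count > 0)
                    _reporter.Report(errors);
            }
        }

    Keeping the steps behind constructor-injected interfaces is what makes the console-app shape testable; whether it then runs as a console app, a scheduled task, or a Windows service becomes mostly a hosting decision.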

    Read the article

  • Wolfram is out, any alternatives? Or how to go custom?

    - by Patrick
    We were originally planning to use the Wolfram Alpha API for a new project, but unfortunately the cost was entirely too high for what we were using it for. Essentially, what we were doing is calculating the nutrition facts for food (http://www.wolframalpha.com/input/?i=chicken+breast+with+broccoli). Before taking the step of trying to build something that may work in its place for this use case: is there any open-source code anywhere that can do this kind of analysis and compile the data? The hardest part, in my opinion, is the assumptions it makes and where it gets the data that powers the calculations. To put it another way, I cannot seem to wrap my head around building something that computes user input to return facts and knowledge. I know that if I can convert the user input into some standardized form, I can then compare it against a nutrition-facts database to pull in the information I need. Does anyone know of any solutions to re-create this, or APIs that can provide this kind of analysis? Thanks for any advice. I am trying to figure out whether this project is dead in the water before it even starts. This kind of programming is well beyond me, so I can only hope for an API, open-source code, or some kind of analysis engine to interpret user input when I know what kind of data users are entering (measurements and food).
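    The "standardized form" idea can be sketched without any natural-language machinery: split the query on connector words, match the pieces against a local nutrition table, and aggregate. Everything below (the foods, the numbers, the per-100-g assumption) is invented illustration, not real nutrition data:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class NutritionSketch
        {
            // A toy lookup table standing in for a real nutrition database.
            static readonly Dictionary<string, double> CaloriesPer100g = new Dictionary<string, double>
            {
                { "chicken breast", 165 },
                { "broccoli", 34 },
            };

            static void Main()
            {
                var query = "chicken breast with broccoli";
                // Normalize: lowercase, split on connector words, trim.
                var foods = query.ToLowerInvariant()
                                 .Split(new[] { " with ", " and ", "," }, StringSplitOptions.RemoveEmptyEntries)
                                 .Select(s => s.Trim());

                double total = 0;
                foreach (var food in foods)
                {
                    double kcal;
                    if (CaloriesPer100g.TryGetValue(food, out kcal))
                        total += kcal;   // assume 100 g of each item for the sketch
                    else
                        Console.WriteLine("Unknown food: " + food);
                }
                Console.WriteLine("Approximate calories (100 g of each item): " + total);
            }
        }

    The hard part the question identifies, portion-size assumptions and a trustworthy data source, is exactly what this sketch glosses over; public food-composition databases are the usual place to start for the table itself.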

    Read the article

  • Work Item Traceability in TFS 2010

    - by Sam Patrick
    I have created a Windows Forms project (VS solution) under a TFS 2010 project, and I may eventually add more solutions to the TFS project. My question: can we create a Use Case work item type (WIT) for a specific solution within a TFS project? Furthermore, is it possible to create a "traceability matrix" that starts at the use-case level and goes down to the code level (at least the namespace level) of that particular VS solution?

    Read the article

  • Error with correlation ID when trying to create a BI Center site in SharePoint 2010

    - by Patrick Olurotimi Ige
    Before you can see the template to create the BI Center site, you have to activate the PerformancePoint Services Site Collection Features under Site Collection Administration > Site collection features. But after I activated it and tried to create the site, I got an error with a correlation ID. After looking at the ULS log files, I saw something related to not being able to use SharePoint Server Publishing. So I went back to the site collection features and activated the SharePoint Server Publishing Infrastructure, and after that I could create the BI site.
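    If you prefer to script that last step, the same feature can be activated from the SharePoint 2010 Management Shell; as best I recall, "PublishingSite" is the name of the site-collection-scoped publishing infrastructure feature (the site URL below is a placeholder):

        Enable-SPFeature -Identity "PublishingSite" -Url http://yourserver/sites/bicenter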

    Read the article

  • How to install Ubiquity into a Live CD installation image?

    - by Patrick L
    I am trying to create a small Ubuntu installation ISO image using a tool called Ubuntu-Builder. To make the final ISO as small as possible, I have decided to use Ubuntu Mini Remix, a small live CD without a GUI that does not come with any installer software such as Ubiquity. I want to embed installer software into the ISO image so that users can install the system to the hard disk. In Ubuntu-Builder I have tried the following:
    - Install the LXDE desktop, then install Ubiquity. But the final ISO boots into the command line.
    - Install the Openbox desktop, then install Ubiquity. But the final ISO boots into the command line.
    - Do not install a DE; directly install Ubiquity. But the final ISO still boots into the command line.
    After booting from the ISO, I checked the software in the OS. It seems that Ubiquity has been installed, but it doesn't show up when I boot the ISO image. Does anyone know how to install Ubiquity into a live CD ISO image? Or of any text-mode installer that can replace Ubiquity?
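    One detail that may explain the symptom: Ubiquity is normally launched by the live session (casper) rather than started on its own, so an image without the live-session plumbing can drop to a shell even when the package is installed. A rough sketch of the pieces involved (package names are from the Ubuntu archives; whether Ubuntu-Builder exposes each step is an assumption):

        # inside the image's chroot
        apt-get install casper ubiquity ubiquity-frontend-gtk   # or ubiquity-frontend-kde

        # then boot the ISO with the kernel parameter that launches the installer directly:
        #   only-ubiquity

    For a text-mode installer, the debian-installer used by the old alternate CDs is the usual alternative, though it is built into an image quite differently from Ubiquity.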

    Read the article

  • How I might think like a hacker so that I can anticipate security vulnerabilities in .NET or Java before a hacker hands me my hat [closed]

    - by Matthew Patrick Cashatt
    Premise: I make a living developing web-based applications for all form factors (mobile, tablet, laptop, etc.). I make heavy use of SOA, and send and receive most data as JSON objects. Although most of my work is done on the .NET or Java stacks, I am also recently delving into Node.js. This new stack has me thinking: I know reasonably well how to secure applications using the known facilities of .NET and Java, but I am woefully ignorant when it comes to best practices and, more importantly, the driving motivation behind those best practices. You see, as I gain more prominent clientele, I need to be able to assure them that their applications are secure, and in order to do that, I feel I should learn to think like a malevolent hacker.
    1. What motivates a malevolent hacker? What is their prime mover, the thing they are most after? Ultimately the answer is money or notoriety, I am sure, but I think it would be good to understand the nuanced motivators that lead to those ends: credit card numbers, damning information, corporate espionage, shutting down a highly visible site, etc.
    2. As an extension of question #1, but more specific: what are the things most likely to be sought out by a hacker in almost any application? Passwords? Financial info? Profile data that will gain them access to other applications a user has joined? Let me be clear here: this is not judgment for or against the aforementioned motivations, because that is not the goal of this post. I simply want to know what motivates a hacker, regardless of our individual judgment.
    3. What heuristics do hackers follow to accomplish their goals? Specific processes would ultimately be great to know; however, in order to think like a hacker, I would really value your comments on the broader heuristics followed. For example: "A hacker always looks first for low-hanging fruit such as HTTP spoofing," or "In the absence of a CAPTCHA or other deterrent, a hacker will likely run a cracking script against a login prompt and then go from there," or possibly "A hacker will try to attack a site via browser Foo first, as it is known for vulnerability Bar."
    4. What are the most common hacks employed when following those heuristics? Specifics here: HTTP spoofing, password cracking, SQL injection, etc.
    Disclaimer: I am not a hacker, nor am I judging hackers (heck, I even respect their ingenuity). I simply want to learn how I might think like a hacker so that I may begin to anticipate vulnerabilities before .NET or Java hands me a way to defend against them after the fact.
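    Since SQL injection is the canonical item on that list, here is the standard defensive shape in .NET (a generic ADO.NET sketch; the connection string, table and columns are placeholders):

        using System.Data.SqlClient;

        static object FindUser(string connectionString, string userName, string passwordHash)
        {
            // Parameterized query: user input is bound as data, never spliced into
            // the SQL text, so input like "' OR 1=1 --" cannot change the statement.
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT UserId FROM Users WHERE UserName = @name AND PasswordHash = @hash", conn))
            {
                cmd.Parameters.AddWithValue("@name", userName);
                cmd.Parameters.AddWithValue("@hash", passwordHash);
                conn.Open();
                return cmd.ExecuteScalar();   // null when no row matches
            }
        }

    The same rule generalizes across the stacks the question names: keep untrusted input out of any string that an interpreter will execute, whether SQL, HTML, shell, or evaluated JSON.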

    Read the article

  • SharePoint 2010 Launch

    - by Patrick Olurotimi Ige
    Some great news for SharePoint developers, architects, consultants, etc.: May 12, 2010 is the official launch date for SharePoint 2010 & Office 2010. Microsoft also announced its intent to RTM (release to manufacturing) in April 2010. Read more here.

    Read the article

  • rsnapshot for remote backups...

    - by Patrick
    I want to use rsnapshot to make backups from my production server to a remote backup server. I should install rsnapshot on the remote backup server, not the production one, right? rsnapshot will then pull the files to back up from the production server and store them locally on the backup server? I've just realized that I don't have sudo privileges on the backup server. Does this mean I cannot use rsnapshot for remote backups? Thanks.
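    That pull model is expressed in the backup server's rsnapshot.conf with remote ssh sources; a sketch with placeholder hostnames and paths (rsnapshot separates fields with tabs):

        # /etc/rsnapshot.conf on the backup server
        backup   root@prod.example.com:/etc/        prod/
        backup   root@prod.example.com:/var/www/    prod/

    Whether root or sudo is needed depends on what the ssh account can read on the production side and where snapshots are written on the backup side; pointing snapshot_root at a directory you own, such as one inside your home directory, avoids needing sudo locally.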

    Read the article

  • Is it just me, or is this a baffling tech interview question?

    - by Matthew Patrick Cashatt
    Background: I was just asked in a tech interview to write an algorithm to traverse an "object" (notice the quotes) where A is equal to B, B is equal to C, and A is equal to C. That's it; that is all the information I was given. I asked the interviewer what the goal was, but apparently there wasn't one: just "traverse" the "object". I don't know about anyone else, but this seems like a silly question to me. I asked again, "Am I searching for a value?" Nope. Just "traverse" it. Why would I ever want to endlessly loop through this "object"? To melt my processor, maybe? The answer, according to the interviewer, was that I should have written a recursive function. OK, so why not simply ask me to write a recursive function? And who would write a recursive function that never ends? My question: is this a valid question to the rest of you and, if so, can you provide a hint as to what I might be missing? Perhaps I am thinking too hard about solving real-world problems. I have been successfully coding for a long time, but this tech interview process makes me feel like I don't know anything. Final answer: CLOWN TRAVERSAL!!! (See @Matt's answer below.) Thanks! Matt
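    For what the interviewer may have been fishing for: a recursive traversal of an object graph whose nodes all reference each other (A-B-C forming a cycle) only terminates if you track visited nodes. A sketch in C# (the node shape is invented for illustration):

        using System.Collections.Generic;

        class Node
        {
            public string Name;
            public List<Node> Neighbors = new List<Node>();
        }

        static class GraphWalker
        {
            public static void Traverse(Node node, HashSet<Node> visited)
            {
                // HashSet.Add returns false when the node was already seen,
                // which is what stops the A -> B -> C -> A loop.
                if (node == null || !visited.Add(node))
                    return;
                foreach (var next in node.Neighbors)
                    Traverse(next, visited);
            }
        }

    Without the visited set, recursion on a cyclic "object" never ends, which appears to be exactly the trap in the question.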

    Read the article

  • Is there an established or defined best practice for source control branching between development and production builds?

    - by Matthew Patrick Cashatt
    Thanks for looking. I struggled with how to phrase my question, so let me give an example in hopes of making clearer what I am after. I currently work on a dev team responsible for maintaining and adding features to a web application. We have a development server, and we use source control (TFS). Each day everyone checks in their code, and when the code (running on the dev server) passes our QA/QC program, it goes to production. Recently, however, we had a bug in production that required an immediate fix. The problem was that several of us developers had code checked in that was not ready for production, so we had to either quickly complete and QA that code or roll back everything, undo pending changes, etc. In other words, it was a mess. This made me wonder: is there an established design pattern that prevents this type of scenario? It seems like there must be some "textbook" answer to this, but I am unsure what that would be. Perhaps a development branch of the code and a "release-ready" or production branch of the code?
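    The shape the poster is reaching for is commonly described (for instance in the Microsoft branching guidance for TFS) as a stable Main line with a Dev branch for daily work and a Release branch mirroring production; a rough sketch, with branch names illustrative:

        Dev      - daily check-ins; merged up to Main only when QA passes
          |
        Main     - always releasable; builds are cut from here
          |
        Release  - mirrors production; a hotfix is made here, shipped,
                   then merged back to Main (and down to Dev)

    With this layout, the production bug is fixed on Release without touching anyone's unfinished work on Dev.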

    Read the article

  • I can't get Gutenprint to install with 13.10

    - by Patrick
    Canon Pixma iP4200 printer... the install-printer dialog offers Gutenprint drivers, then stalls out when I try to install them. I ran alien to unpack Canon's Linux drivers for the printer and got this message:

        patrickxxx@Aspire-5253:~/Downloads$ sudo alien cnijfilter-common-2.60-4.src.rpm
        cnijfilter-common-2.60-4.src.rpm is for architecture i386 ; the package cannot be built on this system

    What am I to do???

    Read the article

  • Where to put code documentation?

    - by Patrick
    I currently use two systems to write code documentation (I am using C++):
    - Documentation about methods and class members is added next to the code, in the Doxygen format; Doxygen is run on the sources on a server, so the output can be seen in a web browser.
    - Overview pages (describing a set of classes, the structure of the application, example code, etc.) are added to a Wiki.
    I personally think this approach works well because the documentation about members and classes is really close to the code, while the overview pages are really easy to edit in the Wiki (and it's also easy to add images, tables, etc.). A web browser gives access to both kinds of documentation. My co-worker now suggests putting everything in Doxygen, because we can then create one big help file with everything in it (using either Microsoft's HTML Help Workshop or Qt Assistant). My concern is that editing Doxygen-style documentation is much harder (compared to the Wiki), especially when you want to add tables, images, etc. (Or is there a 'preview' tool for Doxygen that doesn't require you to generate the docs before you can see the result?) What do big open-source (or closed-source) projects use for their code documentation? Do they also split it between Doxygen-style docs and a Wiki, or do they use another system? What is the most appropriate way to expose the documentation: via a web server/browser, or via a big (several-hundred-MB) help file? Which approach do you take when writing code documentation?
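    For readers unfamiliar with the first system, member documentation in the Doxygen format lives directly above the declaration it documents; a tiny illustrative header (names invented):

        /// Computes the checksum of a buffer.
        ///
        /// @param data   Pointer to the first byte.
        /// @param length Number of bytes to read.
        /// @return CRC-32 of the buffer contents.
        unsigned long Checksum(const unsigned char* data, unsigned long length);

    This is what keeps reference documentation close to the code, while free-form pages (Doxygen's @page and @mainpage commands, or the Wiki) carry the overviews.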

    Read the article

  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    In my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF. But how do you test whether your business objects are actually doing what you want and verify that your business logic is correct? Late last year I was reading my monthly MSDN Magazine and came across an article about using SpecFlow and WatiN to build BDD tests. So why not use these same techniques to write SpecFlow-style scripts and have them generate EasyTest scripts for use with XAF? Let me outline and show a few things below. I plan on releasing this code in a short while; I just wanted to preview what I was thinking.

    Before we begin…

    First, if you have not read the MSDN article, here is the link to the article where I found my inspiration. It covers the overview of BDD vs. TDD, how to write some of the SpecFlow syntax, and how to use the "Steps" logic to create your own tests. Second, if you have not heard of EasyTest from DevExpress, I strongly recommend you review it here. It basically takes the power of XAF and the beauty of your application and allows you to create text-based files that execute automated commands within your application. Why would we do this? Because, as you will see below, the Cucumber syntax is easier for business analysts to interpret and digest business rules from. You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls, located here. The basics of the syntax are: Given X, When Y, Then Z. For example: Given I am at the login screen, When I enter my login credentials, Then I expect to see the home screen. Pretty easy syntax to follow. Finally, we will need to download and install SpecFlow; you can find it on their website here. Once you have it installed, let's write our first test.

    Let's get started…

    So, where to start? Create a new testing project within your solution. I typically name this with a naming convention similar to the one XAF uses: my project name plus .FunctionalTests (i.e. AlbumManager.FunctionalTests). Remove the basic test that is created for you; we will not use the default test but rather create our own SpecFlow "Feature" files. Add a new item to your project and select the SpecFlow Feature file under C#. Name your feature files, as you do your class files, after the test they perform. Now you can crack open your new feature file and write the actual test. Make sure to keep the Ninja Scrolls from above handy, as they are a valuable resource on how to write your test syntax. In the test below you can see how I defined the documentation in the Feature section; this is strictly for readability and does not affect the test. The next section is the Scenario Outline, which is considered a test template; you can see the brackets <> around the fields that will be filled in for each test. So in the example below: Given I am starting a new test and the application is open means I want a new EasyTest file and the Windows application generated by XAF to be open. Next, When I am at the Albums screen tells XAF to navigate to the Albums list view. And I click the New:Album button tells XAF to click the New button on the list grid. And I enter the following information tells XAF which fields to complete with the mapped values. And I click the Save and Close button causes the record to be saved and the detail form to be closed.
    Then I verify results tests the input data against what is visible in the grid, to ensure that the record was created. The Scenarios section gives each test a unique name and then fills in the values for each test; this way you can use the same test to make multiple passes with different data. Almost there. Now we save the feature file, and the BDD tests are written using standard unit-test syntax. This is all handled for you by SpecFlow, so just save the file. What you will see in your Test List Editor is a unit test for each of the scenarios you just built. You can now use standard unit-testing frameworks to execute the tests as you desire and, as you would expect, these BDD SpecFlow tests can be automated into your build process to ensure that your business requirements are satisfied each and every time.

    How does it work?

    What we have done is intercept the testing logic at runtime and interpret the SpecFlow syntax into EasyTest syntax. These are the basic StepDefinitions that we are working on now; we expect to put them on CodePlex within the next few days. You can always override them and make your own rules as you see fit for your project; follow the MSDN article above to start your own. You can see part of our implementation below. As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax. The code implementation for these rules basically saves the information from the feature file into the EasyTest file format. It then executes the EasyTest file and parses the XML results of the test. If the test succeeds, the test is passed; if it fails, the EasyTest failure message is logged and the screenshot (as captured by EasyTest) is saved for your review. Again, we are working on getting this code ready for mass consumption, but at this time it is not ready. We will post another message when it is ready, with all the details about usage and setup. Thanks
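    Since the post's code screenshots did not survive, here is roughly the shape a SpecFlow-to-EasyTest step definition takes (the [Binding]/[Given]/[When]/[Then] attributes are SpecFlow's real API; the EasyTestSession wrapper and the exact command text are hypothetical stand-ins for the implementation the post describes):

        using TechTalk.SpecFlow;

        // Hypothetical stand-in for the wrapper described in the post: it accumulates
        // EasyTest commands, runs the script, and parses the XML results.
        public class EasyTestSession
        {
            private readonly System.Text.StringBuilder _script = new System.Text.StringBuilder();
            public void StartNewScript() { _script.Length = 0; }
            public void Append(string command) { _script.AppendLine(command); }
            public void RunAndAssert() { /* write script, run EasyTest, fail on a logged error */ }
        }

        [Binding]
        public class EasyTestSteps
        {
            private readonly EasyTestSession _session = new EasyTestSession();

            [Given(@"I am starting a new test and the application is open")]
            public void GivenIAmStartingANewTest()
            {
                _session.StartNewScript();   // begin a fresh EasyTest file
            }

            [When(@"I am at the (.*) screen")]
            public void WhenIAmAtTheScreen(string navigationItem)
            {
                _session.Append("*Action Navigate(" + navigationItem + ")");
            }

            [Then(@"I verify results")]
            public void ThenIVerifyResults()
            {
                _session.RunAndAssert();     // fail the unit test if EasyTest logs an error
            }
        }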

    Read the article

  • In Scrum, should you split up the backlog in a functional backlog and a technical backlog or not?

    - by Patrick
    In our Scrum teams we use a backlog, which mostly contains functional topics but sometimes also contains technical topics. The advantage of having one backlog is that it becomes easy to choose the topics for the next sprint, but I have some questions. First, it seems more logical to me to have a separate technical backlog, where developers themselves can add purely technical items, like: we could improve performance in this method, this class lacks some technical documentation, etc. With one backlog, all developers always have to go via the product owner to have their topics added, which seems like additional, unnecessary work for the product owner. Second, if you have a product owner who focuses only on the purely functional items, the purely technical items (like missing technical documentation, code that erodes and should be refactored, classes that always cause problems during debugging because they lack a stable foundation and should be refactored, etc.) always end up at the bottom of the list because "they don't serve the customer directly". By having a separate technical backlog, and time reserved in every sprint for these purely technical items, we can improve the applications functionally but also keep them healthy inside. What is the best approach? One backlog or two?

    Read the article
