Search Results

Search found 14702 results on 589 pages for 'testing logic'.

  • How are software projects 'typically' managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is that we have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production]. At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. You can think of this as analogous to moving some web pages from the 'test' server to the 'live' server, where QC personnel can bang on the system and complain that the developer has it all wrong ;-) Once QC signs off that the changes are working, the system again moves the code along to the next stage, where additional testing is performed, etc. I have been searching the internet for a few days trying to find how the process is accomplished anywhere else -- I have read a bit about builds, automated testing, various ALM products, etc. but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc. Can anyone point me to any resources or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative. To keep things consistent, let's say that we have a project called Calculator, to which we want to add support for the basic trigonometric functions: sine, cosine and tangent. I'm open to reorganizing the company however we need to in order to accomplish due diligence testing, and we can suppose that any tools are available for use (if that helps to illustrate the process). To start things off, I think I understand this much: we document the requirements (e.g. support sine, cosine and tangent functions); we create some type of change request/work order to assign to programming; coding takes place and commits are made to version control; peer review commences; the programmer marks the work order as completed... now what? How does QC do their thing? Would they perform testing before closing the 'work order'?

    Read the article

  • C# calendar needs Business Logic for real-time reminders? [on hold]

    - by lazfish
    I am not a super experienced C# user, though I have some experience in .Net and VB.Net. Just got this new job and my first assignment is a mission critical part of the business. It is pretty important I get it right so I was hoping for some sage advice. We have created a calendar using jQuery, C#, .Net & SQL Server 08. The calendar works but now we are wanting to add email, SMS and voice-call reminders. We are a small company and can do whatever we need to do with no restrictions on our IT environment. I have some base-line experience with Unix servers but would prefer to stay in the Micro$oft universe if it is prudent to do so. I know how to add the API calls and services to initiate these reminders (using built in email services for email and Twilio.com for SMS). I am asking for advice about how to approach a reliable and timely listener service that knows when to call the service or API for the reminder before an appointment. EG. SMS: "You have a conference call in 30 minutes." I have done some research, but it is hard to know what is a proven reliable approach. What I am looking for is an experienced opinion on a good (or the best) strategy for implementing a solution that will reliably listen for appointments and dispatch the reminders when needed.
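
    One approach that comes up often for this kind of requirement is a small polling dispatcher run by a Windows Service or a scheduled task, rather than a long-lived listener per appointment. A minimal C# sketch follows; every name in it (Appointment, IAppointmentRepository, ISmsSender, ReminderDispatcher) is illustrative, and the Twilio/email calls are assumed to live behind the ISmsSender implementation.
        // Minimal sketch of a polling reminder dispatcher, e.g. hosted in a Windows Service
        // or a console app run on a schedule. All type names (Appointment, ISmsSender,
        // IAppointmentRepository) are illustrative assumptions, not an existing API.
        using System;
        using System.Collections.Generic;

        public class Appointment
        {
            public int Id { get; set; }
            public DateTime StartsAtUtc { get; set; }
            public string Phone { get; set; }
        }

        public interface IAppointmentRepository
        {
            // Appointments starting within the lead time that have not been reminded yet.
            IEnumerable<Appointment> GetDueReminders(TimeSpan leadTime);
            void MarkReminderSent(int appointmentId);
        }

        public interface ISmsSender
        {
            void Send(string phone, string message); // e.g. a wrapper around the Twilio REST call
        }

        public class ReminderDispatcher
        {
            private readonly IAppointmentRepository _repository;
            private readonly ISmsSender _sms;
            private readonly TimeSpan _leadTime = TimeSpan.FromMinutes(30);

            public ReminderDispatcher(IAppointmentRepository repository, ISmsSender sms)
            {
                _repository = repository;
                _sms = sms;
            }

            // Called on a schedule, e.g. every minute from a timer in a Windows Service.
            public void Poll()
            {
                foreach (Appointment appt in _repository.GetDueReminders(_leadTime))
                {
                    _sms.Send(appt.Phone, "You have a conference call in 30 minutes.");
                    // Mark as sent only after the send succeeds, so a crash produces a
                    // duplicate reminder rather than a silently missed one.
                    _repository.MarkReminderSent(appt.Id);
                }
            }
        }
    Keeping the reminder query and the "already sent" flag in the database means the dispatcher can be restarted (or run on more than one machine) without relying on in-memory timers.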

    Read the article

  • How much business logic belongs in RIA services layer?

    - by jkohlhepp
    I have been experimenting recently with Silverlight, RIA Services, and Entity Framework using .NET 4.0. I'm trying to figure out if that stack makes sense for use in any of my upcoming projects. It certainly seems like these technologies can be very productive for developing applications, but I'm struggling to decide how an application on top of this stack should be architected. The main issue I have is that in most of the demos I've seen most of the business logic ends up as DataAnnotations and custom validations in the RIA Services domain service class. This seems inappropriate to me. I view the domain service as basically a glorified web service that happens to make it easy to push information to the client. But most of what I've seen seems to orient the domain service as the main source of business logic in the application. So, my questions: What is the best location for business logic (rules, validations, behaviors, authorization) in an application using this stack? Are there any guidelines published at an architectural level for using this stack? My questions pertain to large, complex, and long-lived applications. Obviously for an application of only a few screens this is less of a concern. Edit: Another thing I meant to mention is that obviously you can make the domain service class stupid, but then you lose a lot of the automagic entity information (e.g. validations) being pushed to the client. And then if you lose that is there any point to using RIA services?
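
    One way to reconcile the "glorified web service" view with the demos is to treat the domain service as a facade: simple per-property rules stay as DataAnnotations so the client still gets them for free, while behavioral rules live in ordinary classes the domain service delegates to. The sketch below shows only the delegation half; it is an assumption-laden outline, not a verified RIA Services sample, and all type names are invented for illustration.
        // A minimal sketch, not a complete RIA Services sample: keep the domain service a thin
        // facade and put the rules in a plain class that can be unit tested and reused outside
        // Silverlight. Order and OrderRules are illustrative; the DomainService/[EnableClientAccess]
        // usage is schematic and assumes the WCF RIA Services assemblies are referenced.
        using System;
        using System.ServiceModel.DomainServices.Hosting; // EnableClientAccess (assumed namespace)
        using System.ServiceModel.DomainServices.Server;  // DomainService (assumed namespace)

        public class Order
        {
            public int Id { get; set; }
            public string CustomerNumber { get; set; }
            public decimal Total { get; set; }
        }

        // Business rules live in an ordinary class, independent of RIA Services.
        public class OrderRules
        {
            public void Validate(Order order)
            {
                if (string.IsNullOrEmpty(order.CustomerNumber))
                    throw new InvalidOperationException("An order requires a customer.");
                if (order.Total < 0)
                    throw new InvalidOperationException("Order total cannot be negative.");
            }
        }

        // The domain service only exposes operations to the client and delegates to the rules class.
        [EnableClientAccess]
        public class OrderDomainService : DomainService
        {
            private readonly OrderRules _rules = new OrderRules();

            public void SubmitOrder(Order order)
            {
                _rules.Validate(order);
                // Persistence (Entity Framework context, repository, etc.) would go here.
            }
        }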

    Read the article

  • Logic in the db for maintaining a points system relationship?

    - by MarcusBooster
    I'm making a little web based game and need to determine where to put logic that checks the integrity of some underlying data in the SQL database. Each user keeps track of points assigned to him, and points are awarded by various tasks. I keep a record of each task transaction to make sure they're not repeated, and to keep track of the value of the task at the time of completion, since an individual award level may fluctuate over time. My schema looks like this so far:
        create table player (
            player_ID serial primary key,
            player_Points int not null default 0
        );
        create table task (
            task_ID serial primary key,
            task_PointsAwarded int not null
        );
        create table task_list (
            player_ID int references player(player_ID),
            task_ID int references task(task_ID),
            when_completed timestamp default current_timestamp,
            point_value int not null, --not fk because task value may change later
            constraint pk_player_task_id primary key (player_ID, task_ID)
        );
    So, the player.player_Points should be the total of all his cumulative task points in the task_list. Now where do I put the logic to enforce this? Should I do away with player.player_Points altogether and do queries every time I want to know the total score? That seems wasteful, since I'll be doing that query a lot over the course of a game. Or should I put a trigger on task_list that automatically updates player.player_Points? Is that too much logic to have in the database, and should I just maintain this relationship in the application? Thanks.
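
    If the total is kept in sync from the application rather than from a trigger, the key point is that the task insert and the points update happen in the same transaction. A minimal C# sketch using the provider-agnostic System.Data interfaces is below; ScoreService and AwardTask are illustrative names, the table and column names come from the schema above, and the "@" parameter prefix is provider-dependent.
        // Sketch of the "maintain it in the application" option: record the task and bump the
        // cached total inside one transaction so player_Points cannot drift out of sync.
        using System;
        using System.Data;

        public static class ScoreService
        {
            public static void AwardTask(IDbConnection connection, int playerId, int taskId, int pointValue)
            {
                using (IDbTransaction tx = connection.BeginTransaction())
                {
                    // Record the completed task...
                    using (IDbCommand insert = connection.CreateCommand())
                    {
                        insert.Transaction = tx;
                        insert.CommandText =
                            "INSERT INTO task_list (player_ID, task_ID, point_value) " +
                            "VALUES (@player, @task, @points)";
                        AddParam(insert, "@player", playerId);
                        AddParam(insert, "@task", taskId);
                        AddParam(insert, "@points", pointValue);
                        insert.ExecuteNonQuery();
                    }

                    // ...and update the denormalized total in the same transaction.
                    using (IDbCommand update = connection.CreateCommand())
                    {
                        update.Transaction = tx;
                        update.CommandText =
                            "UPDATE player SET player_Points = player_Points + @points " +
                            "WHERE player_ID = @player";
                        AddParam(update, "@points", pointValue);
                        AddParam(update, "@player", playerId);
                        update.ExecuteNonQuery();
                    }

                    tx.Commit();
                }
            }

            private static void AddParam(IDbCommand command, string name, object value)
            {
                IDbDataParameter p = command.CreateParameter();
                p.ParameterName = name;
                p.Value = value;
                command.Parameters.Add(p);
            }
        }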

    Read the article

  • Should the program logic reside inside the gui object class or be external to the class?

    - by hd112
    I have a question about how to structure code in relation to GUI objects. Suppose I have a dialog that has a list control with a bunch of names obtained from a database, and the user can edit the names. Should the logic reside inside that dialog class, or should it be handled from the outside? To illustrate what I mean, here’s some pseudo code showing the structure of the code when the logic is handled outside the dialog class:
        NamesDialog : wxDialog
        {
        Private:
            ..stuff..
        Public:
            ...
            SetNames(wxStringArray names);
            wxStringArray GetNames();
            ..stuff..
        }
    So the user of the class would do something like:
        wxStringArray names = DatabaseManager::Get()->GetNames();
        names.Sort();
        NamesDialogObject.SetNames(names);
        NamesDialogObject.ShowModal();
        wxStringArray modified_names = NamesDialogObject.GetNames();
        AddToDatabase(modified_names); // or something like this
    On the other hand, the database logic can reside inside the NamesDialog class itself. In the show method I can query the database for the names, and as the user interacts with the controls (the list control in this case), the database can be updated from the event handlers. As a result the NamesDialog class only has the Show() method, as there is no need for SetNames() or GetNames() etc. Which method is generally preferred? I don’t have much work experience so I’m not sure which is the proper way to handle it. Sometimes it's easier to handle everything in the class, but getting access to the objects it interacts with can be challenging; generally you can do it by having the relevant objects be singletons, like the database manager in the above example.

    Read the article

  • What is the logic behind IE7's Ctrl-Tab order?

    - by torbengb
    Using Ctrl+Tab in Internet Explorer 7 (on Windows XP) seems to follow no sequence that I can work out. Using Shift+Ctrl+Tab even jumps around with no clear logic in what tab order is being followed. It doesn't even work in reverse as expected from other programs, such as Firefox. Please explain to me how this key combo works in IE7 because I can't figure it out.

    Read the article

  • What guidelines should be followed when using an unstable/testing/stable branching scheme?

    - by Elliot
    My team is currently using feature branches while doing development. For each user story in our sprint, we create a branch and work it in isolation. Hence, according to Martin Fowler, we practice Continuous Building, not Continuous Integration. I am interested in promoting an unstable/testing/stable scheme, similar to that of Debian, so that code is promoted from unstable -> testing -> stable. Our definition of done, I'd recommend, is when unit tests pass (TDD always), minimal documentation is complete, automated functional tests pass, and the feature has been demo'd and accepted by the PO. Once accepted by the PO, the story will be merged into the testing branch. Our test developers spend most of their time in this branch banging on the software and continuously running our automated tests. This scares me, however, because commits from another incomplete story may now make it into the testing branch. Perhaps I'm missing something because this seems like an undesired consequence. So, if moving to a code promotion strategy to solve our problems with feature branches, what strategy/guidelines do you recommend? Thanks.

    Read the article

  • AHK for creating folder + subfolders

    - by quanto
    I need an AHK script which creates a folder in the currently open folder in Windows Explorer (under Windows 7), whose name consists of the current date in the format yyyy-mm-dd followed by the text which is currently in the clipboard; the newly created folder must contain 3 subfolders named "1", "2", and "3". I'd like to copy a few words (e.g. Testing Testing Testing) from another application, go to a location on my hard disk (using Windows Explorer), activate the hotkey, and have AHK create for me a folder named 2012-06-04 Testing Testing Testing with subfolders "1", "2", and "3".

    Read the article

  • Linking LLVM JIT Code to Static LLVM Libraries?

    - by inflector
    I'm in the process of implementing a cross-platform (Mac OS X, Windows, and Linux) application which will do lots of CPU intensive analysis of financial data. The bulk of the analysis engine will be written in C++ for speed reasons, with a user-accessible scripting engine interfacing with the C++ testing engine. I want to write several scripting front-ends over time to emulate other popular software with existing large user bases. The first front end will be a VisualBasic-like scripting language. I'm thinking that LLVM would be perfect for my needs. Performance is very important because of the sheer amount of data; it can take hours or days to run a single run of tests to get an answer. I believe that using LLVM will also allow me to use a single back-end solution while I implement different front-ends for different flavors of the scripting language over time. The testing engine itself will be separated from the interface and testing will even take place in a separate process with progress and results being reported to the testing management interface. Tests will consist of scripting code integrated with the testing engine code. In a previous implementation of a similar commercial testing system I wrote, I built a fast interpreter which easily interfaced with the testing library because it was written in C++ and linked directly to the testing engine library. Callbacks from scripting code to testing library objects involved translating between the formats with significant overhead. I'm imagining that with LLVM, I could implement the callbacks into C++ directly so that I could make the scripting code work almost as if it had been written in C++. Likewise, if all the code was compiled to LLVM byte-code format, it seems like the LLVM optimizers could optimize across the boundaries between the scripting language and the testing engine code that was written in C++. I don't want to have to compile the testing engine every time. Ideally, I'd like to JIT compile only the scripting code. For small tests, I'd skip some optimization passes, while for large tests, I'd perform full optimizations during the link. So is this possible? Can I precompile the testing engine to a .o object file or .a library file and then link in the scripting code using the JIT? Finally, ideally, I'd like to have the scripting code implement specific methods as subclasses for a specific C++ class. So the C++ testing engine would only see C++ objects while the JIT setup code compiled scripting code that implemented some of the methods for the objects. It seems that if I used the right name mangling algorithm it would be relatively easy to set up the LLVM generation for the scripting language to look like a C++ method call which could then be linked into the testing engine. Thus the linking stage would go in two directions: calls from the scripting language into the testing engine objects to retrieve pricing information and test state information, and calls from the testing engine to methods of some particular C++ objects where the code was supplied not from C++ but from the scripting language. In summary: 1) Can I link in precompiled (either .bc, .o, or .a) files as part of the JIT compilation, code-generation process? 2) Can I link in code using the process in 1) above in such a way that I am able to create code that acts as if it was all written in C++?

    Read the article

  • Visual Studio 2010 Service Pack 1 Released

    - by krislankford
    The VS 2010 SP 1 release was simultaneous to the release of TFS 2010 SP1 and includes support for the Project Server Integration Feature Pack and updates to .NET Framework 4.0. The complete Visual Studio SP1 list including Test and Lab Manager: http://support.microsoft.com/kb/983509 The release addresses some of the most requested features from customers of Visual Studio 2010, such as:
    - better help support
    - IntelliTrace support for 64bit and SharePoint
    - Silverlight 4 Tools in the box
    - unit testing support on .NET 3.5
    - a new performance wizard for Silverlight
    Another major addition is the announcement of Unlimited Load Testing for Visual Studio 2010 Ultimate with MSDN Subscribers! The benefits of Visual Studio 2010 Load Test Feature Pack and useful links:
    - Improved Overall Software Quality through Early Lifecycle Performance Testing: Lets you stress test your application early and throughout its development lifecycle with realistically modeled simulated load. By integrating performance validations early into your applications, you can ensure that your solution copes with real-world demands and behaves in a predictable manner, effectively increasing overall software quality.
    - Higher Productivity and Reduced TCO with the Ability to Scale without Incremental Costs: Development teams no longer have to purchase Visual Studio Load Test Virtual User Pack 2010.
    - Download the Visual Studio 2010 Load Test Feature Pack Deployment Guide
    Get started with stress and performance testing with Visual Studio 2010 Ultimate:
    - Quality Solutions Best Practice: Enabling Performance and Stress Testing throughout the Application Lifecycle
    - Hands-On-Lab: Introduction to Load Testing with ASP.NET Profile in Visual Studio 2010
    - How-Do-I videos: Use ASP.NET Profiler in Load Tests; Use Network Emulation in Load Tests
    - VHD/VPC walkthrough: Getting Started with Load and Performance Testing
    - Best Practice guidance: Visual Studio Performance Testing Quick Reference Guide

    Read the article

  • How do I refactor these two C# functions to abstract their logic from the specific class properties

    - by ObligatoryMoniker
    I have two functions whose underlying logic is the same but in one case it sets one property value on a class and in another case it sets a different one. How can I rewrite the following two functions to abstract away as much of the algorithm as possible so that I can make changes in logic in a single place? SetBillingAddress private void SetBillingAddress(OrderAddress newBillingAddress) { BasketHelper basketHelper = new BasketHelper(SiteConstants.BasketName); OrderAddress oldBillingAddress = basketHelper.Basket.Addresses[basketHelper.BillingAddressID]; bool NewBillingAddressIsNotOldBillingAddress = ((oldBillingAddress == null) || (newBillingAddress.OrderAddressId != oldBillingAddress.OrderAddressId)); bool BillingAddressHasBeenPreviouslySet = (oldBillingAddress != null); bool BillingAddressIsNotSameAsShippingAddress = (basketHelper.ShippingAddressID != basketHelper.BillingAddressID); bool NewBillingAddressIsNotShippingAddress = (newBillingAddress.OrderAddressId != basketHelper.ShippingAddressID); if (NewBillingAddressIsNotOldBillingAddress && BillingAddressHasBeenPreviouslySet && BillingAddressIsNotSameAsShippingAddress) { basketHelper.Basket.Addresses.Remove(oldBillingAddress); } if (NewBillingAddressIsNotOldBillingAddress && NewBillingAddressIsNotShippingAddress) { basketHelper.Basket.Addresses.Add(newBillingAddress); } basketHelper.BillingAddressID = newBillingAddress.OrderAddressId; basketHelper.Basket.Save(); } And here is the second one: SetShippingAddress private void SetShippingAddress(OrderAddress newShippingAddress) { BasketHelper basketHelper = new BasketHelper(SiteConstants.BasketName); OrderAddress oldShippingAddress = basketHelper.Basket.Addresses[basketHelper.ShippingAddressID]; bool NewShippingAddressIsNotOldShippingAddress = ((oldShippingAddress == null) || (newShippingAddress.OrderAddressId != oldShippingAddress.OrderAddressId)); bool ShippingAddressHasBeenPreviouslySet = (oldShippingAddress != null); bool ShippingAddressIsNotSameAsBillingAddress = (basketHelper.ShippingAddressID != basketHelper.BillingAddressID); bool NewShippingAddressIsNotBillingAddress = (newShippingAddress.OrderAddressId != basketHelper.BillingAddressID); if (NewShippingAddressIsNotOldShippingAddress && ShippingAddressHasBeenPreviouslySet && ShippingAddressIsNotSameAsBillingAddress) { basketHelper.Basket.Addresses.Remove(oldShippingAddress); } if (NewShippingAddressIsNotOldShippingAddress && NewShippingAddressIsNotBillingAddress) { basketHelper.Basket.Addresses.Add(newShippingAddress); } basketHelper.ShippingAddressID = newShippingAddress.OrderAddressId; basketHelper.Basket.Save(); } My initial thought was that if I could pass a class's property by reference then I could rewrite the previous functions into something like private void SetPurchaseOrderAddress(OrderAddress newAddress, ref String CurrentChangingAddressIDProperty) and then call this function and pass in either basketHelper.BillingAddressID or basketHelper.ShippingAddressID as CurrentChangingAddressIDProperty but since I can't pass C# properties by reference I am not sure what to do with this code to be able to reuse the logic in both places. Thanks for any insight you can give me.
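
    Since C# properties can't be passed by ref, one workable alternative is to pass small delegates that read and write whichever address ID is being changed, so the shared algorithm lives in one method. A sketch along those lines follows; it assumes it sits in the same class as the original methods, reuses BasketHelper, OrderAddress and SiteConstants from the question, and treats the address IDs as strings (matching the ref String signature proposed above).
        // Sketch: pass the "get current ID" / "set current ID" accessors as delegates so the
        // shared flow lives in one place. Only SetAddress and its delegate parameters are new;
        // Func/Action require the System namespace.
        private void SetBillingAddress(OrderAddress newBillingAddress)
        {
            SetAddress(newBillingAddress,
                helper => helper.BillingAddressID,
                (helper, id) => helper.BillingAddressID = id,
                helper => helper.ShippingAddressID);   // the "other" address slot
        }

        private void SetShippingAddress(OrderAddress newShippingAddress)
        {
            SetAddress(newShippingAddress,
                helper => helper.ShippingAddressID,
                (helper, id) => helper.ShippingAddressID = id,
                helper => helper.BillingAddressID);
        }

        private void SetAddress(
            OrderAddress newAddress,
            Func<BasketHelper, string> getCurrentId,
            Action<BasketHelper, string> setCurrentId,
            Func<BasketHelper, string> getOtherId)
        {
            BasketHelper basketHelper = new BasketHelper(SiteConstants.BasketName);
            OrderAddress oldAddress = basketHelper.Basket.Addresses[getCurrentId(basketHelper)];

            bool newAddressIsNotOldAddress =
                oldAddress == null || newAddress.OrderAddressId != oldAddress.OrderAddressId;
            bool addressPreviouslySet = oldAddress != null;
            bool addressIsNotOtherAddress = getCurrentId(basketHelper) != getOtherId(basketHelper);
            bool newAddressIsNotOtherAddress = newAddress.OrderAddressId != getOtherId(basketHelper);

            if (newAddressIsNotOldAddress && addressPreviouslySet && addressIsNotOtherAddress)
            {
                basketHelper.Basket.Addresses.Remove(oldAddress);
            }
            if (newAddressIsNotOldAddress && newAddressIsNotOtherAddress)
            {
                basketHelper.Basket.Addresses.Add(newAddress);
            }

            setCurrentId(basketHelper, newAddress.OrderAddressId);
            basketHelper.Basket.Save();
        }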

    Read the article

  • How to structure an enterprise MVC app, and where does Business Logic go?

    - by James
    I am an MVC newbie. As far as I can tell: Controller: deals with routing requests; View: deals with presentation of data; Model: looks a whole lot like a Data Access layer. Where does the Business Logic go? Take a large enterprise application with: several different sources of data (WCF, WebServices and ADO) tied together in a data access layer (using multiple different DTOs), and a lot of business logic segmented over several dlls. What is an appropriate way for an MVC web application to sit on top of this (in terms of code and project structure)? The examples I have seen, where everything just goes in the Model folder, don't seem appropriate for very large applications. Thanks for any advice!
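
    One common arrangement (a sketch only, with all names invented for illustration) keeps the MVC project's Model folder for view models, and puts the existing business logic dlls behind service interfaces that the controllers call. The constructor injection shown assumes a DI-enabled controller factory is configured.
        // Sketch of thin-controller layering for ASP.NET MVC: controller -> service layer ->
        // repositories/WCF/ADO. IOrderService, OrderController and OrderViewModel are illustrative.
        using System.Web.Mvc;

        // Business logic layer (typically its own project), sitting on top of the data access layer.
        public interface IOrderService
        {
            decimal CalculateShipping(int orderId);
        }

        // View model: exactly what the view needs, nothing more.
        public class OrderViewModel
        {
            public int OrderId { get; set; }
            public decimal Shipping { get; set; }
        }

        // Controller: routes the request, calls the service, shapes the result for the view.
        public class OrderController : Controller
        {
            private readonly IOrderService _orders;

            public OrderController(IOrderService orders)
            {
                _orders = orders;
            }

            public ActionResult Shipping(int id)
            {
                var model = new OrderViewModel
                {
                    OrderId = id,
                    Shipping = _orders.CalculateShipping(id)
                };
                return View(model);
            }
        }
    The service and data access layers would typically sit in their own projects, so the MVC application references only the service interfaces rather than the WCF/ADO plumbing.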

    Read the article

  • If you are forced to use an Anemic domain model, where do you put your business logic and calculated fields?

    - by Jess
    Our current O/RM tool does not really allow for rich domain models, so we are forced to utilize anemic (DTO) entities everywhere. This has worked fine, but I continue to struggle with where to put basic object-based business logic and calculated fields. Current layers: Presentation, Service, Repository, Data/Entity. Our repository layer has most of the basic fetch/validate/save logic, although the service layer does a lot of the more complex validation & saving (since save operations also do logging, checking of permissions, etc). The problem is where to put code like this:
        Decimal CalculateTotal(LineItemEntity li)
        {
            return li.Quantity * li.Price;
        }
    or
        Decimal CalculateOrderTotal(OrderEntity order)
        {
            Decimal orderTotal = 0;
            foreach (LineItemEntity li in order.LineItems)
            {
                orderTotal += CalculateTotal(li);
            }
            return orderTotal;
        }
    Any thoughts?
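
    One low-ceremony option for exactly these two methods is a static calculation class (optionally exposed as extension methods) in the business layer, so the logic stays next to the entities without requiring the O/RM to support rich domain models. A sketch using the entity types from the question (LineItemEntity and OrderEntity are assumed to exist as shown above):
        // Sketch: calculated fields over anemic DTOs as extension methods in the service layer.
        // OrderCalculations is an illustrative name.
        public static class OrderCalculations
        {
            public static decimal CalculateTotal(this LineItemEntity li)
            {
                return li.Quantity * li.Price;
            }

            public static decimal CalculateOrderTotal(this OrderEntity order)
            {
                decimal orderTotal = 0;
                foreach (LineItemEntity li in order.LineItems)
                {
                    // Reuse the per-line calculation so the rule exists in one place.
                    orderTotal += li.CalculateTotal();
                }
                return orderTotal;
            }
        }

        // Usage from the service layer: decimal total = order.CalculateOrderTotal();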

    Read the article

  • C# testing framework that works like JUnit in Eclipse?

    - by bluebomber357
    Hello all, I come from a Java/Eclipse background and I fear that I am spoiled by how easy it is to get JUnit and JMock running in Eclipse, and have that GUI with the bar and pass/fail information pop up. It just works with no hassle. I see a lot of great options for testing in C# with Visual Studio. NUnit looks really nice because it contains unit and mock testing all in one. The trouble is, I can't figure out how to get the IDE to display my results. The NUnit documentation seems to show that it doesn't automatically show results through the VS IDE. I found http://testdriven.net/, which seems to trumpet that it makes VS display these stats and work with multiple frameworks, but it isn't open source. Is there any way to get unit and mock testing working with the VS IDE like it does in Java with Eclipse?
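
    For reference, the NUnit side looks much like the JUnit code you are used to; the question is really just which runner shows the green/red bar. Here is a minimal fixture (Calculator is an invented class under test) that runs the same whether you use the external NUnit GUI runner or a VS add-in such as TestDriven.Net or ReSharper's test runner:
        // Minimal NUnit fixture; requires a reference to nunit.framework.dll.
        using NUnit.Framework;

        public class Calculator
        {
            public int Add(int a, int b) { return a + b; }
        }

        [TestFixture]
        public class CalculatorTests
        {
            private Calculator _calculator;

            [SetUp]
            public void CreateCalculator()
            {
                // Runs before each test, like JUnit's @Before.
                _calculator = new Calculator();
            }

            [Test]
            public void Add_ReturnsSumOfOperands()
            {
                Assert.AreEqual(5, _calculator.Add(2, 3));
            }
        }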

    Read the article

  • Is it right to implement business logic in the type bindings of a DI framework?

    - by Martino
        public IRedirect FactoryStrategyRedirect()
        {
            if (_PasswordExpired)
            {
                return _UpdatePasswordRedirectorFactory.Create();
            }
            else
            {
                return _DefaultRedirectorFactory.Create();
            }
        }
    This strategy factory method can be replaced with a type binding and a When clause:
        Bind<IRedirect>.To<UpdatePasswordRedirector>.When(c => c.kernel.get<SomeContext>().PasswordExpired())
        Bind<IRedirect>.To<DefaultRedirector>.When(c => not c.kernel.get<SomeContext>().PasswordExpired())
    I wonder which of the two approaches is more correct. What are the pros and cons, especially in the case in which the logic is more complex, with more variables to test and more concrete classes to return? Is it right to implement business logic in the binding?
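
    For comparison, the factory-method side of the trade-off can be kept container-agnostic, which tends to matter once the rule involves several variables: the decision sits in one plain class that is easy to unit test, and the container only wires up its dependencies. A sketch follows; IRedirect and the two factories come from the code above, while IRedirectorFactory and the constructor shape are assumptions for illustration.
        // Sketch: keep the decision in an ordinary, testable class instead of spreading it
        // across When() clauses in the container bindings. IRedirect is stubbed here only so
        // the sketch is self-contained.
        public interface IRedirect { }

        public interface IRedirectorFactory
        {
            IRedirect Create();
        }

        public class RedirectStrategy
        {
            private readonly IRedirectorFactory _updatePasswordFactory;
            private readonly IRedirectorFactory _defaultFactory;

            public RedirectStrategy(IRedirectorFactory updatePasswordFactory,
                                    IRedirectorFactory defaultFactory)
            {
                _updatePasswordFactory = updatePasswordFactory;
                _defaultFactory = defaultFactory;
            }

            public IRedirect For(bool passwordExpired)
            {
                // As the rule gains more inputs (roles, feature flags, more redirectors),
                // it grows here, in one visible place, rather than in the binding module.
                return passwordExpired
                    ? _updatePasswordFactory.Create()
                    : _defaultFactory.Create();
            }
        }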

    Read the article

  • How to reuse a backup on Time Machine on Snow Leopard after a logic board change, after choosing the wrong option?

    - by kmiffy
    After my logic board was replaced, I connected my laptop back to my network, and Time Machine gave me a popup, as shown on this thread: http://superuser.com/questions/78068/recycle-time-machine-for-new-machine/78264#78264 I misread the question and clicked on "Create New Backup" when I should have clicked on "Reuse Backup" to connect to my old backup file. How can I trigger that popup again? Turning Time Machine on and off does not work, and the instructions on forums to fix this via the terminal don't work because Snow Leopard is missing the fsaclctl command (and I'm also not familiar with terminal commands). Thanks.

    Read the article

  • Big Visible Charts

    - by Robert May
    An important part of Agile is the concept of transparency and visibility. In proper functioning teams, stakeholders can look at any team at any time in the iteration or release and see how that team is doing by simply looking at what we call Big Visible Charts. If you’ve done Scrum, you’ve seen these charts. However, interpreting these charts can often be an art form. There are several different charts that can be useful. In this newsletter, I’ll focus on the Iteration Burndown and Cumulative Flow charts. I’ve included a copy of the spreadsheet that I used to create the charts, and if you don’t have a tool that creates them for you, you can use this spreadsheet to do so. Our preferred tool for managing Scrum projects is Rally. Rally creates all of these charts for you, saving you quite a bit of time. The Iteration Burndown and Cumulative Flow Charts This is the main chart that teams use. Although less useful to stakeholders, this chart is critical to the team and provides quite a bit of information to the team about how their iteration is going. Most charts are a combination of the charts below, so you may need to combine aspects of each section to understand what is happening in your iterations. Ideal Ah, isn’t that a pretty picture? Unfortunately, it’s also very unrealistic. I’ve seen iterations that come close to ideal, but never that match perfectly. If your iteration matches perfectly, chances are, someone is playing with the numbers. Reality is just too difficult to have a burndown chart that matches this exactly. Late Planning Iteration started, but the team didn’t. You can tell this by the fact that the real number of estimated hours didn’t appear until day two. In the cumulative flow, you can also see that nothing was defined in Day one and two. You want to avoid situations like this. You’ll note that the team had to burn faster than is ideal to meet the iteration because of the late planning. This often results in long weeks and days. Testing Starved Determining whether or not testing is starved is difficult without the cumulative flow. The pattern in the burndown could be nothing more that developers not completing stories early enough or could be caused by stories being too big. With the cumulative flow, however, you see that only small bites are in progress and stories were completed early, but testing didn’t start testing until the end of the iteration, and didn’t complete testing all stories in the iteration. When this happens, question whether or not your testing resources are sufficient for your team and whether or not acceptance is adequately defined. No Testing With this one, both graphs show the same thing; the team needs testers and testing! Without testing, what was completed cannot be verified to make sure that it is acceptable to the business. If you find yourself in this situation, review your testing practices and acceptance testing process and make changes today. Late Development With this situation, both graphs tell a story. In the top graph, you can see that the hours failed to burn down as quickly as the team expected. This could be caused by the team not correctly estimating their hours or the team could have had illness or some other issue that affected them. Often, when teams are tackling something that is more unknown, they’ll run into technical barriers that cause the burn down to happen slower than expected. In the cumulative flow graph, you can see that not much was completed in the first few days. 
This could be because of illness or technical barriers or simply poor estimation. Testing was able to keep up with everything that was completed, however. No Tool Updating When you see graphs that look like this, you can be assured that it’s because the team is not updating the tool that generates the graphs. Review your policy for when they are to update. On the teams that I run, I require that each team member updates the tool at least once daily. You should also check to see how well the team is breaking down stories into tasks. If they’re creating few large tasks, graphs can look similar to this. As a general rule, I never allow tasks, other than Unit Testing and Uncertainty, to be greater than eight hours in duration. Scope Increase I always encourage team members to enter in however much time they think they have left on a task, even if that means increasing the total amount of time left to do. You get a much better and more realistic picture this way. Increasing time remaining could explain the burndown graph, but by looking at the cumulative flow graph, we can see that stories were added to the iteration and scope was increased. Since planning should consume all of the hours in the iteration, this is almost always a bad thing. If the scope change happened late in the iteration and the hours remaining were well below the ideal burn, then increasing scope is probably o.k., but estimation needs to get better. However, with the charts above, that’s clearly not what happened and the team was required to do extra work to make the iteration. If you find this happening, your product owner and ScrumMasters need training. The team also needs to learn to say no. Scope Decrease Scope decreases are just as bad as scope increases. Usually, graphs above show that the team did a poor job of estimating their stories and part way through had to reduce scope to change the iteration. This will happen once in a while, but if you find it’s a pattern on your team, you need to re-evaluate planning. Some teams are hopelessly optimistic. In those cases, I’ll introduce a task I call “Uncertainty.” With Uncertainty, the team estimates how many hours they might need if things don’t go well with the tasks they’ve defined. They try to estimate things that could go poorly and increase the time appropriately. Having an Uncertainty task allows them to have a low and high estimate. Uncertainty should not just be an arbitrary buffer. It must correlate to real uncertainty in the tasks that have been defined. Stories are too Big Often, we see graphs like the ones above. Note that the burndown looks fairly good, other than the chunky acceptance of stories. However, when you look at cumulative flow, you can see that at one point, everything is in progress. This is a bad thing. When you see graphs like this, you’re in one of two states. You may just have a very small team and can only handle one or two stories in your iteration. If you have more than one or two people, then the most likely problem is that your stories are far too big. To combat this, break large high hour stories into smaller pieces that can be completed independently and accepted independently. If you don’t, you’ll likely be requiring your testers to do heroic things to complete testing on the last day of the iteration and you’re much more likely to have the entire iteration fail, because of the limited amount of things that can be completed. Summary There are other charts that can be useful when doing scrum. 
If you don’t have any big visible charts, you really need to evaluate your process and change. These charts can provide the team a wealth of information and help you write better software. If you have any questions about charts that you’re seeing on your team, contact me with a screen capture of the charts and I’ll tell you what I’m seeing in those charts. I always want this information to be useful, so please let me know if you have other questions.

    Read the article

  • MySQL optimization question - How to apply AND logic in a search and a limit on results in one query?

    - by sandeepan-nath
    This is a little long but I have provided all the database structures and queries so that you can run it immediately and help me. Run the following queries:- CREATE TABLE IF NOT EXISTS `Tutor_Details` ( `id_tutor` int(10) NOT NULL auto_increment, `firstname` varchar(100) NOT NULL default '', `surname` varchar(155) NOT NULL default '', PRIMARY KEY (`id_tutor`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=41 ; INSERT INTO `Tutor_Details` (`id_tutor`,`firstname`, `surname`) VALUES (1, 'Sandeepan', 'Nath'), (2, 'Bob', 'Cratchit'); CREATE TABLE IF NOT EXISTS `Classes` ( `id_class` int(10) unsigned NOT NULL auto_increment, `id_tutor` int(10) unsigned NOT NULL default '0', `class_name` varchar(255) default NULL, PRIMARY KEY (`id_class`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=229 ; INSERT INTO `Classes` (`id_class`,`class_name`, `id_tutor`) VALUES (1, 'My Class', 1), (2, 'Sandeepan Class', 2); CREATE TABLE IF NOT EXISTS `Tags` ( `id_tag` int(10) unsigned NOT NULL auto_increment, `tag` varchar(255) default NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18 ; INSERT INTO `Tags` (`id_tag`, `tag`) VALUES (1, 'Bob'), (6, 'Class'), (2, 'Cratchit'), (4, 'Nath'), (3, 'Sandeepan'), (5, 'My'); CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` ( `id_tag` int(10) unsigned NOT NULL default '0', `id_tutor` int(10) default NULL, KEY `Tutors_Tag_Relations` (`id_tag`), KEY `id_tutor` (`id_tutor`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`) VALUES (3, 1), (4, 1), (1, 2), (2, 2); CREATE TABLE IF NOT EXISTS `Class_Tag_Relations` ( `id_tag` int(10) unsigned NOT NULL default '0', `id_class` int(10) default NULL, `id_tutor` int(10) NOT NULL, KEY `Class_Tag_Relations` (`id_tag`), KEY `id_class` (`id_class`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `Class_Tag_Relations` (`id_tag`, `id_class`, `id_tutor`) VALUES (5, 1, 1), (6, 1, 1), (3, 2, 2), (6, 2, 2); Following is about the tables:- There are tutors who create classes. Tutor_Details - Stores tutors Classes - Stores classes created by tutors And for searching we are using a tags based approach. All the keywords are stored in tags table (while classes/tutors are created) and tag relations are entered in Tutor_Tag_Relations and Class_Tag_Relations tables (for tutors and classes respectively)like this:- Tags - id_tag tag (this is a a unique field) Tutors_Tag_Relations - Stores tag relations while the tutors are created. Class_Tag_Relations - Stores tag relations while any tutor creates a class In the present data in database, tutor "Sandeepan Nath" has has created class "My Class" and "Bob Cratchit" has created "Sandeepan Class". 3.Requirement The requirement is to return tutor records from Tutor_Details table such that all the search terms (AND logic) are present in the union of these two sets - 1. Tutor_Details table 2. classes created by a tutor in Classes table) Example search and expected results:- Search Term Result "Sandeepan Class" Tutor Sandeepan Nath's record from Tutor Details table "Class" Both the tutors from ... Most importantly, there should be only one mysql query and a LIMIT applicable on the number of results. Following is a working query which I have so far written (It just applies OR logic of search key words instead of the desired AND logic). SELECT td . 
* FROM Tutor_Details AS td LEFT JOIN Tutors_Tag_Relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor LEFT JOIN Classes AS wc ON td.id_tutor = wc.id_tutor INNER JOIN Class_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN Tags AS t ON t.id_tag = ttagrels.id_tag OR t.id_tag = wtagrels.id_tag WHERE t.tag LIKE '%Sandeepan%' OR t.tag LIKE '%Nath%' GROUP BY td.id_tutor LIMIT 20 Please help me with anything you can. Thanks

    Read the article

  • Facebook Cucumber testing with Authlogic - how to test a user logged in as a Facebook user?

    - by rhh
    I'm having trouble implementing this step: Given "I am logged in as a Facebook user" do end The best suggestions I can find on the web (http://opensoul.org/2009/3/6/testing-facebook-with-cucumber) do not seem to be using Authlogic for authentication. Can someone with the Cucumber/Authlogic_facebook_connect/Authlogic combo post their step for testing facebook logins? Thank you very much.

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using– I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end skus of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio Skus, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target serves which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL),  but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run it locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death as so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tools was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header are just plain standard HTTP headers with each individual URL isolated by a separator line. The format used here also uses what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form as an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way for REST presentations and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you slight idea what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running a 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines anyway. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as breakdown by each URL in the session. Both success and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This particularly useful for errors so you can quickly see and copy what request data was used and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:   Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration which allows checking for failures perhaps or hitting a specific requests per second count etc. It’s also nice to use this as quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
Current Status
    Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.
    Microsoft MVPs and Insiders get a free License
    If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected].
    Resources
    For more info on WebSurge and to download it to try it out, use the following links: West Wind WebSurge Home, Download West Wind WebSurge, Getting Started with West Wind WebSurge Video.
    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET.

    Read the article

  • Geez – do you even do basic testing?

    - by Shawn Cicoria
    You’d think that a “real” commercial software vendor would at least run a barrage of tests validating updates – ANY updates – before pushing out those updates. Well, McAfee has done it again. This one, well, it just shuts you down… False positives on a core Windows file. https://kc.mcafee.com/corporate/index?page=content&id=KB68780 http://support.microsoft.com/kb/2025695 If I get a PC with McAfee offered for “free”, I usually either wipe or uninstall. That product is the work of the devil. I can’t understand how these guys are still in business.

    Read the article
