Search Results

Search found 603 results on 25 pages for 'qa'.

  • Watch the Silverlight 4 Launch event and LIVE Q&A with ScottGu and others

    Next week, on 13 April at 8:00 AM PST, Scott Guthrie will deliver a keynote address at the DevConnections conference in Las Vegas, NV. Scott will give an update on the progress of Silverlight 4 and provide details on the availability of the developer tools, the runtime, and other news. Mark your calendars and return to the Silverlight community site to tune into the LIVE event. After the keynote, Channel 9 will be hosting interviews with Scott and other key members of the Silverlight...

    Read the article

  • Q&A with Nokia's Ari Jaaksi: MeeGo Revs Up

    Linux.com: "Nokia's Vice President of MeeGo Devices, Ari Jaaksi, will kick off the afternoon at today's Linux Foundation Collaboration Summit with his keynote at 1:15 p.m. PT. He took a few minutes with us this morning to share what he'll be speaking about and how the MeeGo project is going."

    Read the article

  • Verification as QA - does it make sense?

    - by user970696
    Preparing my thesis, I found another interesting discrepancy. While some books say that verification, in the sense of static analysis of work products, is quality control (looking for defects), others say it is actually quality assurance, because the act of checking reduces the probability of real defects once those deliverables are used to build the product. I hesitate because both views seem correct: it is a way of checking for defects (deviations from requirements, design flaws, etc.), so it looks like quality control; but it is also a process that does not have to be done at all and, if done, can yield better quality.

    Read the article

  • Quality Assurance = inspections, reviews?

    - by user970696
    Studying this subject extensively, I find that most books state the following: quality assurance is a prevention activity (acts of inspection, reviewing, and so on), while quality control is testing. There are some exceptions which say that QA deals only with processes (planning, strategy, applying standards, etc.), which is IMHO much closer to real QA, yet I cannot find any good reference for this in Google Books. I believe that inspections, reviews, and testing are all quality control, as they are about checking products, whether the final product or work products. The problem is that so many authors do not agree. I would be grateful for a detailed explanation, ideally with a reference.

    Read the article

  • Unit testing of installer

    - by Alien01
    What is the best process when code is checked in by developers, an installer is created by a build engineer, and the installer is released to QA for testing? Should the installer be released to QA without unit testing by the developers, so that if the developers then make changes they must wait for QA to report bugs? Or should the installer first be given to the developers for unit testing, and only after they sign off be released to QA?

    Read the article

  • How are minimum system requirements determined?

    - by Michael McGowan
    We've all seen countless examples of software that ships with "minimum system requirements" like the following: Windows XP/Vista/7, 1 GB RAM, 200 MB storage. How are these generally determined? Obviously sometimes there are specific constraints (if the program takes 200 MB on disk, then that is a hard requirement). Aside from those situations, for things like RAM or processor it often turns out that more/faster is better, with no hard constraint. How are these determined? Do developers just make up numbers that seem reasonable? Does QA go through some rigorous process, testing various configurations until they find the lowest settings with acceptable performance? My instinct says it should be the latter, but it is often the former in practice.
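
    The "rigorous" variant is mechanical enough to automate: drive the application through representative workloads on progressively smaller configurations (or instrument a single run) and record the peaks. Below is a minimal sketch of the measurement half in Java; the workload, sampling interval, and sizes are made-up stand-ins, not anyone's actual process.

      public class PeakMemoryProbe {
          public static void main(String[] args) throws InterruptedException {
              Runtime rt = Runtime.getRuntime();
              long peakBytes = 0;

              // Stand-in for a representative workload; a real pass would drive
              // the application through its heaviest scenarios instead.
              Thread workload = new Thread(PeakMemoryProbe::simulateWork);
              workload.start();

              // Sample heap usage while the workload runs; keep the maximum.
              while (workload.isAlive()) {
                  long used = rt.totalMemory() - rt.freeMemory();
                  peakBytes = Math.max(peakBytes, used);
                  Thread.sleep(50);
              }
              System.out.printf("Peak heap observed: %d MB%n", peakBytes / (1024 * 1024));
          }

          private static void simulateWork() {
              byte[][] held = new byte[100][];
              for (int i = 0; i < held.length; i++) {
                  held[i] = new byte[1_000_000]; // allocate and retain ~1 MB per step
                  try { Thread.sleep(10); } catch (InterruptedException e) { return; }
              }
          }
      }

    Repeating such a run at different memory settings (or on machines with different RAM) and taking the smallest configuration with acceptable results is one defensible way to arrive at the published number.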

    Read the article

  • General List of Common Programming Errors

    - by javamonkey79
    As one journeys from apprentice to journeyman to master, one accumulates a list of best practices for things one has been bitten by. Personally, I write most of my stuff in Java & SQL, so my list tends to be slanted towards them. I've accumulated the following: when doing list removal, always iterate in reverse; avoid adding items to a list you are currently iterating over; watch out for NullPointerExceptions (the first two are sketched in code below). Now, I know there are language-specific "common errors" links out there, like this one, and I'm also aware of the Pragmatic Programmer tips and Martin Fowler's "code smells". Does anyone know of any good lists of things like those above (re: list removal, adding items, etc.)? My guess is that there are some good QA folks out there who can throw me a bone here. I'm not looking for things the compiler can catch - I'm looking for common things that cause bugs. In the event that there isn't a list out there already, I welcome you to post your own findings here. Thanks in advance!
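
    Since the poster works in Java, here is what the two list pitfalls look like in practice; a small self-contained sketch (the element values are arbitrary):

      import java.util.ArrayList;
      import java.util.Arrays;
      import java.util.Iterator;
      import java.util.List;

      public class ListRemoval {
          public static void main(String[] args) {
              List<String> items = new ArrayList<>(Arrays.asList("keep", "drop", "keep", "drop"));

              // BUG: structurally modifying the list inside a for-each loop
              // throws ConcurrentModificationException:
              // for (String s : items) { if (s.equals("drop")) items.remove(s); }

              // Safe option 1: remove through the iterator itself.
              for (Iterator<String> it = items.iterator(); it.hasNext(); ) {
                  if (it.next().equals("drop")) {
                      it.remove();
                  }
              }

              // Safe option 2: iterate by index in reverse, so removals do not
              // shift the positions of elements not yet visited.
              List<String> more = new ArrayList<>(Arrays.asList("keep", "drop"));
              for (int i = more.size() - 1; i >= 0; i--) {
                  if (more.get(i).equals("drop")) {
                      more.remove(i);
                  }
              }

              System.out.println(items); // [keep, keep]
              System.out.println(more);  // [keep]
          }
      }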

    Read the article

  • Agile team with no dedicated testers. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the software testers I've worked with so far in my career, and about the software tester's role in general, and have reached a potentially contentious conclusion: non-developer software testers are less efficient at software testing than developers.

    Now, before everyone gets upset, hear me out. This isn't mere opinion. Software testing and software development require a lot of skills in common: problem solving, thinking about corner cases, analytical skills, and the ability to define clear and concise step-by-step scenarios. What developers have in addition is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often becomes a test maintenance issue: because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter (copy-pasta, lack of code reusability/maintainability, etc.).

    So, I was wondering: why not replace all non-dev roles with developer roles? Developers have the skills required to perform software testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work? Hire a bunch of developers and split them into two roles: software developers, and software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc.). (A third role, software developers doing application support, I've removed as it is probably a separate question altogether.)

    And, in our case, since we're doing Agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their developer stints and testing stints on different projects. Ideally you would want to reduce the turnover rate per rotation, so you could have two groups and stagger their rotation cycles: if each rotation were two sprints long, the two groups would rotate one sprint apart, giving only a 50% turnover per sprint. Am I crazy, or could this work? (Obviously a key component of this working is that all devs want to be in these roles. Let's assume I'm starting a new company and I can hire these ideal people.)

    Edit: I've removed the phrase "QA", as apparently we were using it incorrectly where I work.

    Read the article

  • Simulating Ajax failures for QA testing

    - by womp
    Our first ASP.NET MVC/jQuery product is about to go to QA, and we're looking for a way for our QA guys to easily simulate bad Ajax requests (without modifying the application code). A typical integration/UI test plan might be:

    1. Load page, click button "DoStuff"
    2. "DoStuff" fails
    3. Attempt button "DoStuff" again
    4. "DoStuff" succeeds
    5. Verify application state

    This is a simple test case - there will be cases with multiple failures and successes interspersed. Aside from "unplug your network cable", I'm looking for an easy way for our guys to simulate intermittent bad server responses. I'm open to any ideas, so I won't go into too many details about our application setup or dependencies. How have you handled this?
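
    One approach that honours "without modifying the application code" is to put a deliberately flaky reverse proxy between the browser and the app and point the QA browsers at it. A minimal sketch in Java (the poster's stack is ASP.NET MVC, but the proxy is app-agnostic; the ports, backend URL, and failure rate below are assumptions, and only simple requests are forwarded to keep it short):

      import com.sun.net.httpserver.HttpServer;
      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.HttpURLConnection;
      import java.net.InetSocketAddress;
      import java.net.URL;
      import java.util.Random;

      public class FlakyProxy {
          public static void main(String[] args) throws Exception {
              String backend = "http://localhost:8080"; // real app server (assumption)
              double failureRate = 0.3;                 // ~30% of requests fail
              Random random = new Random();

              HttpServer proxy = HttpServer.create(new InetSocketAddress(9090), 0);
              proxy.createContext("/", exchange -> {
                  if (random.nextDouble() < failureRate) {
                      // Simulate an intermittent server error without touching the app.
                      exchange.sendResponseHeaders(500, -1);
                      exchange.close();
                      return;
                  }
                  // Otherwise forward the request to the real backend.
                  URL url = new URL(backend + exchange.getRequestURI());
                  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                  conn.setRequestMethod(exchange.getRequestMethod());
                  byte[] body;
                  try (InputStream in = conn.getInputStream()) {
                      body = in.readAllBytes(); // Java 9+
                  }
                  exchange.sendResponseHeaders(conn.getResponseCode(), body.length);
                  try (OutputStream out = exchange.getResponseBody()) {
                      out.write(body);
                  }
                  exchange.close();
              });
              proxy.start();
              System.out.println("Flaky proxy on :9090 -> " + backend);
          }
      }

    Testers then run the plan above against port 9090; the retry in step 3 eventually lands on a non-failing request. A debugging proxy such as Fiddler can serve the same purpose interactively, but a tiny scripted proxy makes the failure rate repeatable per test plan.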

    Read the article

  • Final Integration Testing for Q.A.

    - by CalebHC
    A medium-sized Rails app that our company has been working on is getting close to the end of development, and we are going to start doing Q.A. testing on it. We have been writing unit, functional, and integration tests all along, and our test coverage is about 99% (even though that really doesn't mean anything). We feel like we have a pretty good test suite, but I was wondering if we should be writing final integration tests for every little action we are going to perform during our Q.A. process. If so, would using Shoulda or Cucumber be a good idea? We haven't used either of those testing tools yet, but they sound really great. Any ideas or thoughts would be really helpful. Thanks

    Read the article

  • How can I decide what to test manually, and what to trust to automated tests?

    - by bhazzard
    We have a ton of developers and only a few QA folks. The developers have been getting more involved in QA throughout the development process by writing automated tests, but our QA practices are mostly manual. What I'd love is for our development practices to be BDD and TDD, and for us to grow a robust test suite. The question is: while building such a test suite, how can we decide what we can trust to the tests, and what we should continue testing manually?

    Read the article

  • Web standards or risk avoidance?

    - by Junior Dev
    My company is building an App Engine application. The app encounters a bug (possibly due to an issue with App Engine itself, as per our research) on IE9, but it cannot be reliably reproduced and is experienced by a small percentage of users. The workaround is to force IE9 to use IE8 mode. As a lazy front-end developer (who doesn't like CSS hacks, shims, and polyfills), I think it's OK to at least try going back to IE9 mode and see what happens, while we're still in private beta. The senior engineer (being more pragmatic) would rather we continue forcing IE9 users into the older IE8 mode. Who is right?

    Read the article

  • How much detail is in a good UI regression test?

    - by GlenPeterson
    We use a detailed step-by-step user-interface regression test for our commercial web application. It has a "backbone" test for the most used / most important parts of the system, with optional tests for specific areas of functionality. Using this plan has definitely helped us ensure high-quality software. But having very specific tests can be counter-productive. The tester concentrates on following the test and will completely miss usability issues, or not notice fairly obvious problems such as the bottom part of a page being missing. By contrast, some of the best UI testing happens when building a demo of a new feature. I often do my own best testing by pretending to demonstrate the system to an imaginary prospect. Yet when I tell the testers, "Just demonstrate the system to yourself", they don't cover nearly as much functionality as they do with a detailed point-by-point test. I'm repeatedly asked to provide more and more detail in the test plan so that a new, untrained tester can test with it without asking any questions. Yet detail seems to be counter-productive. How much detail do you put in a regression test to make it effective? What techniques make the tester focus more on the system than on checking off items on the test?

    Read the article

  • Software Tester to Developer [closed]

    - by Mayu Mayooresan
    Possible Duplicate: How do I become a developer? This is not a question about programming but about careers. For the last two and a half years I've been working as a software tester, and I'm seriously considering a track change to programming, but the problems I can think of are: 1. my age (28); 2. my IT experience being in testing; 3. the salary won't match if I change track, as I'd have to start from scratch. What do you think? Is it better to change track or to stay in a tester job? I don't seem to like the tester job. Please advise. Thanks in advance.

    Read the article

  • Forum vs Q&A system

    - by danie7L T
    I would like to know what parameters I should take into consideration before deciding whether to incorporate into a website a "Q&A system" or a full forum. I think forums allow better search capabilities (you can easily dig out old posts), while the "Q&A system" offers simpler and faster interaction between the users and the site owners. I should add that only a few people (site owners plus authorized people) would be able to answer the questions; ordinary users would be on a read-only basis. Can anyone help me decide between the two solutions? Thank you in advance. NB: there is also the SEO impact - is it the same for forums and Q&A systems?

    Read the article

  • Examples and Best Practices for Seeding Defects?

    - by MathAttack
    Defect seeding seems to be one of the few ways a development organization can tell how thorough an independent testing group is. I'm a fan of using metrics to help counter overconfidence biases and to drive discussions around facts. With that said, I haven't seen defect seeding used in practice. Are there best practices above and beyond what McConnell explained? Are there public examples where this has been done? In the absence of the above, any thoughts on why it hasn't been done more? Thanks in advance!
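
    For readers new to the technique: the arithmetic behind defect seeding is a capture-recapture estimate - if the testers catch a known fraction of the defects you planted, assume they caught roughly the same fraction of the real ones. A sketch with hypothetical numbers (the formula is the one McConnell describes; the counts are made up):

      public class SeedingEstimate {
          public static void main(String[] args) {
              int seededPlanted = 20; // defects deliberately injected (hypothetical)
              int seededFound   = 15; // seeded defects the test group caught
              int realFound     = 60; // genuine defects the test group caught

              // Detection rate implied by the seeded defects: 15/20 = 75%.
              double detectionRate = (double) seededFound / seededPlanted;

              // If the group catches 75% of what exists, finding 60 real defects
              // suggests roughly 60 / 0.75 = 80 real defects in total.
              double estimatedTotal     = realFound / detectionRate;
              double estimatedRemaining = estimatedTotal - realFound;

              System.out.printf("Estimated total real defects: %.0f%n", estimatedTotal);
              System.out.printf("Estimated still latent:       %.0f%n", estimatedRemaining);
          }
      }

    The obvious caveats apply: seeded defects must be representative and must all be removed afterwards, and small counts make the estimate noisy.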

    Read the article

  • Must all new features go through beta testing?

    - by LTR
    Obviously, small usability fixes and bug fixes go directly into the stable product. What about small new features? Can you afford to just release them after internal testing, or do they have to be beta-tested by customers first? Situation: this is a young commercial project, produced by a one-person company. It has an existing user base and is on its second major version. Previous beta tests have produced some results; however, most feedback came from the stable product and not from beta versions.

    Read the article

  • Should the test and the fix be written by different people?

    - by Nutel
    There is a common practice in TDD of writing a test before the fix, to prevent regressions and to simplify fixing. I just wonder: what if the test and the fix were written by different people? The total time spent would be almost the same, but since three people would now be thinking about possible failures (including the tester), we increase the probability that the fix covers all possible failure scenarios. Does this practice make sense, or would it just waste the additional time needed for one more person to become familiar with the bug?
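
    Whoever writes it, the mechanics are the same: capture the reported failure as a test that fails on the buggy build, then make it pass. A minimal sketch with JUnit 5 (the Paginator component and the off-by-one bug are hypothetical):

      import org.junit.jupiter.api.Test;
      import static org.junit.jupiter.api.Assertions.assertEquals;

      class Paginator {
          // The fix: ceiling division. The buggy build used items / perPage,
          // which under-counted whenever the last page was partial.
          static int pageCount(int items, int perPage) {
              return (items + perPage - 1) / perPage;
          }
      }

      class PaginatorRegressionTest {
          // Written (here, ideally by someone other than the fixer) to fail
          // against the buggy build - it returned 2 - and to pass after the fix.
          @Test
          void partialLastPageStillCounts() {
              assertEquals(3, Paginator.pageCount(25, 10));
          }
      }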

    Read the article

  • Is there a Stack Exchange-like extension for Magento? [closed]

    - by John K
    I'm looking into adding a knowledge-base solution to our existing Magento installation, where people can ask questions regarding our products in a way similar to the Stack Exchange format. I could not find anything myself, so I was wondering if anyone has experience with implementing something similar, perhaps through multiple extensions tied together. Any help would be appreciated.

    Read the article

  • Making Separate Assemblies For Different Types Of Tests For The Same Component?

    - by sooprise
    I was told by a few members here that splitting up my unit tests into different assemblies for different components is the best way to structure unit tests. Now, I have a few questions about that idea. What are the advantages of this? Organization, and isolation of errors? Let's say I have a component named "calculator", and I create an assembly for the unit tests on "calculator". Would I create a separate assembly for the integration tests I want to run on "calculator"? Or is the definition of an integration test a test across multiple components, like "calculator" and whatever else, which would require a separate assembly to test both of them together? In that case, would I have one assembly to do all of the integration testing for every component combination?
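
    The question is asked in .NET terms, but the trade-off is platform-neutral, and separate assemblies are not the only axis of separation: most test frameworks can also tag tests so the build filters them per run. A sketch of that alternative in JUnit 5 terms (Calculator, the tag names, and the tests are hypothetical):

      import org.junit.jupiter.api.Tag;
      import org.junit.jupiter.api.Test;
      import static org.junit.jupiter.api.Assertions.assertEquals;

      // In the separate-assembly layout this class would live in the production
      // module, with the unit and integration tests in sibling test modules.
      class Calculator {
          int add(int a, int b) { return a + b; }
      }

      class CalculatorTests {
          @Test
          @Tag("unit") // fast, exercises the component in isolation
          void addsTwoNumbers() {
              assertEquals(4, new Calculator().add(2, 2));
          }

          @Test
          @Tag("integration") // would cross component boundaries
          void addsValuesObtainedFromACollaborator() {
              // A real integration test would wire Calculator to another
              // component (storage, a service) rather than call it directly.
              assertEquals(4, new Calculator().add(2, 2));
          }
      }

    A build can then include or exclude tags per run, which speaks to the "one assembly per kind of test, or one per combination?" worry: tags scale without multiplying projects.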

    Read the article
