Search Results

Search found 10055 results on 403 pages for 'penetration testing'.

Page 10/403 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • What Are Some Tips For Writing A Large Number of Unit Tests?

    - by joshin4colours
    I've recently been tasked with testing some COM objects of the desktop app I work on. What this means in practice is writing a large number (100) of unit tests to exercise different but related methods and objects. While the unit tests themselves are fairly straightforward (usually one or two Assert()-type checks per test), I'm struggling to figure out the best way to write these tests in a coherent, organized manner. What I have found is that copy-and-paste coding should be avoided. It creates more problems than it's worth, and it's even worse in test code than in production code because test code has to be updated and modified more frequently. I'm leaning toward trying an OO approach, but again, the sheer number of tests makes even this daunting from an organizational standpoint because of maintenance concerns. It also doesn't help that the tests are currently written in C++, which adds some complexity with memory management issues. Any thoughts or suggestions?
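
    For organizing a suite like this, one pattern that scales is to push shared object construction into a small base fixture so each individual test stays a few lines long. Below is a minimal sketch of the idea, in Python for brevity (C++ frameworks such as Google Test offer the same shape with test fixtures and parameterized TEST_P cases); ComObject here is a hypothetical stand-in for the real wrapper.

      import unittest

      class ComObject:
          """Stand-in for the real COM wrapper; hypothetical, for illustration only."""
          def __init__(self, name="obj"):
              self.name = name

          def invoke(self, method):
              return f"{self.name}.{method}"

      class ComTestBase(unittest.TestCase):
          """Shared factory helpers live here instead of being copy-pasted into every test."""
          def make_object(self, **kwargs):
              # If construction details change, fix them here once.
              return ComObject(**kwargs)

      class InvokeTests(ComTestBase):
          def test_invoke_returns_qualified_name(self):
              obj = self.make_object(name="widget")
              self.assertEqual(obj.invoke("Render"), "widget.Render")

      if __name__ == "__main__":
          unittest.main()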

    Read the article

  • If you should only have one assertion per test, how do you test multiple inputs?

    - by speg
    I'm trying to build up some test cases, and have read that you should try to limit the number of assertions per test case. So my question is: what is the best way to go about testing a function with multiple inputs? For example, I have a function that parses a string from the user and returns the number of minutes. The string can be in the form "5w6h2d1m", where w, h, d, m correspond to the number of weeks, hours, days, and minutes. If I wanted to follow the 'one assertion per test' rule, I'd have to make multiple tests for each variation of input? That seems silly, so instead I just have something like the following in the one test case:

      self.assertEqual(parse_date('5m'), 5)
      self.assertEqual(parse_date('5h'), 300)
      self.assertEqual(parse_date('5d'), 7200)
      self.assertEqual(parse_date('1d4h20m'), 1700)

    Is there a better way?
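
    One common middle ground is a single table-driven test, where the framework still reports each failing input individually. Here is a minimal sketch using Python's unittest subTest; the parse_date implementation is a toy standing in for the real one.

      import re
      import unittest

      MINUTES = {"w": 7 * 24 * 60, "d": 24 * 60, "h": 60, "m": 1}

      def parse_date(text):
          # Toy parser for strings like "1d4h20m", for illustration only.
          return sum(int(n) * MINUTES[u] for n, u in re.findall(r"(\d+)([wdhm])", text))

      class ParseDateTest(unittest.TestCase):
          def test_parse_date(self):
              cases = [("5m", 5), ("5h", 300), ("5d", 7200), ("1d4h20m", 1700)]
              for text, expected in cases:
                  # subTest reports every failing input separately, so one
                  # test method can cover many inputs without hiding failures.
                  with self.subTest(text=text):
                      self.assertEqual(parse_date(text), expected)

      if __name__ == "__main__":
          unittest.main()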

    Read the article

  • How to write automated tests for SQL queries?

    - by James
    The current system we are adopting at work is to write some extremely complex queries which perform multiple calculations and have multiple joins/sub-queries. I don't think I am experienced enough to say whether this is correct or not, so I am going along with it, as it has clear benefits. The problem we are having at the moment is that the person writing the queries makes a lot of mistakes and assumes everything is correct. We have now assigned a tester to analyse all of the queries, but this still proves extremely time consuming and stressful. I would like to know how we could create an automated procedure (without writing it all by hand in code if possible, as I can work out how to do that the long way) to run a set of 10+ different inputs, verify the output data, and say whether the calculations are correct. I know I could seed the database with specific data and write a script in C# (the db is SQL Server) to verify all the values coming out, but I would like to know what the official "standard" is, as my experience is lacking in this area and I would like to improve. I am happy to add more information if required; add a comment if necessary. Thank you. Edit: I am using C#.
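
    One approach that fits here is to treat each query like a function: load a small, hand-checked fixture into a scratch database, run the query, and compare against precomputed results. The sketch below uses Python and an in-memory SQLite database purely for illustration (on SQL Server the same pattern can live in a C# test project, or in the database itself with the tSQLt framework); the table and values are hypothetical.

      import sqlite3
      import unittest

      QUERY = """SELECT customer, SUM(amount) AS total
                 FROM orders GROUP BY customer ORDER BY customer"""

      class QueryTest(unittest.TestCase):
          def setUp(self):
              # In-memory scratch database seeded with known fixture rows.
              self.db = sqlite3.connect(":memory:")
              self.db.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
              self.db.executemany(
                  "INSERT INTO orders VALUES (?, ?)",
                  [("alice", 10.0), ("alice", 5.0), ("bob", 7.5)],
              )

          def test_totals_per_customer(self):
              got = self.db.execute(QUERY).fetchall()
              # Expected output computed by hand from the fixture above.
              self.assertEqual(got, [("alice", 15.0), ("bob", 7.5)])

      if __name__ == "__main__":
          unittest.main()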

    Read the article

  • What do well-written, readable tests look like?

    - by Industrial
    Doing unit testing for the first time at a large scale, I find myself writing a lot of repetitive unit tests for my business logic. Sure, to create complete test suites I need to test all possibilities, but readability feels compromised doing what I do, as shown in the pseudocode below. What would a well-written, readable test suite look like?

      describe "UserEntity" ->
        it "valid name validates" ...
        it "invalid name doesnt validate" ...
        it "valid list of followers validate" ..
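
    For comparison, here is a minimal sketch of how that spec might read in Python (the User class and its validation rule are hypothetical); the idea is simply that each test name reads as one line of the specification.

      import unittest

      class User:
          """Hypothetical entity standing in for UserEntity above."""
          def __init__(self, name):
              self.name = name

          def is_valid(self):
              return 0 < len(self.name) <= 64

      class UserEntityTest(unittest.TestCase):
          # Test names read as the specification: condition -> expected outcome.
          def test_valid_name_validates(self):
              self.assertTrue(User("Ada").is_valid())

          def test_empty_name_does_not_validate(self):
              self.assertFalse(User("").is_valid())

          def test_overlong_name_does_not_validate(self):
              self.assertFalse(User("x" * 65).is_valid())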

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Tools of the trade. With that out of the way, let us sharpen our pencils and get going.

    Why test a database

    The sad state of the industry today is that there is very little emphasis on testing in general. Test-driven development is still a small niche of the programming world, while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment tests are mostly viewed as a waste of time. This is because the average person (let's not fool ourselves, we're all average) is unable to weigh lower future costs against a little more current work. It's orders of magnitude easier to reason about the current costs in relation to the current amount of work. That's why programmers convince themselves testing is a waste of time.

    However, we have to ask ourselves what tests are really about. Maybe finding bugs? No, not really. If we introduce bugs, we're likely to write tests around those bugs too. But yes, we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests, we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we'll see what breaks in other tests.

    And here comes the inability to estimate future costs. By spending just a few more hours writing those tests we'd know instantly what broke where. Imagine we fix a reported bug. We check in the code, deploy it, and the users are happy. Until we get a call two weeks later about a certain monthly process that has stopped working. What we don't know is that this process was developed by a long-gone coworker and for some reason it relied on that same bug we've happily fixed. There's no way we could've known that. We say OK and go in and fix the monthly process. But what we have no clue about is that there's an ETL job that relied on data from that monthly process. Now that we've fixed the process, it's giving unexpected (yet correct, since we fixed it) data to the ETL job. So we have to fix that too. But there's this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the "Loop of maintenance horror". With the loop eventually comes blame. Here's a nice tip for all developers and DBAs out there: if you make a mistake, man up and admit to it.

    All of the above is valid for any kind of software development. Keeping this in mind, the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this, can we really afford not to have tests?
    What to test in a database

    Now that we've decided we'll dive into this testing thing, we have to ask ourselves what needs to be tested. The short answer is: everything. The long answer is: read on!

    There are two main ways of doing tests: black box and white box testing. Black box testing means we have no idea how the system internals are built and we only have access to its inputs and outputs. With it we test that internal changes to the system haven't caused its input/output behavior to change. The most important things to test here are the edge conditions. That's where most programs break. With good edge-condition tests we can be more confident that changes to the system won't break it. White box testing assumes full knowledge of the system internals. With it we test internal system changes, different states of the application, etc. White and black box tests should be complementary to each other, as they are very much interconnected.

    Testing database routines includes testing stored procedures, views, user-defined functions, and anything else you use to access the data. Database routines are your input/output interface to the database system, so they count as black box testing. We test them for two things: data and schema. When testing the schema we only care about the columns and the data types they're returning. After all, the schema is the contract with the outside systems; if it changes, we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON. It tells SQL Server to return only empty result sets, which speeds up tests because no data is returned to the client. After we've validated the schema, we have to test the returned data. There is no other way to do this than to have the expected data known before the test executes and compare it to the database routine's output.

    Testing authentication and authorization helps us validate who has access to the SQL Server box (authentication) and who has access to certain database objects (authorization). For desktop applications and Windows authentication this works well. The biggest problem here is web apps, which usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges. That is just bad.

    Load testing ensures that our database can handle peak loads. One often overlooked tool for load testing is Microsoft's OSTRESS tool. It's part of the RML utilities (x86, x64) for SQL Server and can help determine if our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by showing why certain queries are slow and what to do to fix them.

    One particular problem to think about is how to begin testing existing databases. The first thing we have to do is get to know those databases. We can't test something when we don't know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc. The way to approach this is to choose one part of the database (say, a logical grouping of tables that go together) and filter our traces accordingly. Once we've done that, we move on to the next grouping, and so on until we've covered the whole database. Then we move on to the next one.
    Database testing is a topic that we could spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.
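
    As a concrete illustration of the routine-testing idea above, here is a minimal Python sketch using pyodbc that checks the schema contract first and the data second; the connection string, the dbo.GetOrders procedure, its columns, and the expected row count are all placeholders to adapt.

      import unittest
      import pyodbc  # assumes the pyodbc package and a reachable SQL Server

      CONN_STR = "DSN=TestDb"  # placeholder; point this at a dedicated test database

      class GetOrdersTest(unittest.TestCase):
          def setUp(self):
              self.cur = pyodbc.connect(CONN_STR).cursor()
              self.cur.execute("EXEC dbo.GetOrders @CustomerId = 1")

          def test_schema_contract(self):
              # cursor.description yields (name, type, ...) per column --
              # the schema is the contract with the outside systems.
              names = [col[0] for col in self.cur.description]
              self.assertEqual(names, ["OrderId", "OrderDate", "Total"])

          def test_known_customer_data(self):
              # Expected values must be known before the test executes.
              rows = self.cur.fetchall()
              self.assertEqual(len(rows), 3)

      if __name__ == "__main__":
          unittest.main()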

    Read the article

  • Regression testing for firewall changes

    - by James C
    We have a number of firewalls in place around our organisation, and in some cases packets can pass through four levels of firewall limiting the flow of TCP traffic. A concept that I'm used to from software testing is regression testing, which lets you run a test suite against a changed application to verify that the new changes haven't affected any old features. Does anyone have any experience with, or can anyone offer any solutions for, doing the same type of thing with firewall changes and network testing? The problem becomes a lot more complicated because you'd ideally want to originate packets (and test their receipt) across many machines.
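
    One lightweight starting point is a connectivity matrix: a script that encodes which host/port pairs should and should not be reachable after a change, run from each originating network segment. A minimal sketch in Python (hostnames, ports, and expectations are hypothetical):

      import socket

      # Rules derived from the firewall change request; entirely hypothetical.
      EXPECTATIONS = [
          ("web tier -> db",    "db.internal.example",    1433, True),
          ("web tier -> admin", "admin.internal.example",   22, False),
      ]

      def port_open(host, port, timeout=3.0):
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      def main():
          failures = 0
          for label, host, port, expected in EXPECTATIONS:
              actual = port_open(host, port)
              failures += actual != expected
              status = "OK" if actual == expected else "FAIL"
              print(f"{status} {label}: {host}:{port} open={actual} expected={expected}")
          raise SystemExit(1 if failures else 0)

      if __name__ == "__main__":
          main()

    Running the same script from each source machine after every rule change and diffing the output against the last known-good run gives a basic regression suite for the firewall rules.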

    Read the article

  • Verification vs validation again: does testing belong to verification? If so, which?

    - by user970696
    I have asked this before and created a lot of controversy, so I have tried to collect some data and am asking a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp. According to ISO 12207, testing is done in validation: • prepare test requirements, cases and specifications • conduct the tests. Under verification, it mentions: the code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery; and: the software components and units of each software item have been completely and correctly integrated into the software item. I am not sure how to verify that without testing, but testing is not listed there as a technique. From IEEE: Verification: the process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]. Validation: the process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]. At the end of the development process? That would mean UAT. So the question is: which kinds of testing (unit, integration, system, UAT) count as verification, and which as validation? I do not understand why some say dynamic verification is testing, while others say testing is only validation. An example: I am testing an application. The system requirements say there are two fields with a max length of 64 characters, and a Save button. The use case says: the user fills in the first and last name and saves. When checking the fields and the Save button's presence, I would say it's verification. When I follow the use case, it's validation. So it's both together, done on the system as a whole.

    Read the article

  • How to verify code that could take substantial time to compile? [on hold]

    - by user18404
    As a follow-up to my previous question, "What is the best approach for coding in a slow compilation environment", to recap: I am stuck with a large software system for which the TDD ideology of "test often" does not work. And to make it even worse, features like pre-compiled headers, multi-threaded compilation, incremental linking, etc. are not available to me. Hence I think the best way out would be to add extensive logging to the system and start "coding in large chunks", which I understand as coding for two to three hours first (as opposed to 15-20 minutes in TDD), thoroughly eyeballing the code for 15 minutes, and only after all that doing the compilation and running the tests. As I have been doing TDD for quite a while, my code-eyeballing/code-verification skills have gotten rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two), so I am after recommendations on how to learn these source code verification/error-spotting skills again. I know I was able to do that easily some 5-10 years ago, when I didn't have much of the compiler and unit-testing support I've had until recently, so there should be a way to get back to the basics.

    Read the article

  • How can I test linkable/executable files that require re-hosting or retargeting?

    - by hagubear
    Due to data protection I cannot discuss fine details of the work itself, so apologies. PROBLEM CASE: Sometimes my software projects require merging/integration with third-party (customer or other suppliers') software. This software often comes as linkable executables or object code (which requires that my source code be retargeted and linked with it). When I get the executables or object code, I cannot validate their operation fully without integrating them with my system. My initial idea is that executables are not meant to be unit tested; they are meant to be linked with another system. But what is the guarantee that post-linkage and integration behaviour will be okay? There is also no sufficient documentation available (from the customer) to indicate how to go about integrating the executables or object files. I know this is a philosophical question, but apparently not enough research could be found at this moment to conclude to a solution, and I was hoping that people could point me in the right direction by suggesting approaches. To start, I have found out that avionics OEM software is often rehosted and retargeted by third parties, e.g. simulator makers. I wonder how they test it. Surely the source code will not be supplied, due to IPR regulations. UPDATE: I have received reasonable and very useful suggestions regarding this area. My current struggle has shifted to testing third-party OBJECT code that needs to be linked with my own source code (retargeted) on my host machine. How can I even test object code? Surely I need to link it first to even think about doing anything. Is it the post-link behaviour that needs to be determined and scripted (using Perl, Tcl, etc.) so that inputs and outputs can be verified? No clue! Thanks.

    Read the article

  • White-box testing in Javascript - how to deal with privacy?

    - by Max Shawabkeh
    I'm writing unit tests for a module in a small Javascript application. In order to keep the interface clean, some of the implementation details are closed over by an anonymous function (the usual JS pattern for privacy). However, while testing I need to access/mock/verify the private parts. Most of the tests I've written previously have been in Python, where there are no real private variables (members, identifiers, whatever you want to call them). One simply suggests privacy via a leading underscore for the users, and freely ignores it while testing the code. In statically typed OO languages, I suppose one could make private members accessible to tests by converting them to protected and subclassing the object to be tested. In Javascript, the latter doesn't apply, while the former seems like bad practice. I could always fall back to black box testing and simply check the final results. It's the simplest and cleanest approach, but unfortunately not really detailed enough for my needs. So, is there a standard way of keeping variables private while still retaining some backdoors for testing in Javascript?

    Read the article

  • MS DPM 2007: Testing the Recovery for a Production Domain

    - by NewToDPM
    Hi everybody! MS DPM 2007 is a new technology in my company, and so am I to the product. We have a classic Microsoft domain with two DCs, Exchange 2007, and a couple of web/MS SQL servers. I deployed DPM on the domain one month ago, and after fixing the various replica inconsistency issues I encountered and adapting the schedule and retention range to the server storage pool size, I can say the backup system is working correctly (no errors) as of today. However, there is one problem: we have not yet attempted to restore from the backups, which is a big no-no of course. I'm not sure how I should handle this, my main concern being Exchange and the System State of the DCs. From my understanding, DPM can only protect AND restore data on a server which is part of the same domain as the backup server. If I restore the System State (containing Active Directory) and the Exchange storage groups on a testing server, I am afraid it would completely disturb the functioning of the domain (for example, by having two primary DCs on the domain). I am thinking about building a second DPM server on a separate testing domain which would mirror the replicas and then restore them on testing servers in this new domain. Is this the right way to handle data recovery testing? How did you do it on your domain when you first deployed DPM? I'd be grateful for any link/documentation or advice. Thank you in advance for your help! EDIT: Two options seem possible so far: (i) create another DC/Exchange server in the alternate location; (ii) create a separate domain in the alternate location and set up a trust between this domain and the production one. Option (i) is certainly the best, but it implies setting up a secondary Exchange server with a dedicated public IP address so that if Exchange #1 dies, we can still send emails with Exchange #2. I don't know how complex this can be and would need to discuss it with my colleagues. Option (ii) would only fit the testing purposes. My only question regarding this is: if my production and DPM servers are part of domain A, and there is a trust between domains A and B, can I restore domain A content to any domain B server?

    Read the article

  • Silverlight penetration rate

    - by sthg
    Duplicate of: Silverlight Install Base - How big is it? Hi, does anyone know the penetration rate (in %) for North American internet users with the Silverlight plugin installed? I've been looking all around, but couldn't find any comprehensive numbers. I'm looking for general-public penetration rates, not only within the dev community. Something close to Adobe's Flash version penetration stats would be great.

    Read the article

  • Testing + production server and syncing MySQL data

    - by Matthew
    I have a web application running on LAMP, with a testing server and a production server. Is there a standard practice for keeping the data on the testing server in sync with the production server? The data on the testing server gets out of date pretty quickly, and I feel like there must be an easier way than just dumping the production server and copying it onto the testing server every so often. It's not important that the data is in total sync, just that the testing server represents the production environment as accurately as possible.
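
    For the plain dump-and-reload case, a small script on a cron schedule keeps the copy fresh without manual steps. A minimal sketch in Python wrapping mysqldump/mysql (host and database names are placeholders, and credentials are assumed to live in an option file such as ~/.my.cnf); anonymizing sensitive columns after the load is often a worthwhile extra step.

      import subprocess

      DUMP = ["mysqldump", "--host=prod.example", "--single-transaction", "appdb"]
      LOAD = ["mysql", "--host=test.example", "appdb"]

      def refresh_testing_copy():
          # Stream the production dump straight into the testing server;
          # --single-transaction gives a consistent InnoDB snapshot without
          # locking production tables.
          dump = subprocess.Popen(DUMP, stdout=subprocess.PIPE)
          subprocess.run(LOAD, stdin=dump.stdout, check=True)
          dump.stdout.close()
          if dump.wait() != 0:
              raise RuntimeError("mysqldump failed")

      if __name__ == "__main__":
          refresh_testing_copy()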

    Read the article

  • Using a service registry that doesn’t suck part II: Dear registry, do you have to be a message broker?

    - by gsusx
    Continuing our series of posts about service registry patterns that suck, we decided to address one of the most common techniques that Service-Oriented Architecture (SOA) governance tools use to enforce policies. Scenario: service registries and repositories typically serve as a mechanism for storing service policies that model behaviors such as security, trust, reliable messaging, SLAs, etc. This makes perfect sense given that SOA governance registries were conceived as a mechanism to store and manage the policies...(read more)

    Read the article

  • Agile SOA Governance: SO-Aware and Visual Studio Integration

    - by gsusx
    One of the major limitations of traditional SOA governance platforms is the lack of integration with the development process. Tools like HP Systinet or SOA Software are designed around models in which the architects dictate the governance procedures and policies and the rest of the team members follow along. Consequently, those procedures are frequently rejected by developers and testers, given that they can't incorporate them as part of their daily activities. Having SOA governance products...(read more)

    Read the article

  • We are hiring (take a minute to read this, it's not another BS talk ;) )

    - by gsusx
    I really wanted to wait until our new website was out to blog about this, but I hope you can put up with the ugly website for a few more days :). Tellago keeps growing and, after a quick break at the beginning of the year, we are back in hiring mode :). We are currently expanding our teams in the United States and Argentina and have various positions open in the following categories. .NET developers: If you are an exceptional .NET programmer with a passion for creating great software solutions working...(read more)

    Read the article

  • Tellago & Tellago Studios at Microsoft TechReady

    - by gsusx
    This week Microsoft is hosting the first edition of their annual TechReady conference. Even though TechReady is an internal conference, Microsoft invited us to present not one but two sessions about some of our recent work. We are particularly proud of the fact that one of those sessions is about our SO-Aware service registry. We see this as recognition of the growing popularity of SO-Aware as the best agile SOA governance solution on the Microsoft platform. Well, on Tuesday I had the opportunity...(read more)

    Read the article

  • SO-Aware sessions in Dallas and Houston

    - by gsusx
    Our WCF registry SO-Aware keeps being evangelized throughout the world. This week Tellago Studios' Dwight Goins will be speaking at Microsoft events in Dallas and Houston ( https://msevents.microsoft.com/cui/EventDetail.aspx?culture=en-US&EventID=1032469800&IO=ycqB%2bGJQr78fJBMJTye1oA%3d%3d ) about WCF management best practices using SO-Aware. If you are in the area and passionate about WCF, you should definitely swing by and give Dwight a hard time ;)...(read more)

    Read the article

  • Do Repeat Yourself in Unit Tests

    - by João Angelo
    Don't get me wrong, I'm a big supporter of the DRY (Don't Repeat Yourself) principle, except when it comes to unit tests. Why? Well, in my opinion a unit test should be a self-contained group of actions with the intent to test a very specific piece of code, and it should not depend on externals shared with other unit tests. In a typical unit test we can divide the code into two major groups: preparation of preconditions for the code under test, and invocation of the code under test. It's in the first group that you are tempted to refactor common code shared by several unit tests into helper methods that can then be called in each of them. Another way to avoid duplicating code is to use the built-in infrastructure of some unit test frameworks, such as SetUp/TearDown methods that automatically run before and after each unit test. I must admit that in the past I was guilty of both charges, but what at first seemed a good idea, since I was removing code duplication, turned out to offer no added value and even to complicate the process when a given test fails. We love unit tests because of their rapid feedback when something goes wrong. However, most of the time this feedback requires reading the code of the failed test. Given this, what do you prefer: to read a single method, or to wander through several methods like SetUp/TearDown and private common methods? I say it again: do repeat yourself in unit tests. It may feel wrong at first, but I bet you won't regret it later.
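
    To make the trade-off concrete, here is a minimal sketch (with a toy Account class, for illustration) of tests whose preconditions are arranged inline, so a red bar can be diagnosed without leaving the failing method.

      import unittest

      class Account:
          """Toy class under test, for illustration only."""
          def __init__(self, balance=0):
              self.balance = balance

          def withdraw(self, amount):
              if amount > self.balance:
                  raise ValueError("insufficient funds")
              self.balance -= amount

      class WithdrawTest(unittest.TestCase):
          # All preconditions are set up inline: no SetUp, no shared helpers.
          def test_withdraw_reduces_balance(self):
              account = Account(balance=100)
              account.withdraw(30)
              self.assertEqual(account.balance, 70)

          def test_overdraw_raises(self):
              account = Account(balance=10)
              with self.assertRaises(ValueError):
                  account.withdraw(11)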

    Read the article

  • Why do some consider static analysis testing while some do not?

    - by user970696
    While preparing for the ISTQB certification, I found that they actually call static analysis "static testing", while some engineering books distinguish between static analysis and testing, the latter being a dynamic activity. I tend to think that static analysis is not testing in the true sense, since it does not test; it checks/verifies. But I would love to hear the opinion of the true experts here. Thank you.

    Read the article

  • Are there any formalized/mathematical theories of software testing?

    - by Erik Allik
    Googling "software testing theory" only seems to give theories in the soft sense of the word; I have not been able to find anything that would classify as a theory in the mathematical, information theoretical or some other scientific field's sense. What I'm looking for is something that formalizes what testing is, the notions used, what a test case is, the feasibility of testing something, the practicality of testing something, the extent to which something should be tested, formal definition/explanation of code coverage, etc. UPDATE: Also, I'm not sure, intuitively, about the connection between formal verification and what I asked, but there's clearly some sort of connection.

    Read the article

  • Weird Results in A/B Test in Google Website Optimizer

    - by Yisroel
    I set up a test in Google Website Optimizer that has 3 variations: original (A), B, and C. In order to further validate the results of the test, I added variation C as an exact copy of the original. And that's where the results get weird. Six days into the test, the best-performing variation is C. It outperforms the original by 18.4%! How is that possible? Do I now discount the results of this test entirely?
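
    One way to sanity-check such a lift is a two-proportion z-test on the raw conversion counts. A minimal sketch in Python with hypothetical six-day numbers (38/450 vs 45/450 conversions, which is roughly an 18% relative lift); at this sample size the difference is well within what chance alone produces.

      from math import erf, sqrt

      def two_proportion_p_value(conv_a, n_a, conv_c, n_c):
          """Two-sided p-value for the difference between two conversion rates."""
          p_a, p_c = conv_a / n_a, conv_c / n_c
          pooled = (conv_a + conv_c) / (n_a + n_c)
          se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_c))
          z = (p_c - p_a) / se
          # Normal-approximation two-sided p-value.
          return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

      p = two_proportion_p_value(38, 450, 45, 450)
      print(f"p-value = {p:.2f}")  # ~0.42: easily explained by random noise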

    Read the article

  • How can I unit test rendering output?

    - by stephelton
    I've been embracing Test-Driven Development (TDD) recently and it's had wonderful impacts on my development output and the resiliency of my codebase. I would like to extend this approach to some of the rendering work that I do in OpenGL, but I've been unable to find any good approaches to this. I'll start with a concrete example so we know what kinds of things I want to test: let's say I want to create a unit cube that rotates about some axis, and I want to ensure that, for some number of frames, each frame is rendered correctly. How can I create an automated test case for this? Preferably, I'd even be able to write a test case before writing any code to render the cube (per usual TDD practice). Among many other things, I'd want to make sure that the cube's size, location, and orientation are correct in each rendered frame. I may even want to make sure that the lighting equations in my shaders are correct in each frame. The only remotely useful approach I've come across involves comparing rendered output to a reference output, which generally precludes TDD practice and is very cumbersome. I could go on about other desired requirements, but I'm afraid the ones I've listed are already out of reach.
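
    One pragmatic compromise is to unit test the math that drives each frame (model matrices, lighting terms) on the CPU, and keep reference-image comparison only for the final rasterized output. A minimal sketch in Python with NumPy, checking the rotating cube's transform rather than its pixels:

      import math
      import unittest
      import numpy as np

      def rotation_y(angle):
          """The model matrix the renderer would be fed: rotation about the Y axis."""
          c, s = math.cos(angle), math.sin(angle)
          return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

      class CubeTransformTest(unittest.TestCase):
          def test_quarter_turn_moves_corner(self):
              corner = np.array([1.0, 1.0, 1.0])
              rotated = rotation_y(math.pi / 2) @ corner
              np.testing.assert_allclose(rotated, [1.0, 1.0, -1.0], atol=1e-12)

          def test_rotation_preserves_size(self):
              # Orientation changes per frame, but the cube must never scale.
              corner = np.array([1.0, 1.0, 1.0])
              for frame in range(60):
                  rotated = rotation_y(frame * 0.1) @ corner
                  self.assertAlmostEqual(np.linalg.norm(rotated), math.sqrt(3.0))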

    Read the article
