Search Results

Search found 4783 results on 192 pages for 'tests'.

Page 33/192 | < Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing.  When a single database supports more than one application, then the problem gets more interesting. I’ll need to be more precise here. It is actually the application-interface definition of the database that needs to be in a compatible ‘version’.  Most databases that get into production have no separate application-interface; in other words they are ‘close-coupled’.  For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free.  If you’ve spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten.  A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping Database and Application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana. I’ve been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being ‘banned’ from the base tables or routines of a database.  There is no good technical reason for needing that sort of access that I’ve ever come across.  Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: This is the application-interface.  If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles. The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We’d be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module. If you have a stable defined application-interface with the database (Yes, one for each application usually) you need to keep the external definitions of the components of this interface in version control, linked with the application source,  and carefully track and negotiate any changes between database developers and application developers.  Essentially, the application development team owns the interface definition, and the onus is on the Database developers to implement it and maintain it, in conformance.  Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.  
If the application interface passes all the comprehensive integration and functional tests for the particular version it was designed for, nothing is broken. Your performance-testing can ‘hang’ on the same interface, since databases are judged on the performance of the application, not an ‘internal’ database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application. Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface, and which hopefully don’t need to be altered often. If the application is rapidly changing its ‘domain model’ in the light of an increased understanding of the application domain, then it can change the interface definitions and the database developers need only implement the interface rather than refactor the underlying database. The test team will also have to redo the functional and integration tests which are, of course, ‘written to’ the definition. The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface. If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application interface is never broken. Testing is easy since your automated scripts to test the interface do not need to change. The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application interfaces should be within the application source. Changes in it have to be subject to change-control procedures, as they will require a chain of tests. Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems. Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn’t necessarily the case with the database. An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn’t often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques. This process is like demanding that an official walk with a red flag in front of a motor car. In order to closely coordinate databases with applications, entire databases have to be ‘versioned’, so that an application version can be matched with a database version to produce a working build without errors.
There is no natural process to ‘version’ databases. Each development project will have to define a system for maintaining the version level. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments look to IT management like work. They see activity, and it looks good. Work means progress. Management then smile on the design choices made. In IT, good design work doesn’t necessarily look good, and vice versa.
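    As a rough illustration of the kind of interface test the article has in mind (this is a hedged sketch of my own, not Phil Factor’s code; the stored procedure name, parameters, connection string and column names are hypothetical), an automated check against the published application-interface could look like this in C#:

```csharp
// Hedged sketch: a contract test against a hypothetical application-interface
// stored procedure. Only the interface (name, parameters, result shape) is
// exercised; the tables behind it are free to be refactored.
using System.Data;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class OrderInterfaceContractTests
{
    private const string ConnectionString =
        "Server=.;Database=Sales;Trusted_Connection=True;";   // hypothetical

    [Test]
    public void GetOrdersForCustomer_ReturnsExpectedColumns()
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand("app.GetOrdersForCustomer", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@CustomerId", 42);    // hypothetical parameter

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // The contract: these columns must exist, whatever the tables look like inside.
                var columns = new string[reader.FieldCount];
                for (int i = 0; i < reader.FieldCount; i++)
                    columns[i] = reader.GetName(i);

                CollectionAssert.IsSubsetOf(
                    new[] { "OrderId", "OrderDate", "Total" }, columns);
            }
        }
    }
}
```

    Kept in the application’s source control alongside the interface definition, a suite of tests like this is what lets the database refactor freely underneath.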

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
[Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately. We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as is supported in the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size). For some of the moderately-sized source files, small blocks (256KB) are best. As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant. The 1MB block size gives the best average improvement (~16x) but the optimal approach would be to vary the block size based on the size of the source file. (click chart for full size image) The above is another view of the same data as the prior chart just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes.
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files. Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions. Related Resources Source code for upload test application Source code for random file generator OData feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
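    For readers who just want the shape of the blocked/parallel approach described above, here is a hedged sketch of mine (not the author’s linked code), assuming the classic Microsoft.WindowsAzure.StorageClient CloudBlockBlob API with PutBlock/PutBlockList; it trades the streaming and error handling a real implementation needs for brevity:

```csharp
// Hedged sketch: split a file into fixed-size blocks, upload them in parallel
// with per-block MD5, then commit the block list. API names assume the old
// Microsoft.WindowsAzure.StorageClient library described in the post.
using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.StorageClient;

static class BlockedUploader
{
    public static void Upload(CloudBlockBlob blob, string path, int blockSize = 1024 * 1024)
    {
        byte[] source = File.ReadAllBytes(path);                 // simplification: whole file in memory
        int blockCount = (source.Length + blockSize - 1) / blockSize;
        var blockIds = new string[blockCount];

        Parallel.For(0, blockCount, i =>
        {
            int offset = i * blockSize;
            int length = Math.Min(blockSize, source.Length - offset);
            var block = new byte[length];
            Buffer.BlockCopy(source, offset, block, 0, length);

            // Block IDs must be base64 strings of equal length.
            string blockId = Convert.ToBase64String(BitConverter.GetBytes((long)i));
            string md5;
            using (var hasher = MD5.Create())
                md5 = Convert.ToBase64String(hasher.ComputeHash(block));

            using (var ms = new MemoryStream(block))
                blob.PutBlock(blockId, ms, md5);                 // send block plus MD5 for integrity

            blockIds[i] = blockId;
        });

        blob.PutBlockList(blockIds);                             // assemble/commit the blob
    }
}
```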

    Read the article

  • The Challenge with HTML5 – In Pictures

    - by dwahlin
    I love working with Web technologies and am looking forward to the new functionality that HTML5 will ultimately bring to the table (some of which can be used today). Having been through the div versus layer battle back in the IE4 and Netscape 4 days I think we’re headed down that road again as a result of browsers implementing features differently. I’ve been spending a lot of time researching and playing around with HTML5 samples and features (mainly because we’re already seeing demand for training on HTML5) and there’s a lot of great stuff there that will truly revolutionize web applications as we know them. However, browsers just aren’t there yet and many people outside of the development world don’t really feel a need to upgrade their browser if it’s working reasonably well (Mom and Dad come to mind) so it’s going to be awhile. There’s a nice test site at http://www.HTML5Test.com that runs through different HTML5 features and scores how well they’re supported. They don’t test for everything and are very clear about that on the site: “The HTML5 test score is only an indication of how well your browser supports the upcoming HTML5 standard and related specifications. It does not try to test all of the new features offered by HTML5, nor does it try to test the functionality of each feature it does detect. Despite these shortcomings we hope that by quantifying the level of support users and web developers will get an idea of how hard the browser manufacturers work on improving their browsers and the web as a development platform. The score is calculated by testing for the many new features of HTML5. Each feature is worth one or more points. Apart from the main HTML5 specification and other specifications created the W3C HTML Working Group, this test also awards points for supporting related drafts and specifications. Some of these specifications were initially part of HTML5, but are now further developed by other W3C working groups. WebGL is also part of this test despite not being developed by the W3C, because it extends the HTML5 canvas element with a 3d context. The test also awards bonus points for supporting audio and video codecs and supporting SVG or MathML embedding in a plain HTML document. These test do not count towards the total score because HTML5 does not specify any required audio or video codec. Also SVG and MathML are not required by HTML5, the specification only specifies rules for how such content should be embedded inside a plain HTML file. Please be aware that the specifications that are being tested are still in development and could change before receiving an official status. In the future new tests will be added for the pieces of the specification that are currently still missing. The maximum number of points that can be scored is 300 at this moment, but this is a moving goalpost.” It looks like their tests haven’t been updated since June, but the numbers are pretty scary as a developer because it means I’m going to have to do a lot of browser sniffing before assuming a particular feature is available to use. Not that much different from what we do today as far as browser sniffing you say? I’d have to disagree since HTML5 takes it to a whole new level. In today’s world we have script libraries such as jQuery (my personal favorite), Prototype, script.aculo.us, YUI Library, MooTools, etc. that handle the heavy lifting for us. Until those libraries handle all of the key HTML5 features available it’s going to be a challenge. 
Certain features such as Canvas are supported fairly well across most of the major browsers while other features such as audio and video are hit or miss depending upon what codec you want to use. Run the tests yourself to see what passes and what fails for different browsers. You can also view the HTML5 Test Suite Conformance Results at http://test.w3.org/html/tests/reporting/report.htm (a work in progress). The table below lists the scores that the HTML5Test site returned for different browsers I have installed on my desktop PC and laptop. A specific list of tests run and features supported is given when you go to the site. Note that I went ahead and tested the IE9 beta and it didn’t do nearly as well as I expected it would, but it’s not officially out yet so I expect that number will change a lot. Am I opposed to HTML5 as a result of these tests? Of course not - I’m actually really excited about what it offers. However, I’m trying to be realistic and, having been through something like this many years ago, feel it'll definitely add a new level of headache to the Web application development process. On the flipside, developers that are able to target a specific browser (typically Intranet apps) or master the cross-browser issues are going to release some pretty sweet applications. Check out http://html5gallery.com/ for a look at some of the more cutting-edge sites out there that use HTML5. Also check out the http://www.beautyoftheweb.com site that Microsoft put together to showcase IE9. (The score table listed Chrome 8, Safari 5 for Windows, Opera 10, Firefox 3.6, Internet Explorer 9 Beta (still beta at the time) and Internet Explorer 8; the scores themselves did not survive this excerpt.)

    Read the article

  • What should be tested in Javascript?

    - by Nathan Hoad
    At work, we've just started on a heavily Javascript based application (actually using Coffeescript, but still), of which I've been implementing an automated test system using JsTestDriver and fabric. We've never written something with this much Javascript, so up until now we've never done any Javascript testing. I'm unsure what exactly we should be testing in our unit tests. We've written JQuery plugins for various things, so it's quite obvious that they should be verified for correctness as much as possible with JsTestDriver, but everyone else in my team seems to think that we should be testing the page level Javascript as well. I don't think we should be testing page level Javascript as unit tests, but instead using a system like Selenium to verify everything works as expected. My main reasoning for this is that at the moment, page level Javascript tests are guaranteed to fail through JsTestDriver, because they're trying to access elements on the DOM that can't possibly exist. So, what should be unit tested in Javascript?

    Read the article

  • Unit test and Code Coverage of Ant build scripts

    - by pablaasmo
In our development environment we have more and more build scripts for Ant to perform the build tasks for several different build jobs. These build scripts sometimes become large and do a lot of things and are basically source code in and of themselves. So in a "TDD-world" we should have unit tests and coverage reports for the source code. I found AntUnit and BuildFileTest.java for doing unit tests. But it would also be interesting to know the code coverage of those unit tests. I have been searching Google, but have not found anything. Does anyone know of a code coverage tool for Ant build scripts?

    Read the article

  • Assign multiple test categories using TestCategoryAttribute

    - by Michael Freidgeim
I am using TestCategoryAttribute to filter which tests to run during builds and wondered how to assign multiple test categories. According to the constructor documentation, only a single category can be specified. However, the TestCategories property (plural!) can return multiple categories. Grouping Tests into Test Categories: You can add an automated test to one or multiple test categories using a test attribute. Each test can belong to multiple test categories. The recommended approach from MSDN How to: Group and Run Automated Tests Using Test Categories is to specify multiple TestCategory attributes like the following: [TestCategory(“Nightly”), TestCategory(“Weekly”), TestCategory(“ShoppingCart”), TestMethod()] public void DebitTest() { } The article http://toddmeinershagen.blogspot.com.au/2010/09/create-custom-test-category-attributes.html shows how enums can be used instead of strings. It also explains that the TestCategories property can be used in derived custom attributes.
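    For reference, the MSDN-recommended stacking of categories quoted above reads more clearly with one attribute per line; this is just a reformatting of that snippet, with a hypothetical containing test class added so it compiles on its own:

```csharp
// Multiple TestCategory attributes on a single MSTest test method,
// as recommended in the MSDN article cited in the post.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ShoppingCartTests          // hypothetical containing class
{
    [TestMethod]
    [TestCategory("Nightly")]
    [TestCategory("Weekly")]
    [TestCategory("ShoppingCart")]
    public void DebitTest()
    {
        // ... test body ...
    }
}
```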

    Read the article

  • How does TDD address interaction between objects?

    - by Gigi
    TDD proponents claim that it results in better design and decoupled objects. I can understand that writing tests first enforces the use of things like dependency injection, resulting in loosely coupled objects. However, TDD is based on unit tests - which test individual methods and not the integration between objects. And yet, TDD expects design to evolve from the tests themselves. So how can TDD possibly result in a better design at the integration (i.e. inter-object) level when the granularity it addresses is finer than that (individual methods)?
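    One way to see how test-first work reaches beyond single methods is that such unit tests tend to pin down the conversation between an object and its collaborators, typically through mocks. A hedged C# sketch of an interaction test (using Moq; the IOrderRepository/OrderService types and the business rule are hypothetical, not from the question):

```csharp
// Hedged sketch: a unit test that specifies an interaction, not just a return value.
using Moq;
using NUnit.Framework;

public interface IOrderRepository { void Save(Order order); }
public class Order { public decimal Total { get; set; } }

public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) { _repository = repository; }

    public void Place(Order order)
    {
        if (order.Total <= 0) return;      // hypothetical rule: don't persist empty orders
        _repository.Save(order);
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void Place_PersistsOrder_WhenTotalIsPositive()
    {
        var repository = new Mock<IOrderRepository>();
        var service = new OrderService(repository.Object);

        service.Place(new Order { Total = 10m });

        // The assertion is about the collaboration between the two objects.
        repository.Verify(r => r.Save(It.IsAny<Order>()), Times.Once());
    }
}
```

    Tests like this are still "unit" tests, but what they fix in place is the contract between objects, which is where the claimed design pressure on the inter-object level comes from.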

    Read the article

  • How abstract should you get with BDD

    - by Newton
I was writing some tests in Gherkin (using Cucumber/Specflow). I was wondering how abstract I should get with my tests. In order to not make this open-ended, which of the following statements is better for BDD: Given I am logged in with email [email protected] and password 12345 When I do something Then something happens as opposed to Given I am logged in as the Administrator When I do something Then something happens The reason I am confused is because 1 is more based on the behaviour (filling in email and password) and 2 is easier to process and write tests for.
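    One common compromise is to keep the abstract wording of option 2 in the feature file and push the concrete details of option 1 into the step binding. A hedged SpecFlow-style sketch (the LoginHelper and the email literal are hypothetical placeholders; only the password 12345 comes from the question):

```csharp
// Hedged sketch: the abstract Gherkin step delegates to concrete credentials
// inside the binding, so the feature file stays readable.
using TechTalk.SpecFlow;

public static class LoginHelper
{
    public static void LogIn(string email, string password)
    {
        // drive the UI or API here (placeholder)
    }
}

[Binding]
public class AuthenticationSteps
{
    [Given(@"I am logged in as the Administrator")]
    public void GivenIAmLoggedInAsTheAdministrator()
    {
        // The behaviour (filling in email and password) still happens,
        // it just lives here instead of in the feature file.
        LoginHelper.LogIn("admin@example.org", "12345");
    }
}
```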

    Read the article

  • Online Launch of 3 new Telerik products JustMock, TeamPulse and WebUI Test Studio

As you probably already know, we have introduced 3 new products in the last 10 days, two of them at DevConnections in Las Vegas alone. If you didn't get a chance to attend DevConnections, we have organized an online launch so you get to see our new products first hand. Here is the schedule: Introduction to Telerik JustMock Tuesday, April 20 @11am ET Join the online launch of JustMock - a new developer productivity tool from Telerik designed to make it easy to create unit tests. In this webinar you will find out what is in the current release and learn about JustMock's future. JustMock cuts your development time and helps you create better unit tests without requiring you to change your code. It allows you to perform fast and controlled tests that are independent of external dependencies like databases, web services, or proprietary code. With JustMock, there are ...

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO because they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post. SQLIO works by slamming your disk. It performs as many reads as it can or it performs as many writes as it can depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say or wade into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can describing all the benefits and mechanisms around using this excellent piece of software. My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system, the disk for sure, but also memory and CPU. How to stress the system? SQLIO of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning and is made mostly from bacon. But 99% compression. No, it's not that good. So what's up? Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with a character 0x0. I never got a computer science degree. I went to film school. Because of this, I didn't memorize ASCII tables so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right, you're making a very large file, but you're filling it with NULL values. That's actually ok when all you're testing is the disk sub-system. But, when you want to test compression and decompression, that can be an issue. I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file if you're interested). Now the reads were taken care of. I am seeing very realistic performance from decompressing the information for reads through SQLIO. But what about writes? Well, the issue is, what does SQLIO write? I don't have access to the code. But I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test, use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO modified file (original file of 624,896kb went to 275,871kb compressed, after SQLIO it went to 608kb compressed). So, what does SQLIO write? It writes air. If you're trying to test it with compression or maybe some other type of file storage mechanism like dedupe, you need to know this because your tests really won't be valid. 
Should I find some other mechanism for testing? Yeah, if all I'm interested in is establishing performance to my own satisfaction, yes. But, I want to be able to compare my results with other people's results and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion. Oh, and before I go, I get to brag a bit. Measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
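    For the read side of a test like this, an alternative to copying a database file is to generate an incompressible file of random bytes; note this does nothing for the write-side problem the post describes, since SQLIO still writes NULLs itself. A hedged C# sketch of such a generator (path and size are example values):

```csharp
// Hedged sketch: write a file of cryptographically random (hence effectively
// incompressible) bytes to use as a compression-aware read-test target.
using System;
using System.IO;
using System.Security.Cryptography;

static class RandomFileGenerator
{
    public static void Create(string path, long sizeInBytes)
    {
        var buffer = new byte[1024 * 1024];                  // fill in 1MB chunks
        using (var rng = RandomNumberGenerator.Create())
        using (var stream = File.Create(path))
        {
            long written = 0;
            while (written < sizeInBytes)
            {
                rng.GetBytes(buffer);
                int toWrite = (int)Math.Min(buffer.Length, sizeInBytes - written);
                stream.Write(buffer, 0, toWrite);
                written += toWrite;
            }
        }
    }
}
```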

    Read the article

  • How do you handle measuring Code Coverage in JavaScript

    - by Dancrumb
    In order to measure Code Coverage for JavaScript unit tests, one needs to instrument the code, run the tests and then perform post-processing. My concern is that, as a result, you are unit testing code that will never be run in production. Since JavaScript isn't compiled, what you test should be precisely what you execute. So here's my question, how do you handle this? One thought I had was to run Unit Testing on the production code and use that for my pass fail. I would then create a shadow of my production code, with instrumentation and run my unit tests again; this would give me my code coverage stats. Has anyone come across a method that is a little more graceful than this?

    Read the article

  • How to organize unit/integration test in BDD

    - by whatf
So finally, after reading a lot, I have understood that the difference between BDD and TDD is between the T and the B. But coming from a basic TDD background, what I was used to was: first write unit tests for database models, then write tests for views (at this point start with integration tests as well, along with unit tests), then write more integration tests for testing UI stuff. What would be a correct way to approach BDD? Say I have a simple blog application. Given: when a user logs in, he should be shown a list of all his posts. But for this, I need a model with a row for the user and another for blog posts. So how do we go about writing tests? When do we create fixtures? When do we write integration (Selenium) tests?

    Read the article

  • Where should I store and verify files manipulated by an app

    - by Alan W. Smith
I'm working on a little Ruby script to move screenshots while renaming them based on a specific convention. I'll be writing tests to confirm the behavior. Ruby has lots of conventions for where to store files (e.g. the “spec” and “features” directories for RSpec and Cucumber, respectively), but I'm not finding best practices for storing files that will be acted upon by the tests. The same goes for a destination for the final copies of the files. So, the question, in two parts, is: Where should I store files that the test cases will use as source input? Where should tests that need to write output files send them?

    Read the article

  • Should SpecFlow be used with BDD as a solo developer?

    - by baens
I am a long time fan of TDD and after reading the RSpec book, would like to transition to a BDD process. I like the idea of driving from the outside in, as it is presented in the book. What I am having a hard time getting a handle on is how to structure the tests. I have tried SpecFlow, but it seems cumbersome to use when I am the only one really ever going to be looking at the tests. I like the idea of just using straight NUnit, rather than adding another framework, as it is presented here. Is this a good way to try and structure BDD tests? Is there more information out there on comparing the two ways (that may even be more recent)?
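    For comparison, the "straight NUnit" style the question refers to usually just encodes Given/When/Then in the fixture and test names instead of in feature files. A hedged sketch of what that can look like (the ShoppingCart class is hypothetical):

```csharp
// Hedged sketch: BDD-flavoured naming with plain NUnit, no extra framework.
using NUnit.Framework;

public class ShoppingCart                 // hypothetical class under test
{
    public int ItemCount { get; private set; }
    public void Add(string sku) { ItemCount++; }
}

[TestFixture]
public class Given_an_empty_cart
{
    private ShoppingCart _cart;

    [SetUp]
    public void Given() { _cart = new ShoppingCart(); }

    [Test]
    public void When_an_item_is_added_then_the_cart_contains_one_item()
    {
        _cart.Add("SKU-1");
        Assert.AreEqual(1, _cart.ItemCount);
    }
}
```

    For a solo developer the trade-off is mostly readability of the test report versus the overhead of maintaining feature files nobody else will read.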

    Read the article

  • What would one call this architecture?

    - by Chris
I have developed a distributed test automation system which consists of two different entities. One entity is responsible for triggering test runs and monitoring/displaying their progress. The other is responsible for carrying out tests on that host. Both of these entities retrieve data from a central DB. Now, my first thought is that this is clearly a server-client architecture. After all, you have exactly one organizing entity and many entities that communicate with said entity. However, while the supposed clients do communicate with the server via RPC, they are not actually requesting services or information; rather, they are simply reporting back test progress. In fact, once the test run has been triggered, they can complete their tasks without a connection to the server. The request for a service is actually made by the supposed server, which triggers the clients to carry out tests. So would this still be considered a server-client architecture or is this something different?

    Read the article

  • DRY, string, and unit testing

    - by Rodrigue
I have a recurring question when writing unit tests for code that involves constant string values. Let's take an example of a method/function that does some processing and returns a string containing a pre-defined constant. In python, that would be something like: STRING_TEMPLATE = "/some/constant/string/with/%s/that/needs/interpolation/" def process(some_param): # We do some meaningful work that gives us a value result = _some_meaningful_action() return STRING_TEMPLATE % result If I want to unit test process, one of my tests will check the return value. This is where I wonder what the best solution is. In my unit test, I can: apply DRY and use the already defined constant repeat myself and rewrite the entire string def test_foo_should_return_correct_url(): string_result = process() # Applying DRY and using the already defined constant assert STRING_TEMPLATE % "1234" == string_result # Repeating myself, repeating myself assert "/some/constant/string/with/1234/that/needs/interpolation/" == string_result The advantage I see in the former is that my test will break if I put the wrong string value in my constant. The inconvenience is that I may be rewriting the same string over and over again across different unit tests.

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer [closed]

    - by Superbest
I don't have much formal training in programming and I have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which ended up both being difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them. Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium sized programs (development time of weeks or months and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources). I have found this tutorial which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as: What parts of my application and what sorts of things aren't necessarily worth testing? How fine grained should my tests be? Should they test every method and property separately, or work with a larger scope? What is a good naming convention for test methods? (since apparently the name of the method is the only way I will be able to tell from a glance at the test results table what works in my program and what doesn't) Is it bad to have many asserts in one test method? Since apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing. If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now? If my program works with a large body of data and execution takes minutes or more, should my tests have it use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset or how to simulate? Closely related to the previous point, if a class is such that its main operation happens in a state that is arrived at by the program after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing that class and all the other code that brings it to that state along with it? In general, what kind of approach should I use for test initialization? (hopefully that is the correct term, I mean preparing classes for testing by filling them in with appropriate data) How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas? Is there anything like design patterns concerning testing specifically? 
I guess the main themes in what I'd like to learn more about are: (1) what are the overarching principles that should be followed (or at least considered) in every testing effort and (2) what are popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly, I've found ReSharper's suggestions on member naming style and other things to be quite helpful in keeping my codebases sane. I see many resources both online and in print that talk about testing in the context of large applications (years of work, 10s of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real world problems. Speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors. Summary So my question is: What are some resources to learn about unit testing, for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory, and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and then amending discrepancies if it doesn't.
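    To make a couple of the questions above concrete, here is a hedged MSTest sketch of my own (not an authoritative answer): it shows one common naming convention (Method_Scenario_ExpectedResult), assert messages that identify which check failed when a method has several asserts, and the DeploymentItem attribute for file-based test input. The file name and expected header are hypothetical.

```csharp
// Hedged sketch: naming convention, assert messages, and DeploymentItem
// for a file-based test input. sample.csv and its contents are hypothetical.
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FileInputTests
{
    [TestMethod]
    [DeploymentItem("TestData\\sample.csv")]          // copied next to the running tests
    public void ReadAllLines_SampleFile_ReturnsHeaderAndRows()
    {
        var lines = File.ReadAllLines("sample.csv");

        // Messages make a multi-assert test readable in the results table.
        Assert.IsTrue(lines.Length > 1, "expected a header plus at least one data row");
        Assert.AreEqual("Id,Name", lines[0], "unexpected header line");
    }
}
```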

    Read the article

  • Moving from mock to real objects?

    - by jjchiw
I'm doing TDD, so I started everything by mocking objects, creating interfaces, stubbing - great. The design seems to work, and now I'll implement the stuff; a lot of the code used in the stubs is going to be reused in my real implementation, yay! Now should I duplicate the tests to use the real object implementation (but keep the mock objects for the sensitive stuff like the database and "services" that are out of my context (http calls, etc...))? Or just change the mocks and stubs of the actual tests to use the real objects? So the question is: keep two sets of tests, or replace the stubs and mocks? And after that, should I keep designing with the mocks and stubs or just go with real objects? (Just making myself clear: I'll keep the mock objects for the sensitive stuff like the database and services that are out of my context, in both situations.)

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
When I first worked with Sybase/SQL Server, we thought our databases were impressively large but they were, by today’s standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the Version Control System, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had. The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that it was a system that meant that only one developer at a time could work on the database. It was very easy for a developer to execute accidentally the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else’s work-in-progress and data. It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table’s columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square-brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit-tests. A retro-generated routine has no unit-tests or test assertions. Yes, one can still commit our test code to the VCS but it’s a separate module and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn’t scale for database testing. With extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond ‘MS_Description’ for the special use of extended properties. This makes it difficult to do splendid things such as ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database and therefore the VCS. We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected in view of the increasing complexity of the structure of databases and number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I’m one of very few who miss them.

    Read the article

  • Representing Mauritius in the 2013 Bench Games

    Only by chance I came across an interesting option for professionals and enthusiasts in IT, and quite honestly I can't even remember where I caught attention of Brainbench and their 2013 Bench Games event. But having access to 600+ free exams in a friendly international intellectual competition doesn't happen to be available every day. So, it was actually a no-brainer to sign up and browse through the various categories. Most interestingly, Brainbench is not only IT-related. They offer a vast variety of fields in their Test Center, like Languages and Communication, Office Skills, Management, Aptitude, etc., and it can be a little bit messy about how things are organised. Anyway, while browsing through their test offers I added a couple of exams to 'My Plan' which I would give a shot afterwards. Self-assessments Actually, I took the tests based on two major aspects: 'Fun Factor' and 'How good would I be in general'... Usually, you have to pay for any kind of exams and given this unique chance by Brainbench to simply train this kind of tests was already worth the time. Frankly speaking, the tests are very close to the ones you would be asked to do at Prometric or Pearson Vue, ie. Microsoft exams, etc. Go through a set of multiple choice questions in a given time frame. Most of the tests I did during the Bench Games were based on 40 questions, each with a maximum of 3 minutes to answer. Ergo, one test in maximum 2 hours - that sounds feasible, doesn't it? The Measure of Achievement While the 2013 Bench Games are considered a worldwide friendly competition of knowledge I was really eager to get other Mauritians attracted. Using various social media networks and community activities it all looked quite well at the beginning. Mauritius was listed on rank #19 of Most Certified Citizens and rank #10 of Most Master Level Certified Nation - not bad, not bad... Until... the next update of the Bench Games Leaderboard. The downwards trend seemed to be unstoppable and I couldn't understand why my results didn't show up on the Individual Leader Board. First of all, I passed exams that were not even listed and second, I had better results on some exams listed. After some further information from the organiser it turned out that my test transcript wasn't available to the public. Only then results are considered and counted in the competition. During that time, I actually managed to hold 3 test results on the Individuals... Other participants were merciless, eh, more successful than me, produced better test results than I did. But still I managed to stay on the final score board: An 'exotic' combination of exam, test result, country and person itself Representing Mauritius and the Visual FoxPro community in that fun event. And although I mainly develop in Visual FoxPro 9.0 SP2 and C# using .NET Framework from 2.0 to 4.5 since a couple of years I still managed to pass on Master Level. Hm, actually my Microsoft Certified Programmer (MCP) exams are dated back in June 2004 - more than 9 years ago... Look who got lucky... As described above I did a couple of exams as time allowed and without any preparations, but still I received the following mail notification: "Thank you for recently participating in our Bench Games event.  We wanted to inform you that you obtained a top score on our test(s) during this event, and as a result, will receive a free annual Brainbench subscription.  
Your annual subscription will give you access to all our tests just like Bench Games, but for an entire year plus additional benefits!" -- Leader Board Notification from Brainbench Even fun activities get rewarded sometimes. Thanks to @Brainbench_com for the free annual subscription based on my passed 2013 Bench Games Master Level exam. It would be interesting to know about the total figures, especially to see how many citizens of Mauritius took part in this year's Bench Games. Anyway, I'm looking forward to be able to participate in other challenges like this in the future.

    Read the article

  • Having problems building OpenCV 2.0 on CentOS 5?

    - by Hayri Ugur KOLTUK
Hi all! I've been trying to install the OpenCV library on my CentOS system; however, when I type make and hit enter after configuring with cmake, I get the following error: [100%] Building CXX object tests/cv/CMakeFiles/cvtest.dir/src/amoments.o [100%] Building CXX object tests/cv/CMakeFiles/cvtest.dir/src/affine3d_estimator.o [100%] Building CXX object tests/cv/CMakeFiles/cvtest.dir/src/acontours.o [100%] Building CXX object tests/cv/CMakeFiles/cvtest.dir/src/areprojectImageTo3D.o Linking CXX executable ../../bin/cvtest CMakeFiles/cvtest.dir/src/highguitest.o: In function `CV_HighGuiTest::run(int)': highguitest.cpp:(.text._ZN14CV_HighGuiTest3runEi+0x15): warning: the use of `tmpnam' is dangerous, better use `mkstemp' [100%] Built target cvtest make: *** [all] Error 2 and interestingly, once I got this error: [ 99%] Built target mltest [ 99%] Generating generated0.i Traceback (most recent call last): File "/home/proje/OpenCV-2.1.0/interfaces/python/gen.py", line 43, in ? if True in has_init and not all(has_init[has_init.index(True):]): NameError: name 'all' is not defined make[2]: *** [interfaces/python/generated0.i] Error 1 make[1]: *** [interfaces/python/CMakeFiles/cvpy.dir/all] Error 2 make: *** [all] Error 2 What possibly is the cause of these errors? I need to install OpenCV immediately on this computer. Best regards, Hayri Ugur KOLTUK

    Read the article

  • Cannot integrate Gallio MBUnit with Team City

    - by Bernard Larouche
I have been trying to get my MBUnit test suite to work on Team City for many days now without any success. My solution builds no problem. The problem is with my tests. After googling for Gallio integration with Team City, I tried many ways to make this thing work and I think I am close but need help. I have included the Gallio bin directory in my repository and also on my TC server. Here is my build runner set up in Team City: Build runner: MSBuild Build file path: Myproject.msbuild Targets: RebuildSolution RunTests Here is the Myproject.msbuild file I created and included in the Source control trunk directory: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <!-- This is needed by MSBuild to locate the Gallio task --> <UsingTask AssemblyFile="C:\Gallio\bin\Gallio.MSBuildTasks.dll" TaskName="Gallio" /> <!-- Specify the tests assemblies --> <ItemGroup> <TestAssemblies Include="C:\_CBL\CBL\CoderForTraders\Source\trunk\UnitTest\DomainModel.Tests\bin\Debug\CBL.CoderForTraders.DomainModel.Tests.dll" /> </ItemGroup> <Target Name="RunTests"> <Gallio IgnoreFailures="false" Assemblies="@(TestAssemblies)" RunnerExtensions="TeamCityExtension,Gallio.TeamCityIntegration"> <!-- This tells MSBuild to store the output value of the task's ExitCode property into the project's ExitCode property --> <Output TaskParameter="ExitCode" PropertyName="ExitCode"/> </Gallio> <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" /> </Target> <Target Name="RebuildSolution"> <Message Text="Starting to Build"/> <MSBuild Projects="CoderForTraders.sln" Properties="Configuration=Debug" Targets="Rebuild" /> </Target> </Project> Here are the errors displayed by Team City: error MSB4064: The "Assemblies" parameter is not supported by the "Gallio" task. Verify the parameter exists on the task, and it is a settable public instance property error MSB4063: The "Gallio" task could not be initialized with its input parameters. Thanks for your help

    Read the article

  • Problem's running unittest test suite OO

    - by chrissygormley
Hello, I have a test suite to perform smoke tests. I have all my scripts stored in various classes but when I try and run the test suite I can't seem to get it working if it is in a class. The code is below: (a class to call the tests) from alltests import SmokeTests class CallTests(SmokeTests): def integration(self): self.suite() if __name__ == '__main__': run = CallTests() run.integration() And the test suite: class SmokeTests(): def suite(self): #Function stores all the modules to be tested modules_to_test = ('external_sanity', 'internal_sanity') alltests = unittest.TestSuite() for module in map(__import__, modules_to_test): alltests.addTest(unittest.findTestCases(module)) return alltests unittest.main(defaultTest='suite') This outputs an error: AttributeError: 'module' object has no attribute 'suite' So I can see how to call a normal function defined but I'm finding it difficult calling in the suite. In one of the tests the suite is set up like so: class InternalSanityTestSuite(unittest.TestSuite): # Tests to be tested by test suite def makeInternalSanityTestSuite(): suite = unittest.TestSuite() suite.addTest(TestInternalSanity("BasicInternalSanity")) suite.addTest(TestInternalSanity("VerifyInternalSanityTestFail")) return suite def suite(): return unittest.makeSuite(TestInternalSanity) Can anyone help me with getting this running? Thanks for any help in advance.

    Read the article

  • TDD vs. Unit testing

    - by Walter
My company is fairly new to unit testing our code. I've been reading about TDD and unit testing for some time and am convinced of their value. I've attempted to convince our team that TDD is worth the effort of learning and changing our mindsets on how we program, but it is a struggle. Which brings me to my question(s). There are many in the TDD community who are very religious about writing the test and then the code (and I'm with them), but for a team that is struggling with TDD, does a compromise still bring added benefits? I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests. What's the best way to bring a struggling team into TDD? And failing that, is it still worth writing unit tests even if it is after the code is written? EDIT What I've taken away from this is that it is important for us to start unit testing, somewhere in the coding process. For those in the team who pick up the concept, start to move more towards TDD and testing first. Thanks for everyone's input. FOLLOW UP We recently started a new small project and a small portion of the team used TDD, the rest wrote unit tests after the code. After we wrapped up the coding portion of the project, those writing unit tests after the code were surprised to see the TDD coders already done and with more solid code. It was a good way to win over the skeptics. We still have a lot of growing pains ahead, but the battle of wills appears to be over. Thanks for everyone who offered advice!

    Read the article
