Search Results

Search found 23949 results on 958 pages for 'test me'.


  • Unittest in Django: static variable fed into the test case

    - by ziang
    I want to generate some dynamic data and feed it into my test cases. But I found that Django initializes the test class for every test, so the data gets regenerated every time the test framework calls a test method. Is there any way to use something like a singleton or a static variable to solve this? What should the solution be? Thanks!
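
    One common pattern is to build the data once at class level. Below is a minimal sketch using the standard unittest setUpClass hook (available in Python 2.7+); the generate_test_data function and the field names are made up for illustration, and on older Django/unittest versions a module-level cache would be needed instead:

        import unittest

        def generate_test_data():
            # stand-in for the expensive, dynamic data generation
            return {"users": ["alice", "bob"], "token": "abc123"}

        class MyModelTest(unittest.TestCase):
            data = None  # class-level ("static") storage shared by all tests

            @classmethod
            def setUpClass(cls):
                # runs once per test class, not once per test method
                cls.data = generate_test_data()

            def test_users_present(self):
                self.assertIn("alice", self.data["users"])

            def test_token_value(self):
                self.assertEqual(self.data["token"], "abc123")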

    Read the article

  • TestDriven.Net 3.0 – All Systems Go

    - by Jamie Cansdale
    I'm pleased to announce that TestDriven.Net 3.0 is now available. Finally! I know many of you will already be using the Beta and RC versions, but if you look at the release notes you'll see there have been many refinements since then, so I highly recommend you install the RTM version. Here is a quick summary of a few new features:

    - Visual Studio 2010 supports targeting multiple versions of the .NET framework (multi-targeting). This means you can easily upgrade your Visual Studio 2005/2008 solutions without necessarily converting them to use .NET 4.0. TestDriven.Net will execute your tests using the .NET version your test project is targeting (see 'Properties > Application > Target framework').
    - There is now first-class support for MSTest when using Visual Studio 2008 & 2010. Previous versions of TestDriven.Net had support for a limited number of MSTest attributes. This version supports virtually all MSTest unit-testing-related attributes, including support for the deployment item and data-driven test attributes. You should also find this test runner is quick. ;)
    - There is a new 'Go To Test/Code' command on the code context menu. You can think of this as Ctrl-Tab for test driven developers; it will quickly flip back and forth between your tests and the code under test. I recommend assigning a keyboard shortcut to the 'TestDriven.NET.GoToTestOrCode' command.
    - NCover can now be used for code coverage on .NET 4.0. This is only officially supported since NCover 3.2 (your mileage may vary if you're using the 1.5.8 version).
    - Rather than clutter the 'Output' window, ignored or skipped tests will be placed on the 'Task List'. You can double-click on these items to navigate to the offending test (or assign a keyboard shortcut to 'View.NextTask').
    - If you're using a Team, Premium or Ultimate edition of Visual Studio 2005-2010, a new 'Test With > Performance' command will be available. This command will perform instrumented performance profiling on your target code.

    A particular focus of this version has been to make it more keyboard friendly. Here's a list of commands you will probably want to assign keyboard shortcuts to (name, default shortcut, and what I use):

    - TestDriven.NET.RunTests (Run tests in context) - no default; I use Alt + T
    - TestDriven.NET.RerunTests (Repeat test run) - no default; I use Alt + R
    - TestDriven.NET.GoToTestOrCode (Flip between tests and code) - no default; I use Alt + G
    - TestDriven.NET.Debugger (Run tests with debugger) - no default; I use Alt + D
    - View.Output (Show the 'Output' window) - default Ctrl + Alt + O
    - Edit.BreakLine (Edit code in stack trace) - default Enter
    - View.NextError (Jump to next failed test) - default Ctrl + Shift + F12
    - View.NextTask (Jump to next skipped test) - no default; I use Alt + S

    By default the 'Output' window will automatically activate when there is test output or a failed test (this is an option). The cursor will be positioned on the stack trace of the last failed test, ready for you to hit 'Enter' to jump to the fail point or 'Esc' to return to your source (assuming your 'Output' window is set to auto-hide). If your 'Output' window isn't set to auto-hide, you'll need to hit 'Ctrl + Alt + O' then 'Enter'. Alternatively you can use 'Ctrl + Shift + F12' (View.NextError) to navigate between all failed tests. For more frequent updates or to give feedback, you can find me on twitter here. I hope you enjoy this version. Let me know how you get on. :)

    Read the article

  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I finally noticed the issue with making changes to the live server I've been working on, now that I have a couple of users on it, since I bring it down so often. I created an EC2 image of my live server and set up a separate instance, so now I have two EC2 instances: Stage and Production. I set up GitHub, push changes to stage, and test my code there; when it's all done and working, I push it to the production branch and everything is good. There is a slight wrinkle here: I name my files config_stage.js and config_production.js, set up .gitignore on each server, and have my code read an ENV flag to load the appropriate config. Is this the correct approach? And my main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB and several other things onto the Stage server for testing, and now that it's all working, how do I deploy them to production? Right now, I'm just keeping track of everything I installed and copying configuration files over, which is very tedious, and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes over from my test server to my live server?
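
    The ENV-flag approach is a common one. Here is a minimal sketch of env-driven config selection, shown in Python for illustration even though the poster's app is Node.js; the APP_ENV variable name and the config values are made up:

        import os

        # Hypothetical per-environment settings; in practice these would live in
        # config files kept out of version control (e.g. via .gitignore).
        CONFIGS = {
            "stage": {"db_host": "stage-db.internal", "debug": True},
            "production": {"db_host": "prod-db.internal", "debug": False},
        }

        def load_config():
            # Each server sets APP_ENV once; the code picks the matching config.
            env = os.environ.get("APP_ENV", "stage")
            return CONFIGS[env]

        if __name__ == "__main__":
            print(load_config())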

    Read the article

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently setting the groundwork for an ASP.Net MVC application and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying 'don't bother testing your views, there's no logic, it's trivial, and it will be covered by an integration test'. I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to know a half-hour later when my integration tests break, I want to know immediately. Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the Customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned. What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified. What I'm stuck doing: to unit-test the UI, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified. I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) removes the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or if not bad, then not the expected way to get said feedback?)

    Read the article

  • Creating SharePoint sites from xml using Powershell

    - by Norgean
    It is frequently useful to create / delete web applications in a development environment. If you need to create a structure, this can quickly become tedious. Enter Powershell, xml and recursive functions. Create the structure in xml. Something like:

        <Sites>
            <Site Name="Test 1" Url="Test1" />
            <Site Name="Test 2" Url="Test2" >
                <Site Name="Test 2 1" Url="Test21" >
                    <Site Name="Test 2 1 1" Url="Test211" />
                    <Site Name="Test 2 1 2" Url="Test212" />
                </Site>
            </Site>
            <Site Name="Test 3" Url="Test3" >
                <Site Name="Test 3 1" Url="Test31" />
                <Site Name="Test 3 2" Url="Test32" />
                <Site Name="Test 3 3" Url="Test33" >
                    <Site Name="Test 3 3 1" Url="Test331" />
                    <Site Name="Test 3 3 2" Url="Test332" />
                </Site>
                <Site Name="Test 3 4" Url="Test34" />
            </Site>
        </Sites>

    Read this structure in Powershell, and recursively create the sites. Oh, and have cool progress dialogs, too.

        $snap = Get-PSSnapin | Where-Object { $_.Name -eq "Microsoft.SharePoint.Powershell" }
        if ($snap -eq $null)
        {
            Add-PSSnapin "Microsoft.SharePoint.Powershell"
        }

        function CreateSites($baseUrl, $sites, [int]$progressid)
        {
            $sitecount = $sites.ChildNodes.Count
            $counter = 0
            foreach ($site in $sites.Site)
            {
                Write-Progress -ID $progressid -Activity "Creating sites" -status "Creating $($site.Name)" -percentComplete ($counter / $sitecount * 100)
                $counter = $counter + 1
                Write-Host "Creating $($site.Name) $($baseUrl)/$($site.Url)"
                New-SPWeb -Url "$($baseUrl)/$($site.Url)" -AddToQuickLaunch:$false -AddToTopNav:$false -Confirm:$false -Name "$($site.Name)" -Template "STS#0" -UseParentTopNav:$true
                if ($site.ChildNodes.Count -gt 0)
                {
                    CreateSites "$($baseUrl)/$($site.Url)" $site ($progressid + 1)
                }
                Write-Progress -ID $progressid -Activity "Creating sites" -status "Creating $($site.Name)" -Completed
            }
        }

        # read an xml file
        $xml = [xml](Get-Content "C:\Projects\Powershell\sites.xml")
        $xml.PreserveWhitespace = $false
        CreateSites "http://$($env:computername)" $xml.Sites 1

    Easy! Sensible real-life implementations will also include a templateid in the xml, will check for the existence of a site before creating it, etc.

    Read the article

  • When creating a library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (WebDAV + the server portion) and a project I think I can have a lot of fun with, both in terms of making it and using it. However, I see a need to mock types here in order to unit-test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily make mocks or stubs out of it, unless I use something like TypeMock. So if I used TypeMock in the unit-test projects for this library, how bad would this be for potential users? The projects are made in C# 3.5 for .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this, only the unit-test libraries. What are your thoughts on this? As an example, I have in my old code-base, which is private, the ability to initiate a WebDAV server with just this: var server = new WebDAVServer(); This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of this server object, the internal listener is disposed of. If, on the other hand, I use the overload where I hand it a listener object, that object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I ensure in a good way that the object was disposed of? With TypeMock I can mock away parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control.

    Read the article

  • Is anyone doing "real" TDD with Visual C++, and if yes, how do they do it?

    - by Martin
    Test Driven Development implies writing the test before the code and following a certain cycle: write test, check test (run), write production code, check test (run), clean up production code, check test (run). As far as I'm concerned, this is only possible if your development environment allows you to switch very quickly between the production and test code, and to execute the test for a certain piece of production code extremely quickly. Now, while there are lots of unit testing frameworks for C++ (I'm using Boost.Test at the moment), there doesn't really seem to be any decent (for native C++) Visual Studio (plugin) solution that makes the TDD cycle bearable, regardless of the framework used. "Bearable" means that it's a one-click action to run a test for a certain cpp file without having to manually set up a separate testing project, etc. "Bearable" also means that a simple test starts (linking!) and runs very quickly. So, what tools (plugins) and techniques are out there that make the TDD cycle possible for native C++ development with Visual Studio? Note: I'm fine with free or "commercial" tools. Please: no framework recommendations. (Unless the framework has a dedicated Visual Studio plugin and you want to recommend the plugin.) Edit: The answers so far have provided links on how to integrate a unit testing framework into Visual Studio. The resources more or less describe how to get the UT framework to compile and get your first tests running. This is not what this question is about. I'm of the opinion that to really work productively, having the unit tests in a manually maintained(!), separate vcproj from your production classes will add so much overhead that TDD "isn't possible". As far as I am aware, you do not add extra "projects" to a Java or C# solution to enable unit tests and TDD, and for good reason. This should be possible with C++ given the right tools, but it seems (and this is what the question is about) that there are very few tools for TDD/C++/VS. Googling around, I've found one tool, VisualAssert, that seems to aim in the right direction. However, as far as I know, it doesn't seem to be in widespread use (compared to CppUnit, Boost.Test etc.). Edit: I would like to add a comment to the context of this question. I think it does a good job of summarizing (part of) the problem: (comment by Billy ONeal) Visual Studio does not use "build scripts" that are reasonably editable by the user. One project produces one binary. Moreover, Java has the property that Java never builds a complete binary -- the binary you build is just a ZIP of the class files. Therefore it's possible to compile separately and then JAR together manually (using e.g. 7z). C++ and C# both actually link their binaries, so generally speaking you can't write a script like that. The closest you can get is to compile everything separately and then do two linkings (one for production, one for testing).

    Read the article

  • Are there currently any modern, standardized aptitude tests for software engineering?

    - by Matthew Patrick Cashatt
    Background: I am a working software engineer in the midst of seeking out a new contract for the next year or so. In my search, I am enduring several absurd technical interviews, as indicated by this popular question I asked earlier today. Even if the questions I was being asked weren't almost always absurd, I would be tired nonetheless of answering them many times over for various contract opportunities. So this got me thinking that a standardized exam that working software professionals could take would provide a common scorecard that interviewers could reference in lieu of absurd technical interview questions (i.e. nerd hazing). Question: Is there a standardized software engineering aptitude test (SEAT??) available for working professionals to take? If there isn't such an exam out there, what questions or topics should it cover? An additional thought: please keep in mind, if suggesting a question or topic, to focus on questions or topics that would be relevant to contemporary development practices and realistic needs in the workforce, as that would be the point of a standard aptitude test. In other words, no clown traversal questions.

    Read the article

  • How can I reduce the amount of time it takes to fully regression test an application ready for release?

    - by DrLazer
    An app I work on is being developed with a modified version of scrum. If you are not familiar with scrum, it's just an alternative approach to a more traditional waterfall model, where a series of features are worked on for a set amount of time known as a sprint. The app is written in C# and makes use of WPF. We use Visual C# 2010 Express edition as an IDE. If we work on a sprint and add in a few new features, but do not plan to release until a further sprint is complete, then regression testing is not an issue as such. We just test the new features and give the app a good once-over. However, if a release is planned that our customers can download, a full regression test is factored in. In the past this wasn't a big deal; it took 3 or 4 days and the devs simply fixed up any bugs found in the regression phase. But now, as the app is getting larger and larger and incorporating more and more features, the regression is spanning out for weeks. I am interested in any methods that people know of or use that can decrease this time. At the moment the only ideas I have are to either start writing unit tests, which I have never fully tried out in a commercial environment, or to research the possibility of any UI automation APIs or tools that would allow me to write a program to perform a series of batch tests. I know literally nothing about the possibilities of UI automation, so any information would be valuable. I don't know that much about unit testing either: how complicated can the tests be? Is it possible to get unit tests to use the UI? Are there any other methods I should consider? Thanks for reading, and for any advice in advance. Edit: Thanks for the information. Does anybody know of any alternatives to what has been mentioned so far (NUnit, RhinoMocks and CodedUI)?

    Read the article

  • In Windows, a batch file with a recursive for loop and a file name including blanks

    - by uvts_cvs
    Hello, I have a folder tree like this (it's only an example; it will be deeper in my real case):

        C:\test
        |
        +---folder1
        |       foo bar.txt
        |       foobar.txt
        |
        +---folder2
        |       foo bar.txt
        |       foobar.txt
        |
        \---folder3
                foo bar.txt
                foobar.txt

    My files have one or more spaces in the name and I need to perform a command on them, so I am interested in foo bar.txt but not in foobar.txt. I tried (inside a batch file):

        for /r test %%f in (foo bar.txt) do if exist %%f echo %%f

    where the command is the simple echo. It does not work because the name is split at the space, and I get no output. This works, but it is not what I need:

        for /r test %%f in (foobar.txt) do if exist %%f echo %%f

    It prints:

        C:\test\folder1\foobar.txt
        C:\test\folder2\foobar.txt
        C:\test\folder3\foobar.txt

    I tried using quotation marks (") but that does not work either:

        for /r test %%f in ("foo bar.txt") do if exist %%f echo %%f

    It does not work because the quotation marks are still included in the output:

        C:\test\folder1\"foo bar.txt"
        C:\test\folder2\"foo bar.txt"
        C:\test\folder3\"foo bar.txt"

    Read the article

  • Happy Day! VS2010 SP1, Project Server Integration, Load Test Feature Pack

    - by Aaron Kowall
    Microsoft released a PILE of Visual Studio goodness today:

    - Visual Studio 2010 SP1 (including TFS SP1): Finally done with remembering which GDR packs, KB patches, etc. need to be installed with a new VS/TFS 2010 deployment. Just grab SP1. It's available today for MSDN subscribers and March 10th for public download.

    - TFS-Project Server Integration Feature Pack: MSDN subscribers got another little treat today with the TFS-Project Server integration feature pack. We can now get project rollups and portfolio-level management with Project Server yet still have the tight developer interaction with TFS. Finally we can make the PMO happy without duplicate entry or MS Project gymnastics.

    - Visual Studio Load Test Feature Pack: This is a new benefit for Visual Studio 2010 Ultimate subscribers. Previously, Ultimate load testing was limited to 250 virtual users; if you needed more, you had to buy virtual user license packs. No more. Now your Visual Studio Ultimate license allows you to simulate as many virtual users as you need!! This is HUGE in improving adoption of regular load testing for development projects.

    All the details are available from Soma's blog. Technorati Tags: VS2010, TFS, Load Test

    Read the article

  • Tool to convert a file of HEX to ASCII character set?

    - by Aaron
    Question: Is there a known tool to convert a file consisting of 2-byte hex values into ASCII? Note: maintain the file offset listing in bytes. Example file contents:

        00000000 0054 0065 0073 0074 0020 0054 0065 0073
        00000008 0074 0020 0054 0065 0073 0074 0020 0054
        00000016 0065 0073 0074 0020 0054 0065 0073 0074
        00000024 0020 0054 0065 0073 0074 0020 0054 0065
        00000032 0073 0074 0020 0054 0065 0073 0074 0020
        00000040 0054 0065 0073 0074 000a 0054 0065 0073
        00000048 0074 0020 0054 0065 0073 0074 0020 0054
        00000056 0065 0073 0074 0020 0054 0065 0073 0074
        00000064 0020 0054 0065 0073 0074 0020 0054 0065

    Expected output:

        00000016 0065 0073 0074 0020 0054 0065 0073 0074 |est Test Test Te|
        00000032 0073 0074 0020 0054 0065 0073 0074 0020 |st Test Test.Tes|
        00000048 0074 0020 0054 0065 0073 0074 0020 0054 |t Test Test Test|
        00000064 0020 0054 0065 0073 0074 0020 0054 0065 | Test Test Test |
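
    For a dump formatted exactly like the above (an offset column followed by 2-byte hex words), a short script is probably simpler than hunting for a dedicated tool. Here is a minimal sketch in Python; the function name is made up, and it simply keeps the offset column and appends a hexdump-style ASCII column built from the low byte of each word:

        import sys

        def dump_with_ascii(path, words_per_line=8):
            # Reads a text dump of "offset word word ..." lines and re-emits each
            # line with an ASCII column appended, hexdump-style.
            with open(path) as f:
                for line in f:
                    parts = line.split()
                    if not parts:
                        continue
                    offset, words = parts[0], parts[1:words_per_line + 1]
                    chars = []
                    for word in words:
                        value = int(word, 16) & 0xFF  # low byte of each 2-byte value
                        chars.append(chr(value) if 32 <= value < 127 else ".")
                    print("%s %s |%s|" % (offset, " ".join(words), "".join(chars)))

        if __name__ == "__main__":
            dump_with_ascii(sys.argv[1])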

    Read the article

  • exported variable not persisted after script execution

    - by Daniele
    I'm facing a weird issue. I have a VM with Solaris 11, and I'm trying to write some bash scripts. If, on the shell, I type:

        export TEST=aaa

    and subsequently run:

        set

    I correctly see a new environment variable named TEST whose value is aaa. If, however, I do basically the same thing in a script, then when the script terminates I do not see the variable set. To give a concrete example, if in a file test.sh I have:

        #!/usr/bin/bash
        echo 1: $TEST   # variable not defined yet, expect to print only 1:
        echo 2: $USER
        TEST=sss
        echo 3: $TEST
        export TEST
        echo 4: $TEST

    it prints:

        1:
        2: daniele
        3: sss
        4: sss

    and after its execution, TEST is not set in the shell. Am I missing something? I tried both export TEST=sss and the separate variable set/export, with no difference.
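
    For context, this is ordinary environment-variable scoping rather than a Solaris quirk: the script runs in a child process, and exported variables only flow from parent to child, never back (running it as ". test.sh" or "source test.sh" would execute it in the current shell instead). Below is a minimal Python sketch of the same parent/child isolation, assuming bash is available on the PATH:

        import os
        import subprocess

        # A child process gets a copy of the parent's environment; changes the
        # child makes are never copied back into the parent.
        os.environ.pop("TEST", None)
        subprocess.call(["bash", "-c", "export TEST=aaa; echo child sees: $TEST"])
        print("parent sees:", os.environ.get("TEST"))  # still None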

    Read the article

  • How to unit test models in an MVC / MVR app?

    - by BBnyc
    I'm building a node.js web app and am trying to do so for the first time in a test-driven fashion. I'm using nodeunit for testing, which I find allows me to write tests quickly and painlessly. In this particular app, the heavy lifting primarily involves translating SQL data into complex JavaScript objects and serving them to the front-end via JSON. Likewise, the app also spends a great deal of code validating and translating complex, multidimensional JavaScript objects it receives from the front-end into SQL rows. Hence I have used a fat-model design for the app: most of the real code resides in the models, where the data translation happens. What's the best approach to testing such models with unit tests? I mean in particular the methods that create JavaScript objects from the SQL rows and serve them to the front-end. Right now what I'm doing is making particular requests of my models in the unit tests and checking the returned data for all of the fields that should be there. However, I have a suspicion that this is not the most robust kind of testing I could be doing. My current testing design also means I have to package my app code with some dummy data so that my tests can anticipate the kind of data that the app should be returning when tests run.
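
    One way to make such tests more robust is to pull the row-to-object translation out into a pure function and feed it hand-built rows in the test, instead of shipping dummy data and querying through the whole stack. Here is a minimal sketch of the idea, shown in Python rather than the poster's Node.js, with made-up field names:

        import unittest

        def customer_from_row(row):
            # Pure translation: a SQL row (as a dict) -> nested object for the front-end.
            return {
                "id": row["id"],
                "name": row["name"],
                "address": {"street": row["street"], "city": row["city"]},
            }

        class CustomerTranslationTest(unittest.TestCase):
            def test_row_is_translated_to_nested_object(self):
                row = {"id": 7, "name": "Ada", "street": "1 Main St", "city": "Oslo"}
                obj = customer_from_row(row)
                self.assertEqual(obj["name"], "Ada")
                self.assertEqual(obj["address"]["city"], "Oslo")

        if __name__ == "__main__":
            unittest.main()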

    Read the article

  • How can I test linkable/executable files that require re-hosting or retargeting?

    - by hagubear
    Due to data protection, I cannot discuss the fine details of the work itself, so apologies. Problem case: Sometimes my software projects require merging/integration with third-party (customer or other suppliers') software. This software often arrives as linkable executables or object code (which requires that my source code is retargeted and linked with it). When I get the executables or object code, I cannot validate their operation fully without integrating them with my system. My initial thought is that executables are not meant to be unit tested; they are meant to be linked with another system. But what is the guarantee that post-linkage and integration behaviour will be okay? There is also insufficient documentation available (from the customer) to indicate how to go about integrating the executables or object files. I know this is a philosophical question, but apparently not enough research could be found at this moment to point to a solution. I was hoping that people could help me go in the right direction by suggesting approaches. To start, I have found out that avionics OEM software is often rehosted and retargeted by third parties, e.g. simulator makers. I wonder how they test it. Surely the source code will not be supplied, due to IPR regulations. Update: I have received reasonable and very useful suggestions regarding this area. My current struggle has shifted to testing 3rd-party object code that needs to be linked with my own source code (retargeted) on my host machine. How can I even test object code? Surely I need to link it first to even think about doing anything. Is it the post-link behaviour that needs to be determined and scripted (using Perl, Tcl, etc.) so that inputs and outputs can be verified? No clue!! :( Thanks.

    Read the article

  • How to unit test a Spring MVC controller using @PathVariable?

    - by Martiner
    I have a simple annotated controller similar to this one:

        @Controller
        public class MyController {
            @RequestMapping("/{id}.html")
            public String doSomething(@PathVariable String id, Model model) {
                // do something
                return "view";
            }
        }

    and I want to test it with a unit test like this:

        public class MyControllerTest {
            @Test
            public void test() throws Exception {
                MockHttpServletRequest request = new MockHttpServletRequest();
                request.setRequestURI("/test.html");
                new AnnotationMethodHandlerAdapter()
                    .handle(request, new MockHttpServletResponse(), new MyController());
                // assert something
            }
        }

    The problem is that the AnnotationMethodHandlerAdapter.handle() method throws an exception:

        java.lang.IllegalStateException: Could not find @PathVariable [id] in @RequestMapping
            at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter$ServletHandlerMethodInvoker.resolvePathVariable(AnnotationMethodHandlerAdapter.java:642)
            at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.resolvePathVariable(HandlerMethodInvoker.java:514)
            at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.resolveHandlerArguments(HandlerMethodInvoker.java:262)
            at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:146)

    Read the article

  • How to manage test fixtures for end-to-end testing?

    - by Peter Becker
    Having just set up a test framework for a new web application, I realized I missed one of the big questions: "How do I make tests independent from each other?" Years ago I set up some complicated Ant scripting to do full cycles of deleting all database tables, creating the schema again, adding test data, starting the application, running one test and then stopping the application. That was a pain to maintain and restricted us to nightly tests due to the time it took to run the full suite. It was still worth it, but I wonder if there is an easier way. Are there alternatives to this approach? The main criterion is that each test should not be affected by any other test in the suite, no matter whether it failed or succeeded.
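
    One lighter-weight alternative is to give every test a fresh, known state instead of rebuilding the world between runs, for example an in-memory or per-test database created in setUp and thrown away in tearDown (or a transaction rolled back after each test). Here is a minimal sketch of the idea, using Python and SQLite purely for illustration; a real suite would point this at the application's own schema and setup code:

        import sqlite3
        import unittest

        class EndToEndCase(unittest.TestCase):
            def setUp(self):
                # Fresh in-memory database and schema per test: no test can leak
                # state into the next one, whether it passes or fails.
                self.db = sqlite3.connect(":memory:")
                self.db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
                self.db.execute("INSERT INTO customers (name) VALUES ('baseline user')")
                self.db.commit()

            def tearDown(self):
                self.db.close()

            def test_can_add_customer(self):
                self.db.execute("INSERT INTO customers (name) VALUES ('new user')")
                count = self.db.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
                self.assertEqual(count, 2)

            def test_baseline_is_untouched_by_other_tests(self):
                count = self.db.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
                self.assertEqual(count, 1)

        if __name__ == "__main__":
            unittest.main()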

    Read the article

  • How do I properly unit test a Django session?

    - by thebossman
    The behavior of Django sessions changes between "standard" views code and test code, making it unclear how test code is written for sessions. Googling this yields two relevant discussions about this issue: Easier manipulation of sessions by test client test.Client.session.save() raises error for anonymous users I'm confused because both tickets have different ways of dealing with this problem and they were both Accepted. I assume this means they were patched and the behavior is now different. I also don't know to which versions these patches would pertain. If I'm writing a unit test in Django 1.0, how would I set up my session store for sessions to work as they do in the browser?
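
    For reference, the pattern that later Django versions document for the test client looks like the sketch below. This is a hedged example: the URL, session key and user are made up, and on older releases such as 1.0 the assignment only sticks once a real session exists (e.g. after login), which is exactly the anonymous-user problem the tickets above discuss:

        from django.contrib.auth.models import User
        from django.test import TestCase

        class SessionViewTest(TestCase):
            def test_view_reads_value_from_session(self):
                # Log in first so the test client has a real session to modify.
                User.objects.create_user("alice", "alice@example.com", "secret")
                self.client.login(username="alice", password="secret")

                session = self.client.session
                session["cart_id"] = 42          # hypothetical key your view reads
                session.save()

                response = self.client.get("/checkout/")   # hypothetical URL
                self.assertEqual(response.status_code, 200)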

    Read the article

  • How do I give each test its own TestResults folder?

    - by izb
    I have a set of unit tests, each with a bunch of methods, each of which produces output in the TestResults folder. At the moment, all the test files are jumbled up in this folder, but I'd like to bring some order to the chaos. Ideally, I'd like to have a folder for each test method. I know I can go round adding code to each test to make it produce output in a subfolder instead, but I was wondering if there was a way to control the output folder location with the Visual Studio unit test framework, perhaps using an initialization method on each test class so that any new tests added automatically get their own output folder without needing copy/pasted boilerplate code?

    Read the article

  • How to get a handle/reference to the current controller object inside a rails functional test?

    - by Dave Paroulek
    I must be missing something very simple, but I can't find the answer to this. I have a method named foo inside bar_controller. I simply want to call that method from inside a functional test. Here's my controller:

        class BarsController < ApplicationController
          def foo
            # does stuff
          end
        end

    Here's my functional test:

        class BarsControllerTest < ActionController::TestCase
          test "foo" do
            # run foo
            foo
            # assert stuff
          end
        end

    When I run the test I get:

        NameError: undefined local variable or method `foo' for #<BarsControllerTest:0x102f2eab0>

    All the documentation on functional tests describes how to simulate an HTTP GET request to the bar_controller, which then runs the method. But I'd just like to run the method without hitting it with an HTTP GET or POST request. Is that possible? There must be a reference to the controller object inside the functional test, but I'm still learning Ruby and Rails, so I need some help.

    Read the article

  • How to mock test a web service in PHPUnit across multiple tests?

    - by scraton
    I am attempting to test a web service interface class using PHPUnit. Basically, this class makes calls to a SoapClient object. I am attempting to test this class in PHPUnit using the "getMockFromWsdl" method described here: http://www.phpunit.de/manual/current/en/test-doubles.html#test-doubles.stubbing-and-mocking-web-services However, since I want to test multiple methods from this same class, every time I set up the object I also have to set up the mock WSDL SoapClient object. This causes a fatal error to be thrown:

        Fatal error: Cannot redeclare class xxxx in C:\web\php5\PEAR\PHPUnit\Framework\TestCase.php(1227) : eval()'d code on line 15

    How can I use the same mock object across multiple tests without having to regenerate it from the WSDL each time? That seems to be the problem.

    Read the article
