Search Results

Search found 10078 results on 404 pages for 'smoke testing'.


  • VS2008: File creation fails randomly in unit testing?

    - by Tim
    I'm implementing a reasonably simple XML serializer/deserializer (log file parser) application in C# .NET with VS 2008. I have about 50 unit tests right now for various parts of the code (mostly for the serialization operations), and some of them seem to fail more or less at random when they deal with file I/O. The tests are structured so that in the test setup method I create a new empty file at a predetermined location and close the stream I get back. Then I run some basic tests on the file (varying by what exactly is under test). In the cleanup method, I delete the file again. A large portion of my unit tests (usually 30 or more, though the number varies from run to run) fail in the initialize method, claiming they can't access the file I'm trying to create. I can't pin down the exact reason, since a test that works in one run fails in the next; they all succeed when run individually. What's the problem here? Why can't I access this file across multiple unit tests? Relevant methods for a unit test that fails some of the time:

        [TestInitialize()]
        public void LogFileTestInitialize() {
            this.testFolder = System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.LocalApplicationData);
            this.testPath = this.testFolder + "\\empty.lfp";
            System.IO.File.Create(this.testPath);
        }

        [TestMethod()]
        public void LogFileConstructorTest() {
            string filePath = this.testPath;
            LogFile target = new LogFile(filePath);
            Assert.AreNotEqual(null, target);
            Assert.AreEqual(this.testPath, target.filePath);
            Assert.AreEqual("empty.lfp", target.fileName);
            Assert.AreEqual(this.testFolder + "\\empty.lfp.lfpdat", target.metaPath);
        }

        [TestCleanup()]
        public void LogFileTestCleanup() {
            System.IO.File.Delete(this.testPath);
        }

    And the LogFile() constructor:

        public LogFile(String filePath) {
            this.entries = new List<Entry>();
            this.filePath = filePath;
            this.metaPath = filePath + ".lfpdat";
            this.fileName = filePath.Substring(filePath.LastIndexOf("\\") + 1);
        }

    The precise error message:

        Initialization method LogFileParserTester.LogFileTest.LogFileTestInitialize threw exception.
        System.IO.IOException: System.IO.IOException: The process cannot access the file
        'C:\Users\<user>\AppData\Local\empty.lfp' because it is being used by another process.
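    A likely explanation worth checking (an assumption about the code shown above, not a confirmed diagnosis): System.IO.File.Create returns an open FileStream, and the setup method shown never disposes it, so the handle stays open until the finalizer happens to run, which is why the failures look random and disappear when tests run individually. A minimal sketch of a setup that releases the handle immediately:

        [TestInitialize()]
        public void LogFileTestInitialize() {
            this.testFolder = System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.LocalApplicationData);
            this.testPath = this.testFolder + "\\empty.lfp";
            // File.Create returns an open FileStream; dispose it so no handle lingers.
            using (System.IO.File.Create(this.testPath)) { }
            // Or, equivalently: System.IO.File.WriteAllText(this.testPath, string.Empty);
        }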

  • Null reference to DataContext when testing an ASP.NET MVC app with NUnit

    - by user252160
    I have an ASP.NET MVC application with a separate project added for tests. I know the pluses and minuses of using a live database connection when running unit tests, and I still want to use it. Yet every time I run the tests with the NUnit tool, they all fail because my DataContext is null. I have heard something about having a separate config file for the test assembly, but I am not sure whether I did it properly, or whether that works at all.
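    One thing worth ruling out (an assumption, since the wiring of the DataContext isn't shown): a LINQ to SQL DataContext that relies on a connection string from configuration will not find it when the code runs under NUnit, because the test assembly reads its own app.config rather than the web application's Web.config. Either copy the connectionStrings section into the test project's app.config, or hand the context its connection string explicitly in the fixture setup. A minimal sketch, with MyDataContext and the "TestDb" entry standing in for whatever the real names are:

        using System.Configuration;
        using NUnit.Framework;

        [TestFixture]
        public class RepositoryTests {
            private MyDataContext db;   // hypothetical LINQ to SQL context

            [SetUp]
            public void SetUp() {
                // Read the connection string from the *test* project's app.config,
                // so the tests no longer depend on Web.config being found.
                string cs = ConfigurationManager
                    .ConnectionStrings["TestDb"].ConnectionString;
                db = new MyDataContext(cs);
            }

            [TearDown]
            public void TearDown() {
                db.Dispose();
            }
        }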

  • Rhapsody TestConductor Experiences

    - by vaiomike
    I was wondering whether anybody out there is actively using Rhapsody TestConductor, or has tried it for a while but then decided to turn it down for a particular reason. If so, what are your experiences, in which field do you apply it, what are the shortcomings, or why did you turn it down? At the moment we're considering TestConductor as our tool of choice for testing, since it's already integrated into Rhapsody, and we would like to find out how applicable it is to our project (by the way, we're using Rhapsody 7.4 in C). P.S.: Recommendations on good books about Model-Based Testing are also appreciated.

  • Testing install procedure of a program requiring administrative privileges

    - by Lucas Meijer
    I'm trying to write automated tests to ensure that the installer for my program works correctly. The program can be installed for all users (requires admin privileges) or for the current user (does not require admin privileges). The program can also auto-update itself, which in some cases requires admin privileges and in some cases doesn't. I'm looking for a way to have an automated test click "Yes, Allow" on the UAC dialogs, so I can write tests for all the different scenarios on many different operating systems, and be confident when I make changes to the installer that I didn't break anything. Obviously, the installer process itself cannot do this. However, I control the complete machine, and could easily start some sort of daemon process with administrative rights that the test program could make a socket connection to, to request it to "please click OK on the UAC prompt now".
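    For what it's worth, a skeleton of that elevated helper is easy to sketch; the hard part is the click itself, since UAC prompts shown on the secure desktop cannot be driven by an ordinary process (the machine's "switch to the secure desktop when prompting for elevation" policy would have to be relaxed, after which UI Automation can reach the consent dialog). Everything below, including the port, the command string, and the stubbed confirm method, is made up for illustration:

        using System;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;

        class UacClickDaemon {
            static void Main() {
                // Started once per test machine, elevated ("Run as administrator").
                var listener = new TcpListener(IPAddress.Loopback, 9050);
                listener.Start();
                while (true) {
                    using (TcpClient client = listener.AcceptTcpClient())
                    using (var reader = new StreamReader(client.GetStream())) {
                        if (reader.ReadLine() == "click-uac-allow") {
                            ConfirmElevationPrompt();
                        }
                    }
                }
            }

            static void ConfirmElevationPrompt() {
                // Stub: locate the consent dialog and invoke its "Yes"/"Allow"
                // button, e.g. via System.Windows.Automation.
            }
        }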

  • Testing chess game

    - by mousey
    There is a piece of chess software, and we need to test the following method:

        boolean canMoveTo(int x, int y)

    Here x and y are coordinates on the chess board, and the method returns true or false depending on whether the piece can move to that position. We need to test this method for a pawn, and the board may be set up any way we like prior to running a test case. The source code is not provided.
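    Since this is black-box testing, the work is mostly in choosing board setups that partition a pawn's movement rules. A sketch of such cases, written in C# (NUnit style) for concreteness, is below; the Board and piece-placement API are invented stand-ins, because no source is available:

        using NUnit.Framework;

        [TestFixture]
        public class PawnCanMoveToTests {
            [Test]
            public void CanAdvanceOneSquareToAnEmptySquare() {
                var board = new Board();                      // hypothetical helper API
                var pawn = board.PlaceWhitePawnAt(4, 1);
                Assert.IsTrue(pawn.canMoveTo(4, 2));
            }

            [Test]
            public void CanAdvanceTwoSquaresOnlyFromItsStartingRank() {
                var board = new Board();
                var pawn = board.PlaceWhitePawnAt(4, 1);
                Assert.IsTrue(pawn.canMoveTo(4, 3));
            }

            [Test]
            public void CannotAdvanceWhenBlocked() {
                var board = new Board();
                var pawn = board.PlaceWhitePawnAt(4, 1);
                board.PlaceBlackPawnAt(4, 2);                 // blocker directly ahead
                Assert.IsFalse(pawn.canMoveTo(4, 2));
                Assert.IsFalse(pawn.canMoveTo(4, 3));
            }

            [Test]
            public void CapturesDiagonallyOnlyWhenAnEnemyPieceIsThere() {
                var board = new Board();
                var pawn = board.PlaceWhitePawnAt(4, 1);
                board.PlaceBlackPawnAt(5, 2);
                Assert.IsTrue(pawn.canMoveTo(5, 2));          // capture is available
                Assert.IsFalse(pawn.canMoveTo(3, 2));         // empty diagonal
            }
        }

    Sideways moves, moves off the board, and moves onto a square occupied by a friendly piece round out the obvious negative cases.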

  • Unit testing and mocking email sender in Python with Google AppEngine

    - by CVertex
    I'm a newbie to Python and App Engine. I have this code that sends an email based on request parameters, after some auth logic. In my unit tests (I'm using GAEUnit), how do I confirm that an email with specific contents was sent - i.e., how do I mock the mailer with a fake one to verify that send was called?

        class EmailHandler(webapp.RequestHandler):
            def bad_input(self):
                self.response.set_status(400)
                self.response.headers['Content-Type'] = 'text/plain'
                self.response.out.write("<html><body>bad input </body></html>")

            def get(self):
                to_addr = self.request.get("to")
                subj = self.request.get("subject")
                msg = self.request.get("body")
                if not mail.is_email_valid(to_addr):
                    # Return an error message...
                    # self.bad_input()
                    pass
                # authenticate here
                message = mail.EmailMessage()
                message.sender = "[email protected]"
                message.to = to_addr
                message.subject = subj
                message.body = msg
                message.send()
                self.response.headers['Content-Type'] = 'text/plain'
                self.response.out.write("<html><body>success!</body></html>")

    And the unit tests:

        import unittest
        from webtest import TestApp
        from google.appengine.ext import webapp
        from email import EmailHandler

        class SendingEmails(unittest.TestCase):
            def setUp(self):
                self.application = webapp.WSGIApplication([('/', EmailHandler)], debug=True)

            def test_success(self):
                app = TestApp(self.application)
                response = app.get('http://localhost:8080/[email protected]&body=blah_blah_blah&subject=mySubject')
                self.assertEqual('200 OK', response.status)
                self.assertTrue('success' in response)
                # somehow, assert email was sent

  • Unit testing a controller method?

    - by Stefan Kendall
    I have a controller method like this:

        def search = {
            def query = params.query
            ...
            render results as JSON
        }

    How do I unit test this? Specifically, how do I call search so that params.query is set, and how do I test what the method renders? Is there a way to mock the render method, perhaps?

  • Correct way of using/testing event service in Eclipse E4 RCP

    - by Thorsten Beck
    Allow me to pose two coupled questions that might boil down to one about good application design ;-)

    1. What is the best practice for using event-based communication in an e4 RCP application?
    2. How can I write simple unit tests (using JUnit) for classes that send/receive events using dependency injection and IEventBroker?

    Let's be more concrete: say I am developing an Eclipse e4 RCP application consisting of several plug-ins that need to communicate. For the communication I want to use the event service provided by org.eclipse.e4.core.services.events.IEventBroker so my plug-ins stay loosely coupled. I use dependency injection to inject the event broker into a class that dispatches events:

        @Inject
        static IEventBroker broker;

        private void sendEvent() {
            broker.post(MyEventConstants.SOME_EVENT, payload);
        }

    On the receiver side, I have a method like:

        @Inject
        @Optional
        private void receiveEvent(@UIEventTopic(MyEventConstants.SOME_EVENT) Object payload)

    Now the questions. First: in order for IEventBroker to be successfully injected, my class needs access to the current IEclipseContext. Most of my classes using the event service are not referenced by the e4 application model, so I have to manually inject the context on instantiation, using e.g.

        ContextInjectionFactory.inject(myEventSendingObject, context);

    This approach works, but I find myself passing a lot of context around to wherever I use the event service. Is this really the correct approach to event-based communication across an e4 application? Second: how can I easily write JUnit tests for a class that uses the event service (either as a sender or a receiver)? Obviously, none of the above annotations work in isolation since there is no context available. I understand everyone is convinced that dependency injection simplifies testability, but does this also apply to injecting services like IEventBroker? One article describes creating your own IEclipseContext to bring DI into the tests. I'm not sure whether that would resolve my second issue, and I also hesitate to run all my tests as JUnit plug-in tests, as it seems impractical to fire up the PDE for each unit test. Maybe I just misunderstand the approach. Another article speaks about "simply mocking IEventBroker". Yes, that would be great! Unfortunately, I couldn't find any information on how this can be achieved. All this makes me wonder whether I am still on a good path or whether this is already a case of bad design, and if so, how would you go about redesigning? Move all event-related actions to dedicated event sender/receiver classes, or to a dedicated plug-in?

  • Downsides to using FakeWeb compared to writing mocks for testing

    - by ajmurmann
    I never liked writing mocks, and a while ago someone here recommended using FakeWeb. I immediately fell completely in love with it. However, I have to wonder if there is a downside to using FakeWeb. Mocks still seem to be much more common, so I wonder what I am missing that's wrong with using FakeWeb instead. Is there a certain kind of error you can't cover with FakeWeb, or is it something about the TDD or BDD process?

  • Shoulda and Paperclip testing

    - by trobrock
    I am trying to test a couple of models that have an attachment with Paperclip. I have all of my validations passing except for the content-type check.

        # myapp/test/unit/project_test.rb
        should_have_attached_file :logo
        should_validate_attachment_presence :logo
        should validate_attachment_size(:logo).less_than(1.megabyte)
        should_validate_attachment_content_type :logo,
          :valid => ["image/png", "image/jpeg", "image/pjpeg", "image/x-png"]

        # myapp/app/models/project.rb
        has_attached_file :logo, :styles => { :small => "100x100>", :medium => "200x200>" }
        validates_attachment_presence :logo
        validates_attachment_size :logo, :less_than => 1.megabyte
        validates_attachment_content_type :logo,
          :content_type => ["image/png", "image/jpeg", "image/pjpeg", "image/x-png"]

    The errors I am getting:

        1) Failure:
        test: Client should validate the content types allowed on attachment logo. (ClientTest)
        [/Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/assertions.rb:55:in `assert_accepts'
         vendor/plugins/paperclip/shoulda_macros/paperclip.rb:44:in `__bind_1276100387_499280'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `call'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `test: Client should validate the content types allowed on attachment logo. ']:
        Content types image/png, image/jpeg, image/pjpeg, image/x-png should be accepted and rejected by logo

    This happens on two different models that are set up the same way.

  • Flex Unit testing of library and mxml using FlexUnit

    - by user344722
    Hi, I have some library classes that run commands on any MXML file. These classes are wrapped in a SWC file, and that SWC is referenced by a sample MXML application (added as a SWC library). My problem is that I want to test these library classes against my sample MXML file using FlexUnit; that is, I should test the methods the library runs against the MXML file. How can I accomplish this? Thanks, Pradeep

  • Django: How to create a model dynamically just for testing

    - by muhuk
    I have a Django app that requires a settings attribute of the form:

        RELATED_MODELS = ('appname1.modelname1.attribute1',
                          'appname1.modelname2.attribute2',
                          'appname2.modelname3.attribute3', ...)

    The app then hooks their post_save signals to update some other fixed model, depending on the attributeN defined. I would like to test this behaviour, and the tests should work even if this app is the only one in the project (apart from its own dependencies; no other wrapper app should need to be installed). How can I create and attach/register/activate mock models just for the test database? (Or is that possible at all?) Solutions that allow me to use test fixtures would be great.

  • VSTS 2008 Load testing, Is it any good?

    - by anshu
    I have already spent a couple of weeks trying to use this tool to create some web tests and load tests, but every day it throws up a weird problem for which I can find nothing in the documentation. For example: a "hidden variable (_lastfocus) not found in the context" error. Today, all of a sudden, it is refusing to run some of the web tests that are part of the test mix in my load test run (they work fine in another load test). Are the expensive, enterprise-level tools (LoadRunner, SilkPerformer, etc.) the only good ones?

  • Testing a Non-blocking Queue

    - by jsw
    I've ported the non-blocking queue pseudocode linked here to C#. The code below is meant as a near-verbatim copy of the paper. What approach would you take to test the implementation? Note: I'm running in VS2010, so I don't have CHESS support yet.

        using System.Threading;

        #pragma warning disable 0420

        namespace ConcurrentCollections
        {
            class QueueNodePointer<T>
            {
                internal QueueNode<T> ptr;

                internal QueueNodePointer() : this(null) { }

                internal QueueNodePointer(QueueNode<T> ptr)
                {
                    this.ptr = ptr;
                }
            }

            class QueueNode<T>
            {
                internal T value;
                internal QueueNodePointer<T> next;

                internal QueueNode() : this(default(T)) { }

                internal QueueNode(T value)
                {
                    this.value = value;
                    this.next = new QueueNodePointer<T>();
                }
            }

            public class ConcurrentQueue<T>
            {
                private volatile int count = 0;
                private QueueNodePointer<T> qhead = new QueueNodePointer<T>();
                private QueueNodePointer<T> qtail = new QueueNodePointer<T>();

                public ConcurrentQueue()
                {
                    var node = new QueueNode<T>();
                    node.next.ptr = null;
                    this.qhead.ptr = this.qtail.ptr = node;
                }

                public int Count
                {
                    get { return this.count; }
                }

                public void Enqueue(T value)
                {
                    var node = new QueueNode<T>(value);
                    node.next.ptr = null;
                    QueueNodePointer<T> tail;
                    QueueNodePointer<T> next;
                    while (true)
                    {
                        tail = this.qtail;
                        next = tail.ptr.next;
                        if (tail == this.qtail)
                        {
                            if (next.ptr == null)
                            {
                                var newtail = new QueueNodePointer<T>(node);
                                if (Interlocked.CompareExchange(ref tail.ptr.next, newtail, next) == next)
                                {
                                    Interlocked.Increment(ref this.count);
                                    break;
                                }
                                else
                                {
                                    Interlocked.CompareExchange(ref this.qtail, new QueueNodePointer<T>(next.ptr), tail);
                                }
                            }
                        }
                    }
                    Interlocked.CompareExchange(ref this.qtail, new QueueNodePointer<T>(node), tail);
                }

                public T Dequeue()
                {
                    T value;
                    while (true)
                    {
                        var head = this.qhead;
                        var tail = this.qtail;
                        var next = head.ptr.next;
                        if (head == this.qhead)
                        {
                            if (head.ptr == tail.ptr)
                            {
                                if (next.ptr == null)
                                {
                                    return default(T);
                                }
                                Interlocked.CompareExchange(ref this.qtail, new QueueNodePointer<T>(next.ptr), tail);
                            }
                            else
                            {
                                value = next.ptr.value;
                                var newhead = new QueueNodePointer<T>(next.ptr);
                                if (Interlocked.CompareExchange(ref this.qhead, newhead, head) == head)
                                {
                                    Interlocked.Decrement(ref this.count);
                                    break;
                                }
                            }
                        }
                    }
                    return value;
                }
            }
        }

        #pragma warning restore 0420
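    One low-tech approach (sketched below; the thread counts and sizes are arbitrary, and it only exercises the int instantiation) is a brute-force stress test: several producer threads enqueue disjoint ranges of values, several consumers drain the queue, and afterwards every value must have been seen exactly once with Count back at zero. This can only ever demonstrate the presence of a race, never its absence, so it complements rather than replaces an interleaving-exploring tool like CHESS.

        using System;
        using System.Collections.Generic;
        using System.Threading;
        using ConcurrentCollections;

        class QueueStressTest
        {
            static void Main()
            {
                const int producers = 4, consumers = 4, perProducer = 100000;
                int total = producers * perProducer;
                var queue = new ConcurrentQueue<int>();
                var seen = new int[total + 1];       // index 0 unused; values start at 1
                int dequeued = 0;
                var threads = new List<Thread>();

                for (int p = 0; p < producers; p++)
                {
                    int start = p * perProducer + 1; // disjoint value range per producer
                    threads.Add(new Thread(() =>
                    {
                        for (int v = start; v < start + perProducer; v++)
                            queue.Enqueue(v);
                    }));
                }

                for (int c = 0; c < consumers; c++)
                {
                    threads.Add(new Thread(() =>
                    {
                        while (Thread.VolatileRead(ref dequeued) < total)
                        {
                            int v = queue.Dequeue(); // 0 means "queue looked empty"
                            if (v == 0) { Thread.Sleep(0); continue; }
                            Interlocked.Increment(ref seen[v]);
                            Interlocked.Increment(ref dequeued);
                        }
                    }));
                }

                foreach (var t in threads) t.Start();
                foreach (var t in threads) t.Join();

                bool ok = queue.Count == 0;
                for (int v = 1; v <= total; v++)
                    if (seen[v] != 1) ok = false;    // lost or duplicated element
                Console.WriteLine(ok ? "PASS" : "FAIL");
            }
        }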

  • Unit testing opaque structure based C API

    - by Nicolas Goy
    I have a library I wrote with an API based on opaque structures. Using opaque structures has a lot of benefits and I am very happy with it. Now that the API is stable in terms of its specification, I'd like to write a complete battery of unit tests to ensure a solid base before releasing it. My concern is simple: how do you unit test an API based on opaque structures, where the main goal is to hide the internal logic? For example, let's take a very simple object, an array, with a very simple test:

        WSArray a = WSArrayCreate();
        int foo = 5;
        WSArrayAppendValue(a, &foo);
        int *bar = WSArrayGetValueAtIndex(a, 0);
        if (&foo != bar)
            printf("Erroneous value returned\n");
        else
            printf("Good value returned\n");
        WSRelease(a);

    Of course, this tests some facts, like that the array actually behaves as intended with one value, but when I write unit tests, at least in C, I usually compare the memory footprint of my data structures with a known state. In my example, I don't know whether some internal state of the array is broken. How would you handle that? I'd really like to avoid adding code to the implementation files only for unit testing; I place a strong emphasis on loose coupling of modules, and injecting unit tests into the implementation would seem rather invasive to me. My first thought was to include the implementation file in my unit test, linking the unit test statically against my library. For example:

        #include <WS/WS.h>
        #include <WS/Collection/Array.c>

        static void TestArray(void)
        {
            WSArray a = WSArrayCreate();
            /* Structure members are available because we included Array.c */
            printf("%d\n", a->count);
        }

    Is that a good idea? Of course, the unit tests won't benefit from encapsulation, but they are there to ensure it's actually working.

  • Django Testing: Faking User Creation

    - by Ygam
    I want to write this test better:

        def test_profile_created(self):
            self.client.post(reverse('registration_register'), data={
                'username': 'ygam',
                'email': '[email protected]',
                'password1': 'ygam',
                'password2': 'ygam'
            })
            """ Test if a profile is created on save """
            user = User.objects.get(username='ygam')
            self.assertTrue(UserProfile.objects.filter(user=user).exists())

    I just came upon this code in the django-registration tests, which does not actually "create" the user:

        def test_registration_signal(self):
            def receiver(sender, **kwargs):
                self.failUnless('user' in kwargs)
                self.assertEqual(kwargs['user'].username, 'bob')
                self.failUnless('request' in kwargs)
                self.failUnless(isinstance(kwargs['request'], WSGIRequest))
                received_signals.append(kwargs.get('signal'))

            received_signals = []
            signals.user_registered.connect(receiver, sender=self.backend.__class__)
            self.backend.register(_mock_request(), username='bob',
                                  email='[email protected]', password1='secret')
            self.assertEqual(len(received_signals), 1)
            self.assertEqual(received_signals, [signals.user_registered])

    However, it uses a custom function, _mock_request, for this:

        class _MockRequestClient(Client):
            def request(self, **request):
                environ = {
                    'HTTP_COOKIE': self.cookies,
                    'PATH_INFO': '/',
                    'QUERY_STRING': '',
                    'REMOTE_ADDR': '127.0.0.1',
                    'REQUEST_METHOD': 'GET',
                    'SCRIPT_NAME': '',
                    'SERVER_NAME': 'testserver',
                    'SERVER_PORT': '80',
                    'SERVER_PROTOCOL': 'HTTP/1.1',
                    'wsgi.version': (1, 0),
                    'wsgi.url_scheme': 'http',
                    'wsgi.errors': self.errors,
                    'wsgi.multiprocess': True,
                    'wsgi.multithread': False,
                    'wsgi.run_once': False,
                    'wsgi.input': None,
                }
                environ.update(self.defaults)
                environ.update(request)
                request = WSGIRequest(environ)
                # We have to manually add a session since we'll be bypassing
                # the middleware chain.
                session_middleware = SessionMiddleware()
                session_middleware.process_request(request)
                return request

        def _mock_request():
            return _MockRequestClient().request()

    That may be too long a function for my needs, though. I want to be able to somehow "fake" the account creation. I don't have much experience with mocks and stubs, so any help would be appreciated. Thanks!

  • Problem with load testing Web Service - VSTS 2008

    - by Carlos
    Hello, I have a web test that makes a simple call to a web service, which looks like this:

        MyWebService webService = new MyWebService();
        webService.Timeout = 180000;
        webService.myMethod();

    I am not using think times, and the run duration is set to 5 minutes. When I ran this test simulating only one user, I checked the counters and found something like this:

        Tests Total: 4500
        Network Interface\Bytes sent (agent machine): 35,500

    Then I ran the same test, but this time simulating two users, and got something like this:

        Tests Total: 2225
        Network Interface\Bytes sent (agent machine): 30,500

    So when I increased the number of users, the tests/sec was half of what it was with only one user, and the bytes sent by the agent were also lower. That seems strange, because it doesn't look like I have a bottleneck on my agent machine: CPU never goes above 30%, I have over 1.5 GB of RAM free, and my network utilization is around 0.5% of its capacity. To troubleshoot this I ran a test using a step pattern, with the simulated users going from 20 to 800. The requests/sec is practically constant through the whole test, so clearly something in my test or my environment is preventing the number of requests from getting higher. It would be expected behavior if the response time were getting higher, because that would tell me the requests weren't being processed properly, but the strange thing is that the response time is practically constant the whole time, and it is actually pretty low. I have no idea why my agent can't send more requests when I increase the number of users; any help, tip, or guess would be really appreciated.
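    One setting worth ruling out (an assumption about the environment, not a confirmed diagnosis of this rig): a .NET client process allows only two concurrent outgoing HTTP connections per host by default, which can flat-line the request rate of a generated web service proxy no matter how many virtual users the load test simulates. The cap can be raised in code before the proxy is first used, or via the system.net/connectionManagement section of the agent's configuration file:

        // The value 100 is arbitrary; the point is only to lift the default of 2.
        System.Net.ServicePointManager.DefaultConnectionLimit = 100;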

  • How to setup testing LAMP environment to work with outsourcing companies?

    - by Kelvin
    Hello, I need to set up a testing LAMP environment in my office to work with outsourcing companies. This is what I think should be done on my side:

    1. Set up a testing web server with the same configuration as production.
    2. Set up a testing SQL server with "fake data"?
    3. Outsourcers should have access only to some parts of the original code.
    4. Outsourcers should use CVS to update their code.
    5. Once testing is finished, someone releases the update.
    6. ...

    How would you separate the original code and database from the testing environment, but keep it as close as possible to production? What is the general practice for setting up a testing environment, and how do other companies deal with outsourcers? I will appreciate any thoughts and ideas from your personal experience. Maybe someone can suggest an article on this topic. Thank you a lot!

  • Lightweight web browser for testing

    - by Ghostrider
    I have a very specific test setup in mind. I would like to start a web browser that understands JavaScript and can use an HTTP proxy, point it at a URL (ideally specified on the command line along with the proxy configuration), wait for the page to load while the proxy listens to the requests generated as the page is rendered and its JavaScript executes, then kill the whole thing and restart. I don't care at all about how the page renders graphically. Which browser or tool should I use for this? Ideally it should be something self-contained that doesn't require installation (just an EXE file that runs from the command line). Lynx would have been ideal but for the fact that it doesn't support JavaScript. It should have as small a memory footprint as possible.

  • Ruby on Rails: Accessing production database data for testing

    - by williamjones
    With Ruby on Rails, is there a way for me to dump my production database into a form that the test side of Rails can access? I'm thinking of either a way to turn the production database into fixtures, or a way to migrate data from the production database into the test database such that it doesn't get routinely cleared out by Rails. I'd like to use this data for a variety of tests, but foremost in my mind is using real data in the performance tests, so that I get a realistic picture of load times.

  • Simplifying Testing through design considerations while utilizing dependency injection

    - by Adam Driscoll
    We are a few months into a green-field project to rework the logic and business layers of our product. By using MEF (dependency injection) we have achieved high levels of code coverage, and I believe we have a pretty solid product. As we have worked through some of the more complex logic, I have found it increasingly difficult to unit test. We use the CompositionContainer to query for types required by these complex algorithms. My unit tests are sometimes difficult to follow due to the lengthy mock-object setup that must take place, just right, for certain circumstances to be verified. My unit tests often take longer to write than the code I'm trying to test. I realize this is not only an issue with dependency injection but with the design as a whole. Is poor method design, or a lack of composition, to blame for my overly complex tests? I've tried base classes for tests, creating commonly used mock objects, and making sure I use the container as much as possible to ease this, but my tests always end up quite complex and hard to debug. What are some tips you've seen for keeping such tests concise, readable, and effective?
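    One pattern that sometimes helps here (a sketch under assumed names, not a description of this particular code base, and using Moq only as an example mocking library): push the repetitive container wiring into a tiny builder, so each test registers exactly the collaborators it cares about and nothing else.

        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        public class TestContainerBuilder
        {
            private readonly CompositionContainer container = new CompositionContainer();

            // Register a fake or mock under its contract type.
            public TestContainerBuilder With<T>(T instance)
            {
                container.ComposeExportedValue<T>(instance);
                return this;
            }

            public CompositionContainer Build()
            {
                return container;
            }
        }

        // Typical use inside a test (ILogParser, IEntryStore, InMemoryEntryStore and
        // AlgorithmUnderTest are hypothetical names):
        //
        //     var parser = new Moq.Mock<ILogParser>();
        //     parser.Setup(p => p.Parse(Moq.It.IsAny<string>())).Returns(new Entry[0]);
        //
        //     var container = new TestContainerBuilder()
        //         .With(parser.Object)
        //         .With<IEntryStore>(new InMemoryEntryStore())
        //         .Build();
        //
        //     var sut = new AlgorithmUnderTest();
        //     container.ComposeParts(sut);   // satisfies the [Import]s on the class under test

    If most tests need the same handful of fakes, a factory method that pre-populates the builder keeps individual tests down to the one or two registrations that actually matter for the behavior being verified.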
