Search Results

Search found 6839 results on 274 pages for 'functional tests'.

Page 110 of 274

  • Custom DataType in DataTemplate breaks WPF designer

    - by PRINCESS FLUFF
    Why does the DataTemplate break the WPF designer in Visual Studio 2008? The program compiles and runs properly, and the DataTemplate is applied as it should. However, the entire DataTemplate block is underlined in red, and when I simply build the program without running it, I get the error "Type reference cannot find public type named 'Character'". How come the designer can't find the type, yet the program applies the template properly at runtime?

        <UserControl x:Class="WPF_Tests.Tests.TwoCollecViews.TwoViews"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:DetailsPane="clr-namespace:WPF_Tests.Tests.DetailsPane">
            <UserControl.Resources>
                <DataTemplate DataType="{x:Type DetailsPane:Character}">
                    <StackPanel Orientation="Horizontal">
                        <TextBlock Text="{Binding Path=Name}" />
                    </StackPanel>
                </DataTemplate>
            </UserControl.Resources>
            <Grid>
                <ListBox ItemsSource="{Binding Path=Characters}" />
            </Grid>
        </UserControl>

    EDIT: I am being told that this may be a bug in Visual Studio 2008, as it works correctly in 2010. You can download the code here: http://www.mediafire.com/?z1myytvwm4n - the Test/TwoCollec XAML file's designer will break with this code.

    Read the article

  • Unit test with live data

    - by Kurresmack
    Hey, I have googled this a little and didn't really find the answer I needed. I am working on a web page in C# with MSSQL and LINQ for a customer. I want the users to be able to send messages to each other, so I unit test this against live data. The problem is that I now depend on having at least two users whose IDs I know, and I have to clean up after myself. This leads to rather large unit tests that test a lot in one go. Say I would like to update a user: I would have to create the user, update it, and then delete it. That is a lot of assertions in one unit test, and if it fails during the update I have to delete the user manually. If I did it any other way, without live data, I would not be able to know for sure that the data was present in the database after updating. What is the proper way to do this without having one test that tests a lot of functionality by itself?
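
    A common answer to this kind of question is to wrap each test in a transaction that is never committed, so the database is left untouched no matter how the test ends and no manual cleanup is needed. A minimal sketch with NUnit and System.Transactions - the UserRepository class and connection string are hypothetical stand-ins, not from the question:

        using System.Transactions;
        using NUnit.Framework;

        [TestFixture]
        public class MessageTests
        {
            [Test]
            public void UpdatingAUserPersistsTheChange()
            {
                // Everything inside this scope is rolled back when the
                // scope is disposed without Complete() being called.
                using (new TransactionScope())
                {
                    var repository = new UserRepository("...connection string..."); // hypothetical class
                    var user = repository.Create("alice");

                    user.Name = "alice2";
                    repository.Update(user);

                    // The row is visible inside the transaction, so the
                    // assertion still proves the update really happened.
                    Assert.AreEqual("alice2", repository.Find(user.Id).Name);
                } // no Complete() => automatic rollback, nothing to clean up
            }
        }

    LINQ to SQL connections enlist in the ambient transaction automatically, so the test code itself needs no rollback logic even if an assertion throws halfway through.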

    Read the article

  • click event launched only once problem

    - by user281180
    I have a form with many checkboxes. I need to post the data to the controller whenever any checkbox is checked or unchecked, i.e. a click on a checkbox must post to the controller, and there is no visible submit button. What is the best method in this case? I have thought of Ajax.BeginForm and have the code below. The problem I'm having is that the checkbox click event is detected only once, and after that the click event isn't launched any more. Why is that so? How can I correct it?

        <% using (Ajax.BeginForm("Edit", new AjaxOptions { UpdateTargetId = "tests" })) { %>
            <div id="tests">
                <% Html.RenderPartial("Details", Model); %>
            </div>
            <input type="submit" value="Save" style="visibility:hidden" id="myForm" />
        <% } %>

        $(function() {
            // Note: the Ajax update replaces the contents of #tests, which
            // destroys handlers bound directly with click(); binding with
            // live() (jQuery 1.3+) survives the re-render.
            $('input:checkbox').live('click', function() {
                $('#myForm').click();
            });
        });

    Read the article

  • How to profile Doctrine in Zend Framework

    - by David Zapata
    Good day. I'm using Doctrine as the ORM for my Zend Framework project; this is the first time I've used it. I've followed the ZendCasts Doctrine chapters and everything works for me, but I need to do some profiling. There is a Doctrine_Connection_Profiler class that should be used to profile the Doctrine model's internal queries, but I've tried to use it without success: I always get "PDOException: You cannot serialize or unserialize PDOStatement instances" when I run my unit tests. Here is an example:

        $conn = Doctrine_Manager::connection($doctrineConfig['dsn'], $dbconfname);
        ...
        if (APPLICATION_ENV != 'production') {
            $obj_doctrine_profiler = new Doctrine_Connection_Profiler();
            $conn->setListener($obj_doctrine_profiler);
        }

    All of my unit tests work if I comment out or delete the $conn->setListener($obj_doctrine_profiler); line. This code block is located in my Bootstrap.php class; the weird thing is, the web application works just fine even with that line in place. Thank you so much for your help, and please excuse me if my English is not the best.

    Read the article

  • Unit test insert/update/delete

    - by Kurresmack
    Hey, I have googled this a little and didn't really find the answer I needed. I am working on a web page in C# with MSSQL and LINQ for a customer. I want the users to be able to send messages to each other, so I unit test this with data that actually goes into the database. The problem is that I now depend on having at least two users whose IDs I know, and I have to clean up after myself. This leads to rather large unit tests that test a lot in one go. Say I would like to update a user: I would have to create the user, update it, and then delete it. That is a lot of assertions in one unit test, and if it fails during the update I have to delete the user manually. If I did it any other way, without saving the data to the DB, I would not be able to know for sure that the data was present in the database after updating. What is the proper way to do this without having a test that tests a lot of functionality in one test?

    Read the article

  • Where do you take mocking - immediate dependencies, or do you grow the boundaries...?

    - by Peter Mounce
    So, I'm reasonably new to both unit testing and mocking in C# and .NET; I'm using xUnit.net and Rhino Mocks respectively. I'm a convert, and I'm focusing on writing behaviour specifications, I guess, instead of being purely TDD. Bah, semantics; essentially, I want an automated safety net to work above. A thought struck me, though. I get programming against interfaces, and the benefits as far as breaking apart dependencies go. Sold. However, in my behaviour verification suite (aka unit tests ;-) ), I'm asserting behaviour one interface at a time - as in, one implementation of an interface at a time, with all of its dependencies mocked out and expectations set up. The approach seems to be that if we verify that a class behaves as it should against its collaborating dependencies, and in turn rely on each of those collaborating dependencies to have signed that same quality contract, we're golden. Seems reasonable enough. Back to the thought, though: is there any value in semi-integration tests, where a test fixture asserts against a unit of concrete implementations that are wired together, and we're testing their combined behaviour against mocked dependencies at the boundary? Obviously, there's going to be a certain amount of "well, if it adds value for you, keep doing it", I suppose - but has anyone else thought about doing that, and reaped benefits from it that outweighed the costs?
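
    For what it's worth, a sketch of the kind of semi-integration test being described - two concrete classes wired together, with only the outermost dependency mocked. The interfaces and classes here are invented for illustration; only the xUnit.net and Rhino Mocks calls are real:

        using Rhino.Mocks;
        using Xunit;

        // Invented types: a pricing service that uses a tax calculator,
        // which in turn depends on a rate repository at the boundary.
        public interface IRateRepository { decimal RateFor(string region); }

        public class TaxCalculator
        {
            private readonly IRateRepository _rates;
            public TaxCalculator(IRateRepository rates) { _rates = rates; }
            public decimal TaxOn(decimal amount, string region)
            {
                return amount * _rates.RateFor(region);
            }
        }

        public class PricingService
        {
            private readonly TaxCalculator _tax;
            public PricingService(TaxCalculator tax) { _tax = tax; }
            public decimal Total(decimal amount, string region)
            {
                return amount + _tax.TaxOn(amount, region);
            }
        }

        public class PricingSemiIntegrationTests
        {
            [Fact]
            public void Total_includes_tax_through_the_real_calculator()
            {
                // Only the edge of the cluster is stubbed; PricingService
                // and TaxCalculator are the real, wired-together classes.
                var rates = MockRepository.GenerateStub<IRateRepository>();
                rates.Stub(r => r.RateFor("UK")).Return(0.20m);

                var service = new PricingService(new TaxCalculator(rates));

                Assert.Equal(120m, service.Total(100m, "UK"));
            }
        }

    The trade-off is the usual one: such a test covers the wiring between the two classes, but a failure no longer points at a single class.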

    Read the article

  • Unit testing and mocking email sender in Python with Google AppEngine

    - by CVertex
    I'm a newbie to Python and App Engine. I have this code that sends an email based on request params, after some auth logic. In my unit tests (I'm using GAEUnit), how do I confirm that an email with specific contents was sent? I.e. how do I mock the mailer with a fake one to verify that send was called?

        class EmailHandler(webapp.RequestHandler):
            def bad_input(self):
                self.response.set_status(400)
                self.response.headers['Content-Type'] = 'text/plain'
                self.response.out.write("<html><body>bad input</body></html>")

            def get(self):
                to_addr = self.request.get("to")
                subj = self.request.get("subject")
                msg = self.request.get("body")
                if not mail.is_email_valid(to_addr):
                    # Return an error message...
                    # self.bad_input()
                    pass
                # authenticate here
                message = mail.EmailMessage()
                message.sender = "[email protected]"
                message.to = to_addr
                message.subject = subj
                message.body = msg
                message.send()
                self.response.headers['Content-Type'] = 'text/plain'
                self.response.out.write("<html><body>success!</body></html>")

    And the unit tests:

        import unittest
        from webtest import TestApp
        from google.appengine.ext import webapp
        from email import EmailHandler

        class SendingEmails(unittest.TestCase):
            def setUp(self):
                self.application = webapp.WSGIApplication([('/', EmailHandler)], debug=True)

            def test_success(self):
                app = TestApp(self.application)
                response = app.get('http://localhost:8080/[email protected]&body=blah_blah_blah&subject=mySubject')
                self.assertEqual('200 OK', response.status)
                self.assertTrue('success' in response)
                # somehow, assert email was sent

    Read the article

  • Query on object id in VQL

    - by Banang
    I'm currently working with the Versant object database (using JVI), and have a case where I need to query the database based on an object id. What I'm trying to achieve is something along the lines of:

        Employee employee = new Employee("Mr. Pickles");
        session.commit();

        FundVQLQuery q = new FundVQLQuery(session,
            "select * from Employee employee where employee = $1");
        q.bind(employee);
        q.execute();

    However, I'm finding out the hard way (I get an EVJ_NOT_A_VALID_KEY_TYPE error thrown at me) that this is in fact not the way to do it. Does anyone have experience working with VQL? Your help is much appreciated! A small clarification: I'm running some performance tests on the database using the PolePosition framework, and one of the tests in that framework requires me to fetch an object from the database using either an object reference or a low-level object id. So I'm not allowed to reference specific fields of the Employee object; I must perform the query on the object in its entirety. That is, I can't go "select * from Employee e where e.id = 4" - I need to use the entire object. Accepted answer: since SO won't let me mark an accepted answer after the bounty has run out, readers should know that the answer posted by Chris Holmes solves this issue, and are encouraged to up-vote his post to signal its correctness to future readers of this thread.

    Read the article

  • How should I unit test a code-generator?

    - by jkp
    This is a difficult and open-ended question, I know, but I thought I'd throw it to the floor and see if anyone has any interesting suggestions. I have developed a code generator that takes our Python interface to our C++ code (generated via SWIG) and generates the code needed to expose it as web services. I developed this code using TDD, but I've found my tests to be brittle as hell. Because each test essentially wanted to verify that for a given bit of input code (which happens to be a C++ header) I'd get a given bit of output code, I wrote a small engine that reads test definitions from XML input files and generates test cases from these expectations. The problem is that I dread going in to modify the code at all - that, and the fact that the unit tests themselves are (a) complex and (b) brittle. So I'm trying to think of alternative approaches to this problem, and it strikes me I'm perhaps tackling it the wrong way. Maybe I need to focus more on the outcome - does the code I generate actually run and do what I want it to? - rather than on whether the code looks the way I want it to. Has anyone had experience of something similar that they would care to share?
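
    One way to make such tests less brittle is exactly the shift hinted at above: assert on what the generated code does rather than on its exact text. The generator in the question emits Python, but purely to illustrate the idea in this digest's main language, here is a sketch in C# that compiles a generated source string in memory with CodeDom and exercises it; MyGenerator is a hypothetical stand-in for whatever produces the source:

        using System.CodeDom.Compiler;
        using Microsoft.CSharp;
        using NUnit.Framework;

        [TestFixture]
        public class GeneratedCodeTests
        {
            [Test]
            public void GeneratedClassActuallyWorks()
            {
                // Hypothetical: the code generator under test.
                string source = MyGenerator.Generate("Adder");

                var parameters = new CompilerParameters { GenerateInMemory = true };
                CompilerResults results = new CSharpCodeProvider()
                    .CompileAssemblyFromSource(parameters, source);

                // First assertion: the output is valid code at all.
                Assert.IsFalse(results.Errors.HasErrors,
                               "generated source failed to compile");

                // Second assertion: it behaves as specified, regardless of
                // formatting, member ordering, or other cosmetic details.
                object adder = results.CompiledAssembly.CreateInstance("Adder");
                object sum = adder.GetType().GetMethod("Add")
                    .Invoke(adder, new object[] { 2, 3 });
                Assert.AreEqual(5, sum);
            }
        }

    Tests written this way survive harmless changes to the emitted text (whitespace, comments, renamed locals) that would break a textual golden-file comparison.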

    Read the article

  • BackgroundWorker might be causing my application to hang

    - by alexD
    I have a Form that uses a BackgroundWorker to execute a series of tests. I use the ProgressChanged event to send messages to the main thread, which then does all of the UI updates. I've combed through my code to make sure I'm not touching the UI in the background worker. There are no while loops in my code, and the BackgroundWorker has a finite execution time (measured in seconds or minutes). However, for some reason, when I lock my computer, the application will often be hung when I log back in - and the BackgroundWorker isn't even running when this happens. The reason I believe it is related to the BackgroundWorker is that the form only hangs when the BackgroundWorker has been executed since the application was loaded (it only runs on a certain user input). I pass the thread a List of TreeNodes from a TreeView in my UI through the RunWorkerAsync method, but I only read those nodes in the worker thread; any modifications I make to them are done in the UI thread through the ProgressChanged event. I do use Thread.Sleep in my worker thread to execute tests at timed intervals (which involves sending messages over a TCP socket that was not created in the worker thread). I am completely perplexed as to why my application might be hanging. I'm sure I'm doing something 'illegal' somewhere; I just don't know what.
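
    For reference, a minimal version of the pattern being described - all UI access confined to ProgressChanged, which BackgroundWorker raises on the UI thread - looks something like the sketch below. The test-runner details are invented; the point is only the division of labour between the two handlers:

        using System.ComponentModel;
        using System.Threading;
        using System.Windows.Forms;

        public class TestRunnerForm : Form
        {
            private readonly BackgroundWorker _worker = new BackgroundWorker();
            private readonly Label _status = new Label { Dock = DockStyle.Top };

            public TestRunnerForm()
            {
                Controls.Add(_status);
                _worker.WorkerReportsProgress = true;
                _worker.DoWork += RunTests;            // runs on a thread-pool thread
                _worker.ProgressChanged += OnProgress; // raised on the UI thread
                _worker.RunWorkerAsync();
            }

            private void RunTests(object sender, DoWorkEventArgs e)
            {
                for (int i = 1; i <= 5; i++)
                {
                    Thread.Sleep(1000);                // stand-in for one timed test
                    _worker.ReportProgress(i * 20, "test " + i + " done");
                }
            }

            private void OnProgress(object sender, ProgressChangedEventArgs e)
            {
                _status.Text = (string)e.UserState;    // UI access only here
            }

            [System.STAThread]
            static void Main() { Application.Run(new TestRunnerForm()); }
        }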

    Read the article

  • How to test a struts 2.1.x developer?

    - by Jason Pyeron
    We employ tests to filter out those who can't. The tests are designed to be very low effort for those who can, and too much effort for those who can't. Here is an example for a Java web application developer on an Oracle project:

    We only work with contractors who can use the tools we use. To determine whether you can use the tools, we have devised some very simple tests.

    Instructions: if you are prepared and knowledgeable, this will take you about 2-5 minutes. Suggested knowledge and tools:

      - subversion 1.6, see http://subversion.tigris.org/ or http://cygwin.com/setup.exe
      - java 1.6, see http://java.sun.com/javase/downloads/index.jsp
      - oracle >= 10g, see http://www.oracle.com/technology/software/products/jdev/index.html
      - a J2EE server, see http://tomcat.apache.org/download-55.cgi or http://www.jboss.org/jbossas/downloads/

    Steps:

      1. Check out svn://statics32.pdinc.us/home/subversion/guest
      2. Deploy the war file found at trunk/test.war
      3. Browse to the web application you installed from the war file and answer the one SQL question: how many rows are in the table 'testdata' where column 'value' ends with either an 'A' or an 'a'? The login credentials are in trunk/doc/oracle.txt
      4. Make a RESULTS HASH by submitting your answer to the form.
      5. Create a file in tmp/YourUserName.txt and put the RESULTS HASH in it, not the answer.
      6. Check in your file (don't forget to add the file first).
      7. Message me with the revision number of your check-in.

    As such, I am looking for ideas on how to test for someone to be a Struts 2.1 (with annotations) developer.

    Read the article

  • How do I launch background jobs w/ paramiko?

    - by sophacles
    Here is my scenario: I am trying to automate some tasks using Paramiko. The tasks need to be started in this order (using the notation (host, task)): (A, 1), (B, 2), (C, 2), (A, 3), (B, 3) - essentially starting servers and clients for some testing in the correct order. Further, because the network may get mucked up during the tests, and because I need some of the output from them, I would like to just redirect output to a file. In similar scenarios the common response is to use 'screen -m -d' or 'nohup'. However, with Paramiko's exec_command, nohup doesn't actually exit. Using:

        bash -c -l nohup test_cmd &

    doesn't work either; exec_command still blocks until the process ends. In the screen case, output redirection doesn't work very well (actually, doesn't work at all as best I can figure out). So, after all that explanation, my question is: is there an easy, elegant way to detach processes and capture output in such a way as to end Paramiko's exec_command blocking? Update: the dtach command works nicely for this!

    Read the article

  • JW Player can not play m3u8 stream?

    - by why
    Check this example: http://developer.longtailvideo.com/player/branches/adaptive/test/provider.html - I tried it myself. Here is my code:

        <html>
        <head>
            <script type="text/javascript" src="jwplayer.js"></script>
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
            <title>Provider tests</title>
            <style>
                body { padding: 50px; font: 13px/20px Arial; background: #EEE; }
                form { margin-top: 20px; }
                #player { -webkit-box-shadow: 0 0 5px #999; background: #000; }
                ul { margin-top: 40px; padding: 0 0 0 20px; list-style-type: square; }
            </style>
        </head>
        <body>
            Test M3U8
            <div id="player">You need Flash to play these tests</div>
            <script type="text/javascript">
                jwplayer("player").setup({
                    file: '../m3u8/index.m3u8',
                    flashplayer: 'player.swf',
                    provider: 'adaptiveProvider.swf',
                    height: 360,
                    width: 640
                });
                function loadStream(url) {
                    jwplayer("player").load({file: url, provider: 'adaptiveProvider.swf'});
                    jwplayer("player").play();
                    return false;
                }
                $(document).ready(function() {
                    loadStream('http://localhost/m3u8/index.m3u8');
                });
            </script>
            <ul id="streamlist"></ul>
            <div id="panel"></div>
        </body>
        </html>

    But JW Player does not play the stream. BTW: my VLC can play http://localhost/m3u8/index.m3u8 fine.

    Read the article

  • Handling learning curve for new developers

    - by pete the pagan-gerbil
    Our company likes to hire new developers with no experience. We have a core set of skills that we try to get them up to speed with, like ASP.NET and WinForms - to teach basic programming, the .NET languages, and the things they'll need to maintain and write. We also try to mentor them through early projects, so they can learn from someone more experienced. Recently, we've been seeing the benefits of new frameworks like MVC and ideas like unit testing and TDD (and by extension, dependency injection and IoC), and we'd like to start using these in the team. However, this increases the time a junior needs before they can get started on a new project - because doing something like unit tests wrong could cause major headaches months or years later in maintenance, especially if we believe the unit tests to be comprehensive. How do you handle the huge amount of things that a junior will need to take on, acknowledging that the business wants them working independently as soon as possible? Is it acceptable to tell them not to unit test until a while after they are independent (and give them small, simpler projects in the meantime) before taking them to 'level 2' of the core skills?

    Read the article

  • eclipse, one classpath for compiling, another for launching

    - by DragonFax
    Example: for logging, my code uses log4j, but other jars my code depends upon use slf4j instead, so both jars must be in the build path. Unfortunately, it's now possible for my code to directly use (depend on) slf4j, either via content assist or another developer's changes. I would like any use of slf4j to show up as an error, but my application (and tests) will still need it on the classpath when running. Explanation: I'd like to find out if this is possible in Eclipse. This scenario happens often for me: I'll have a large project that uses a lot of third-party libraries, and of course those third-party jars have their own dependencies as well. So I have to include all dependencies in the classpath ("build path" in Eclipse) for the application and its tests to compile and run (from within Eclipse). But I don't want my code to use all of those jars, just the few direct dependencies I've decided upon myself. So if my code accidentally uses a dependency of a dependency, I want it to show up as a compilation error - ideally as class not found, but any error would do. I know I can manually configure the classpath when running outside of Eclipse, and even within Eclipse I can modify the classpath for a specific class I'm running (in the run configurations), but that's not manageable if you run a lot of individual test cases or have a lot of main() classes.

    Read the article

  • How do I test database-related code with NUnit?

    - by Michael Haren
    I want to write unit tests with NUnit that hit the database. I'd like to have the database in a consistent state for each test, and I thought transactions would allow me to "undo" each test, so I searched around and found several articles from 2004-05 on the topic:

      - http://weblogs.asp.net/rosherove/archive/2004/07/12/180189.aspx
      - http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx
      - http://davidhayden.com/blog/dave/archive/2004/07/12/365.aspx
      - http://haacked.com/archive/2005/12/28/11377.aspx

    These seem to revolve around implementing a custom attribute for NUnit which builds in the ability to roll back DB operations after each test executes. That's great, but... Does this functionality exist somewhere in NUnit natively? Has this technique been improved upon in the last four years? Is this still the best way to test database-related code? Edit: it's not that I want to test my DAL specifically; it's more that I want to test pieces of my code that interact with the database. For these tests to be "no-touch" and repeatable, it'd be awesome if I could reset the database after each one. Further, I want to ease this into an existing project that has no testing in place at the moment. For that reason, I can't practically script up a database and data from scratch for each test.
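
    Even without a custom attribute, the rollback can be done with plain NUnit SetUp/TearDown and System.Transactions. A minimal sketch of a base fixture that database tests could inherit from - the class and test names are illustrative, not from any library:

        using System.Transactions;
        using NUnit.Framework;

        // Base class for tests that touch the database: every test runs
        // inside an ambient transaction that is rolled back afterwards,
        // so each test sees the database in the same state.
        public abstract class RollbackFixture
        {
            private TransactionScope _scope;

            [SetUp]
            public void OpenTransaction()
            {
                _scope = new TransactionScope();
            }

            [TearDown]
            public void RollbackTransaction()
            {
                _scope.Dispose(); // disposed without Complete() => rollback
            }
        }

        public class MessageTableTests : RollbackFixture
        {
            [Test]
            public void InsertedRowIsVisibleWithinTheTest()
            {
                // Hypothetical data-access call; the insert is rolled back
                // automatically when the test finishes.
                // Db.Execute("INSERT INTO Messages (Body) VALUES ('hello')");
            }
        }

    ADO.NET connections opened while the scope is active enlist in the ambient transaction automatically, so the tests themselves need no rollback code.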

    Read the article

  • Is there a way to prevent Maven Test from rebuilding the database?

    - by Christopher W. Allen-Poole
    I've recently been asked to, effectively, sell my department on unit testing. I can't tell you how excited this makes me, but I do have one concern. We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database. Obviously we can't run that against our production server - it would kill valuable data. How do I prevent the rebuilding without telling Maven to skip testing? The best I could figure out was to point the build at a test database (line breaks added for readability):

        mvn test -Ddbunit.schema=<database>test
            -Djdbc.url=jdbc:mysql://localhost/<database>test?
            createDatabaseIfNotExist=true&amp;
            useUnicode=true&amp;characterEncoding=utf-8

    I can't help but think there must be a better way. I'm especially interested in learning whether there is an easy way to tell Maven to run tests only on particular classes without building anything else; mvn -Dtest=<test-name> test still rebuilds the database.

    ======= Update ======= Bit of egg on my face here: I didn't realize that I was using the same variable in two places, meaning that the POM was using a "skip.test" variable both for rebuilding the database and for running the tests...

    Read the article

  • using pom for test scope dependencies

    - by IttayD
    Hi, is it possible to create a POM file so that it can be used inside another POM to add test-scope dependencies? In module E's pom.xml I have:

        <dependencies>
            <dependency>
                <groupId>com.example</groupId>
                <artifactId>D</artifactId>
                <type>pom</type>
                <scope>test</scope>
            </dependency>
        </dependencies>

    The intent is that if D's pom.xml declares dependencies on artifacts A, B and C, then those artifacts end up on the compilation and execution classpath of E's tests. NOTE: the reason I want such a POM, rather than relying on regular dependency resolution, is that I have created a tests jar using maven-jar-plugin:test-jar, and using that jar as a dependency causes Maven to ignore its transitive dependencies (see http://jira.codehaus.org/browse/MNG-1378). UPDATE: this does not work for me (maybe because I'm trying to use it for the test scope): http://www.sonatype.com/books/mvnref-book/reference/pom-relationships-sect-pom-best-practice.html

    Read the article

  • PHP DOMDocument getElementsByTagname??

    - by FFish
    This is driving me bonkers... I just want to add another img node.

        $xml = <<<XML
        <?xml version="1.0" encoding="UTF-8"?>
        <gallery>
            <album tnPath="tn/" lgPath="imm/" fsPath="iml/" >
                <img src="004.jpg" caption="4th caption" />
                <img src="005.jpg" caption="5th caption" />
                <img src="006.jpg" caption="6th caption" />
            </album>
        </gallery>
        XML;

        $xmlDoc = new DOMDocument();
        $xmlDoc->loadXML($xml);

        $album = $xmlDoc->getElementsByTagname('album')[0];
        // Parse error: syntax error, unexpected '[' in /Applications/XAMPP/xamppfiles/htdocs/admin/tests/DOMDoc.php on line 17

        $album = $xmlDoc->getElementsByTagname('album');
        // Fatal error: Call to undefined method DOMNodeList::appendChild() in /Applications/XAMPP/xamppfiles/htdocs/admin/tests/DOMDoc.php on line 19

        $newImg = $xmlDoc->createElement("img");
        $album->appendChild($newImg);

        print $xmlDoc->saveXML();

    Error:

    Read the article

  • Passing options in autospec with Cucumber in Ruby on Rails Development

    - by TK
    I always run autospec to run features and RSpec at the same time, but running all the features is often time-consuming on my local computer, and I run every feature before committing code. I would like to pass arguments to the autospec command, but autospec obviously doesn't accept arguments directly. Here's the output of autospec -h:

        autotest [options]
        options:
          -h -help   You're looking at it.
          -v         Be verbose. Prints files that autotest doesn't know how to map to tests.
          -q         Be more quiet.
          -f         Fast start. Doesn't initially run tests at start.

    I do have a cucumber.yml in the config directory, and a rerun.txt in the Rails root directory. cucumber -h gives me a lot of information about arguments. How can I run autospec against features that are tagged @wip? I think I can make use of config/cucumber.yml, where profiles are defined: I can run cucumber -p wip to run only @wip-tagged features, but I'd like to do this with autospec. I would appreciate any tips for working with many spec and feature files.

    Read the article

  • How can I execute a .sql from C#?

    - by J. Pablo Fernández
    For some integration tests, I want to connect to the database and run a .sql file that has the schema needed for the tests to actually run, including GO statements. How can I execute the .sql file? (Or is this totally the wrong way to go?) I've found a post in the MSDN forum showing this code:

        using System.Data.SqlClient;
        using System.IO;
        using Microsoft.SqlServer.Management.Common;
        using Microsoft.SqlServer.Management.Smo;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string sqlConnectionString = "Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=True";
                    FileInfo file = new FileInfo("C:\\myscript.sql");
                    string script = file.OpenText().ReadToEnd();
                    SqlConnection conn = new SqlConnection(sqlConnectionString);
                    Server server = new Server(new ServerConnection(conn));
                    server.ConnectionContext.ExecuteNonQuery(script);
                }
            }
        }

    but on the last line I'm getting this error:

        System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
        ---> System.TypeInitializationException: The type initializer for '<Module>' threw an exception.
        ---> <CrtImplementationDetails>.ModuleLoadException: The C++ module failed to load during appdomain initialization.
        ---> System.DllNotFoundException: Unable to load DLL 'MSVCR80.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E).

    I was told to go and download that DLL from somewhere, but that sounds very hacky. Is there a cleaner way? Is there another way to do it? What am I doing wrong? I'm doing this with Visual Studio 2008, SQL Server 2008, .NET 3.5 SP1 and C# 3.0.
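
    If pulling in SMO and its native dependencies proves too fragile, one alternative people use is to skip SMO entirely: GO is a batch separator understood by client tools, not by the server, so the script can be split on GO lines and each batch run with plain SqlCommand. A rough sketch, with error handling and edge cases (such as GO inside comments or string literals) deliberately ignored:

        using System;
        using System.Data.SqlClient;
        using System.IO;
        using System.Text.RegularExpressions;

        class SchemaLoader
        {
            static void Main()
            {
                string connectionString = "Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=True";
                string script = File.ReadAllText("C:\\myscript.sql");

                // GO must appear on its own line; split the script into batches.
                string[] batches = Regex.Split(script, @"^\s*GO\s*$",
                    RegexOptions.Multiline | RegexOptions.IgnoreCase);

                using (SqlConnection conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    foreach (string batch in batches)
                    {
                        if (batch.Trim().Length == 0) continue; // skip empty fragments
                        using (SqlCommand cmd = new SqlCommand(batch, conn))
                        {
                            cmd.ExecuteNonQuery();
                        }
                    }
                }
            }
        }

    This stays within System.Data, so there is no dependency on MSVCR80.dll or any other SMO prerequisite on the test machine.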

    Read the article

  • How do I see items embedded/attached to my Gallio test log in CruiseControl.NET?

    - by GraemeF
    I can use the following code to attach a log file to my Gallio 3.2 acceptance test report:

        TestLog.AttachPlainText("Attached log file", File.ReadAllText(path));

    When I run my tests locally, I can see the log file contents by viewing the report in my web browser. However, if I run the tests under CCNET 1.4.4, I get the following error when I try to browse to an attached file in the report via the web dashboard:

        Exception Message:
        The attachment was not inlined into the XML report.

        Exception Full Details:
        System.InvalidOperationException: The attachment was not inlined into the XML report.
           at CCNet.Gallio.WebDashboard.Plugin.GallioAttachmentBuildAction.CreateResponseFromAttachment(XPathNavigator attachmentNavigator)
           at CCNet.Gallio.WebDashboard.Plugin.GallioAttachmentBuildAction.Execute(ICruiseRequest cruiseRequest)
           at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.ServerCheckingProxyAction.Execute(ICruiseRequest cruiseRequest)
           at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.BuildCheckingProxyAction.Execute(ICruiseRequest cruiseRequest)
           at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.ProjectCheckingProxyAction.Execute(ICruiseRequest cruiseRequest)
           at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.CruiseActionProxyAction.Execute(IRequest request)
           at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.CachingActionProxy.Execute(IRequest request)
           at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.ExceptionCatchingActionProxy.Execute(IRequest request)

    I get similar results when I try to embed the file instead with:

        TestLog.EmbedPlainText("Embedded log file", File.ReadAllText(path));

    Is it possible to get this working?

    Read the article

  • Is NUnit's ExpectedExceptionAttribute only way to test if something raises an exception?

    - by Dariusz Walczak
    Hello, I'm completely new to C# and NUnit. In Boost.Test there is a family of BOOST_*_THROW macros; in Python's unittest module there is the TestCase.assertRaises method. As far as I understand it, in C# with NUnit (2.4.8) the only way to test for an exception is to use the ExpectedExceptionAttribute. Why should I prefer ExpectedExceptionAttribute over, let's say, Boost.Test's approach? What reasoning stands behind this design decision, and why is it better in the case of C# and NUnit? Finally, if I decide to use ExpectedExceptionAttribute, how can I do some additional tests after the exception has been raised and caught? Let's say I want to test the requirement that an object must remain valid after a setter raises System.IndexOutOfRangeException. How would you fix the following code to compile and work as expected?

        [Test]
        public void TestSetterException()
        {
            Sth.SomeClass obj = new SomeClass();
            // Following statement won't compile.
            Assert.Raises("System.IndexOutOfRangeException",
                          obj.SetValueAt(-1, "foo"));
            Assert.IsTrue(obj.IsValid());
        }

    Edit: Thanks for your answers. Today I found an It's the Tests blog entry where all three methods you described are mentioned (plus one more minor variation). It's a shame that I couldn't find it before :-(.
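
    One pattern that works even on NUnit 2.4.8 (which predates Assert.Throws, added in NUnit 2.5) is a plain try/catch with Assert.Fail, which leaves room for assertions after the exception. A sketch using the question's hypothetical SomeClass:

        [Test]
        public void TestSetterException()
        {
            Sth.SomeClass obj = new Sth.SomeClass();
            try
            {
                obj.SetValueAt(-1, "foo");
                Assert.Fail("Expected IndexOutOfRangeException was not thrown.");
            }
            catch (System.IndexOutOfRangeException)
            {
                // Expected: fall through to the post-exception assertions.
            }
            // Additional checks are straightforward here, unlike with
            // [ExpectedException], which ends the test at the throw site.
            Assert.IsTrue(obj.IsValid());
        }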

    Read the article

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me. When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem). It's hard to write tests for filesystem actions; I have a mock filesystem class that logs actions like move and delete without performing them, which more or less does the job, but I don't have 100% confidence in the tests. I will be adding other jobs which need to access the files from other services to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely, and doing this over network shares seems dodgy. Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue. What are the downsides of storing files as BLOBs in MySQL? I guess it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

    Read the article

  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: what do you use for configurable (and preferably captured) logging inside the C++ bits of a Python project? Details follow. Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I have a pretty simplistic setup where I'm using the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed, and I'm thinking about how to make it more flexible. A couple of choices that I see:

    One way to control it would be a module-wide configuration call:

        # foo/__init__.py
        import sys
        from _foo import bar, baz, configure_log
        configure_log(sys.stdout, WARNING)

        # tests/test_foo.py
        def test_foo():
            # Maybe a custom context to change the logfile for
            # the module and restore it at the end.
            with CaptureLog(foo) as log:
                assert foo.bar() == 5
                assert log.read() == "124.24 - foo - INFO - Bar returning 5"

    Or have every compiled function that does logging accept optional log parameters:

        # foo.c
        int bar(PyObject* x, PyObject* logfile, PyObject* loglevel)
        {
            LoggerPtr logger = default_logger("foo");
            if (logfile != Py_None)
                logger = file_logger(logfile, loglevel);
            ...
        }

        # tests/test_foo.py
        def test_foo():
            with TemporaryFile() as logfile:
                assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5
                assert logfile.read() == "124.24 - foo - INFO - Bar returning 5"

    Or some other way? The second one seems somewhat cleaner, but it requires altering function signatures (or using kwargs and parsing them). The first one is... probably somewhat awkward, but it sets up the entire module in one go and removes the logic from each individual function. What are your thoughts on this? I'm all ears for alternative solutions as well. Thanks,

    Read the article
