Search Results

Search found 4783 results on 192 pages for 'tests'.


  • Customizing the TFS 2008 build sequence to avoid compilation and deploy SSRS

    - by Andrew
    I'm trying to create a CI process for SQL Server Reporting Services. I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild and TeamCity, so the whole build process is more or less custom. Here lies the start of my problems: the solution I am deploying only contains Report Server projects (rds), so no compilation is required. I thought that I would override the first default task that TFS runs (EndToEndIteration) to replace the default TFS build sequence and inject my own. The first snag I have come across is that the build always fails; how can I set the status of the build to success? Currently my EndToEndIteration task is very light and only emits a message.
    Is this the best method to create a custom build process in TFS where compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx (i.e. AfterCompile)? The core steps that I'd like to achieve are:
    - Bundle the RDL and data source files
    - Connect to the host server to register/deploy the reports
    - Re-apply any subscriptions that previously existed
    - Run tests to verify the deployment succeeded and is returning results as expected
    I have found another article on Reporting Services deployment, http://stackoverflow.com/questions/88710/reporting-services-deployment, but it doesn't mention the best practice for customizing the standard build process. Any help would be appreciated.

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: has many Contributions, has many Payments. In my result set, I need the following aggregate values:
    - Number of unique contributors (DonorID on the Contribution table)
    - Total contributed (SUM of Amount on the Contribution table)
    - Total paid (SUM of PaymentAmount on the Payment table)
    Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions and the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options.
    Using subqueries:
      SELECT Project.ID AS PROJECT_ID,
        (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
        (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount,
        (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived
      FROM Project;
    Using a temporary table:
      DROP TABLE IF EXISTS Project_Temp;
      CREATE TEMPORARY TABLE Project_Temp (project_id INT NOT NULL, total_payments INT,
        total_donors INT, total_received INT, PRIMARY KEY(project_id)) ENGINE=MEMORY;
      INSERT INTO Project_Temp (project_id, total_payments)
        SELECT `Project`.ID, IFNULL(SUM(PaymentAmount),0)
        FROM `Project` LEFT JOIN `Payment` ON ProjectID = `Project`.ID
        GROUP BY 1;
      INSERT INTO Project_Temp (project_id, total_donors, total_received)
        SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID),0), IFNULL(SUM(Amount),0)
        FROM `Project` LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
        GROUP BY 1
      ON DUPLICATE KEY UPDATE total_donors = VALUES(total_donors), total_received = VALUES(total_received);
      SELECT * FROM Project_Temp;
    Tests for both are pretty comparable, in the 0.7 - 0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?

    Read the article

  • Weird camera Intent behavior

    - by David Erosa
    Hi all. I'm invoking the MediaStore.ACTION_IMAGE_CAPTURE intent with the MediaStore.EXTRA_OUTPUT extra so that it saves the image to that file. In onActivityResult I can check that the image is being saved in the intended file, which is correct. The weird thing is that, anyhow, the image is also saved in a file named something like "/sdcard/Pictures/Camera/1298041488657.jpg" (the epoch time at which the image was taken). I've checked the Camera app source (froyo-release branch) and I'm almost sure that the code path is correct and wouldn't have to save the image, but I'm a noob and I'm not completely sure. AFAIK, the image saving process starts with this callback (comments are mine):
      private final class JpegPictureCallback implements PictureCallback {
        ...
        public void onPictureTaken(...){
          ...
          // This is where the image is passed back to the invoking activity.
          mImageCapture.storeImage(jpegData, camera, mLocation);
        ...
      public void storeImage(final byte[] data, android.hardware.Camera camera, Location loc) {
        if (!mIsImageCaptureIntent) { // Am I an intent?
          int degree = storeImage(data, loc); // THIS SHOULD NOT BE CALLED WITHIN THE CAPTURE INTENT!!
      .......
      // And finally:
      private int storeImage(byte[] data, Location loc) {
        try {
          long dateTaken = System.currentTimeMillis();
          String title = createName(dateTaken);
          String filename = title + ".jpg"; // Eureka, timestamp filename!
          ...
    So, I'm receiving the correct data, but it's also being saved by the "storeImage(data, loc);" method call, which should not be called... It'd not be a problem if I could get the newly created filename from the intent result data, but I can't. When I found this out, I found about 20 image files from my tests that I didn't know were on my sdcard :) I'm getting this behavior both with my Nexus One with Froyo and my Huawei U8110 with Eclair. Could anyone please enlighten me? Thanks a lot.

    Read the article

  • JUnit Test method with randomized nature

    - by Peter
    Hey, I'm working on a small project for myself at the moment and I'm using it as an opportunity to get acquainted with unit testing and maintaining proper documentation. I have a Deck class which represents a deck of cards (it's very simple and, to be honest, I can be sure that it works without a unit test, but like I said I'm getting used to using unit tests) and it has a shuffle() method which changes the order of the cards in the deck. The implementation is very simple and will certainly work:
      public void shuffle() {
          Collections.shuffle(this.cards);
      }
    But how could I implement a unit test for this method? My first thought was to check if the top card of the deck was different after calling shuffle(), but there is of course the possibility that it would be the same. My second thought was to check if the entire order of cards has changed, but again they could possibly be in the same order. So, how could I write a test that ensures this method works in all cases? And, in general, how can you unit test methods whose outcome depends on some randomness? Cheers, Pete

    Read the article

  • Am I mocking this helper function right in my Django test?

    - by CppLearner
    lib.py:
      from django.core.urlresolvers import reverse

      def render_reverse(f, kwargs):
          """kwargs is a dictionary, usually of the form {'args': [cbid]}"""
          return reverse(f, **kwargs)
    tests.py:
      from lib import render_reverse, print_ls

      class LibTest(unittest.TestCase):
          def test_render_reverse_is_correct(self):
              #with patch('webclient.apps.codebundles.lib.reverse') as mock_reverse:
              with patch('django.core.urlresolvers.reverse') as mock_reverse:
                  from lib import render_reverse
                  mock_f = MagicMock(name='f', return_value='dummy_views')
                  mock_kwargs = MagicMock(name='kwargs', return_value={'args': ['123']})
                  mock_reverse.return_value = '/natrium/cb/details/123'
                  response = render_reverse(mock_f(), mock_kwargs())
                  self.assertTrue('/natrium/cb/details/' in response)
    But instead, I get:
      File "/var/lib/graphyte-webclient/graphyte-webenv/lib/python2.6/site-packages/django/core/urlresolvers.py", line 296, in reverse
        "arguments '%s' not found." % (lookup_view_s, args, kwargs))
      NoReverseMatch: Reverse for 'dummy_readfile' with arguments '('123',)' and keyword arguments '{}' not found.
    Why is it calling reverse instead of my mock_reverse (it is looking up my urls.py!)? The author of the Mock library, Michael Foord, did a video cast here (around 9:17), and in the example he passed the mock object request to the view function index. Furthermore, he patched Poll and assigned an expected return value. Isn't that what I am doing here? I patched reverse? Thanks.

    Read the article

  • Optimize website for touch devices

    - by gregers
    On a touch device like an iPhone/iPad/Android it can be difficult to hit a small button with your finger. There is no cross-browser way to detect touch devices with CSS media queries that I know of, so I check if the browser has support for JavaScript touch events. Until now, other browsers haven't supported them, but the latest Google Chrome on the dev channel enables touch events (even for non-touch devices), and I suspect other browser makers will follow, since laptops with touch screens are coming. This is the test I use:
      function isTouchDevice() {
          try {
              document.createEvent("TouchEvent");
              return true;
          } catch (e) {
              return false;
          }
      }
    The problem is that this only tests whether the browser has support for touch events, not the device. Does anyone know of The Correct[tm] way of giving touch devices a better user experience? Other than sniffing the user agent. Mozilla has a media query for touch devices, but I haven't seen anything like it in any other browser: https://developer.mozilla.org/En/CSS/Media_queries#-moz-touch-enabled
    Update: I want to avoid using a separate page/site for mobile/touch devices. The solution has to detect touch devices with object detection or similar from JavaScript, or include a custom touch CSS without user agent sniffing! The main reason I asked was to make sure it's not possible today, before I contact the CSS3 working group.

    Read the article

  • High Runtime for Dictionary.Add for a large number of items

    - by aaginor
    Hi folks, I have a C# application that stores data from a text file in a Dictionary object. The amount of data to be stored can be rather large, so it takes a lot of time to insert the entries. With many items in the Dictionary it gets even worse, because of the resizing of the internal array that stores the data for the Dictionary. So I initialized the Dictionary with the number of items that will be added, but this has no impact on speed. Here is my function:
      private Dictionary<IdPair, Edge> AddEdgesToExistingNodes(HashSet<NodeConnection> connections)
      {
        Dictionary<IdPair, Edge> resultSet = new Dictionary<IdPair, Edge>(connections.Count);
        foreach (NodeConnection con in connections)
        {
          ...
          resultSet.Add(nodeIdPair, newEdge);
        }
        return resultSet;
      }
    In my tests, I insert ~300k items. I checked the running time with ANTS Performance Profiler and found that the average time for resultSet.Add(...) doesn't change when I initialize the Dictionary with the needed size. It is the same as when I initialize the Dictionary with new Dictionary(); (about 0.256 ms on average for each Add). This is definitely caused by the amount of data in the Dictionary (ALTHOUGH I initialized it with the desired size). For the first 20k items, the average time for Add is 0.03 ms for each item. Any idea how to make the add operation faster? Thanks in advance, Frank
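
    For what it's worth, one way to check whether the capacity constructor is doing anything at all is to time Dictionary.Add in isolation, outside the profiler and the rest of the application. The following is only a rough, hypothetical micro-benchmark (it uses int keys instead of the IdPair/Edge types, which aren't shown in full):
      using System;
      using System.Collections.Generic;
      using System.Diagnostics;

      class DictionaryAddBenchmark
      {
          const int Count = 300000;  // roughly the item count from the question

          static long TimeInserts(Dictionary<int, string> target)
          {
              Stopwatch sw = Stopwatch.StartNew();
              for (int i = 0; i < Count; i++)
              {
                  target.Add(i, "value");  // Add only, so resizing/hashing is all that is measured
              }
              sw.Stop();
              return sw.ElapsedMilliseconds;
          }

          static void Main()
          {
              TimeInserts(new Dictionary<int, string>(Count));  // warm-up run for the JIT

              long presized = TimeInserts(new Dictionary<int, string>(Count));  // capacity given up front
              long growing = TimeInserts(new Dictionary<int, string>());        // starts small and resizes

              Console.WriteLine("pre-sized: {0} ms, growing: {1} ms", presized, growing);
          }
      }
    If the two numbers come out close even with simple int keys, resizing is unlikely to be the dominant cost; in the original code it may then be worth looking at how IdPair implements GetHashCode and Equals, since an expensive or collision-prone key makes every Add slower as the dictionary fills, regardless of the initial capacity.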

    Read the article

  • Visual C++ overrides/mock objects for unit testing?

    - by Mark
    When I'm running unit tests, I want to be able to "stub out" or create a mock object, but I'm running into DLL hell. For example: there are two DLL libraries built, A.dll and B.dll. Classes in A.dll have calls to classes in B.dll, so when A.dll was built, the link line used B.lib for the definitions. My test driver (Foo.exe) is testing classes in A.dll, so it links against A.lib. However, I want to "stub out" some of the calls A.dll makes to B.dll with simple versions (return a basic value, no DB lookup, etc.). I can't build an Override.dll that just overrides the needed methods (not entire classes) and replace B.dll, because Foo.exe will either A) complain that B.dll is missing if I just remove it and put Override.dll in its place, or B) if I rename Override.dll to B.dll, complain that there are unresolved symbols because Override.dll is not a complete implementation of B.dll. Is there a way to do this? Is there a way to statically link Foo.exe with A.lib, B.lib and Override.lib such that it will work without having to completely rebuild A.lib and B.lib to remove the __declspec(dllexport)? Is there another option?

    Read the article

  • Ninject: Shared DI/IoC container

    - by joblot
    Hi, I want to share the container across various layers in my application. I started by creating a static class which initialises the container and registers types in the container:
      public class GeneralDIModule : NinjectModule
      {
          public override void Load()
          {
              Bind().To().InSingletonScope();
          }
      }

      public abstract class IoC
      {
          private static IKernel _container;

          public static void Initialize()
          {
              _container = new StandardKernel(new GeneralDIModule(), new ViewModelDIModule());
          }

          public static T Get<T>()
          {
              return _container.Get<T>();
          }
      }
    I noticed there is a Resolve method as well. What is the difference between Resolve and Get? In my unit tests I don't always want every registered type in my container. Is there a way of initializing an empty container and then registering only the types I need? I'll be mocking types in the unit tests as well, so I'll have to register those too. There is an Inject method, but it says the lifecycle of the instance is not managed. Could someone please set me on the right path? How can I register, unregister objects and reset the container? Thanks
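
    Not from the question, but as a hedged illustration of one shape such a wrapper sometimes takes so that tests can start from an empty kernel, rebind mocks and reset between tests (assuming Ninject's Rebind/ToConstant API; whether this fits the poster's design is an open question):
      using Ninject;
      using Ninject.Modules;

      // Hypothetical wrapper: tests call Initialize() with no modules to get an empty
      // kernel, rebind individual services to mocks, and Reset() between tests.
      public static class TestableIoC
      {
          private static IKernel _kernel;

          public static void Initialize(params INinjectModule[] modules)
          {
              _kernel = new StandardKernel(modules);  // zero modules gives an empty container
          }

          public static void RebindTo<TService>(TService instance)
          {
              // Rebind drops any existing binding for TService before binding the
              // supplied instance, which is convenient for swapping in a mock.
              _kernel.Rebind<TService>().ToConstant(instance);
          }

          public static T Get<T>()
          {
              return _kernel.Get<T>();
          }

          public static void Reset()
          {
              if (_kernel != null)
              {
                  _kernel.Dispose();
                  _kernel = null;
              }
          }
      }
    A test would then call Initialize() with no modules, RebindTo(...) with whatever mock instances it needs, and Reset() in teardown.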

    Read the article

  • MS Word opens documents hosted on WebDav share read-only on Windows Vista and 7 but only if no other

    - by rjmunro
    We have a WebDAV server with some Word documents on it. (We are using PHP's HTTP_WebDAV_Server but get the same issue in tests with Apache mod_dav - both use digest authentication; basic auth doesn't work on Vista or later.) We have a web page that opens the Word documents using JavaScript like:
      Doc = new ActiveXObject("Sharepoint.OpenDocuments.3");
      Doc.EditDocument(url, 'Word.Document');
    which causes Word to connect to the WebDAV server and open the document, bypassing IE and most of Windows' built-in WebDAV client. On Windows XP, this works perfectly, and (after prompting you to log in) allows you to edit the Word document and save it back to the server. On Windows 7 and Windows Vista, this usually opens the document read-only, but not in all cases. After quite a bit of trial and error, we found that it worked (i.e. opened read/write) if Explorer happened to be already connected to a WebDAV server. Note that this works with any WebDAV server, not necessarily the one with the document that you are trying to edit. So, other than telling our users to change settings on their machines, is there anything we can do in the JavaScript SharePoint call, or on the WebDAV server, that will fix this issue?
    Ps. We have the same problem when launching Word from an HTA file version of our system, with JavaScript like:
      wordApp = new ActiveXObject("Word.application");
      wordApp.Visible = true;
      doc = wordApp.Documents.Open(url);
    Pps. Sorry if you think this question should be on Serverfault (or even SuperUser). I couldn't decide, but because we are programming the WebDav server ourselves (in PHP) and I have more rep on this site than the others, I decided to post it here :-)

    Read the article

  • Code Coverage tool for BlackBerry

    - by Skrud
    I'm looking for a code coverage tool that I can use with a BlackBerry application. I'm using J2ME-Unit for Unit Testing and I want to see how much of my code is being covered by my tests. I've tried using Cobertura for J2ME but after days of wrestling with it I failed to get any results from it. (I believe that the instrumentation is un-done by the RAPC compilation). And despite this message, the project seems to be dead. I've looked at JInjector but the project seems very incomplete. There is little (if any) documentation and although it claims to be able to work with BlackBerry projects, I haven't seen any places where it has been used for that purpose. I've played with the project quite a bit but to no avail. I've also tried the "Coverage" view in the BlackBerry JDE, even though I use Eclipse for development. The view stays permanently blank, regardless of clicking "Refresh" and running the application from the JDE. I've looked at most of the tools on this SO thread, but they won't work with J2ME/BlackBerry projects. Has anyone had any success with any code coverage tools on the BlackBerry? If so, what tools have you used? How have you used them? If anyone has managed to get JInjector or Cobertura for J2ME to work with a BlackBerry project, what did you have to do to get it working?

    Read the article

  • Spring: Bean fails to read off values from external Properties file when using @Value annotation

    - by daydreamer
    XML configuration:
      <beans xmlns="http://www.springframework.org/schema/beans"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:util="http://www.springframework.org/schema/util"
             xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                                 http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd">
          <util:properties id="mongoProperties" location="file:///storage/local.properties" />
          <bean id="mongoService" class="com.business.persist.MongoService"></bean>
      </beans>
    and MongoService looks like:
      @Service
      public class MongoService {
          @Value("#{mongoProperties[host]}")
          private String host;

          @Value("#{mongoProperties[port]}")
          private int port;

          @Value("#{mongoProperties[database]}")
          private String database;

          private Mongo mongo;
          private static final Logger LOGGER = LoggerFactory.getLogger(MongoService.class);

          public MongoService() throws UnknownHostException {
              LOGGER.info("host=" + host + ", port=" + port + ", database=" + database);
              mongo = new Mongo(host, port);
          }

          public void putDocument(@Nonnull final DBObject document) {
              LOGGER.info("inserting document - " + document.toString());
              mongo.getDB(database).getCollection(getCollectionName(document)).insert(document, WriteConcern.SAFE);
          }
    I write my MongoServiceTest as:
      public class MongoServiceTest {
          @Autowired
          private MongoService mongoService;

          public MongoServiceTest() throws UnknownHostException {
              mongoRule = new MongoRule();
          }

          @Test
          public void testMongoService() {
              final DBObject document = DBContract.getUniqueQuery("001");
              document.put(DBContract.RVARIABLES, "values");
              document.put(DBContract.PVARIABLES, "values");
              mongoService.putDocument(document);
          }
    and I see failures in the tests:
      12:37:25.224 [main] INFO c.s.business.persist.MongoService - host=null, port=0, database=null
      java.lang.NullPointerException
        at com.business.persist.MongoServiceTest.testMongoService(MongoServiceTest.java:40)
    which means the bean was not able to read the values from local.properties.
    local.properties:
      ### === MongoDB interaction === ###
      host="127.0.0.1"
      port=27017
      database=contract
    How do I fix this?
    Update: It doesn't seem to read the values even after creating setters/getters for the fields. I am really clueless now. How can I even debug this issue? Thanks much!

    Read the article

  • How can I write a unit test to determine whether an object can be garbage collected?

    - by driis
    In relation to my previous question, I need to check whether a component that will be instantiated by Castle Windsor can be garbage collected after my code has finished using it. I have tried the suggestion in the answers to the previous question, but it does not seem to work as expected, at least for my code. So I would like to write a unit test that tests whether a specific object instance can be garbage collected after some of my code has run. Is that possible to do in a reliable way?
    EDIT: I currently have the following test based on Paul Stovell's answer, which succeeds:
      [TestMethod]
      public void ReleaseTest()
      {
          WindsorContainer container = new WindsorContainer();
          container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
          container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient);
          Assert.AreEqual(0, ReleaseTester.refCount);
          var weakRef = new WeakReference(container.Resolve<ReleaseTester>());
          Assert.AreEqual(1, ReleaseTester.refCount);
          GC.Collect();
          GC.WaitForPendingFinalizers();
          Assert.AreEqual(0, ReleaseTester.refCount, "Component not released");
      }

      private class ReleaseTester
      {
          public static int refCount = 0;

          public ReleaseTester()
          {
              refCount++;
          }

          ~ReleaseTester()
          {
              refCount--;
          }
      }
    Am I right in assuming that, based on the test above, I can conclude that Windsor will not leak memory when using the NoTrackingReleasePolicy?
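
    As a general illustration (this is not from the question): a common variation of this kind of test asserts on the WeakReference itself rather than on a finalizer-maintained counter, keeping the allocation in a separate non-inlined method so the JIT cannot keep the instance reachable from the test method's own stack frame. A minimal sketch, independent of Windsor:
      using System;
      using System.Runtime.CompilerServices;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class CollectabilityTests
      {
          // Keep the allocation in a separate, non-inlined method so the JIT does not
          // hold a hidden local reference alive in the test method's frame.
          [MethodImpl(MethodImplOptions.NoInlining)]
          private static WeakReference CreateInstance()
          {
              return new WeakReference(new object());
          }

          [TestMethod]
          public void InstanceCanBeGarbageCollected()
          {
              WeakReference weakRef = CreateInstance();

              GC.Collect();
              GC.WaitForPendingFinalizers();
              GC.Collect();

              Assert.IsFalse(weakRef.IsAlive, "Instance was still reachable after a full collection");
          }
      }
    The same shape could wrap a container.Resolve call instead of new object() when the target is a specific component.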

    Read the article

  • How can this code throw a NullReferenceException?

    - by Davy8
    I must be going out of my mind, because my unit tests are failing: the following code is throwing a null reference exception:
      int pid = 0;
      if (parentCategory != null)
      {
          Console.WriteLine(parentCategory.Id);
          pid = parentCategory.Id;
      }
    The line that's throwing it is:
      pid = parentCategory.Id;
    The Console.WriteLine is just there to debug in the NUnit GUI, but it outputs a valid int.
    Edit: It's single-threaded, so parentCategory can't be assigned null from some other thread, and the fact that the Console.WriteLine successfully prints out the value shows that it shouldn't throw.
    Edit: Relevant snippets of the Category class:
      public class Category
      {
          private readonly int id;

          public Category(Category parent, int id)
          {
              Parent = parent;
              parent.PerformIfNotNull(() => parent.subcategories.AddIfNew(this));
              Name = string.Empty;
              this.id = id;
          }

          public int Id
          {
              get { return id; }
          }
      }
    Well, if anyone wants to look at the full code, it's on Google Code at http://code.google.com/p/chefbook/source/checkout I think I'm going to try rebooting the computer... I've seen pretty weird things fixed by a reboot. Will update after reboot.
    Update: Mystery solved. It looks like NUnit shows the error line as the last successfully executed statement. Copy/pasting the test into a new console app and running it in VS showed that it was the line after the if statement block (not shown) that contained the null reference. Thanks for all the ideas everyone. +1 to everyone that answered.

    Read the article

  • Problem: writing parameter values to data driven MSTEST output

    - by Shubh
    Hi, I am trying to extract some information about the parameter variants used in an MSTEST data-driven test case from the trx file. Currently, for data-driven tests, I get the output of the same test case with different inputs as a sequence of tags, but there is no information about the value of the variants.
    Example: Suppose we have a [data driven] TestMethod1() and the data rows contain variations a and b. There are two variations: a=1,b=2 for which the test passes and a=3,b=4 for which the test fails. If we can output in the trx file the info that it was a=1,b=2 which passed and a=3,b=4 which failed, then:
    - The output is meaningful, giving better information about test case runs from the output file alone (without any dependencies).
    - A test failure can be investigated without rerunning the whole set.
    - If the data rows change in the data source (now a=1,b=2 pass and a=5,b=6 fail), it is easy to see that the errors are different, although the fail sequence is still the same (row 0 passes, row 1 fails, but now row 1 is different).
    Has any of you gone through a similar problem? What did you follow? I tried to put the parameter value information in the Description attribute of TestMethod; it didn't work. Any other methods you think could work? thanks, Shubhankar
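
    Purely as a hedged illustration (not something the trx does by itself): in an MSTest data-driven test the current row is exposed through TestContext.DataRow, so a test can at least echo its parameter values into the per-iteration test output, which is captured in the trx. The DataSource arguments and the column names (a, b) below are placeholders:
      using System.Data;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class ParameterEchoTests
      {
          // MSTest assigns this property before each test run.
          public TestContext TestContext { get; set; }

          [TestMethod]
          [DataSource("System.Data.SqlClient", "connection-string-here", "TestData", DataAccessMethod.Sequential)]
          public void TestMethod1()
          {
              DataRow row = TestContext.DataRow;

              // Echo the current row's values so each iteration in the .trx shows its inputs.
              TestContext.WriteLine("a={0}, b={1}", row["a"], row["b"]);

              // ... actual assertions against a and b would go here ...
          }
      }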

    Read the article

  • How to convert many thousands of lines of VBScript to C#?

    - by Ross Patterson
    I have a collection of about 10,000 small VBScript programs (50-100 lines each) and a small collection of larger ones, and I'm looking for a way to convert them to C# without resorting to by-hand transliteration. The programs are automated test cases for a web application, written for HP/Mercury's QuickTest Pro, and I'm trying to turn them into test cases for Selenium. Luckily, the tests appear to be well-written, using a library of building blocks and idioms (the larger programs), so the test cases actually resemble a domain-specific language more than they do VBScript, and the QTP-ness is well-buried inside the libraries. Ideally, what I'm searching for is a tool that can do the syntactic transformation from VBScript to C# for both the DSL-ish test cases and also the more complicated building-block libraries. That would leave me with a manual cleanup of the libraries, and probably very little work on the test cases. If I could find a VBScript-to-VB.NET translator, I'd take that also, as I suspect I could compile the VB.NET and then decompile to C# using .NET Reflector or something similar. Plan B is to write a translator of my own for the test cases, since they're in a very straight-line style, but it wouldn't help with the libraries. Any suggestions? I haven't written a compiler in at least 15 years, and while I haven't forgotten how, I'm not looking forward to it - least of all for VBScript!

    Read the article

  • OleDB connection only working when debugging

    - by Francesc
    I have a C# application that connects to a named SQL Express instance on the local machine using OleDbConnection:
      _connection = new OleDbConnection(_strConn);
      _connection.Open();
    _strConn is something like this:
      "Provider=sqloledb;Data Source=.\NAMEDINSTANCE;Initial Catalog=dbname;User Id=sa;Password=password;"
    If I debug the application, the connection works fine. If I run the application from Windows Explorer (the same debug compilation), I get an "OleDbException: Login timeout expired" on the Open() line after 30 seconds. The strange thing is that the exception happens even if I attach the debugger to the exe. I can see that the connection string is correct and everything seems fine. I can't find any extra information in the SQL Express error log or SQL Activity Monitor either. If it helps, here is the exception:
      System.Data.OleDb.OleDbException: Login timeout expired
        at System.Data.OleDb.OleDbConnectionInternal..ctor(OleDbConnectionString constr, OleDbConnection connection)
        at System.Data.OleDb.OleDbConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.OleDb.OleDbConnection.Open()
    I imagine that finding the issue with the information I give here might be difficult, but I don't know where else to look or what other tests to do, so any ideas on what it could be, or what tests I could do to find out, would be really appreciated.
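
    As an illustrative diagnostic only (not from the question): OleDbException exposes an Errors collection whose entries sometimes carry more provider-level detail than the top-level "Login timeout expired" message, so one cheap test is to dump them when Open() fails. A minimal sketch, reusing the connection string shape from above with placeholder credentials:
      using System;
      using System.Data.OleDb;

      class ConnectionDiagnostic
      {
          static void Main()
          {
              // Connection string shape copied from the question; credentials are placeholders.
              string strConn = @"Provider=sqloledb;Data Source=.\NAMEDINSTANCE;Initial Catalog=dbname;User Id=sa;Password=password;";

              using (var connection = new OleDbConnection(strConn))
              {
                  try
                  {
                      connection.Open();
                      Console.WriteLine("Connected, server version: " + connection.ServerVersion);
                  }
                  catch (OleDbException ex)
                  {
                      // The Errors collection often holds provider-level details
                      // (native error number, source, SQL state) beyond ex.Message.
                      foreach (OleDbError error in ex.Errors)
                      {
                          Console.WriteLine("{0} (native {1}, source {2}, state {3})",
                              error.Message, error.NativeError, error.Source, error.SQLState);
                      }
                  }
              }
          }
      }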

    Read the article

  • Is there a supplementary guide/answer key for ruby koans?

    - by corroded
    I have recently tried sharpening my Rails skills with this tool: http://github.com/edgecase/ruby_koans but I am having trouble passing some tests. Also, I am not sure if I'm doing some things correctly: since the objective is just to pass the test, there are a lot of ways of passing it, and I may be doing something that isn't up to standards. Is there a way to confirm that I'm doing things right?
    A specific example, in about_nil:
      def test_nil_is_an_object
        assert_equal __, nil.is_a?(Object), "Unlike NULL in other languages"
      end
    So is it telling me to check whether that second clause is equal to an object (so I can say nil is an object), or to just put
      assert_equal true, nil.is_a?(Object)
    because the statement is true? And the next test:
      def test_you_dont_get_null_pointer_errors_when_calling_methods_on_nil
        # What happens when you call a method that doesn't exist. The
        # following begin/rescue/end code block captures the exception and
        # make some assertions about it.
        begin
          nil.some_method_nil_doesnt_know_about
        rescue Exception => ex
          # What exception has been caught?
          assert_equal __, ex.class

          # What message was attached to the exception?
          # (HINT: replace __ with part of the error message.)
          assert_match(/__/, ex.message)
        end
      end
    I'm guessing I should put a "No method error" string in the assert_match, but what about the assert_equal?

    Read the article

  • How to test method call order with Moq

    - by Finglas
    At the moment I have:
      [Test]
      public void DrawDrawsAllScreensInTheReverseOrderOfTheStack()
      {
          // Arrange.
          var screenMockOne = new Mock<IScreen>();
          var screenMockTwo = new Mock<IScreen>();

          var screens = new List<IScreen>();
          screens.Add(screenMockOne.Object);
          screens.Add(screenMockTwo.Object);

          var stackOfScreensMock = new Mock<IScreenStack>();
          stackOfScreensMock.Setup(s => s.ToArray()).Returns(screens.ToArray());

          var screenManager = new ScreenManager(stackOfScreensMock.Object);

          // Act.
          screenManager.Draw(new Mock<GameTime>().Object);

          // Assert.
          screenMockOne.Verify(smo => smo.Draw(It.IsAny<GameTime>()), Times.Once(), "Draw was not called on screen mock one");
          screenMockTwo.Verify(smo => smo.Draw(It.IsAny<GameTime>()), Times.Once(), "Draw was not called on screen mock two");
      }
    But the order in which I draw my objects in the production code does not matter; I could draw one first, or two, and the test would still pass. However, it should matter, as the draw order is important. How do you (using Moq) ensure methods are called in a certain order?
    Edit: I got rid of that test. The Draw method has been removed from my unit tests; I'll just have to test manually that it works. The reversing of the order, though, was moved into a separate test class where it was tested, so it's not all bad. Thanks for the link about the feature they are looking into. I sure hope it gets added soon - very handy.
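
    For illustration only (not the route the poster ultimately took): before Moq had any built-in ordering support, a common workaround was to have each setup record its call into a shared list via Callback and then assert on the recorded sequence. A rough sketch reusing the interface names from the test above (IScreen, IScreenStack, ScreenManager and GameTime come from the question and are not defined here):
      using System.Collections.Generic;
      using Moq;
      using NUnit.Framework;

      [TestFixture]
      public class ScreenDrawOrderTests
      {
          [Test]
          public void DrawCallsScreensInExpectedOrder()
          {
              var callOrder = new List<string>();

              var screenMockOne = new Mock<IScreen>();
              var screenMockTwo = new Mock<IScreen>();

              // Each Draw call appends a marker, so the sequence of calls is observable.
              screenMockOne.Setup(s => s.Draw(It.IsAny<GameTime>())).Callback(() => callOrder.Add("one"));
              screenMockTwo.Setup(s => s.Draw(It.IsAny<GameTime>())).Callback(() => callOrder.Add("two"));

              var stackOfScreensMock = new Mock<IScreenStack>();
              stackOfScreensMock.Setup(s => s.ToArray())
                                .Returns(new[] { screenMockOne.Object, screenMockTwo.Object });

              new ScreenManager(stackOfScreensMock.Object).Draw(new Mock<GameTime>().Object);

              // Assert whatever order ScreenManager is specified to use;
              // "two" before "one" here is purely illustrative.
              CollectionAssert.AreEqual(new[] { "two", "one" }, callOrder);
          }
      }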

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We a have a GDBM key-value database as the backend to a load-balanced web-facing application that is in implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS, and this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and have some resources available, including the application developer time and Unix admins. My main constraint is time only have the resources for a few weeks. As I see it, my options are: Improve NFS performance by tuning parameters. My instinct is we wont get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning. Move to a different key-value database, such as memcachedb or Tokyo Cabinet. Replace NFS with some other protocol (iSCSI has been mentioned, but i am not familiar with it). How should I approach this problem?

    Read the article

  • rspec and ruby 1.9.1: problem with dummy controller and routes

    - by giorgian
    I want to test a module that basically executes some verify statements, to ensure that actions are invoked with the correct method. # /lib/rest_verification.rb module RestVerification def self.included(base) # :nodoc: base.extend(ClassMethods) end module ClassMethods def verify_rest_actions verify :method => :post, :only => [:create], :redirect_to => { :action => :new } ... end end end I tried this: describe RestVerification do class FooController < ActionController::Base include RestVerification verify_rest_actions def new ; end def index ; end def create ; end def edit ; end def update ; end def destroy ; end end # controller_name 'foo' # this only works with ruby 1.8.7 : 1.9.1 says "uninitialized constant FooController" tests FooController # this works with both before(:each) do ActionController::Routing::Routes.draw do |map| map.resources :foo end end after(:each) do ActionController::Routing::Routes.reload! end it ':create should redirect to :new if invoked with wrong verb' do [:get, :put, :delete].each do |verb| send verb, :create response.should redirect_to(new_foo_url) end end ... end When testing: $ ruby -v ruby 1.8.7 (2010-01-10 patchlevel 249) [i486-linux] $ rake RestVerification :create should redirect to :new if invoked with wrong verb Finished in 0.175586 seconds $ rvm use 1.9.1 Using ruby 1.9.1 p378 $ rake RestVerification :create should redirect to :new if invoked with wrong verb (FAILED - 1) 1) 'RestVerification :create should redirect to :new if invoked with wrong verb' FAILED expected redirect to "http://test.host/foo/new", got redirect to "http://test.host/spec/rails/example/controller_example_group/subclass_1/foo/new" Is this a known issue? Is there a workaround?

    Read the article

  • Maven Cobertura: unit test fails but build succeeds

    - by Pavel Drobushevich
    Hi all, I've configured Cobertura code coverage in my pom:
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>cobertura-maven-plugin</artifactId>
        <version>2.4</version>
        <configuration>
          <instrumentation>
            <excludes>
              <exclude>**/*Exception.class</exclude>
            </excludes>
          </instrumentation>
          <formats>
            <format>xml</format>
            <format>html</format>
          </formats>
        </configuration>
      </plugin>
    and start the tests with the following command:
      mvn clean cobertura:cobertura
    But if one of the unit tests fails, Cobertura only logs this information and doesn't mark the build as failed:
      Tests run: 287, Failures: 1, Errors: 1, Skipped: 0
      Flushing results...
      Flushing results done
      Cobertura: Loaded information on 139 classes.
      Cobertura: Saved information on 139 classes.
      [ERROR] There are test failures.
      .................................
      [INFO] BUILD SUCCESS
    How do I configure Cobertura to mark the build as failed when a unit test fails? Thanks in advance.

    Read the article

  • How to (unit-)test data intensive PL/SQL application

    - by doom2.wad
    Our team is willing to unit-test new code written under a running project extending an existing huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is not an exception. Most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it) or transform one complicated data structure into another (complicated in another way). What is the best approach to unit-test such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" stuff and necessary data transformations) are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a testing data set seems to be impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members tend to be tired of figuring out how to even start, for our experience with unit-testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).

    Read the article

  • IIS6, ASP.NET MVC 1 and random slowdowns

    - by Mr Snuffle
    I've recently deployed an MVC application to an IIS6 web server. One strange behaviour I've been seeing is that load times will randomly blow up to 30+ seconds and then return to normal. Our tests have shown this occurring on multiple connections at the same time. Once the wait has passed, the site becomes responsive again. It's completely random when this will occur, but it probably happens about once every 15 minutes or so. My first thought was that the application was being restarted by the web server for some reason, but I determined this wasn't the case because process recycling is set very infrequently, and I placed some logging in the application startup. It's also nothing to do with the database connection; this slowdown happens simply when moving between static pages too. I've watched the database with a SQL profiler, and nothing is hitting it when these slowdowns occur. Finally, I've placed entry and exit logging on my controller actions; the slowdown always happens outside of the controller, and the entry and exit times for a controller action are always appropriately fast. Does anyone have any ideas of what could be causing this? I've tried running it locally on IIS7 and I haven't had the issue. I can only think it's something to do with our hosting provider.
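
    One way to narrow this kind of thing down, offered here only as a hedged sketch rather than anything from the post: time every request end-to-end in Global.asax, which shows whether the 30-second wait is spent inside the ASP.NET pipeline at all or before the request ever reaches it. The 5-second threshold and the Trace target are arbitrary choices; the handlers would be added to the application's existing HttpApplication class:
      using System;
      using System.Diagnostics;
      using System.Web;

      public class MvcApplication : HttpApplication
      {
          protected void Application_BeginRequest(object sender, EventArgs e)
          {
              // Start a per-request stopwatch as soon as ASP.NET sees the request.
              HttpContext.Current.Items["RequestTimer"] = Stopwatch.StartNew();
          }

          protected void Application_EndRequest(object sender, EventArgs e)
          {
              var timer = HttpContext.Current.Items["RequestTimer"] as Stopwatch;
              if (timer == null)
                  return;

              timer.Stop();
              if (timer.Elapsed.TotalSeconds > 5)
              {
                  // Log only the slow requests; swap in the application's own logger here.
                  Trace.WriteLine(string.Format("Slow request {0} took {1:F1}s",
                      HttpContext.Current.Request.RawUrl, timer.Elapsed.TotalSeconds));
              }
          }
      }
    If slow requests never show up in this log while users still see 30-second stalls, the delay is happening before ASP.NET (IIS queueing, the host's network, load balancer), which points away from the application code.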

    Read the article

  • Using jQuery with Windows 8 Metro JavaScript App causes security error

    - by patridge
    Since it sounded like jQuery was an option for Metro JavaScript apps, I was starting to look forward to Windows 8 dev. I installed Visual Studio 2012 Express RC and started a new project (both the empty and grid templates have the same problem). I made a local copy of jQuery 1.7.2 and added it as a script reference:
      <!-- SomeTestApp references -->
      <link href="/css/default.css" rel="stylesheet" />
      <script src="/js/jquery-1.7.2.js"></script>
      <script src="/js/default.js"></script>
    Unfortunately, as soon as I ran the resulting app it tossed out a console error:
      HTML1701: Unable to add dynamic content ' a'
      A script attempted to inject dynamic content, or elements previously modified dynamically, that might be unsafe. For example, using the innerHTML property to add script or malformed HTML will generate this exception. Use the toStaticHTML method to filter dynamic content, or explicitly create elements and attributes with a method such as createElement. For more information, see http://go.microsoft.com/fwlink/?LinkID=247104.
    I slapped a breakpoint in a non-minified version of jQuery and found the offending line:
      div.innerHTML = " <link/><table></table><a href='/a' style='top:1px;float:left;opacity:.55;'>a</a><input type='checkbox'/>";
    Apparently, the security model for Metro apps forbids creating elements this way. This error doesn't cause any immediate issues for the user, but given its location, I am worried it will cause capability-discovery tests in jQuery to fail that shouldn't. I definitely want jQuery's $.Deferred for making just about everything easier. I would prefer to be able to use the selector engine and event handling systems, but I could live without them if I had to. How does one get the latest jQuery to play nicely with Metro development?

    Read the article
