Search Results

Search found 12476 results on 500 pages for 'unit testing'.

Page 65/500

  • Must all new features go through beta testing?

    - by LTR
    Obviously, small usability fixes and bugfixes go directly into the stable product. What about small new features? Can you afford to just release them after internal testing, or do they have to be beta-tested by customers first? Situation: this is a young commercial project, produced by a one-person company. It has an existing user base and is at its second major version. Previous beta tests have produced some results; however, most feedback came from the stable product and not from the beta versions.

  • Need to test .properties one by one in every possibility?

    - by Shengyuan Lu
    For example, there are some key-value entries in a .properties file, such as someFeatureEnable=true. This must be a boolean value, which will be parsed by the framework; in my case it's a typical Java Spring configuration. Spring will handle the configuration and throw an exception when users set someFeatureEnable=123. My question is: if there are many properties in the .properties file, is it worth testing them one by one? It's quite troublesome and low priority. The .properties file is always configured by the tech administrator staff, so there is little chance that they will mess up the configuration. Thanks!
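
    One low-cost alternative to testing each key by hand is a single test that sweeps every property that must parse as a boolean. A minimal JUnit sketch, assuming a hypothetical config.properties on the test classpath and a hand-maintained list of boolean keys:

        import java.io.InputStream;
        import java.util.Properties;
        import org.junit.Test;
        import static org.junit.Assert.assertTrue;

        public class PropertiesSmokeTest {

            // Hypothetical list of the keys that must hold a boolean.
            private static final String[] BOOLEAN_KEYS = { "someFeatureEnable" };

            @Test
            public void booleanPropertiesAreWellFormed() throws Exception {
                Properties props = new Properties();
                try (InputStream in = getClass().getResourceAsStream("/config.properties")) {
                    props.load(in);
                }
                for (String key : BOOLEAN_KEYS) {
                    String value = props.getProperty(key);
                    // Boolean.parseBoolean() never throws, so check the text explicitly.
                    assertTrue(key + " must be true or false, was: " + value,
                            "true".equalsIgnoreCase(value) || "false".equalsIgnoreCase(value));
                }
            }
        }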

  • Visual Studio Load Tests Virtual Users Simulation

    - by Eldar
    Hello, I'm currently working on a load testing application that takes advantage of the Load Test feature in Visual Studio 2010. The load test will simulate 20 users on the same machine, and I need some data to be shared in memory between all simulated users. I was surprised I couldn't find documentation answering the following question: what separates each virtual user's running context from the others? Does each virtual user run the tests in its own process? In its own app domain? Or just on its own thread? I need to know because if each user runs its tests in its own process, then the in-memory cache isn't shared and is created once per user instead of once for all of them, which is bad for me.
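
    One way to answer the isolation question empirically is to route all simulated users through a process-wide cache and log its identity. A minimal sketch, assuming a hypothetical SharedCache helper (not part of the Load Test API):

        using System;
        using System.Collections.Concurrent;

        // Hypothetical helper: a process-wide, thread-safe cache.
        public static class SharedCache
        {
            // One id per process; if every virtual user logs the same id,
            // the users share a process and the cache is built only once.
            public static readonly Guid InstanceId = Guid.NewGuid();

            private static readonly ConcurrentDictionary<string, object> Items =
                new ConcurrentDictionary<string, object>();

            public static object GetOrAdd(string key, Func<string, object> factory)
            {
                return Items.GetOrAdd(key, factory);
            }
        }

    Logging SharedCache.InstanceId from each virtual user's test then settles it: a single id in the results means one shared process, many ids mean per-user isolation.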

  • What Does It Usually Mean for a Feature to be "Supported"?

    - by joshin4colours
    I'm currently doing some testing for a particular area of an application. I had to write some automated tests for a particular feature, but due to the circumstances this was not easy to do. When I asked one of the other testers about it, he mentioned that the same feature exists in a sister application our company produces but isn't documented anywhere (end-user documentation or otherwise). He also said that the feature doesn't typically get tested at all in the sister application and isn't usually tested in the application I work on. Apparently this feature isn't heavily used, but removing it would require a fair bit of work, so the benefit-cost ratio doesn't work out. All of this has left me with some questions. Other than "The documentation says so" or "We told the client it is", what usually makes a feature "supported" versus unsupported?

  • Is asking for control totals on a file an outdated means of verifying a file?

    - by CTKeane
    I'm in a new position where I need to process flat files on a regular basis. The last time I did this was 5 or 6 years ago, and as part of the file layout I received control totals. These gave me simple information about the file, like the total number of records as well as sums of the important fields. This helped me during testing, and also in production, to verify that the file had arrived and contained correct information. I have asked for similar data for this new project and have hit a wall of no. Is this no longer a standard practice? Is there a better way?
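
    Part of what made control totals standard practice is that they are nearly free to compute while streaming the file. A minimal C# sketch, assuming a hypothetical pipe-delimited layout with an amount field in the third column:

        using System;
        using System.Globalization;
        using System.IO;

        class ControlTotals
        {
            static void Main(string[] args)
            {
                int recordCount = 0;
                decimal amountTotal = 0m; // sum of one "important field"

                foreach (string line in File.ReadLines(args[0]))
                {
                    string[] fields = line.Split('|');
                    recordCount++;
                    amountTotal += decimal.Parse(fields[2], CultureInfo.InvariantCulture);
                }

                // Compare these against the totals supplied with the file.
                Console.WriteLine("Records: {0}, Amount total: {1}", recordCount, amountTotal);
            }
        }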

  • Handling learning curve for new developers

    - by pete the pagan-gerbil
    Our company likes to hire new developers with no experience. We have a core set of skills that we try to get them up to speed with, like ASP.NET and WinForms - to teach basic programming, the .NET languages, and the things they'll need to maintain and write. We also try to mentor them through early projects, so they can learn from someone more experienced. Recently, we've been seeing the benefits of new frameworks like MVC and ideas like unit testing and TDD (by extension, dependency injection and IoC), and we'd like to start using these in the team. However, this increases the time a junior needs before they can get started on a new project, because doing something like unit tests wrong could cause major headaches months or years later in maintenance, especially if we believe the unit tests to be comprehensive. How do you handle the huge amount of things that a junior will need to take on, acknowledging that the business wants them working independently as soon as possible? Is it acceptable to tell them not to unit test until a while after they are independent (and give them small, simpler projects in the meantime) before taking them to 'level 2' of the core skills?

  • Is JPA persistence.xml located via the classpath?

    - by Vinnie
    Here's what I'm trying to do. I'm using JPA persistence in a web application, but I have a set of unit tests that I want to run outside of a container. I have my primary persistence.xml in the META-INF folder of my main app, and it works great in the container (Glassfish). I placed a second persistence.xml in the META-INF folder of my test-classes directory. This contains a separate persistence unit that I want to use for tests only. In Eclipse, I placed this folder higher in the classpath than the default folder and it seems to work. But when I run the Maven build directly from the command line and it attempts to run the unit tests, the persistence.xml override is ignored. I can see the override in the META-INF folder of the Maven-generated test-classes directory, and I expected the Maven tests to use this file, but they don't. My Spring test configuration overrides, achieved in a similar fashion, are working. I'm confused as to whether persistence.xml is located through the classpath. If it were, my override should work like the Spring override, since the Maven Surefire plugin documentation says "[the test class directory] will be included at the beginning of the test classpath". Did I wrongly anticipate how the persistence.xml file is located? I could (and did) create a second persistence unit in the production persistence.xml file, but it feels dirty to place test configuration into this production file. Any other ideas on how to achieve my goal are welcome.
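
    One diagnostic that sidesteps the ordering question is to give the test unit its own name, so the provider either finds it or fails loudly. A minimal sketch, assuming a hypothetical unit named "test-pu" declared only in the test persistence.xml:

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class JpaTestSupport {
            // "test-pu" is a hypothetical unit name declared only in
            // src/test/resources/META-INF/persistence.xml. If the test
            // file is not visible on the classpath, this throws immediately.
            private static final EntityManagerFactory EMF =
                    Persistence.createEntityManagerFactory("test-pu");

            public static EntityManager newEntityManager() {
                return EMF.createEntityManager();
            }
        }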

  • Unit test project doesn't recognize the classes it was generated from

    - by DougLeary
    I have a fairly simple file-system website consisting of one aspx page and several classes in separate .cs files. Everything is on my own HD. The web app itself builds and runs fine. Out of curiosity I decided to try out Visual Studio's nifty, easy-to-use unit test feature. So I opened each class file and clicked Create Unit Tests. VS generated a test project containing a set of test classes and some other files. Easy! But when I try to build or run the test project it throws a series of build errors, one for every class: The type or namespace name 'class-name' could not be found (are you missing a using directive or an assembly reference?). Somebody asked if my test project has a reference to the original project. Well no, because the original project is a file-system website. It has no bin folder and no DLL, so there's nothing to reference as far as I can tell. I would think that since VS generated these unit tests it would generate whatever references it needs, but apparently not. Is generating unit tests for file-system web apps an undocumented no-no, or is there a magic trick to getting it to work?

  • How do I change the base class at runtime in C#?

    - by MatthewMartin
    I may be working on mission impossible here, but I seem to be getting close. I want to extend an ASP.NET control, and I want my code to be unit testable. Also, I'd like to be able to fake the behaviors of a real Label (namely things like ID generation, etc.), which a real Label can't do in an NUnit host. Here's a working example that makes assertions on something that depends on a real base class and something that doesn't (in a more realistic unit test, the test would depend on both, i.e. an ID existing and some custom behavior). Anyhow, the code says it better than I can:

        public class LabelWrapper : Label // Runtime
        // public class LabelWrapper : FakeLabel // Unit test time
        {
            private readonly LabelLogic logic = new LabelLogic();

            public override string Text
            {
                get { return logic.ProcessGetText(base.Text); }
                set { base.Text = logic.ProcessSetText(value); }
            }
        }

        // Ugh, now I have to test FakeLabelWrapper
        public class FakeLabelWrapper : FakeLabel // Unit test time
        {
            private readonly LabelLogic logic = new LabelLogic();

            public override string Text
            {
                get { return logic.ProcessGetText(base.Text); }
                set { base.Text = logic.ProcessSetText(value); }
            }
        }

        [TestFixture]
        public class UnitTest
        {
            [Test]
            public void Test()
            {
                // Wish this was LabelWrapper label = new LabelWrapper(new FakeBase())
                LabelWrapper label = new LabelWrapper();
                // FakeLabelWrapper label = new FakeLabelWrapper();
                label.Text = "ToUpper";
                Assert.AreEqual("TOUPPER", label.Text);

                StringWriter stringWriter = new StringWriter();
                HtmlTextWriter writer = new HtmlTextWriter(stringWriter);
                label.RenderControl(writer);
                Assert.AreEqual(1, label.ID);
                Assert.AreEqual("<span>TOUPPER</span>", stringWriter.ToString());
            }
        }

        public class FakeLabel
        {
            public virtual string Text { get; set; }

            public void RenderControl(TextWriter writer)
            {
                writer.Write("<span>" + Text + "</span>");
            }
        }

        // System under test
        internal class LabelLogic
        {
            internal string ProcessGetText(string value)
            {
                return value.ToUpper();
            }

            internal string ProcessSetText(string value)
            {
                return value.ToUpper();
            }
        }

  • Rails 4 testing bug?

    - by Jamato
    Situation: if we add two identical line items to a cart, we update the line item's quantity instead of adding a duplicate. In the browser everything works fine, but in unit testing something fails because of an empty loop in the code, which I wanted to use to update all prices. Why? Is that a unit test engine bug? During testing, LineItem.all and cart.line_items produce two DIFFERENT structures:

        #<LineItem id: 980190964, product_id: 1, cart_id: 999, created_at: "2014-06-01 00:21:28", updated_at: "2014-06-01 00:21:28", quantity: 2, price: #<BigDecimal:ba0fb544,'0.4E1',9(27)>>
        #<LineItem id: 980190964, product_id: 1, cart_id: 999, created_at: "2014-06-01 00:21:28", updated_at: "2014-06-01 00:21:28", quantity: 1, price: #<BigDecimal:ba0d1b04,'0.4E1',9(27)>>

    The cart.line_items one did not update its quantity. The code itself (it produces a LineItem which is then saved in line_item_controller, which calls this method):

        class Cart < ActiveRecord::Base
          has_many :line_items, dependent: :destroy

          def add_product(product_id)
            # LOOK: THIS LOOP BREAKS THE UNIT TEST, SRSLY, I MEAN IT
            line_items.each do |item|
            end

            current_item = line_items.find_by(product_id: product_id)
            fresh_price = Product.find_by(id: product_id).price
            if current_item
              current_item.quantity += 1
            else
              current_item = line_items.build(product_id: product_id, price: fresh_price)
            end
            return current_item
          end
          ...

    The unit test code:

        test "non-unique item added" do
          cart = Cart.new(:id => 999)
          line_item0 = cart.add_product(2)
          line_item0.save
          line_item1 = cart.add_product(1)
          line_item1.save
          assert_equal 2, cart.line_items.size # success

          line_item2 = cart.add_product(1)
          line_item2.save
          assert_equal 2, cart.line_items.size, "what?"
          assert cart.total_price > 15 # fail: prices are not high enough, quantity of product1 = 1
          # we get total price from quantity; it's a simple method in the model
        end

    And once again: it DOES WORK in the browser as it should, even with the loop. I feel so dumb right now...

  • Should mock objects for tests be created at a high or low level

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to the objects under test? Should they be created at a 'high level', intercepting calls as soon as possible, or at a 'low level', so that as much of the real code as possible still gets called? For example, I'm writing a test for some code that requires a NoteMapper object that allows Notes to be loaded from the DB.

        class NoteMapper
        {
            function getNote($sqlQueryFactory, $noteID)
            {
                // Create an SQL query from $sqlQueryFactory
                // Run that SQL
                // if null
                //     return null
                // else
                //     return new Note($dataFromSQLQuery)
            }
        }

    I could mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to SQL at all, e.g.

        class MockNoteMapper
        {
            function getNote($sqlQueryFactory, $noteID)
            {
                // $mockData = {'Test Note title', "Test note text"}
                // return new Note($mockData);
            }
        }

    Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the current NoteMapper object. It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and possibly allow more automation of tests, e.g. by recording data in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which is better, or should both be used where appropriate?

  • PHP/MySQL Performance Testing with Just PHP

    - by Mike Gifford
    I'm trying to diagnose a server where the website is loading very slowly, but unfortunately my client has only provided me with FTP access. With FTP access I can upload PHP scripts, but I can't set up any other server-side tools. I have access to phpMyAdmin, but not direct access to the MySQL server. It is also, unfortunately, a Windows server (and we've been a Linux shop for over a decade now). So, if I want to evaluate MySQL and disk-speed performance through PHP on a generic server, what is the best way to do this? There are already tools like https://github.com/raphaelm/php-benchmark and https://github.com/InfinitySoft/php-benchmark, but I'm surprised there isn't something that someone has already set up and configured to just run through and do some basic testing of a server's responsiveness. Every time we evaluate a new server environment, it's handy to be able to compare it to an existing one quickly to see if there are any anomalies. I guess I'd just hoped that someone else had written up a script to do this already. I know I have, but that was before GitHub, when there was no handy place to post scraps of code like this. Originally posted in http://stackoverflow.com/questions/12321498/php-mysql-performance-testing-with-just-php, but it was recommended that I re-post it here.
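
    A minimal sketch of the kind of self-contained probe that can be uploaded over plain FTP, assuming hypothetical connection details; it times a burst of trivial queries and a scratch-file write:

        <?php
        // Hypothetical credentials; fill in from the client's config.
        $db = new mysqli('localhost', 'user', 'password', 'database');

        // Time a burst of trivial queries (round-trip overhead).
        $start = microtime(true);
        for ($i = 0; $i < 200; $i++) {
            $db->query('SELECT 1');
        }
        $queryTime = microtime(true) - $start;

        // Time writing and deleting a 1 MB scratch file (disk speed).
        $start = microtime(true);
        file_put_contents('scratch.tmp', str_repeat('x', 1024 * 1024));
        unlink('scratch.tmp');
        $diskTime = microtime(true) - $start;

        printf("200 queries: %.3fs, 1 MB write: %.3fs\n", $queryTime, $diskTime);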

  • Debian Wheezy (testing) df reported volume size

    - by TheRoadrunner
    I am a bit confused about the /dev/sda* references since I installed Wheezy instead of Squeeze on a testing box. fdisk -l returns:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e9623

           Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *        2048  480278527  240138240  83  Linux
        /dev/sda2       480280574  488396799    4058113   5  Extended
        /dev/sda5       480280576  488396799    4058112  82  Linux swap / Solaris

    This seems correct. But df -h /dev/sda (and /dev/sda1, /dev/sda2 and /dev/sda5) returns:

        Filesystem      Size  Used Avail Use% Mounted on
        udev             10M     0   10M   0% /dev

    The same happens with every entry under /dev/disk/by-id and /dev/disk/by-path. Only one of the two entries under /dev/disk/by-uuid returns the correct volume size:

        df -h /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796
        Filesystem                                              Size  Used Avail Use% Mounted on
        /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796  229G   22G  196G  11% /

    Contents of /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        # / was on /dev/sda1 during installation
        UUID=cacdbad6-7e6b-4e80-84ba-e3c77ef48796 /              ext4         errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=45840d13-ee36-4e77-8e73-16cbdff25eb1 none           swap         sw                0 0
        /dev/sr0                                  /media/cdrom0  udf,iso9660  user,noauto       0 0
        /dev/fd0                                  /media/floppy0 auto         rw,user,noauto    0 0

    It seems every reference other than the UUID points to the swap partition. Is this because Wheezy is in testing, and should it be reported as an error?

  • Terminology: Difference between software interface, software component, software unit, software module

    - by JamieH
    I see these terms used quite a lot by various authors, but I can't seem to fix upon definitive definitions. From my POV, a software interface is a "type" specifying the way in which a software component may be used by other software components. But what exactly a software component is I'm not entirely sure (and it seems no one else is either). The same goes for software unit and software module, although I suspect that a software unit is a smaller, ahem, unit than a component, and that a software module has something to do with packaging. I hope this is not deemed (and downvoted as) frivolous, as I have serious intent in asking.

  • Creating a Compilation Unit with type bindings

    - by hizki
    I am working with the AST API in Java, and I am trying to create a CompilationUnit with type bindings. I wrote the following code:

        private static CompilationUnit parse(ICompilationUnit unit) {
            ASTParser parser = ASTParser.newParser(AST.JLS3);
            parser.setKind(ASTParser.K_COMPILATION_UNIT);
            parser.setSource(unit);
            parser.setResolveBindings(true);
            CompilationUnit compiUnit = (CompilationUnit) parser.createAST(null);
            return compiUnit;
        }

    Unfortunately, when I run this code in debug mode and inspect compiUnit, I find that compiUnit.resolver.isRecoveringBindings is false. Can anyone think of a reason why it wouldn't be true, as I specified it to be? Thank you.

  • Validate cyclic organization unit

    - by abmv
    I have an OrganizationUnit object with a self-reference to it in the same class:

        public class OrganizationUnit : IOrganizationUnit
        {
            private string fName;
            public string Name
            {
                get { return fName; }
                set { SetPropertyValue("Name", ref fName, (string) value); }
            }

            private OrganizationUnit fManagedBy;
            public IOrganizationUnit ManagedBy
            {
                get { return fManagedBy; }
                set { SetPropertyValue("ManagedBy", ref fManagedBy, (OrganizationUnit) value); }
            }
        }

    I need a method that will throw an exception if it finds a child organization unit at the third level referencing a parent organization unit, that is, a cyclic parent organization: A is main, B managed by A, C
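
    A minimal sketch of one way to detect such a cycle, assuming IOrganizationUnit exposes ManagedBy and Name as the class above suggests:

        using System;
        using System.Collections.Generic;

        public static class OrganizationUnitValidator
        {
            // Walks up the ManagedBy chain; throws if any unit repeats.
            public static void AssertNoCycle(IOrganizationUnit unit)
            {
                var seen = new HashSet<IOrganizationUnit>();
                for (IOrganizationUnit current = unit; current != null; current = current.ManagedBy)
                {
                    if (!seen.Add(current))
                        throw new InvalidOperationException(
                            "Cyclic ManagedBy chain detected at: " + current.Name);
                }
            }
        }

    Because the walk follows only parent links, this catches a cycle at any depth, not just the third level.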

  • Unit Test output in MSBuild/TFS 2008

    - by Adam Jenkin
    I have a build in TFS 2008 which includes running a unit test project. I have configured my build so that after each build the drop folder contains a StyleCop.log and an FxCop.log, and I would like to place the .trx or other output from the unit tests there as well. I can see that my unit tests run as part of the build; however, I currently cannot find where their output is saved, nor a way of directing the output to my drop location ($(DropLocation)\$(BuildNumber)\MyUnitTests.txt). My unit tests are included using the following:

        <RunTest>true</RunTest>
        ...
        <ItemGroup>
          <TestContainer Include="$(OutDir)\%2aMyUnitTests.dll" />
        </ItemGroup>

    Can somebody explain how I can achieve this?

  • Should tests be self-written in TDD?

    - by martin
    We run a project which we want to develop with test-driven development. Some questions came up when initiating the project. One was: who should write the unit test for a feature? Should the unit test be written by the feature-implementing programmer? Or should it be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass? If I understand the concept of TDD correctly, the feature-implementing programmer has to write the test himself, because TDD is a procedure of mini-iterations, so it would be too cumbersome to have the tests written by another programmer. What would you say: should the tests in TDD be written by the programmer himself, or should another programmer write the tests that describe what a method can do?

  • Should methods containing LINQ expressions be tested / mocked?

    - by Phil.Wheeler
    Assuming I have a class with a method that takes a System.Linq.Expressions.Expression as a parameter, how much value is there in unit testing it?

        public IEnumerable<T> Find(Expression<Func<T, bool>> expression)
        {
            return someCollection.Where(expression).ToList();
        }

    Unit testing or mocking these sorts of methods has been a mind-frying experience for me, and I'm now at the point where I have to wonder whether it's all just not worth it. How would I unit test this method using some arbitrary expression like:

        var cats = myAnimalRepository.Find(x => x.Species == "Cat");
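
    For what it's worth, methods like this can be exercised without any mocking by backing the repository with an in-memory IQueryable. A minimal NUnit sketch, assuming a hypothetical Animal class and a repository shaped like the code above:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;
        using NUnit.Framework;

        public class Animal
        {
            public string Species { get; set; }
        }

        public class AnimalRepository
        {
            private readonly IQueryable<Animal> someCollection;

            public AnimalRepository(IEnumerable<Animal> animals)
            {
                // AsQueryable() lets the same expression run against a plain list.
                someCollection = animals.AsQueryable();
            }

            public IEnumerable<Animal> Find(Expression<Func<Animal, bool>> expression)
            {
                return someCollection.Where(expression).ToList();
            }
        }

        [TestFixture]
        public class AnimalRepositoryTests
        {
            [Test]
            public void FindFiltersBySpecies()
            {
                var repo = new AnimalRepository(new[]
                {
                    new Animal { Species = "Cat" },
                    new Animal { Species = "Dog" }
                });

                var cats = repo.Find(x => x.Species == "Cat").ToList();

                Assert.AreEqual(1, cats.Count);
            }
        }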

  • Problem using System.Xml in unit test in MonoDevelop (MonoTouch)

    - by hambonious
    I'm new to the MonoDevelop and MonoTouch environment, so hopefully I'm just missing something easy here. When I have a unit test that requires the System.Xml or System.Xml.Linq namespaces, I get the following error when I run the test:

        System.IO.FileNotFoundException : Could not load file or assembly 'System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies.

    Things I've verified:
    - I have the proper usings in the test.
    - The project builds with no problems.
    - Using these namespaces works fine when I run the app in the emulator.
    - I've written a very simple unit test to prove that unit testing works at all (and it does).

    I'm a test-driven kind of guy, so I can't wait to get this working so I can progress with my app. Thanks in advance.

  • Get list of named queries in NHibernate

    - by Dan
    I have a dozen or so named queries in my NHibernate project, and I want to execute them against a test database in unit tests to make sure the syntax still matches the changing domain/database model. Currently I have a unit test for each named query where I get and execute the query, for example:

        IQuery query = session.GetNamedQuery("GetPersonSummaries");
        var personSummaryArray = query.List();
        Assert.That(personSummaryArray, Is.Not.Null);

    This works fine, but I would like to have one unit test that loops through all of the named queries and executes them. Is there a way to discover all of the available named queries? Thanks, Dan
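
    If the NHibernate Configuration used to build the session factory is available to the tests, its NamedQueries dictionary looks like one way to enumerate them. A minimal sketch, assuming a cfg field kept from test setup (worth verifying the property against your NHibernate version):

        [Test]
        public void AllNamedQueriesExecute()
        {
            // cfg is the NHibernate.Cfg.Configuration used to build the session factory.
            foreach (string queryName in cfg.NamedQueries.Keys)
            {
                var results = session.GetNamedQuery(queryName).List();
                Assert.That(results, Is.Not.Null, queryName);
            }
        }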

  • How to simulate a mouse click in Cocoa for the iPhone?

    - by eagle
    I'm trying to set up automated unit tests for an iPhone application. I'm using a UIWebBrowser and need to simulate clicks on different links. I've tried doing this with JavaScript, but it doesn't produce the same results as when I manually click the links. The main problem is with links that have their target property set. I believe the only way for this automated unit test to work correctly is to simulate a click at a specific x/y coordinate (i.e. where the link is located). Since the unit testing will only be used internally, private API calls are fine. It seems like this should be possible, since the iPhone app iSimulate seems to do something similar. Is there any way to do this in the framework?

  • MongoMapper - unit testing with Shoulda on Rails 2.3.5

    - by egarcia
    I'm trying to implement Shoulda unit tests in a Rails 2.3.5 app using MongoMapper. So far I've:
    - Configured a Rails app that uses MongoMapper (the app works)
    - Added shoulda to my gems and installed it with rake gems:install
    - Added config.frameworks -= [ :active_record, :active_resource ] to config/environment.rb so ActiveRecord isn't used

    My models look like this:

        class Account
          include MongoMapper::Document
          key :name, String, :required => true
          key :description, String
          key :company_id, ObjectId
          key :_type, String
          belongs_to :company
          many :operations
        end

    My test for that model is this one:

        class AccountTest < Test::Unit::TestCase
          should_belong_to :company
          should_have_many :operations
          should_validate_presence_of :name
        end

    It fails on the first should_belong_to:

        ./test/unit/account_test.rb:3: undefined method `should_belong_to' for AccountTest:Class (NoMethodError)

    Any ideas why this doesn't work? Should I try something different from Shoulda? I must point out that this is the first time I've tried to use Shoulda, and I'm pretty new to testing itself.

  • Persistence unit is not persistent

    - by etam
    I need a persistence unit that creates an embedded database which stays persistent after the EntityManager is closed. This is my PU:

        <persistence-unit name="hello-jpa" transaction-type="RESOURCE_LOCAL">
          <class>hello.jpa.User</class>
          <properties>
            <property name="hibernate.show_sql" value="true"/>
            <property name="hibernate.format_sql" value="true"/>
            <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/>
            <property name="hibernate.connection.driver_class" value="org.hsqldb.jdbcDriver"/>
            <property name="hibernate.connection.username" value="sa"/>
            <property name="hibernate.connection.password" value=""/>
            <property name="hibernate.connection.url" value="jdbc:hsqldb:target/hsql.db"/>
            <property name="hibernate.hbm2ddl.auto" value="update"/>
          </properties>
        </persistence-unit>

    And it deletes the data after the application is closed.

  • Cannot run unit tests for an application developed with Compact Framework for Windows CE 6.0 platform

    - by Thomasek
    I'm developing a solution for Windows CE 6.0 using the GuD_AtomKit X86 device emulator. I'm not able to run any unit tests, because I get the following error message:

        The test adapter ('Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestAdapter, Microsoft.VisualStudio.QualityTools.Tips.UnitTest.Adapter, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') required to execute this test could not be loaded. Check that the test adapter is installed properly. Exception of type 'Microsoft.VisualStudio.SmartDevice.TestHostAdapter.DeviceAgent.TestAlreadyRunningException' was thrown.

    But there is no unit test running on the device. I would really appreciate your help.
