Search Results

Search found 4765 results on 191 pages for 'gh unit'.


  • In Ruby, why is a method invocation not able to be treated as a unit when "do" and "end" are used?

    - by Jian Lin
    The following question is related to http://stackoverflow.com/questions/2127836/ruby-print-inject-do-syntax The question is, can we insist on using DO and END and make it work with puts or p? This works: a = [1,2,3,4] b = a.inject do |sum, x| sum + x end puts b # prints out 10 so, is it correct to say, inject is a class method of the Array class, which takes a block of code, and then returns a number. If so, then it should be no different from calling a function and getting back a return value: b = foo(3) puts b or b = circle.getRadius() puts b In the above two cases, we can directly say puts foo(3) puts circle.getRadius() so, there is no way to make it work directly by using the following 2 ways: a = [1,2,3,4] puts a.inject do |sum, x| sum + x end but it gives ch01q2.rb:7:in `inject': no block given (LocalJumpError) from ch01q2.rb:4:in `each' from ch01q2.rb:4:in `inject' from ch01q2.rb:4 grouping the method call using ( ) doesn't work either: a = [1,2,3,4] puts (a.inject do |sum, x| sum + x end) and this gives: ch01q3.rb:4: syntax error, unexpected kDO_BLOCK, expecting ')' puts (a.inject do |sum, x| ^ ch01q3.rb:4: syntax error, unexpected '|', expecting '=' puts (a.inject do |sum, x| ^ ch01q3.rb:6: syntax error, unexpected kEND, expecting $end end) ^ finally, the following version works: a = [1,2,3,4] puts a.inject { |sum, x| sum + x } but why doesn't the grouping of the method invocation using ( ) work? What if a programmer insists that he uses do and end, can it be made to work directly with p or puts, without an extra temporary variable?
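
    One reading of the errors above: do/end binds more loosely than { }, so in puts a.inject do ... end the block attaches to puts rather than inject, which is why inject reports "no block given". A hedged workaround, assuming Ruby 1.9 or later (1.8 parses the spaced puts (...) form differently, as the syntax errors above show), is to attach the argument list directly to puts with no space before the parenthesis:

        a = [1, 2, 3, 4]

        # Inside the argument parentheses the do/end block stays with inject,
        # so puts receives the reduced value (10) rather than a block.
        puts(a.inject do |sum, x|
          sum + x
        end)

    If a given interpreter still rejects this form, the braces version or a temporary variable remains the reliable route.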

    Read the article

  • How do I write an RSpec test to unit-test this interesting metaprogramming code?

    - by Kyle Kaitan
    Here's some simple code that, for each argument specified, will add specific get/set methods named after that argument. If you write attr_option :foo, :bar, then you will see #foo/foo= and #bar/bar= instance methods on Config: module Configurator class Config def initialize() @options = {} end def self.attr_option(*args) args.each do |a| if not self.method_defined?(a) define_method "#{a}" do @options[:"#{a}"] ||= {} end define_method "#{a}=" do |v| @options[:"#{a}"] = v end else throw Exception.new("already have attr_option for #{a}") end end end end end So far, so good. I want to write some RSpec tests to verify this code is actually doing what it's supposed to. But there's a problem! If I invoke attr_option :foo in one of the test methods, that method is now forever defined in Config. So a subsequent test will fail when it shouldn't, because foo is already defined: it "should support a specified option" do c = Configurator::Config c.attr_option :foo # ... end it "should support multiple options" do c = Configurator::Config c.attr_option :foo, :bar, :baz # Error! :foo already defined # by a previous test. # ... end Is there a way I can give each test an anonymous "clone" of the Config class which is independent of the others?
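
    One hedged sketch of the isolation the question asks for: Class.new(Configurator::Config) builds a fresh anonymous subclass per example, and because attr_option defines methods on whatever class receives the call, nothing leaks into the shared Config class between tests. The expectations below are illustrative RSpec of that era:

        it "should support a specified option" do
          config_class = Class.new(Configurator::Config)   # throwaway anonymous subclass
          config_class.attr_option :foo
          config = config_class.new
          config.foo = 42
          config.foo.should == 42
        end

        it "should support multiple options" do
          config_class = Class.new(Configurator::Config)   # fresh subclass, :foo not yet defined here
          config_class.attr_option :foo, :bar, :baz        # no "already have attr_option" error
        end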

    Read the article

  • How can I determine which dependency would cause a C++ compilation unit to be rebuilt?

    - by Seb Rose
    I have a legacy C++ application with a deep graph of #includes. Changes to any header file often cause recompiles of seemingly unrelated source files. The application is built using a Visual Studio 2005 solution (sln) file. Can MSBuild be invoked in a way that reports which dependency (or dependencies) is causing a source file to be recompiled? Is there any other tool that might be able to help? NOTE: I'm only looking for a tool to tell me why a file would be rebuilt, not some retrospective magic telling me why it was rebuilt.

    Read the article

  • In Ruby, why is a method invocation not able to be treated as a unit when "do" and "end" are used?

    - by Jian Lin
    The following question is related to the question "Ruby Print Inject Do Syntax". My question is, can we insist on using do and end and make it work with puts or p? This works: a = [1,2,3,4] b = a.inject do |sum, x| sum + x end puts b # prints out 10 so, is it correct to say, inject is an instance method of the Array object, and this instance method takes a block of code, and then returns a number. If so, then it should be no different from calling a function or method and getting back a return value: b = foo(3) puts b or b = circle.getRadius() puts b In the above two cases, we can directly say puts foo(3) puts circle.getRadius() so, there is no way to make it work directly by using the following 2 ways: a = [1,2,3,4] puts a.inject do |sum, x| sum + x end but it gives ch01q2.rb:7:in `inject': no block given (LocalJumpError) from ch01q2.rb:4:in `each' from ch01q2.rb:4:in `inject' from ch01q2.rb:4 grouping the method call using ( ) doesn't work either: a = [1,2,3,4] puts (a.inject do |sum, x| sum + x end) and this gives: ch01q3.rb:4: syntax error, unexpected kDO_BLOCK, expecting ')' puts (a.inject do |sum, x| ^ ch01q3.rb:4: syntax error, unexpected '|', expecting '=' puts (a.inject do |sum, x| ^ ch01q3.rb:6: syntax error, unexpected kEND, expecting $end end) ^ finally, the following version works: a = [1,2,3,4] puts a.inject { |sum, x| sum + x } but why doesn't the grouping of the method invocation using ( ) work in the earlier example? What if a programmer insist that he uses do and end, can it be made to work?
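
    Where the only goal is to feed the reduced value straight to puts, one short alternative (Ruby 1.8.7 and later) side-steps the block-binding question entirely by passing the operator as a symbol, so there is no block for do/end to mis-bind:

        a = [1, 2, 3, 4]
        puts a.inject(:+)   # => 10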

    Read the article

  • To use packages properly, how should I arrange directories, file names, and unit test files?

    - by Stephen Hsu
    My source files tree is like this: /src /pkg /foo foo.go foo_test.go Inside foo.go: package foo func bar(n int) { ... } Inside foo_test.go: package foo func testBar(t *testing.T) { bar(10) ... } My questions are: Does the package name relate to the directory name or the source file name? If there is only one source file for a package, do I need to put it in a directory? Should I put foo.go and foo_test.go in the same package? In foo_test.go, as it's in the same package as foo.go, I didn't import foo. But when I compile foo_test.go with 6g, it says bar() is undefined. What should I do?
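
    A hedged sketch of the conventional layout with the current go tool (the question predates it, so the commands are illustrative): the package name matches its directory, foo.go and foo_test.go live side by side in that directory and share the package, and test functions must be named with a leading capital Test to be picked up. Both files are compiled together by the test runner, which is why bar is visible without any import; bar's body below is made up for the example:

        // foo/foo.go
        package foo

        func bar(n int) int { return n * 2 }

        // foo/foo_test.go  (same directory, same package)
        package foo

        import "testing"

        func TestBar(t *testing.T) { // must start with "Test", capital T
            if got := bar(10); got != 20 {
                t.Errorf("bar(10) = %d, want 20", got)
            }
        }

    With the modern toolchain this runs via "go test ./foo"; in the 6g era the equivalent was gotest, which likewise compiled both files of the package together.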

    Read the article

  • StructureMap: How can I unit test the registry class?

    - by Marius
    I have a registry class like this: public class StructureMapRegistry : Registry { public StructureMapRegistry() { For<IDateTimeProvider>().Singleton().Use<DateTimeProviderReturningDateTimeNow>(); } I want to test that the configuration is according to my intent, so I start writing a test: public class WhenConfiguringIOCContainer : Scenario { private TfsTimeMachine.Domain.StructureMapRegistry registry; private Container container; protected override void Given() { registry = new TfsTimeMachine.Domain.StructureMapRegistry(); container = new Container(); } protected override void When() { container.Configure(i => i.AddRegistry(registry)); } [Then] public void DateTimeProviderIsRegisteredAsSingleton() { // I want to say "verify that the container contains the expected type and that the expected type // is registered as a singleton } } How can I verify that the registry is configured according to my expectations? Note: I introduced the container because I didn't see any sort of verification methods available on the Registry class. Ideally, I want to test the registry class directly.
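
    One hedged way to fill in the [Then] step without depending on version-specific registry inspection APIs: resolve the interface twice and assert both the concrete type and that the two resolutions are the same object, which is exactly what the Singleton lifecycle promises. Type names follow the question; the assertion calls are NUnit-style and illustrative:

        [Then]
        public void DateTimeProviderIsRegisteredAsSingleton()
        {
            var first = container.GetInstance<IDateTimeProvider>();
            var second = container.GetInstance<IDateTimeProvider>();

            // the registration resolves to the expected concrete type...
            Assert.IsInstanceOf<DateTimeProviderReturningDateTimeNow>(first);

            // ...and Singleton means every resolution returns the same instance
            Assert.AreSame(first, second);
        }

    Testing the Registry class completely on its own is harder, since a registration only becomes observable once a container consumes it, which is presumably why the container was introduced in the first place.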

    Read the article

  • Need data on disk drive management by OS: getting base I/O unit size, “sync” option, Direct Memory Access

    - by Richard T
    Hello All, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of (any others?) to be concerned about are: I/O size: the database engine and disk's native size should either match, or the database's native I/O size should be a multiple of the disk's native I/O size. Disks that are capable of Direct Memory Access (eg. IDE) should be configured for it. When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it. I have been looking for information on how to ensure these are so for CENTOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed. Any and all input appreciated.
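
    A few hedged starting points for checking those three items on CentOS or Ubuntu; exact flags depend on the kernel, controller and drive type, so treat them as illustrative rather than a definitive checklist:

        # logical block size the kernel reports for the device
        sudo blockdev --getbsz /dev/sda

        # DMA status for IDE/PATA drives (modern SATA/libata uses DMA by default)
        sudo hdparm -d /dev/sda

        # query the drive's volatile write cache (hdparm -W0 /dev/sda turns it off);
        # this cache is what can claim data is persistent before it really is
        sudo hdparm -W /dev/sda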

    Read the article

  • Is an Object the smallest pageable unit in the Heap?

    - by DonnieKun
    Hi, If I have 2 GB of RAM and I have 2 instances of an Object that is 1.5 GB each, the operating system will help and page them to and from the hard disk. What if I have 1 instance that is 3 GB? Can the same paging mechanism break this instance down into pages? Or will I encounter an out-of-memory issue? Thanks.

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/  When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate) and the latter as a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold. Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and Organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, people to the systems and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs. The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much cost the air conditioning is because you have a team of system administrators? 
    This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system is in fact a cost to the business. There are three primary methods that I’ve been successful with in estimating the cost. None are perfect, all are demand-driven. The general process is to lay out a matrix of components, units, and cost per unit, and then multiply that times the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

    Simple Estimation
    The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/ The other is a tool you download to create a “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115 You can also just create a spreadsheet yourself with a structure like this: Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type.

    Measure and Project
    A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK) which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually smaller, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.
    Sample and Estimate
    Using standard statistical and other predictive math, once the application is deployed you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis, and using methods like regression and so on project out into the future what the costs will be. I normally advise that the architect also extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample is key (in this case I recommend using the entire population, not a smaller sample).

    References and Tools
    Articles:
    http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
    http://technet.microsoft.com/en-us/magazine/gg213848.aspx
    http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
    http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
    http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx
    Other Tools:
    http://cloud-assessment.com/
    http://communities.quest.com/community/cloud_tools
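
    To make the matrix at the heart of all three methods concrete, here is a minimal sketch of the multiplication it describes. The component names and per-unit prices are placeholders only, not current rates; the definitive prices are at the offers link at the top of this post:

        using System;

        // (component, unit of measure, cost per unit, estimated units per month) - all hypothetical
        var components = new[]
        {
            ("Compute instance",     "instance-hour",  0.12m, 2 * 730m),
            ("Storage",              "GB-month",       0.15m, 50m),
            ("Storage transactions", "10k operations", 0.01m, 200m),
            ("Bandwidth (egress)",   "GB",             0.15m, 100m),
        };

        decimal total = 0m;
        foreach (var (name, unit, costPerUnit, estimatedUse) in components)
        {
            decimal lineCost = costPerUnit * estimatedUse;   // Estimated Use x Cost Per Unit
            total += lineCost;
            Console.WriteLine($"{name}: {lineCost:C}");
        }
        Console.WriteLine($"Estimated monthly total: {total:C}");

    The same rows drop directly into the spreadsheet layout shown under Simple Estimation, with Cumulative Cost as a running total of the last column.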

    Read the article

  • ExecutionException and InterruptedException while using Future class's get() method

    - by java_geek
    ExecutorService executor = Executors.newSingleThreadExecutor(); try { Task t = new Task(response,inputToPass,pTypes,unit.getInstance(),methodName,unit.getUnitKey()); Future<SCCallOutResponse> fut = executor.submit(t); response = fut.get(unit.getTimeOut(),TimeUnit.MILLISECONDS); } catch (TimeoutException e) { // if the task is still running, a TimeOutException will occur while fut.get() cat.error("Unit " + unit.getUnitKey() + " Timed Out"); response.setVote(SCCallOutConsts.TIMEOUT); } catch (InterruptedException e) { cat.error(e); } catch (ExecutionException e) { cat.error(e); } finally { executor.shutdown(); } } How should I handle the InterruptedException and ExecutionException in the code? And in what cases are these exceptions thrown?
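
    A hedged sketch of one common way to fill in those catch blocks, keeping the question's own variable names and meant to slot into the try above: InterruptedException means the waiting thread was interrupted (restore the interrupt flag so callers can see it), ExecutionException wraps whatever the task itself threw (the useful detail is getCause()), and after a timeout the still-running task can be cancelled:

        } catch (TimeoutException e) {
            cat.error("Unit " + unit.getUnitKey() + " Timed Out");
            response.setVote(SCCallOutConsts.TIMEOUT);
            fut.cancel(true);                       // interrupt the task that overran its budget
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();     // restore the flag for code further up the stack
            cat.error("Interrupted while waiting for unit " + unit.getUnitKey(), e);
        } catch (ExecutionException e) {
            cat.error("Unit " + unit.getUnitKey() + " failed", e.getCause());
        } finally {
            executor.shutdown();
        }

    InterruptedException is thrown when the thread blocked in fut.get() is interrupted; ExecutionException is thrown when the task's call() method itself ended with an exception.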

    Read the article

  • app engine's back referencing is too slow. How can I make it faster?

    - by Ray Yun
    Google App Engine has a smart feature named back references, and I usually iterate over them where a traditional SQL computed column would be used. Just imagine needing to accumulate a specific force's total hp. class Force(db.Model): hp = db.IntegerProperty() class UnitGroup(db.Model): force = db.ReferenceProperty(reference_class=Force,collection_name="groups") hp = db.IntegerProperty() class Unit(db.Model): group = db.ReferenceProperty(reference_class=UnitGroup,collection_name="units") hp = db.IntegerProperty() When I code like the following, it was horribly slow (almost 3s) with 20 forces, each with a single group and a single unit. (I guess back-referencing forces a reload of the sub-entities. Am I right?) def get_hp(self): hp = 0 for group in self.groups: group_hp = 0 for unit in group.units: group_hp += unit.hp hp += group_hp return hp How can I optimize this code? Please consider that there are more properties that should be computed for each force/unit-group and I don't want to save these collective properties to each entity. :)
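
    The timing is consistent with the classic N+1 pattern: every access to a back-reference collection (self.groups, group.units) runs its own datastore query, so 20 forces with one group and one unit each already means dozens of round trips. A hedged sketch of collapsing this to two queries per force with the old db API (the IN operator fans out to at most 30 subqueries, so very large forces would need batching):

        def get_hp(self):
            # one query for this force's groups...
            groups = self.groups.fetch(1000)
            if not groups:
                return 0
            # ...and one fanned-out query for all units of those groups
            units = Unit.all().filter(
                'group IN', [g.key() for g in groups]).fetch(1000)
            return sum(unit.hp or 0 for unit in units)

    If even that is too slow, the usual next step on App Engine is to denormalize after all, keeping running totals up to date on the write path rather than recomputing them on every read.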

    Read the article

  • SPARC T4

    - by user13138700
    (Japanese-language post; most of the original prose is garbled in this excerpt, so only the technical details that survive are summarized below.)

    The SPARC T4 CPU was announced in September 2011, and SPARC T4 servers followed in October 2011. Sessions on SPARC T4 and Oracle Solaris were scheduled for Oracle OpenWorld Tokyo 2012, held April 4-6, 2012. Event URL: http://www.oracle.com/openworld/jp-ja/index.html Exhibit information: http://www.oracle.com/openworld/jp-ja/exhibit/index.html

    The key change in SPARC T4 is the new S3 core, which replaces the S2 core used in SPARC T3. The T1/T2/T3 generations were built for thread throughput; T4 keeps that orientation while adding much stronger single-thread performance, so the same chip suits both web front-end workloads and database/batch processing. The S3 core adds out-of-order execution and a per-core crypto (encryption) accelerator, and the servers are offered in 1-, 2- and 4-socket configurations.

    T4 processor overview:
    8x SPARC S3 cores (64 threads per processor)
    4MB shared L3 cache (8 banks, 16-way)
    8x9 crossbar
    4x DDR3 memory interfaces @6.4Gbps
    6x coherence links @9.6Gbps
    2x8 PCIe 2.0 (5GT/s)
    2x 10Gb XAUI network interfaces

    S3 core pipeline units:
    ALU: Arithmetic Logic Unit
    BRU: Branch Logic Unit
    FGU: Floating-point Graphics Unit
    IRF: Integer Register File
    FRF: Floating-point Register File
    WRF: Working Register File
    MMU: Memory Management Unit
    LSU: Load Store Unit
    Crypto (SPU): Streaming Processing Unit
    TRU: Trap Logic Unit

    S3 core characteristics:
    8 threads per core
    out-of-order execution
    16-stage integer pipeline
    64-entry ITLB and 128-entry DTLB
    64KB 4-way L1 caches
    128KB 8-way private L2 cache

    Compared with the T3 core, the T4 pipeline picks and commits instructions out of order where T3 was strictly in-order, and threads are scheduled dynamically according to which are active rather than statically.

    Read the article

  • Display part of an XML file while parsing it

    - by Andy M
    Hey, Consider the following XML file : <cookbook> <recipe xml:id="MushroomSoup"> <title>Quick and Easy Mushroom Soup</title> <ingredient name="Fresh mushrooms" quantity="7" unit="pieces"/> <ingredient name="Garlic" quantity="1" unit="cloves"/> </recipe> <recipe xml:id="AnotherRecipe"> <title>XXXXXXX</title> <ingredient name="Tomatoes" quantity="8" unit="pieces"/> <ingredient name="PineApples" quantity="2" unit="cloves"/> </recipe> </cookbook> Let's say I want to parse this file and gather each recipe as XML, each one as a separated QString. For example, I would like to have a QString that contains : <recipe xml:id="MushroomSoup"> <title>Quick and Easy Mushroom Soup</title> <ingredient name="Fresh mushrooms" quantity="7" unit="pieces"/> <ingredient name="Garlic" quantity="1" unit="cloves"/> </recipe> How could I do this ? Do you guys know a quick and clean method to perform this ? Thanks in advance for your help !
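
    One hedged approach with Qt's DOM classes: load the document, collect the recipe elements, and serialize each node back to text with QDomNode::save(). The function name, file handling and indentation are illustrative:

        #include <QDomDocument>
        #include <QFile>
        #include <QStringList>
        #include <QTextStream>

        QStringList recipesAsStrings(const QString &path)
        {
            QStringList result;
            QFile file(path);
            if (!file.open(QIODevice::ReadOnly))
                return result;

            QDomDocument doc;
            if (!doc.setContent(&file))
                return result;

            QDomNodeList recipes = doc.elementsByTagName("recipe");
            for (int i = 0; i < recipes.count(); ++i) {
                QString xml;
                QTextStream stream(&xml);
                recipes.at(i).save(stream, 4);   // serialize this <recipe> subtree
                result << xml;                   // one QString per recipe
            }
            return result;
        }

    If the file is too large to hold in a DOM, the streaming QXmlStreamReader route also works, but then each recipe fragment has to be re-assembled by hand while reading.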

    Read the article

  • intelligent path truncation/ellipsis for display

    - by peterchen
    I am looking for an existing path truncation algorithm (similar to what the Win32 static control does with SS_PATHELLIPSIS) for a set of paths that should focus on the distinct elements. For example, if my paths are like this:
    Unit with X/Test 3V/
    Unit with X/Test 4V/
    Unit with X/Test 5V/
    Unit without X/Test 3V/
    Unit without X/Test 6V/
    Unit without X/2nd Test 6V/
    When not enough display space is available, they should be truncated to something like this:
    ...with X/...3V/
    ...with X/...4V/
    ...with X/...5V/
    ...without X/...3V/
    ...without X/...6V/
    ...without X/2nd ...6V/
    (Assuming that an ellipsis generally is shorter than three letters.) This is just an example of a rather simple, ideal case (e.g. they'd all end up at different lengths now, and I wouldn't know how to create a good suggestion when a path "Thingie/Long Test/" is added to the pool). There is no given structure of the path elements; they are assigned by the user, but often items will have similar segments. It should work for proportional fonts, so the algorithm should take a measure function (and not call it too heavily) or generate a suggestion list. Data-wise, a typical use case would contain 2..4 path segments and 20 elements per segment. I am looking for previous attempts in that direction, and whether that's solvable with a sensible amount of code or dependencies.
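
    A hedged sketch of a simple first cut (Python only for illustration): split every path into segments and, within each segment column, elide a word-level prefix shared by all paths. It reproduces the easy rows of the example above but deliberately ignores the proportional-font measure function and the harder per-group cases the question mentions:

        def common_word_prefix(segments):
            """Longest run of leading words shared by every segment."""
            words = [s.split() for s in segments]
            prefix = []
            for column in zip(*words):
                if len(set(column)) != 1:
                    break
                prefix.append(column[0])
            return " ".join(prefix)

        def truncate_paths(paths, ellipsis="..."):
            split = [p.strip("/").split("/") for p in paths]
            for col in range(max(len(s) for s in split)):
                present = [s[col] for s in split if col < len(s)]
                prefix = common_word_prefix(present)
                if len(prefix) > len(ellipsis):          # only elide if it saves space
                    for s in split:
                        if col < len(s) and s[col].startswith(prefix):
                            s[col] = ellipsis + s[col][len(prefix):].lstrip()
            return ["/".join(s) + "/" for s in split]

        print(truncate_paths(["Unit with X/Test 3V/", "Unit without X/Test 6V/"]))
        # ['...with X/...3V/', '...without X/...6V/']

    A fuller version would elide per sub-group rather than only globally (so "2nd Test 6V" still loses its "Test") and would re-check the measured width after each pass until everything fits.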

    Read the article

  • nhibernate - mapping with constraints

    - by Tobias Müller
    Hello everybody, I am having a problem with my NHibernate mapping and I can't find a solution by searching on stackoverflow/google/documentation. The database I am using has (amongst others) two tables. One is unit, with the following fields: id, enduring_id, starts, ends, damage_enduring_id, [...] The other one is damage, which has the following fields: id, enduring_id, starts, ends, [...] The units are assigned to a damage and one damage can have zero, one or more units working on it. Every time a unit moves to another damage, the dataset is copied. The field "ends" of the old record and "starts" of the new record are set to the current time stamp; enduring_id stays the same. So if I want to know which units were working on a damage at a certain time, I do the following select: select * from unit join damage on damage.enduring_id = unit.damage_enduring_id where unit.starts <= 'time' and unit.ends >= 'time' (This is not an actual query from the database, I made it up to make clear what I mean. The real database is a little more complex.) Now I want to map it in such a way that I can load all the damages which are valid at one time (starts <= wanted time <= ends) and that each of them has a Bag with all the attached units at that time (again starts <= wanted time <= ends). Is this possible within the mapping? Sorry if this is a stupid question, but I am pretty new to NHibernate and I have no clue how to do it. Thanks a lot for reading my post! Bye, Tobias
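
    One hedged direction, assuming classic hbm.xml mappings: join the collection through the enduring id using a property-ref key, and use an NHibernate filter to apply the "valid at" window to both the damage class and its units bag. The class and property names below are invented to match the question's tables, so treat this as a sketch rather than a tested mapping:

        <!-- at the top level of the mapping file -->
        <filter-def name="validAt">
          <filter-param name="asOf" type="DateTime" />
        </filter-def>

        <class name="Damage" table="damage">
          <id name="Id" column="id" />
          <property name="EnduringId" column="enduring_id" />
          <property name="Starts" column="starts" />
          <property name="Ends" column="ends" />

          <bag name="Units">
            <!-- join on the enduring id rather than the primary key -->
            <key column="damage_enduring_id" property-ref="EnduringId" />
            <one-to-many class="Unit" />
            <filter name="validAt" condition=":asOf between starts and ends" />
          </bag>

          <filter name="validAt" condition=":asOf between starts and ends" />
        </class>

    At query time the window would then be switched on per session with something like session.EnableFilter("validAt").SetParameter("asOf", wantedTime), so the same mapping can also be used unfiltered.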

    Read the article

  • Deleting a resource in a Cucumber (Capybara) step doesn't work

    - by Josiah Kiehl
    Here is my Scenario: Scenario: Delete a match Given pojo is logged in And there is a match with the following: | game_id | 1 | | name | Game del Pojo | | date_and_time | 2010-02-23 17:52:00 | | players | 2 | | teams | 2 | | comment | This is an awesome comment | | user_id | 1 | And I am on the show match 1 page And show me the page When I follow "Delete" And I follow "Yes, delete it" Then there should not be a match with the following: | game_id | 1 | | name | Game del Pojo | | date_and_time | 2010-02-23 17:52:00 | | players | 2 | | teams | 2 | | comment | This is an awesome comment | | user_id | 1 | If I walk through these steps manually, they work. When I click the confirmation: Yes, delete it, then the match is deleted. Cucumber, however, fails to delete the record and the last step fails. And I follow "Yes, delete it" # features/step_definitions/web_steps.rb:32 Then there should not be a match with the following: # features/step_definitions/match_steps.rb:8 | game_id | 1 | | name | Game del Pojo | | date_and_time | 2010-02-23 17:52:00 | | players | 2 | | teams | 2 | | comment | This is an awesome comment | | user_id | 1 | <nil> expected but was <#<Match id: 1, name: "Game del Pojo", date_and_time: "2010-02-23 17:52:00", teams: 2, created_at: "2010-03-02 23:06:33", updated_at: "2010-03-02 23:06:33", comment: "This is an awesome comment", players: 2, game_id: 1, user_id: 1>>. (Test::Unit::AssertionFailedError) /usr/lib/ruby/1.8/test/unit/assertions.rb:48:in `assert_block' /usr/lib/ruby/1.8/test/unit/assertions.rb:495:in `_wrap_assertion' /usr/lib/ruby/1.8/test/unit/assertions.rb:46:in `assert_block' /usr/lib/ruby/1.8/test/unit/assertions.rb:83:in `assert_equal' /usr/lib/ruby/1.8/test/unit/assertions.rb:172:in `assert_nil' ./features/step_definitions/match_steps.rb:22:in `/^there should (not)? be a match with the following:$/' features/matches.feature:124:in `Then there should not be a match with the following:' Any clue how to debug this? Thanks!
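
    One hedged thing to check, given that the steps succeed when clicked by hand in a browser: if the "Delete" link is a Rails :method => :delete style link, it only issues a real DELETE when JavaScript rewrites the click, and Capybara's default rack-test driver runs without JavaScript, so "I follow" ends up performing a plain GET and nothing is destroyed. Tagging the scenario to run under the JavaScript-capable driver is one way to test that theory:

        @javascript
        Scenario: Delete a match
          Given pojo is logged in
          ...

    If that is the cause, the alternatives are a real button/form for the delete, or a step that issues the DELETE request directly instead of following the link.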

    Read the article

  • Returning a lua table on SWIG call

    - by Tom J Nowell
    I have a class with a method called GetEnemiesLua. I have bound this class to Lua using SWIG, and I can call this method using my Lua code. I am trying to get the method to return a Lua table of objects. Here is my current code: void CSpringGame::GetEnemiesLua(){ std::vector<springai::Unit*> enemies = callback->GetEnemyUnits(); if( enemies.empty()){ lua_pushnil(ai->L); return; } else{ lua_newtable(ai->L); int top = lua_gettop(ai->L); int index = 1; for (std::vector<springai::Unit*>::iterator it = enemies.begin(); it != enemies.end(); ++it) { //key lua_pushinteger(ai->L,index);//lua_pushstring(L, key); //value CSpringUnit* unit = new CSpringUnit(callback,*it,this); ai->PushIUnit(unit); lua_settable(ai->L, -3); ++index; } ::lua_pushvalue(ai->L,-1); } } PushIUnit is as follows: void CTestAI::PushIUnit(IUnit* unit){ SWIG_NewPointerObj(L,unit,SWIGTYPE_p_IUnit,1); } To test this I have the following code: t = game:GetEnemiesLua() if t == nil then game:SendToConsole("t is nil! ") end The result is always 't is nil', despite this being incorrect. I have put breakpoints in the code and it is indeed going over the loop, rather than doing lua_pushnil. So how do I make my method return a table when called via Lua?
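
    A hedged alternative that avoids pushing the table by hand: let SWIG wrap the container itself. Returning a std::vector from the method and instantiating a %template for it in the interface file gives the Lua side a wrapped vector object it can walk through the generated accessors, at the cost of it not being a plain Lua table. The snippet is a sketch; the interface-file details are assumptions about this particular project:

        // C++ side: return the collection instead of writing to the Lua stack
        std::vector<IUnit*> CSpringGame::GetEnemiesLua() {
            std::vector<IUnit*> result;
            std::vector<springai::Unit*> enemies = callback->GetEnemyUnits();
            for (size_t i = 0; i < enemies.size(); ++i)
                result.push_back(new CSpringUnit(callback, enemies[i], this));
            return result;
        }

        // SWIG interface (.i) file
        %include "std_vector.i"
        %template(UnitVector) std::vector<IUnit*>;

    If a genuine Lua table is required, the usual route is a typemap or a %native function written directly against the lua_State, since the wrapper generated for a void function reports zero return values and Lua discards whatever the body pushed - which matches the "t is nil" behaviour seen here.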

    Read the article

  • New TPerlRegEx Compatible with Delphi XE

    - by Jan Goyvaerts
    The new RegularExpressionsCore unit in Delphi XE is based on the PerlRegEx unit that I wrote many years ago. Since I donated full rights to a copy rather than full rights to the original, I can continue to make my version of TPerlRegEx available to people using older versions of Delphi. I did make a few changes to the code to modernize it a bit prior to donating a copy to Embarcadero. The latest TPerlRegEx includes those changes. This allows you to use the same regex-based code using the RegularExpressionsCore unit in Delphi XE, and the PerlRegEx unit in Delphi 2010 and earlier. If you’re writing new code using regular expressions in Delphi 2010 or earlier, I strongly recommend you use the new version of my PerlRegEx unit. If you later migrate your code to Delphi XE, all you have to do is replace PerlRegEx with RegularExpressionsCore in the uses clause of your units.

    If you have code written using an older version of TPerlRegEx that you want to migrate to the latest TPerlRegEx, you’ll need to take a few changes into account. The original TPerlRegEx was developed when Borland’s goal was to have a component for everything on the component palette. So the old TPerlRegEx derives from TComponent, allowing you to put it on the component palette and drop it on a form. The new TPerlRegEx derives from TObject. It can only be instantiated at runtime. If you want to migrate from an older version of TPerlRegEx to the latest TPerlRegEx, start with removing any TPerlRegEx components you may have placed on forms or data modules and instantiate the objects at runtime instead. When instantiating at runtime, you no longer need to pass an owner component to the Create() constructor. Simply remove the parameter.

    Some of the property and method names in the original TPerlRegEx were a bit unwieldy. These have been renamed in the latest TPerlRegEx. Essentially, in all identifiers SubExpression was replaced with Group and MatchedExpression was replaced with Matched. Here is a complete list of the changed identifiers:

    Old Identifier               New Identifier
    StoreSubExpression           StoreGroups
    NamedSubExpression           NamedGroup
    MatchedExpression            MatchedText
    MatchedExpressionLength      MatchedLength
    MatchedExpressionOffset      MatchedOffset
    SubExpressionCount           GroupCount
    SubExpressions               Groups
    SubExpressionLengths         GroupLengths
    SubExpressionOffsets         GroupOffsets

    Download TPerlRegEx. Source is included under the MPL 1.1 license.
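
    A minimal before/after sketch of the migration steps described above (variable names are illustrative):

        // Old TPerlRegEx, Delphi 2010 and earlier: TComponent-based, owner required
        uses PerlRegEx;
        { ... }
        Regex := TPerlRegEx.Create(nil);

        // Latest TPerlRegEx on Delphi 2010 and earlier: same unit, no owner parameter
        uses PerlRegEx;
        { ... }
        Regex := TPerlRegEx.Create;

        // Delphi XE: only the unit name in the uses clause changes
        uses RegularExpressionsCore;
        { ... }
        Regex := TPerlRegEx.Create;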

    Read the article

  • Help with Strategy-game AI

    - by f20k
    Hi, I am developing a strategy-game AI (think: Final Fantasy Tactics), and I am having trouble coming up for the design of the AI. My main problem is determining which is the optimal thing for it to do. First let me describe the priority of what action I would like the AI to take: Kill nearest player unit Fulfill primary directive (kill all player units, kill target unit, survive for x turns) Heal ally unit / cast buffer Now the AI can do the following in its turn: Move - {Attack / Ability / Item} (either attack or ability or item) {Attack / Ability / Item} - Move Move closer (if targets not in range) {Attack / Ability / Item} (if move not available) Notes Abilities have various ranges / effects / costs / effects. Each ai unit has maybe 5-10 abilities to choose from. The AI will prioritize killing over safety unless its directive is to survive for x turns. It also doesn't care about ability cost much. While a player may want to save a big spell for later, the AI will most likely use it asap. Movement is on a (hex) grid num of player units: 3-6 num of ai units: 3-7 or more. Probably max 10. AI and player take turns controlling ONE unit, instead of all at the same time. Platform is Android (if program doesnt respond after some time, there will be a popup saying to Force Quit or Wait - which looks really bad!). Now comes the questions: The best ability to use would obviously be the one that hits the most targets for the most damage. But since each ability has different ranges, I won't know if they are in range without exploring each possible place I can move to. One solution would be to go through each possible places to move to, determine the optimal attack at that location - which gives me a list of optimal moves for each location. Then choose the optimal out of the list and execute it. But this will take a lot of CPU time. Is there a better solution? My current idea is to move as close as possible towards the closest, largest group of people, and determine the optimal attack/ability from there. I think this would be a lot less work for the CPU and still allow for wide-range attacks. Its sub-optimal but the AI will still seem 'smart'. Other notes/questions: Am I over-thinking/over-complicating it? Better solution? I am open to all sorts of suggestions I have taken a look at the spell-casting question, but it doesn't take into account the movement - so perhaps use that algo for each possible move location? The top answer mentioned it wasn't great for area-of-effect and group fights - so maybe requires more tweaking? Please, if you mention a graph/tree, let me know basically how to use it. E.g. Node means ability, level corresponds to damage, then search for the deepest node.
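
    A hedged sketch of the evaluation loop discussed above - every candidate tile crossed with every ability, restricted (per the suggested shortcut) to tiles near the largest nearby cluster of enemies so the search stays cheap enough for the Android time limits. The board/unit methods and the scoring rule are placeholders, not a worked-out engine:

        def best_action(unit, board):
            # shortcut from the question: only expand tiles near the densest
            # cluster of enemies instead of every reachable tile
            candidate_tiles = board.tiles_near_largest_enemy_cluster(unit)

            best = None
            for tile in candidate_tiles:                 # where could the unit stand?
                for ability in unit.abilities:           # what could it do from there?
                    targets = board.targets_in_range(tile, ability)
                    if not targets:
                        continue
                    score = sum(ability.expected_damage(t) for t in targets)
                    if best is None or score > best[0]:
                        best = (score, tile, ability, targets)
            return best   # None means nothing is in range: just move closer

    Priorities such as "kill the nearest player unit first" or "heal an ally" can then be folded into the score as weights rather than handled as separate passes.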

    Read the article

  • Calculating the Size (in Bytes and MB) of an Oracle Coherence Cache

    - by Ricardo Ferreira
    The concept and usage of data grids are becoming very popular in this days since this type of technology are evolving very fast with some cool lead products like Oracle Coherence. Once for a while, developers need an programmatic way to calculate the total size of a specific cache that are residing in the data grid. In this post, I will show how to accomplish this using Oracle Coherence API. This example has been tested with 3.6, 3.7 and 3.7.1 versions of Oracle Coherence. To start the development of this example, you need to create a POJO ("Plain Old Java Object") that represents a data structure that will hold user data. This data structure will also create an internal fat so I call that should increase considerably the size of each instance in the heap memory. Create a Java class named "Person" as shown in the listing below. package com.oracle.coherence.domain; import java.io.Serializable; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Random; @SuppressWarnings("serial") public class Person implements Serializable { private String firstName; private String lastName; private List<Object> fat; private String email; public Person() { generateFat(); } public Person(String firstName, String lastName, String email) { setFirstName(firstName); setLastName(lastName); setEmail(email); generateFat(); } private void generateFat() { fat = new ArrayList<Object>(); Random random = new Random(); for (int i = 0; i < random.nextInt(18000); i++) { HashMap<Long, Double> internalFat = new HashMap<Long, Double>(); for (int j = 0; j < random.nextInt(10000); j++) { internalFat.put(random.nextLong(), random.nextDouble()); } fat.add(internalFat); } } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } } Now let's create a Java program that will start a data grid into Coherence and will create a cache named "People", that will hold people instances with sequential integer keys. Each person created in this program will trigger the execution of a custom constructor created in the People class that instantiates an internal fat (the random amount of data generated to increase the size of the object) for each person. Create a Java class named "CreatePeopleCacheAndPopulateWithData" as shown in the listing below. package com.oracle.coherence.demo; import com.oracle.coherence.domain.Person; import com.tangosol.net.CacheFactory; import com.tangosol.net.NamedCache; public class CreatePeopleCacheAndPopulateWithData { public static void main(String[] args) { // Asks Coherence for a new cache named "People"... NamedCache people = CacheFactory.getCache("People"); // Creates three people that will be putted into the data grid. Each person // generates an internal fat that should increase its size in terms of bytes... Person pessoa1 = new Person("Ricardo", "Ferreira", "[email protected]"); Person pessoa2 = new Person("Vitor", "Ferreira", "[email protected]"); Person pessoa3 = new Person("Vivian", "Ferreira", "[email protected]"); // Insert three people at the data grid... people.put(1, pessoa1); people.put(2, pessoa2); people.put(3, pessoa3); // Waits for 5 minutes until the user runs the Java program // that calculates the total size of the people cache... 
try { System.out.println("---> Waiting for 5 minutes for the cache size calculation..."); Thread.sleep(300000); } catch (InterruptedException ie) { ie.printStackTrace(); } } } Finally, let's create a Java program that, using the Coherence API and JMX, will calculate the total size of each cache that the data grid is currently managing. The approach used in this example was retrieve every cache that the data grid are currently managing, but if you are interested on an specific cache, the same approach can be used, you should only filter witch cache will be looked for. Create a Java class named "CalculateTheSizeOfPeopleCache" as shown in the listing below. package com.oracle.coherence.demo; import java.text.DecimalFormat; import java.util.Map; import java.util.Set; import java.util.TreeMap; import javax.management.MBeanServer; import javax.management.MBeanServerFactory; import javax.management.ObjectName; import com.tangosol.net.CacheFactory; public class CalculateTheSizeOfPeopleCache { @SuppressWarnings({ "unchecked", "rawtypes" }) private void run() throws Exception { // Enable JMX support in this Coherence data grid session... System.setProperty("tangosol.coherence.management", "all"); // Create a sample cache just to access the data grid... CacheFactory.getCache(MBeanServerFactory.class.getName()); // Gets the JMX server from Coherence data grid... MBeanServer jmxServer = getJMXServer(); // Creates a internal data structure that would maintain // the statistics from each cache in the data grid... Map cacheList = new TreeMap(); Set jmxObjectList = jmxServer.queryNames(new ObjectName("Coherence:type=Cache,*"), null); for (Object jmxObject : jmxObjectList) { ObjectName jmxObjectName = (ObjectName) jmxObject; String cacheName = jmxObjectName.getKeyProperty("name"); if (cacheName.equals(MBeanServerFactory.class.getName())) { continue; } else { cacheList.put(cacheName, new Statistics(cacheName)); } } // Updates the internal data structure with statistic data // retrieved from caches inside the in-memory data grid... Set<String> cacheNames = cacheList.keySet(); for (String cacheName : cacheNames) { Set resultSet = jmxServer.queryNames( new ObjectName("Coherence:type=Cache,name=" + cacheName + ",*"), null); for (Object resultSetRef : resultSet) { ObjectName objectName = (ObjectName) resultSetRef; if (objectName.getKeyProperty("tier").equals("back")) { int unit = (Integer) jmxServer.getAttribute(objectName, "Units"); int size = (Integer) jmxServer.getAttribute(objectName, "Size"); Statistics statistics = (Statistics) cacheList.get(cacheName); statistics.incrementUnit(unit); statistics.incrementSize(size); cacheList.put(cacheName, statistics); } } } // Finally... print the objects from the internal data // structure that represents the statistics from caches... 
cacheNames = cacheList.keySet(); for (String cacheName : cacheNames) { Statistics estatisticas = (Statistics) cacheList.get(cacheName); System.out.println(estatisticas); } } public MBeanServer getJMXServer() { MBeanServer jmxServer = null; for (Object jmxServerRef : MBeanServerFactory.findMBeanServer(null)) { jmxServer = (MBeanServer) jmxServerRef; if (jmxServer.getDefaultDomain().equals(DEFAULT_DOMAIN) || DEFAULT_DOMAIN.length() == 0) { break; } jmxServer = null; } if (jmxServer == null) { jmxServer = MBeanServerFactory.createMBeanServer(DEFAULT_DOMAIN); } return jmxServer; } private class Statistics { private long unit; private long size; private String cacheName; public Statistics(String cacheName) { this.cacheName = cacheName; } public void incrementUnit(long unit) { this.unit += unit; } public void incrementSize(long size) { this.size += size; } public long getUnit() { return unit; } public long getSize() { return size; } public double getUnitInMB() { return unit / (1024.0 * 1024.0); } public double getAverageSize() { return size == 0 ? 0 : unit / size; } public String toString() { StringBuffer sb = new StringBuffer(); sb.append("\nCache Statistics of '").append(cacheName).append("':\n"); sb.append(" - Total Entries of Cache -----> " + getSize()).append("\n"); sb.append(" - Used Memory (Bytes) --------> " + getUnit()).append("\n"); sb.append(" - Used Memory (MB) -----------> " + FORMAT.format(getUnitInMB())).append("\n"); sb.append(" - Object Average Size --------> " + FORMAT.format(getAverageSize())).append("\n"); return sb.toString(); } } public static void main(String[] args) throws Exception { new CalculateTheSizeOfPeopleCache().run(); } public static final DecimalFormat FORMAT = new DecimalFormat("###.###"); public static final String DEFAULT_DOMAIN = ""; public static final String DOMAIN_NAME = "Coherence"; } I've commented the overall example so, I don't think that you should get into trouble to understand it. Basically we are dealing with JMX. The first thing to do is enable JMX support for the Coherence client (ie, an JVM that will only retrieve values from the data grid and will not integrate the cluster) application. This can be done very easily using the runtime "tangosol.coherence.management" system property. Consult the Coherence documentation for JMX to understand the possible values that could be applied. The program creates an in memory data structure that holds a custom class created called "Statistics". This class represents the information that we are interested to see, which in this case are the size in bytes and in MB of the caches. An instance of this class is created for each cache that are currently managed by the data grid. Using JMX specific methods, we retrieve the information that are relevant for calculate the total size of the caches. To test this example, you should execute first the CreatePeopleCacheAndPopulateWithData.java program and after the CreatePeopleCacheAndPopulateWithData.java program. 
The results in the console should be something like this: 2012-06-23 13:29:31.188/4.970 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence.xml" 2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational overrides from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml" 2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified 2012-06-23 13:29:31.266/5.048 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified Oracle Coherence Version 3.6.0.4 Build 19111 Grid Edition: Development mode Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved. 2012-06-23 13:29:33.156/6.938 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded Reporter configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/reports/report-group.xml" 2012-06-23 13:29:33.500/7.282 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/coherence-cache-config.xml" 2012-06-23 13:29:35.391/9.173 Oracle Coherence GE 3.6.0.4 <D4> (thread=Main Thread, member=n/a): TCMP bound to /192.168.177.133:8090 using SystemSocketProvider 2012-06-23 13:29:37.062/10.844 Oracle Coherence GE 3.6.0.4 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) joined cluster "cluster:0xC4DB" with senior Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) 2012-06-23 13:29:37.172/10.954 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Cluster with senior member 1 2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1 2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1 2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Started cluster Name=cluster:0xC4DB Group{Address=224.3.6.0, Port=36000, TTL=4} MasterMemberSet ( ThisMember=Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle) OldestMember=Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith) ActualMemberSet=MemberSet(Size=2, BitSetCount=2 Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith) Member(Id=2, 
Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle) ) RecycleMillis=1200000 RecycleSet=MemberSet(Size=0, BitSetCount=0 ) ) TcpRing{Connections=[1]} IpMonitor{AddressListSize=0} 2012-06-23 13:29:37.891/11.673 Oracle Coherence GE 3.6.0.4 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1 2012-06-23 13:29:39.203/12.985 Oracle Coherence GE 3.6.0.4 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1 2012-06-23 13:29:39.297/13.079 Oracle Coherence GE 3.6.0.4 <D4> (thread=DistributedCache, member=2): Asking member 1 for 128 primary partitions Cache Statistics of 'People': - Total Entries of Cache -----> 3 - Used Memory (Bytes) --------> 883920 - Used Memory (MB) -----------> 0.843 - Object Average Size --------> 294640 I hope that this post could save you some time when calculate the total size of Coherence cache became a requirement for your high scalable system using data grids. See you!

    Read the article

  • idiomatic property changed notification in scala?

    - by Jeremy Bell
    I'm trying to find a cleaner alternative (that is idiomatic to Scala) to the kind of thing you see with data-binding in WPF/silverlight data-binding - that is, implementing INotifyPropertyChanged. First, some background: In .Net WPF or silverlight applications, you have the concept of two-way data-binding (that is, binding the value of some element of the UI to a .net property of the DataContext in such a way that changes to the UI element affect the property, and vise versa. One way to enable this is to implement the INotifyPropertyChanged interface in your DataContext. Unfortunately, this introduces a lot of boilerplate code for any property you add to the "ModelView" type. Here is how it might look in Scala: trait IDrawable extends INotifyPropertyChanged { protected var drawOrder : Int = 0 def DrawOrder : Int = drawOrder def DrawOrder_=(value : Int) { if(drawOrder != value) { drawOrder = value OnPropertyChanged("DrawOrder") } } protected var visible : Boolean = true def Visible : Boolean = visible def Visible_=(value: Boolean) = { if(visible != value) { visible = value OnPropertyChanged("Visible") } } def Mutate() : Unit = { if(Visible) { DrawOrder += 1 // Should trigger the PropertyChanged "Event" of INotifyPropertyChanged trait } } } For the sake of space, let's assume the INotifyPropertyChanged type is a trait that manages a list of callbacks of type (AnyRef, String) = Unit, and that OnPropertyChanged is a method that invokes all those callbacks, passing "this" as the AnyRef, and the passed-in String). This would just be an event in C#. You can immediately see the problem: that's a ton of boilerplate code for just two properties. I've always wanted to write something like this instead: trait IDrawable { val Visible = new ObservableProperty[Boolean]('Visible, true) val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0) def Mutate() : Unit = { if(Visible) { DrawOrder += 1 // Should trigger the PropertyChanged "Event" of ObservableProperty class } } } I know that I can easily write it like this, if ObservableProperty[T] has Value/Value_= methods (this is the method I'm using now): trait IDrawable { // on a side note, is there some way to get a Symbol representing the Visible field // on the following line, instead of hard-coding it in the ObservableProperty // constructor? val Visible = new ObservableProperty[Boolean]('Visible, true) val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0) def Mutate() : Unit = { if(Visible.Value) { DrawOrder.Value += 1 } } } // given this implementation of ObservableProperty[T] in my library // note: IEvent, Event, and EventArgs are classes in my library for // handling lists of callbacks - they work similarly to events in C# class PropertyChangedEventArgs(val PropertyName: Symbol) extends EventArgs("") class ObservableProperty[T](val PropertyName: Symbol, private var value: T) { protected val propertyChanged = new Event[PropertyChangedEventArgs] def PropertyChanged: IEvent[PropertyChangedEventArgs] = propertyChanged def Value = value; def Value_=(value: T) { if(this.value != value) { this.value = value propertyChanged(this, new PropertyChangedEventArgs(PropertyName)) } } } But is there any way to implement the first version using implicits or some other feature/idiom of Scala to make ObservableProperty instances function as if they were regular "properties" in scala, without needing to call the Value methods? 
The only other thing I can think of is something like this, which is more verbose than either of the above two versions, but is still less verbose than the original: trait IDrawable { private val visible = new ObservableProperty[Boolean]('Visible, false) def Visible = visible.Value def Visible_=(value: Boolean): Unit = { visible.Value = value } private val drawOrder = new ObservableProperty[Int]('DrawOrder, 0) def DrawOrder = drawOrder.Value def DrawOrder_=(value: Int): Unit = { drawOrder.Value = value } def Mutate() : Unit = { if(Visible) { DrawOrder += 1 } } }
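
    On the implicits question: a hedged sketch of the part that does work is an implicit conversion from ObservableProperty[T] to T placed in the property's companion object, which lets reads such as if(Visible) compile against the first, property-object version. Writes are the limit: DrawOrder += 1 cannot be rewritten through an implicit alone, so assignments still go through Value_= (or a small += helper defined on the property itself):

        object ObservableProperty {
          // reads: wherever a T is expected, an ObservableProperty[T] can be supplied
          // (Scala 2.10+ also wants "import scala.language.implicitConversions")
          implicit def propertyToValue[T](p: ObservableProperty[T]): T = p.Value
        }

        trait IDrawable {
          val Visible   = new ObservableProperty[Boolean]('Visible, true)
          val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0)

          def Mutate(): Unit = {
            if (Visible) {             // implicit conversion supplies the Boolean
              DrawOrder.Value += 1     // writes still need Value_= (or a += on the property)
            }
          }
        }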

    Read the article

  • Configuring ant to run unit tests. Where should libraries be? How should classpath be configured? av

    - by chillitom
    Hi All, I'm trying to run my junit tests using ant. The tests are kicked off using a JUnit 4 test suite. If I run this direct from Eclipse the tests complete without error. However if I run it from ant then many of the tests fail with this error repeated over and over until the junit task crashes. [junit] java.util.zip.ZipException: error in opening zip file [junit] at java.util.zip.ZipFile.open(Native Method) [junit] at java.util.zip.ZipFile.(ZipFile.java:114) [junit] at java.util.zip.ZipFile.(ZipFile.java:131) [junit] at org.apache.tools.ant.AntClassLoader.getResourceURL(AntClassLoader.java:1028) [junit] at org.apache.tools.ant.AntClassLoader$ResourceEnumeration.findNextResource(AntClassLoader.java:147) [junit] at org.apache.tools.ant.AntClassLoader$ResourceEnumeration.nextElement(AntClassLoader.java:130) [junit] at org.apache.tools.ant.util.CollectionUtils$CompoundEnumeration.nextElement(CollectionUtils.java:198) [junit] at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:43) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.checkForkedPath(JUnitTask.java:1128) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.executeAsForked(JUnitTask.java:1013) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.execute(JUnitTask.java:834) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.executeOrQueue(JUnitTask.java:1785) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.execute(JUnitTask.java:785) [junit] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) [junit] at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [junit] at java.lang.reflect.Method.invoke(Method.java:597) [junit] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [junit] at org.apache.tools.ant.Task.perform(Task.java:348) [junit] at org.apache.tools.ant.Target.execute(Target.java:357) [junit] at org.apache.tools.ant.Target.performTasks(Target.java:385) [junit] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337) [junit] at org.apache.tools.ant.Project.executeTarget(Project.java:1306) [junit] at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) [junit] at org.apache.tools.ant.Project.executeTargets(Project.java:1189) [junit] at org.apache.tools.ant.Main.runBuild(Main.java:758) [junit] at org.apache.tools.ant.Main.startAnt(Main.java:217) [junit] at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257) [junit] at org.apache.tools.ant.launch.Launcher.main(Launcher.java:104) my test running task is as follows: <target name="run-junit-tests" depends="compile-tests,clean-results"> <mkdir dir="${test.results.dir}"/> <junit failureproperty="tests.failed" fork="true" showoutput="yes" includeantruntime="false"> <classpath refid="test.run.path" /> <formatter type="xml" /> <test name="project.AllTests" todir="${basedir}/test-results" /> </junit> <fail if="tests.failed" message="Unit tests failed"/> </target> I've verified that the classpath contains the following as well as all of the program code and libraries: ant-junit.jar ant-launcher.jar ant.jar easymock.jar easymockclassextension.jar junit-4.4.jar I've tried debugging to find out which ZipFile it is trying to open with no luck, I've tried toggling includeantruntime and fork and i've tried running ant with ant -lib test/libs where test/libs contains the ant and junit libraries. 
    Any info about what causes this exception or how you've configured ant to successfully run unit tests is gratefully received. ant 1.7.1 (ubuntu), java 1.6.0_10, junit 4.4 Thanks. Update - Fixed: Found my problem. I had included my classes directory in my path using a fileset as opposed to a pathelement; this was causing .class files to be opened as ZipFiles, which of course threw an exception.
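
    For reference, a hedged sketch of the fix the update describes - the compiled classes directory belongs on the path as a pathelement (directories go on the classpath as-is), whereas a fileset over it hands every individual .class file to the loader as if it were a jar/zip archive. Property names are illustrative:

        <path id="test.run.path">
          <!-- wrong: each matched .class file gets opened as an archive -->
          <!-- <fileset dir="${classes.dir}" includes="**/*.class"/> -->

          <!-- right: the directory itself goes on the classpath -->
          <pathelement location="${classes.dir}"/>

          <fileset dir="${lib.dir}" includes="**/*.jar"/>
        </path>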

    Read the article
