Search Results

Search found 35007 results on 1401 pages for 'test management'.


  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx

    The easier something is to measure, the more likely it is to be produced. Deviations between what is and what should be can be readily detected. That's what automated acceptance tests are for. That's what sprint reviews in Scrum are for. Small wonder our software looks the way it looks: it has all the traits whose conformance with requirements can easily be measured, and it's lacking the traits which cannot easily be measured.

    Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function, Cyclomatic Complexity, or test coverage. But there is no threshold value signalling "evolvability too low"; also, Evolvability is hardly tangible for the customer.

    Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it's needed like any other requirement. Or even more so, because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built.

    Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot simply start watching it more closely. Instead we need to establish practices to keep it high (enough) at all times.

    Chefs have known that for a long time. That's why everybody in a restaurant kitchen is constantly seeing to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen's level of cleanliness is easier to measure than software Evolvability. That's why important practices like reviews, pair programming, or TDD are not enough, I guess.

    What we need to keep Evolvability in focus and high is... to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum's sprints of 4, 2, even 1 week are too long. Kanban's flow of user stories is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment:

    Clear expectations: Absolute estimates ("This will take X days to complete.") are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are pervasive interruptions of work by peers and management. However, the smaller the scope, the better our absolute estimates become. That's because we understand better what the requirements really are and what the solution should look like. But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week or longer. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That's why I believe we can give rough absolute estimates on 3 levels: Noon, Tonight, Tomorrow. Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take to implement a user story or bug fix, you can say, "It'll be fixed by noon.", or you can say, "I can manage to implement it until tonight before I leave.", or you can say, "You'll get it by tomorrow night at the latest." Yes, I believe all else would be naive. If you're not confident you can get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything; you should not even start working on the issue. So when estimating use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don't do deep estimates. Don't estimate dozens of issues; don't think ahead ("Issue A is a Tonight, then B will be a Tomorrow, after that it's C as a Noon, finally D is a Tonight - that's what I'll do this week."). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow.

    Trust through reliability: Our trade is lacking trust. Customers don't trust software companies/departments much. Managers don't trust developers much. I find that perfectly understandable in the light of what we're trying to accomplish: delivering software in the face of uncertainty by the means of material goods production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that's a fundamental misunderstanding. Software development is development. It's basically research. As software developers we're constantly executing experiments to find out what really provides value to users. We don't know what they need, we just have mediated hypotheses. That's why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I'd say: reliability over scope. It's more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don't wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don't need to learn that the hard way. Just start with small batch sizes of three different sizes.

    Fast feedback: What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It's boring, it's frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It's even more important than planning new issues. Because as long as work started is not released (accepted) it's potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it's a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue.

    Flexibility: As if reliability/trust and fast feedback for less waste weren't enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it's neither D nor E, but G. And after G it's D. With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost, because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It's unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you're able to change your course at any time.

    Premature completion: Customers/management are used to premeditating budgets. They want to know exactly how much to pay for a certain amount of requirements. That's understandable. But it does not match the nature of software development. We should know that by now. Maybe there's somewhere in the world some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven't seen such a team yet. Which does not mean it's impossible, but I think it's nothing I can recommend striving for. Rather I'd say: don't try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment. With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to its initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance of saving money. The requirements for a feature mostly are uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4-32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see if a particular trait of the software system is already good enough. And if so, she can switch the attention to a different aspect. In the end, requirements A, B, C then could be finished just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need.

    Pull on practices: So far, in addition to more trust, more flexibility, and less money spent, Spinning led to "doing less", which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger. With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32-hour promises. Spinning thus is pulling on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that, because first the Product Owner, then management, will notice it becoming increasingly difficult to deliver value within 32 hours. There, finally, emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don't count the "WTF!", count the "No way!" utterances.

    In closing: For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development "under pressure". Software needs to be changed more often, in smaller increments, with each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment; it's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what's there, not what might be possible in the future. But I digress... In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and "getting more bang for the buck".


  • Why are my functional tests failing?

    - by Mongus Pong
    I have generated some scaffolding for my Rails app. I am running the generated tests and they are failing. For example:

      test "should create area" do
        assert_difference('Area.count') do
          post :create, :area => { :name => 'area1' }
        end
        assert_redirected_to area_path(assigns(:area))
      end

    This test is failing, saying that:

      1) Failure: test_should_create_area(AreasControllerTest) [/test/functional/areas_controller_test.rb:16]: "Area.count" didn't change by 1. <3> expected but was <2>.

    There is only one field in the model: name. I am populating this, so it can't be because I am failing to populate the only field. I can run the site and create an area with the name 'area1'. So reality is succeeding, but the test is failing. I can't ask why it's failing, because I'm sure there's not enough information here for anyone here to know why. I'm just stuck at knowing what avenues to go down to work out why the test is failing. Even puts statements in the code don't print anything out... What steps can I take to track this down?
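    A hedged debugging step rather than a fix - it assumes the failure is a silently rejected save (a validation, or a before_filter bouncing the request), which is not visible from the post. Have the test print the model's errors and the rendered response so the reason the count did not change shows up in the test output; if puts output still does not appear, log/test.log usually records the processed request and any rollback:

      test "should create area" do
        assert_difference('Area.count') do
          post :create, :area => { :name => 'area1' }
          puts assigns(:area).errors.full_messages.inspect  # any validation failures on the new record
          puts @response.body                               # what the create action actually rendered
        end
        assert_redirected_to area_path(assigns(:area))
      end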


  • Exception when deploying a JSR 286 portlet into WebLogic+WebCenter 11g

    - by Rambaldi
    I get the following exception when deploying a JSR 286 portlet into Oracle WebLogic Server 11g (to deploy it later in Oracle WebCenter 11g):

      <19-ene-2010 13H32' CET> <Error> <oracle.portlet.server.containerimpl.PortletApplicationImpl> <BEA-000000> <Error al procesar el archivo "/WEB-INF/portlet.xml" en la línea 6 columna 68. org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'portlet-app'

    The error message is in Spanish. It means: "Error processing the file /WEB-INF/portlet.xml at line 6 column 68." The portlet.xml of my portlet seems to be correct and I've deployed it in other portal servers, so I don't understand the error message. This is the portlet.xml of my portlet (the Eclipse XML validator said it was valid XML):

      <?xml version="1.0" encoding="UTF-8"?>
      <portlet-app version="2.0" xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd" xmlns:dnd="http://www.denodo.com/widget/portlet/portletjsr286">
        <portlet>
          <description>Test Inter Portlet Communication (JSR286)</description>
          <portlet-name>Test IPC</portlet-name>
          <display-name>Test IPC</display-name>
          <portlet-class>com.denodo.ipc.TestIPCPortlet</portlet-class>
          <supports>
            <mime-type>text/html</mime-type>
            <portlet-mode>VIEW</portlet-mode>
          </supports>
          <supported-locale>en</supported-locale>
          <resource-bundle>PortletMessages</resource-bundle>
          <portlet-info>
            <title>Test IPC</title>
            <short-title>Test IPC</short-title>
            <keywords>Test IPC,Denodo</keywords>
          </portlet-info>
        </portlet>
      </portlet-app>

    How do I deploy my portlet: I convert my portlet into a WSRP portlet by executing java -jar wsrp-predeploy.jar source EAR target EAR (as explained in http://download.oracle.com/docs/cd/E12839_01/webcenter.1111/e12405/wcadm_portlet_prod.htm#CHDECJHI). Then I try to deploy it into WebLogic with the WebLogic Console and I get this exception. My environment: WebCenter Suite (11.1.1.2.0) + WebLogic Server (10.3.2) downloaded from oracle.com, default configuration. OS: Windows XP SP3. Thanks in advance for your time.


  • Facebooker Causing Problems with Rails Integration Testing

    - by Eric Lubow
    I am (finally) attempting to write some integration tests for my application (because every deploy is getting scarier). Since testing is a horribly documented feature of Rails, this was the best I could get going with shoulda:

      class DeleteBusinessTest < ActionController::IntegrationTest
        context "log skydiver in and" do
          setup do
            @skydiver = Factory( :skydiver )
            @skydiver_session = SkydiverSession.create(@skydiver)
            @biz = Factory( :business, :ownership => Factory(:ownership, :skydiver => @skydiver ))
          end
          context "delete business" do
            setup do
              @skydiver_session = SkydiverSession.find
              post '/businesses/destroy', :id => @biz.id
            end
            should_redirect_to('businesses_path()'){businesses_path()}
          end
        end
      end

    In theory, this test seems like it should pass. My factories seem like they are pushing the right data in:

      Factory.define :skydiver do |s|
        s.sequence(:login) { |n| "test#{n}" }
        s.sequence(:email) { |n| "test#{n}@example.com" }
        s.crypted_password '1805986f044ced38691118acfb26a6d6d49be0d0'
        s.password 'secret'
        s.password_confirmation { |u| u.password }
        s.salt 'aowgeUne1R4-F6FFC1ad'
        s.firstname 'Test'
        s.lastname 'Salt'
        s.nickname 'Mr. Password Testy'
        s.facebook_user_id '507743444'
      end

    The problem seems to come from Facebooker and only seems to happen on login attempts. When the test runs, I am getting the error: The error occurred while evaluating nil.set_facebook_session. I believe that error is to be expected in a certain sense, since I am not using Facebook here for this session. Can anyone provide any insight as to how to either get around this or at least help me out with what is going wrong?


  • One row is skipped each time the program scans a matrix from file !

    - by ZaZu
    Hello there, I had this code working yesterday, but it seems like I edited it a bit and lost the working version. I can't get this to work anymore. I basically want to scan a matrix from a .txt file. But each time it scans the first row, the second one is skipped, and it reads the third instead :( Here is my code:

      for(i=0;i<=test->rowmat1;i++){
          for(j=0;j<=test->colmat1;j++){
              fscanf(fin,"%f\t",&test->mat[i][j]);
          }
          fscanf(fin,"%*[^\n]",&test->mat[i][j]);
      }

    For example, for a matrix of: 1.00 2.00 3.00 4.00 5.00 6.00 7.00 8.00 9.00 10.00 11.00 12.00 If I extract 3 rows and 3 cols, I get: 1.00 2.00 3.00 7.00 8.00 9.00 Then it fails; it wants to skip over the second line but there is nothing after 10 11 12. Why did it stop working? What do I have wrong? Please help, thanks in advance.
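    Hard to say what changed without the old version, but two things stand out: with <= each loop runs one step past the count if rowmat1/colmat1 hold element counts (so a value from the next line gets pulled into the current row), and the trailing fscanf passes an output argument to a suppressed %* conversion, which it should not. A minimal sketch of the reading loop under those assumptions (a hypothetical 3x3 extraction from matrix.txt; if the members actually hold last indices, keep <= instead):

      #include <stdio.h>

      #define ROWS 3
      #define COLS 3

      int main(void)
      {
          float mat[ROWS][COLS];
          int i, j;
          FILE *fin = fopen("matrix.txt", "r");
          if (fin == NULL) return 1;

          for (i = 0; i < ROWS; i++) {                  /* '<', not '<=' */
              for (j = 0; j < COLS; j++) {
                  if (fscanf(fin, "%f", &mat[i][j]) != 1) {
                      fclose(fin);
                      return 1;                         /* malformed or short input */
                  }
              }
              fscanf(fin, "%*[^\n]");                   /* drop any extra columns on this line */
          }

          for (i = 0; i < ROWS; i++) {
              for (j = 0; j < COLS; j++) printf("%.2f ", mat[i][j]);
              printf("\n");
          }
          fclose(fin);
          return 0;
      }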


  • How to run nosetests from netbeans?

    - by Chris089
    I recently started using NetBeans for Python development and really like it. However, the test runner in NetBeans does not run my doctests, nor does it run my test functions written using nose. I always have to switch to the shell and run 'setup.py test' or 'nosetests' manually. Is there a way to integrate this into NetBeans 6.8?


  • WCF Service Throttling

    - by Mubashar Ahmad
    Dear all, I have a WCF service deployed in a console app with BasicHttpBinding and SSL enabled on the port using the NetSH command, and moreover the following attribute is set as well:

      [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]

    I have also set the throttling behavior as:

      <serviceThrottling maxConcurrentCalls="2147483647" maxConcurrentSessions="2147483647" maxConcurrentInstances="2147483647" />

    On the other hand, I have created a test client (for load testing) that initiates multiple clients simultaneously (multiple threads) and performs transactions on the server. Everything seems fine and working properly, but on the server the CPU utilization doesn't grow, so I added some logging to view the number of concurrent calls to the server and found that it never went over 6. I have reviewed the performance counter logging code more than twice and it seems fine to me. So I want to ask where the problem is in this situation. One more thing: I haven't specified any kind of ContextMode or ConcurrencyMode yet. After this post I noticed that whenever I start another instance of the test client, my concurrent server calls counter increases by 2; i.e. if I am running only 1 instance the maximum Concurrent Rcvd Calls will be 2, if there are two instances the same value goes to 4, and so on. Is there any limit on the number of WCF calls from one process? Looking for help, Mubashar. Added on 17 March: Today I ran another test with one test client (with 50 concurrent users) on the same machine on which the server is running. This time I am getting exactly the result I wanted it to show, i.e. maximum concurrent calls received by the server = 50, but I need to get the same from other machines as well. Can anybody help me on this?
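    One hypothesis that matches the "+2 calls per client process" pattern: with BasicHttpBinding the bottleneck is often the client, because .NET allows only two concurrent HTTP connections per endpoint per process by default. A sketch of raising that limit in the load-test client (this is an assumption about the client code, not something visible in the post):

      using System.Net;

      static class LoadTestBootstrap
      {
          public static void Init()
          {
              // Default is 2 concurrent HTTP connections per host per process;
              // raise it to at least the number of client threads before creating proxies.
              ServicePointManager.DefaultConnectionLimit = 100;
          }
      }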


  • scalacheck/scalatest not found: how to add it in sbt/scala?

    - by Pavel Reich
    I've installed typesafe-stack from http://typesafe.com/stack/download on my ubuntu12, than I created a play project (g8 typesafehub/play-scala) and now I want to add scalatest or scalacheck to my project. So my_app/project/plugins.sbt has the following lines: // The Typesafe repository resolvers += "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/" // Use the Play sbt plugin for Play projects addSbtPlugin("play" % "sbt-plugin" % "2.0.1") Then I added scalatest using addSbtPlugin: addSbtPlugin("org.scalatest" %% "scalatest" % "2.0.M1" % "test") and now it fails with the following message when I run 'sbt test' [info] Resolving org.scalatest#scalatest;2.0.M1 ... [warn] module not found: org.scalatest#scalatest;2.0.M1 [warn] ==== typesafe-ivy-releases: tried [warn] http://repo.typesafe.com/typesafe/ivy-releases/org.scalatest/scalatest/scala_2.9.1/sbt_0.11.3/2.0.M1/ivys/ivy.xml [warn] ==== local: tried [warn] ~/.ivy2/local/org.scalatest/scalatest/scala_2.9.1/sbt_0.11.3/2.0.M1/ivys/ivy.xml [warn] ==== Typesafe repository: tried [warn] http://repo.typesafe.com/typesafe/releases/org/scalatest/scalatest_2.9.1_0.11.3/2.0.M1/scalatest-2.0.M1.pom [warn] ==== typesafe-ivy-releases: tried [warn] http://repo.typesafe.com/typesafe/ivy- releases/org.scalatest/scalatest/scala_2.9.1/sbt_0.11.3/2.0.M1/ivys/ivy.xml [warn] ==== public: tried [warn] http://repo1.maven.org/maven2/org/scalatest/scalatest_2.9.1_0.11.3/2.0.M1/scalatest-2.0.M1.pom What I don't understand: why does it use this http://repo.typesafe.com/typesafe/releases/org/scalatest/scalatest_2.9.1_0.11.3/2.0.M1/scalatest-2.0.M1.pom URL instead of the real one http://repo.typesafe.com/typesafe/releases/org/scalatest/scalatest_2.9.1/2.0.M1/scalatest_2.9.1-2.0.M1.pom? Quite the same problem I have with scalacheck: it also tries to download using sbt-version specific artifactId whereas the repository has only scala-version specific. What am I doing wrong? I understand there must be a switch in sbt somewhere, not to use sbt-version as part of the artifact URL? I also tried using this in my plugins.sbt libraryDependencies += "org.scalatest" %% "scalatest" % "2.0.M1" % "test" but looks like it is completely ignored by sbt and scalatest.jar hasn't appeared in the classpath: my_app/test/AppTest.scala:1: object scalatest is not a member of package org [error] import org.scalatest.FunSuite because the output of sbt clean && sbt test has lots of Resolving org.easytesting#fest-util;1.1.6 or just another library, but nothing about scalatest. I use scala 2.9.1 and sbt 0.11.3, trying to use scalatest 2.0.M1 and 1.8; scalacheck: resolvers ++= Seq( "snapshots" at "http://oss.sonatype.org/content/repositories/snapshots", "releases" at "http://oss.sonatype.org/content/repositories/releases" ) libraryDependencies ++= Seq( "org.scalacheck" %% "scalacheck" % "1.9" % "test" ) With the same outcome, i.e. it uses the sbtVersion specific POM URL, which doesn't exist. What am I doing wrong? Thanks.
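    For what it's worth, addSbtPlugin cross-builds the artifact against the sbt version, which is exactly why the _2.9.1_0.11.3 URL is being tried; test libraries normally go into the project's own build.sbt (not project/plugins.sbt) as a plain library dependency. A minimal sketch, assuming ScalaTest 1.8 is the release published for Scala 2.9.1:

      // build.sbt in the project root, not in project/plugins.sbt
      libraryDependencies += "org.scalatest" %% "scalatest" % "1.8" % "test"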


  • Using reflection to retrieve constructor used to instantiate attribute

    - by summatix
    How can I retrieve information about how an attribute was instantiated? Consider I have the following class definitions:

      [AttributeUsage(AttributeTargets.Class)]
      public class ExampleAttribute : Attribute
      {
          public ExampleAttribute(string value)
          {
              Value = value;
          }

          public string Value { get; private set; }
      }

      [ExampleAttribute("test")]
      public class Test { }

    The new .NET 4.0 MemberInfo.GetCustomAttributesData method:

      foreach (var attribute in typeof(Test).GetCustomAttributesData())
      {
          Console.WriteLine(attribute);
      }

    outputs [Example.ExampleAttribute("test")]. Is there another way to retrieve this same information, preferably using the MemberInfo.GetCustomAttributes method?
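    GetCustomAttributes only hands back fully constructed attribute instances, so the constructor that was used is not recoverable from it; that information lives in the reflection-only view. A small sketch of reading the constructor and its arguments from the same GetCustomAttributesData call used above (reusing the Test class from the question):

      using System;
      using System.Reflection;

      class Program
      {
          static void Main()
          {
              foreach (CustomAttributeData data in typeof(Test).GetCustomAttributesData())
              {
                  Console.WriteLine(data.Constructor);              // the ctor that was invoked
                  foreach (CustomAttributeTypedArgument arg in data.ConstructorArguments)
                  {
                      // prints System.String = test
                      Console.WriteLine("{0} = {1}", arg.ArgumentType, arg.Value);
                  }
              }
          }
      }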


  • jQuery datepicker minDate variable

    - by d3020
    I'm using the jQuery datepicker. In my "EndDate" textbox I'd like to use the date selected from the "StartDate" textbox + 1. How do I do this? I tried this but it didn't work. In my start date code I had:

      test = $(this).datepicker('getDate');
      testm = new Date(test.getTime());
      testm.setDate(testm.getDate() + 1);

    Then in my end date code I had:

      minDate: testm,

    but the end date still made all the days for the month available. Thanks. Edit: I'm curious as to why this doesn't work. In my start date datepicker I have this:

      onSelect: function (dateText, inst) test = dateText

    Why can't I come down into my end date datepicker and say minDate: test?
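    A likely reason it has no effect: minDate is read once when the end picker is initialised, so assigning to a plain variable later changes nothing; the new value has to be pushed into the widget. A sketch, using hypothetical #StartDate / #EndDate ids for the two textboxes:

      $("#StartDate").datepicker({
          onSelect: function () {
              var start = $(this).datepicker("getDate");        // Date object for the selection
              var min = new Date(start.getTime());
              min.setDate(min.getDate() + 1);                   // day after the start date
              $("#EndDate").datepicker("option", "minDate", min); // update the live option
          }
      });
      $("#EndDate").datepicker();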


  • Automating the Choose a digital certificate dialog

    - by MoMo
    I am using WatiN (2.0.10.928) with C# and Visual Studio 2008 to test an SSL-secured website that requires a certificate. When you navigate to the homepage, a "Choose a digital certificate" dialog is displayed and requires that you select a valid certificate and click the 'OK' button. I'm looking for a way to automate the certificate selection so that every time a new test or fixture is executed (and my browser restarts) I don't have to manually interfere with the automated test and select the certificate. I've tried using various WatiN dialog handler classes and even looked into using the Win32 API to automate this, but haven't had much luck. I finally found a solution, but it adds another dependency to the solution (a third-party library called AutoIT). Since this solution isn't ideal but does work and is the best I could find, I will post the solution and mark it as the answer, but I am still looking for an 'out of the box' WatiN solution that is more consistent with the rest of my code and test fixtures. Thanks for your responses!


  • Java NoClassDefFoundError when calling own class from instrumented method

    - by lethal_possum
    Hello, I am working on a kit of simple Java agents to help me (and hopefully others) troubleshoot Java applications. One of the agents I would like to create instruments the JComponent.getToolTipText() method to quickly identify any GUI class by just hovering the mouse cursor over it. You can find the code of my transformer and the rest of the project here: http://sfn.cvs.sourceforge.net/viewvc/sfn/core/src/main/java/org/leplus/sfn/transformer/JComponentTransformer.java?view=markup I launch my test GUI with the agent attached as follows:

      $ java -javaagent:target/jars/sfn-0.1-agent.jar=JComponent -cp lib/jars/bcel-5.2.jar:target/jars/sfn-0.1-test.jar:target/jars/sfn-0.1-agent.jar org.leplus.sfn.test.Main

    sfn-0.1-agent.jar contains the org.leplus.sfn.transformer.JComponentTransformer class. sfn-0.1-test.jar contains the org.leplus.sfn.test.Main class. Here is what the application prints when I launch it and put the mouse over it:

      Loading agent: JComponent
      Instrumentation ready!
      Exception in thread "AWT-EventQueue-0" java.lang.NoClassDefFoundError: org/leplus/sfn/tracer/ComponentTracer
        at javax.swing.JComponent.getToolTipText(JComponent.java)
        at javax.swing.ToolTipManager$insideTimerAction.actionPerformed(ToolTipManager.java:662)
        ...

    What is surprising to me is that if I change my transformer to call any class from the JRE, it works. But it doesn't work when I call my own class org.leplus.sfn.tracer.ComponentTracer. My first guess was a classpath issue, but ComponentTracer is both in the classpath and in the agent's jar. So I am lost. I'd be grateful if any of you can see where I am missing something. Cheers, Tom
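    One plausible explanation (an assumption, not verified against the project): javax.swing.JComponent is defined by the bootstrap class loader, so classes referenced from the injected bytecode are resolved through that loader, which cannot see the application classpath - JRE classes resolve, org.leplus.sfn.tracer.ComponentTracer does not. Two ways to make the tracer visible there, sketched below (the premain class name is made up):

      // Option 1 - agent manifest (META-INF/MANIFEST.MF):
      //   Premain-Class: org.leplus.sfn.agent.Agent
      //   Boot-Class-Path: sfn-0.1-agent.jar
      //
      // Option 2 - append the agent jar to the bootstrap search path from premain:
      import java.lang.instrument.Instrumentation;
      import java.util.jar.JarFile;

      public class Agent {
          public static void premain(String args, Instrumentation inst) throws Exception {
              inst.appendToBootstrapClassLoaderSearch(new JarFile("target/jars/sfn-0.1-agent.jar"));
              // ... then register the JComponentTransformer as before ...
          }
      }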


  • FlexUnit nested async tests

    - by sharvey
    I'm trying to test some async functionality in Flex 4. My test has two stages:

      var loader:MySuperLoader = new MySuperLoader();
      loader.load('foo.swf');
      loader.addEventListener(Event.COMPLETE, Async.asyncHandler(this, function(e:Event):void {
          loader.removeEventListener(Event.COMPLETE, arguments.callee);
          var foo:* = loader.content;
          loader.load('bar.swf');
          loader.addEventListener(Event.COMPLETE, Async.asyncHandler(this, function(e:Event):void {
              /* This call to asyncHandler generates the error */
          }, 5000));
      }, 5000));

    The second call to asyncHandler generates an error saying: Error: Cannot add asynchronous functionality to methods defined by Test, Before or After that are not marked async. Is there a way to test such functionality?


  • key value coding-compliant for NSObject class?

    - by 4thSpace
    I've created a singleton class that loads a plist. I keep getting this error when I try to set a value: '[ setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key test.' I have one key in the plist file. The key is named "test" and has no value associated with it. I set the value like this: [[PlistManager sharedManager].plist setValue:@"the title value" forKey:@"test"]; I look at the set plist dictionary and see this from within PlistManager: po self.plistDictionary { test = ""; } I get the error just as I'm leaving PlistManager in the debugger. PlistManager is of type NSObject. So no xibs. Any ideas on what I need to do?


  • Using Tweepy API behind proxy

    - by user1505819
    I am using Tweepy, a Python wrapper for Twitter. I am writing a small GUI application in Python which updates my Twitter account. Currently, I am just testing whether I can get connected to Twitter, hence the test() call. I am behind a Squid proxy server. What changes should I make to the snippet below so that it works? Setting http_proxy in the bash shell did not help me.

      def printTweet(self):
          # extract tweet string
          tweet_str = str(self.ui.tweet_txt.toPlainText())
          # tweet string extracted
          self.ui.tweet_txt.clear()
          self.tweet_on_twitter(tweet_str)

      def tweet_on_twitter(self, my_tweet):
          auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
          auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
          api = tweepy.API(auth)
          if api.test():
              print 'Test successful'
          else:
              print 'Test unsuccessful'
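    For what it's worth, newer Tweepy releases let the API object be pointed at a proxy directly; whether the version in use here supports that keyword is an assumption to verify, and the host/port below are made up:

      import tweepy

      # CONSUMER_KEY, CONSUMER_SECRET, ACCESS_KEY, ACCESS_SECRET as in the snippet above
      auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
      auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
      # Route Tweepy's traffic through the Squid proxy (hypothetical host/port)
      api = tweepy.API(auth, proxy="http://squid.example.org:3128")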


  • return a js file from asp.net mvc controller

    - by Erwin
    Hi all fellow programmers, I'd like to have a separate js file for each MVC set I have in my application:

      /Controllers/
          TestController.cs
      /Models/
      /Views/
          /Test/
              Index.aspx
              script.js

    And I'd like to include the js in the Index.aspx by:

      <script type="text/javascript" src="<%=UriHelper.GetBaseUrl()%>/Test/Js"></script>

    Or to put it more simply: when I call http://localhost/Test/Js in the browser, it should show me the script.js file. How do I write the action in the controller? I know that this must be done with the File return method, but I haven't successfully created the method :(
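    A sketch of one way the action could look - the physical path and content type are assumptions, adjust them to the real layout (note that files under /Views are normally blocked from direct requests, which is why serving the script through an action like this works at all):

      public class TestController : Controller
      {
          public ActionResult Js()
          {
              // Map the script that sits next to the view files
              var path = Server.MapPath("~/Views/Test/script.js");
              return File(path, "application/javascript");
          }
      }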


  • Administrator's shortcut to batch file with double quoted parameters

    - by XXB
    Take an excruciatingly simple batch file:

      echo hi
      pause

    Save that as test.bat. Now, make a shortcut to test.bat. The shortcut runs the batch file, which prints "hi" and then waits for a keypress as expected. Now, add some argument to the target of the shortcut. Now you have a shortcut to: %path%\test.bat some args The shortcut runs the batch file as before. Now, run the shortcut as administrator. (This is on Windows 7, by the way.) You can use either right-click - Run as Administrator, or go to the shortcut's properties and check the box in the Advanced section. Tell UAC that it's okay, and once again the shortcut runs the batch file as expected. Now, change the arguments in the target of the shortcut to add double quotes: %path%\test.bat "some args" Now try the shortcut as administrator. It doesn't work this time! A command window pops up and disappears too fast to see any error. I tried adding > test.log 2>&1 to the shortcut, but no log is created in this case. Try running the same shortcut (with the double quotes) but not as administrator. It runs the batch file fine. So, it seems the behavior is not because of the double-quoted parameters, and it's not because it's run as administrator. It's some weird combination of the two. I also tried running the same command from an administrator's command window. This ran the batch file as expected without error. Running the shortcut from the command window spawned a new command window which flashed and went away. So apparently the issue is caused by a combination of administrator, the shortcut, and the double quotes. I'm totally stumped; does anyone have any idea what's going on?


  • JSF 2.0: java based custom component + html table + facelets = data model not updated

    - by mikic
    Hi, I'm having problems getting the data model of a HtmlDataTable to be correctly updated by JSF 2.0 and Facelets. I have created a custom Java-based component that extends HtmlDataTable and dynamically adds columns in the encodeBegin method. @Override public void encodeBegin(FacesContext context) throws IOException { if (this.findComponent("c0") == null) { for (int i = 0; i < 3; i++) { HtmlColumn myNewCol = new HtmlColumn(); myNewCol.setId("c" + i); HtmlInputText myNewText = new HtmlInputText(); myNewText.setId("t" + i); myNewText.setValue("#{row[" + i + "]}"); myNewCol.getChildren().add(myNewText); this.getChildren().add(myNewCol); } } super.encodeBegin(context); } My test page contains the following <h:form id="fromtb"> <test:MatrixTest id="tb" var="row" value="#{MyManagedBean.model}"> </test:MatrixTest> <h:commandButton id="btn" value="Set" action="#{MyManagedBean.mergeInput}"/> </h:form> <h:outputText id="mergedInput" value="#{MyManagedBean.mergedInput}"/> My managed bean class contains the following @ManagedBean(name="MyManagedBean") @SessionScoped public class MyManagedBean { private List model = null; private String mergedInput = null; public MyManagedBean() { model = new ArrayList(); List myFirst = new ArrayList(); myFirst.add(""); myFirst.add(""); myFirst.add(""); model.add(myFirst); List mySecond = new ArrayList(); mySecond.add(""); mySecond.add(""); mySecond.add(""); model.add(mySecond); } public String mergeInput() { StringBuffer myMergedInput = new StringBuffer(); for (Object object : model) { myMergedInput.append(object); } setMergedInput(myMergedInput.toString()); return null; } public List getModel() { return model; } public void setModel(List model) { this.model = model; } public String getMergedInput() { return mergedInput; } public void setMergedInput(String mergedInput) { this.mergedInput = mergedInput; } When invoked, the page is correctly rendered with a table made of 3 columns (added at runtime) and 2 rows (as my data model has 2 rows). However when the user enter some data in the input fields and then click the submit button, the model is not correctly updated and therefore the mergeInput() method creates a sequence of empty strings which is rendered on the same page. I have added some logging to the decode() method of my custom component and I can see that the parameters entered by the user are being posted back with the request, however these parameters are not used to update the data model. If I update the encodeBegin() method of my custom component as follow @Override public void encodeBegin(FacesContext context) throws IOException { super.encodeBegin(context); } and I update the test page as follow <test:MatrixTest id="tb" var="row" value="#{MyManagedBean.model}"> <h:column id="c0"><h:inputText id="t0" value="#{row[0]}"/></h:column> <h:column id="c1"><h:inputText id="t1" value="#{row[1]}"/></h:column> <h:column id="c2"><h:inputText id="t2" value="#{row[2]}"/></h:column> </test:MatrixTest> the page is correctly rendered and this time when the user enters data and submits the form, the underlying data model is correctly updated and the mergeInput() method creates a sequence of strings with the user data. Why does the test case with columns declared in the facelet page works correctly (ie the data model is correctly updated by JSF) where the same does not happen when the columns are created at runtime using the encodeBegin() method? Is there any method I need to invoke or interface I need to extend in order to ensure the data model is correctly updated? 
I am using this test case to address the issue that is appearing in a much more complex component, therefore I can't achieve the same functionality using a facelet composite component. Please note that this has been done using NetBeans 6.8, JRE 1.6.0u18, GlassFish 3.0. Thanks for your help.


  • How do you run PartCover with spaces in the path?

    - by nportelli
    I have an msbuild file that I'm trying to run from Hudson CI. It outputs like this:

      "C:\Program Files\Gubka Bob\PartCover .NET 2\PartCover.exe" --target "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe" --target-args "/noisolation" "/testcontainer:C:\CI\Hudson\jobs\Video Raffle\workspace\Source\VideoRaffleCaller\Source\VideoRaffleCaller.Test.Unit\bin\Debug\VideoRaffleCaller.Test.Unit.dll" --include "[VideoRaffleCaller*]*" --output "Coverage\partcover.xml"

    I get this error:

      Invalid switch "raffle\workspace\source\videorafflecaller\source\videorafflecaller.test.unit\bin\debug\videorafflecaller.test.unit.dll". For switch syntax, type "MSTest /help"

    WTF? Looks like PartCover doesn't handle spaces in the --target-args well. Or am I missing some quotes somewhere? Has anyone gotten something like this to work?
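    An untested guess, assuming the problem is that --target-args stops being a single argument once the quotes around /testcontainer end it: keep everything meant for MSTest inside one quoted string and escape the inner quotes, so the space in "Video Raffle" survives the hand-off:

      "C:\Program Files\Gubka Bob\PartCover .NET 2\PartCover.exe" ^
        --target "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe" ^
        --target-args "/noisolation \"/testcontainer:C:\CI\Hudson\jobs\Video Raffle\workspace\Source\VideoRaffleCaller\Source\VideoRaffleCaller.Test.Unit\bin\Debug\VideoRaffleCaller.Test.Unit.dll\"" ^
        --include "[VideoRaffleCaller*]*" ^
        --output "Coverage\partcover.xml"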


  • Testing C++ program with Testing classes over normally used classes

    - by paultop6
    Hi guys, this will probably be a bit of a waffly question but I'll try my best. I have a simple C++ program that I need to build testing for. I have 2 classes I use besides the one I am actually testing; these are called WebServer and BusinessLogicLayer. To test my own code I have made my own versions of these classes that feed dummy data to my class to test its functionality. I need to know a way, via a makefile for instance, to tell the source code to use the test classes instead of the normally used classes. The test classes are in a different "tester" C++ file, and the tester C++ file also has its own header file. Regards, Paul. P.S. This is probably a badly worded question, but I don't know any better way to put it.
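    A minimal sketch of the usual link-time substitution: compile the class under test once, then link it against either the real WebServer/BusinessLogicLayer objects or the test doubles. File names below are made up, the fakes must expose the same class names and headers as the real classes, and recipe lines must start with a tab:

      OBJS_COMMON = MyClass.o
      OBJS_REAL   = WebServer.o BusinessLogicLayer.o main.o
      OBJS_FAKE   = TestWebServer.o TestBusinessLogicLayer.o test_main.o

      # production binary: class under test + real collaborators
      app: $(OBJS_COMMON) $(OBJS_REAL)
              $(CXX) -o $@ $^

      # test binary: same class under test + dummy-data collaborators
      test_app: $(OBJS_COMMON) $(OBJS_FAKE)
              $(CXX) -o $@ $^

      %.o: %.cpp
              $(CXX) $(CXXFLAGS) -c -o $@ $<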


  • how to handle delete by illegal address

    - by Davit Siradeghyan
    Suppose we have a situation like this. How to handle this problem? How to protect code from crashes? I know about and use boost smart pointers. But what to do if we have this situation?

      struct Test
      {
          int a;
          int b;
          int c;
      };

      Test global;

      int main()
      {
          Test *p = new Test;
          p->a = 1;
          p->b = 2;
          p->c = 3;
          p = &global;
          delete p;
          return 0;
      }
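    A sketch of two ways to keep this from crashing: only ever delete what came from new (and do it before repointing), or hand ownership to a smart pointer so the question never comes up:

      #include <memory>

      struct Test { int a; int b; int c; };
      Test global;

      int main()
      {
          // 1) Manual: release the heap object before repointing, and never
          //    delete a pointer that was not returned by new.
          Test *p = new Test;
          p->a = 1; p->b = 2; p->c = 3;
          delete p;        // free the heap object first
          p = &global;     // fine to point at the global now - just never delete it

          // 2) Preferred: a smart pointer owns the heap object; raw pointers
          //    are then only non-owning views and are never deleted by hand.
          std::unique_ptr<Test> owned(new Test);
          Test *view = owned.get();
          (void)view;
          return 0;        // owned deletes its object automatically here
      }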


  • .Net MEF newbie question

    - by steve.macdonald
    I am missing something basic when it comes to using MEF. I got it working using samples and a simple console app where everything is in the same assembly. Then I put some imports and exports in a separate project which contains various entities. I want to use these entities in an MS Test, but the composition is never actually done. When I move the composition stuff into the constructor of an entity in question it works, but that's obviously wrong. Does GetExecutingAssembly only "see" the test process? What am I missing re containers? I tried putting the container in a Using in the test without luck. The MEF docs are still very scant and I can't find a simple example of an application (or MS Test) which uses entities from a different project...
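    One common reason composition never happens in a test run is that the catalog is built from Assembly.GetExecutingAssembly(), which under MSTest is the test assembly rather than the project holding the parts. A sketch of composing explicitly from both assemblies - the type names here are made up:

      using System.ComponentModel.Composition;
      using System.ComponentModel.Composition.Hosting;

      public static class CompositionBootstrap
      {
          public static void Compose(object target)
          {
              var catalog = new AggregateCatalog(
                  new AssemblyCatalog(typeof(SomeEntity).Assembly),             // the entities project
                  new AssemblyCatalog(typeof(CompositionBootstrap).Assembly));  // the test assembly
              using (var container = new CompositionContainer(catalog))
              {
                  container.ComposeParts(target);   // satisfies the [Import]s on 'target'
              }
          }
      }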


  • Use the Django ORM in a standalone script (again)

    - by Rishabh Manocha
    I'm trying to use the Django ORM in some standalone screen scraping scripts. I know this question has been asked before, but I'm unable to figure out a good solution for my particular problem. I have a Django project with defined models. What I would like to do is use these models and the ORM in my scraping script. My directory structure is something like this:

      project
          scrape            # scraping scripts
              ...
              test.py
          web
              django_project
                  settings.py
                  ...       # Django files

    I tried doing the following in project/scrape/test.py:

      print os.path.join(os.path.abspath('..'), 'web', 'django_project')
      sys.path.append(os.path.join(os.path.abspath('..'), 'web', 'django_project'))
      print sys.path
      print "-------"
      os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'
      #print os.environ
      from django_project.myapp.models import MyModel
      print MyModel.objects.count()

    However, I get an ImportError when I try to run test.py:

      Traceback (most recent call last):
        File "test.py", line 12, in <module>
          from django_project.myapp.models import MyModel
      ImportError: No module named django_project.myapp.models

    One solution I found around this problem is to create a symbolic link to ../web/govcheck in the scrape folder:

      :scrape rmanocha$ ln -s ../web/govcheck ./govcheck

    With this, I can then run test.py just fine. However, this seems like a hack, and more importantly, it is not very portable (I will have to create this symbolic link everywhere I run this code). So, I was wondering if anyone has any better solutions for my problem?
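    For what it's worth, the ImportError is consistent with the path ending at .../web/django_project: to import django_project.myapp.models, the directory that contains the django_project package (i.e. web) has to be on sys.path, not the package directory itself. A sketch of test.py without the symlink, assuming the layout shown above and that the script is run from project/scrape:

      import os
      import sys

      # add the directory that CONTAINS the django_project package
      WEB_DIR = os.path.join(os.path.abspath('..'), 'web')
      sys.path.append(WEB_DIR)

      os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'

      from django_project.myapp.models import MyModel
      print MyModel.objects.count()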


  • Maven: compile aspectj project containing Java 1.6 source

    - by gmale
    What I want to do is fairly easy. Or so you would think. However, nothing is working properly. Requirement: using Maven, compile a Java 1.6 project using the AspectJ compiler. Note: our code cannot compile with javac. That is, it fails compilation if aspects are not woven in (because we have aspects that soften exceptions). Questions (based on the failed attempts below): either 1) How do you get Maven to run the aspectj:compile goal directly, without ever running compile:compile? Or 2) How do you specify a custom compilerId that points to your own ajc compiler? Thanks for any and all suggestions. These are the things I've tried that have led to my problem/questions:

    Attempt 1 (fail): Specify aspectj as the compiler for the maven-compiler-plugin:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
          <compilerId>aspectj</compilerId>
        </configuration>
        <dependencies>
          <dependency>
            <groupId>org.codehaus.plexus</groupId>
            <artifactId>plexus-compiler-aspectj</artifactId>
            <version>1.8</version>
          </dependency>
        </dependencies>
      </plugin>

    This fails with the error: org.codehaus.plexus.compiler.CompilerException: The source version was not recognized: 1.6. No matter what version of the plexus compiler I use (1.8, 1.6, 1.3, etc), this doesn't work. I actually read through the source code and found that this compiler does not like source code above Java 1.5.

    Attempt 2 (fail): Use the aspectj-maven-plugin attached to the compile and test-compile goals:

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>aspectj-maven-plugin</artifactId>
        <version>1.3</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>test-compile</goal>
            </goals>
          </execution>
        </executions>
      </plugin>

    This fails when running either mvn clean test-compile or mvn clean compile, because it attempts to execute compile:compile before running aspectj:compile. As noted above, our code doesn't compile with javac--the aspects are required. So mvn would need to skip the compile:compile goal altogether and run only aspectj:compile.

    Attempt 3 (works but unacceptable): Use the same configuration as above but instead run mvn clean aspectj:compile. This works, in that it builds successfully, but it's unacceptable in that we need to be able to run the compile goal and the test-compile goal directly (m2eclipse auto-build depends on those goals). Moreover, running it this way would require that we spell out every goal we want along the way (for instance, we need resources distributed, tests to be run, and test resources deployed, etc.).

