Search Results

Search found 10078 results on 404 pages for 'smoke testing'.

Page 48 of 404

  • Access a PLESK website before propagation?

    - by RCNeil
    My web host uses Plesk, and I want to know if there is any way to access and view a website (with PHP and other processes functional) before the domain name has propagated. I have found countless forums on this, but they are all pretty old (circa 01-04) and involve either tricking your localhost or SSH commands, and some even result in terrible security risks. I would like to access a web page directory through a browser and see its contents, with the PHP processes carried out, before I propagate its potential domain name. People claim this is pointless, but during a site migration why on earth would you not test a site before propagating it? I'm looking for something similar to what cPanel offers, i.e. http://IP.ADDRESS./~mydomain.com The only solution I could think of is storing the site in a new directory of an already functional site, then setting up databases and testing the site once it's complete. Once it is tested and working, I should easily be able to migrate the files to the "new" domain name's root directory, set up new databases, and then propagate the domain name. I can't believe that Plesk V10+ still does not have a site preview method that includes PHP, JS, and Flash.
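
    A minimal sketch of the usual workaround, assuming Python with the requests package and placeholder IP/domain values: fetch the site by the server's IP address while supplying the new domain in the Host header, so the name-based virtual host (and therefore PHP) is exercised before DNS propagates. Pointing the domain at the server IP in a local hosts file achieves the same thing for full in-browser testing.

        # Rough sketch (not Plesk-specific): fetch a name-based virtual host by IP
        # before DNS propagates, by supplying the Host header yourself. The IP and
        # domain below are placeholders. PHP still runs server-side because the web
        # server picks the vhost from the Host header, not from DNS.
        import requests

        SERVER_IP = "203.0.113.10"      # hypothetical server IP
        DOMAIN = "mydomain.example"     # hypothetical domain that has not propagated yet

        response = requests.get(
            f"http://{SERVER_IP}/",
            headers={"Host": DOMAIN},   # route the request to the new vhost
            timeout=10,
        )
        print(response.status_code)
        print(response.text[:500])      # first part of the rendered (PHP-processed) page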

    Read the article

  • "TDD is about design, not verification"; concretely, what does that mean?

    - by sigo
    I've been wondering about this. What exactly do we mean by design and verification? Should I just apply TDD to make sure my code is SOLID, and not check whether its external behaviour is correct? Should I use BDD for verifying that the behaviour is correct? TDD code katas also confuse me: to me they look more like verification than design, so shouldn't they be called BDD katas instead of TDD katas? I reckon that, for example, the Uncle Bob bowling kata leads in the end to a simple and nice internal design, but I felt that most of the process centred more on verification than on design. Design seemed to be a side effect of testing the external behaviour incrementally; I didn't feel that we were focusing most of our effort on design so much as on verification. Yet normally we are told the contrary: in TDD, verification is the side effect and design is the main purpose. So my question is, what should I focus on exactly when I do TDD: SOLID, external API usability, or something else? And how can I do that without being focused on verification? What do you focus your energy on when you are practising TDD?

    Read the article

  • Please recommend the best tools to build a test plan management tool

    - by fzkl
    I have mostly worked on hardware testing in my professional career and would like to move onto the software development side. I thought working on a practically usable project would help motivate me and help me acquire some skills, so I have decided to build a test plan management tool for the QA team I work in (we use Excel sheets!). The tool should be browser-based and should support the following: there would be many test plans, each test plan having test sets, each test set having test cases, and each test case having instructions, attachments, pass/fail status marking, and bug info in case of failure. It should also have an export-to-Excel option. I have a visual picture of the tool I am looking to build, but I don't have enough experience to figure out where to start. My current programming skills are limited to C and shell programming, and I want to pick up Python. What tools (programming language, database, and anything else) would you recommend to get this done? Also, what are the key concepts in the recommended programming language that I should focus on to build a browser-based tool like this?
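
    As a starting point, here is a minimal sketch of that hierarchy in Python (class and field names are invented for illustration); a browser-based version would typically wrap a model like this with a web framework such as Django or Flask and a relational database.

        # Minimal sketch of the test-plan hierarchy: plan -> set -> case.
        # Names and fields are illustrative only.
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class TestCase:
            title: str
            instructions: str
            attachments: List[str] = field(default_factory=list)
            status: str = "Not Run"          # e.g. "Pass" or "Fail"
            bug_info: Optional[str] = None   # filled in when status == "Fail"

        @dataclass
        class TestSet:
            name: str
            cases: List[TestCase] = field(default_factory=list)

        @dataclass
        class TestPlan:
            name: str
            sets: List[TestSet] = field(default_factory=list)

        # Example usage
        plan = TestPlan("Release 1.0")
        smoke = TestSet("Smoke")
        smoke.cases.append(TestCase("Login works", "Open the app and log in"))
        plan.sets.append(smoke)
        print(plan)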

    Read the article

  • How to implement isValid correctly?

    - by Songo
    I'm trying to provide a mechanism for validating my object, like this:

        class SomeObject
        {
            private $_inputString;
            private $_errors = array();

            public function __construct($inputString)
            {
                $this->_inputString = $inputString;
            }

            public function getErrors()
            {
                return $this->_errors;
            }

            public function isValid()
            {
                $isValid = preg_match("/Some regular expression here/", $this->_inputString);

                if ($isValid == 0) {
                    $this->_errors[] = 'Error was found in the input';
                }

                return $isValid == 1;
            }
        }

    Then when I'm testing my code, I'm doing it like this:

        $obj = new SomeObject('an INVALID input string');
        $isValid = $obj->isValid();
        $errors = $obj->getErrors();

        $this->assertFalse($isValid);
        $this->assertNotEmpty($errors);

    Now the test passes correctly, but I noticed a design problem here. What if the user calls $obj->getErrors() before calling $obj->isValid()? The test would fail, because the user has to validate the object before checking the errors resulting from validation. This way the user depends on a sequence of actions to get correct results, which I think is a bad thing because it exposes the internal behaviour of the class. How do I solve this problem? Should I tell the user explicitly to validate first? Where do I mention that? Should I change the way I validate? Is there a better solution for this? UPDATE: I'm still developing the class, so changes are easy; renaming functions and refactoring them is possible.
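
    One common refactoring for this kind of problem is to validate eagerly, so that getErrors() never depends on the caller having invoked isValid() first. A rough sketch of the idea, shown in Python rather than PHP for brevity (the class name, method names and placeholder pattern are illustrative only):

        import re

        class SomeObject:
            """Validates its input once, up front, so errors are always available."""

            PATTERN = re.compile(r"some regular expression here")  # placeholder pattern

            def __init__(self, input_string: str):
                self._input_string = input_string
                # Validation happens at construction time, so there is no required
                # call order between is_valid() and get_errors().
                self._errors = []
                if not self.PATTERN.search(input_string):
                    self._errors.append("Error was found in the input")

            def is_valid(self) -> bool:
                return not self._errors

            def get_errors(self) -> list:
                return list(self._errors)

        obj = SomeObject("an INVALID input string")
        print(obj.get_errors())   # safe to call in any order
        print(obj.is_valid())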

    Read the article

  • How to use lists in equivalence partitioning?

    - by KhDonen
    I have read that equivalence partitioning can typically be used for intervals or lists, i.e. I assume it can be used for every set of inputs. Anyway, if the requirement says that the allowed colors are (RED, BLUE, BLACK, GREEN), I cannot treat them like a list, right? I mean, testing one of them would not be enough, because the developers most likely used a switch-case, so it is not a real "set" where one value can represent the others. So what is meant by lists here? Another thing that is not clear to me: I do not think it is always possible to do the initial partitioning first and then design the test cases. What about checking the intersection of two lines y = mx + c (two inputs)?

      1) The lines are parallel: m1 = m2, but c1 must be different from c2.
      2) The lines are intersecting: m1 must be different from m2.
      3) The lines are coincident: they are the same line.

    How can I use partitioning here? This is actually taken from a book, and it says that these sets are equivalence classes.
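
    To make the partitioning concrete: the two-line example has three equivalence classes over the input pair ((m1, c1), (m2, c2)), and one representative test per class is usually considered enough. A small sketch in Python, with made-up representative values:

        def classify(m1, c1, m2, c2):
            """Return the equivalence class of the line pair y = m*x + c."""
            if m1 == m2 and c1 == c2:
                return "coincident"      # same slope, same intercept
            if m1 == m2:
                return "parallel"        # same slope, different intercept
            return "intersecting"        # different slopes

        # One representative input per equivalence class:
        assert classify(2, 1, 2, 5) == "parallel"       # m1 == m2, c1 != c2
        assert classify(2, 1, 3, 1) == "intersecting"   # m1 != m2
        assert classify(2, 1, 2, 1) == "coincident"     # identical lines
        print("one representative per class is enough under this partitioning")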

    Read the article

  • Tips for Making this Code Testable [migrated]

    - by Jesse Bunch
    So I'm writing an abstraction layer that wraps a telephony RESTful service for sending text messages and making phone calls. I should build this in such a way that the low-level provider, in this case Twilio, can be easily swapped without having to re-code the higher-level interactions. I'm using a package that is pre-built for Twilio, and so I'm thinking that I need to create a wrapper interface to standardize the interaction between the Twilio service package and my application. Let us pretend that I cannot modify this pre-built package. Here is what I have so far (in PHP):

        <?php

        namespace Telephony;

        class Provider_Twilio implements Provider_Interface
        {
            public function send_sms(Provider_Request_SMS $request)
            {
                if (!$request->is_valid())
                    throw new Provider_Exception_InvalidRequest();

                $sms = \Twilio\Twilio::request('SmsMessage');

                $response = $sms->create(array(
                    'To'   => $request->to,
                    'From' => $request->from,
                    'Body' => $request->body
                ));

                if ($this->_did_request_fail($response)) {
                    throw new Provider_Exception_RequestFailed($response->message);
                }

                $response = new Provider_Response_SMS(TRUE);
                return $response;
            }

            private function _did_request_fail($api_response)
            {
                return isset($api_response->status);
            }
        }

    So the idea is that I can write another file like this for any other telephony service, provided that it implements Provider_Interface, making them swappable. Here are my questions: First off, do you think this is a good design? How could it be improved? Second, I'm having a hard time testing this because I need to mock out the Twilio package so that I'm not actually depending on Twilio's API for my tests to pass or fail. Do you see any strategy for mocking this out? Thanks in advance for any advice!
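
    One way to make the wrapper testable is to inject the object that talks to Twilio instead of reaching for the static \Twilio\Twilio::request() call inside the provider; the tests can then hand in a fake. A rough sketch of that shape, in Python rather than PHP for brevity, with all names invented for illustration:

        # Sketch of the dependency-injection idea. Instead of constructing the
        # Twilio-backed gateway inside the provider, inject a small gateway object
        # that tests can replace with a fake.

        class FakeSmsGateway:
            """Stand-in for the real Twilio-backed gateway, used in unit tests."""
            def __init__(self, fail=False):
                self.fail = fail
                self.sent = []

            def create(self, to, from_, body):
                self.sent.append((to, from_, body))
                # mimic the shape the provider checks: a 'status' attribute means failure
                return type("Response", (), {"status": "failed"})() if self.fail else object()

        class TwilioProvider:
            def __init__(self, gateway):          # gateway injected, not constructed here
                self._gateway = gateway

            def send_sms(self, to, from_, body):
                response = self._gateway.create(to, from_, body)
                if hasattr(response, "status"):
                    raise RuntimeError("request failed")
                return True

        # In a test:
        gateway = FakeSmsGateway()
        provider = TwilioProvider(gateway)
        assert provider.send_sms("+15551234567", "+15557654321", "hello")
        assert gateway.sent == [("+15551234567", "+15557654321", "hello")]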

    Read the article

  • How to test chrome extensions?

    - by swampsjohn
    Is there a good way to do this? I'm writing an extension that interacts with a website as a content script and saves data using localStorage. Are there any tools, frameworks, etc. that I can use to test this behavior? I realize there are some generic tools for testing JavaScript, but are those sufficiently powerful to test an extension? Unit testing is most important, but I'm also interested in other types of testing (such as integration testing).
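
    One possible integration-testing approach, assuming the selenium package and a matching chromedriver, with placeholder paths and keys: launch Chrome with the unpacked extension loaded, visit the target site, and assert on what the content script wrote to localStorage.

        # Sketch only: the extension path, URL and storage key are placeholders.
        from selenium import webdriver
        from selenium.webdriver.chrome.options import Options

        options = Options()
        options.add_argument("--load-extension=/path/to/unpacked/extension")

        driver = webdriver.Chrome(options=options)
        try:
            driver.get("https://example.com/")
            saved = driver.execute_script(
                "return window.localStorage.getItem('myExtensionKey');"
            )
            assert saved is not None, "content script did not write the expected key"
        finally:
            driver.quit()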

    Read the article

  • Testing HTTP status codes

    - by amusero
    I'm running an Apache Tomcat server. While doing some security testing, I noticed that my server returns a 200 HTTP status code along with the default error page when I try to access a non-existent resource, instead of returning a 404 status code and sending me to the default error page. I suspect this is not the only case where the status code is wrong. Can anyone suggest a process for checking the most common HTTP status codes?
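
    A quick way to check a single URL by hand is curl -I; for something repeatable, a minimal sketch in Python assuming the requests package (the base URL, paths and expected codes are examples):

        # Compare actual vs. expected status codes for a handful of paths.
        import requests

        BASE = "http://localhost:8080"
        expectations = {
            "/": 200,
            "/this-page-does-not-exist": 404,   # should NOT come back as 200
            "/protected/": 401,
        }

        for path, expected in expectations.items():
            resp = requests.get(BASE + path, allow_redirects=False, timeout=10)
            mark = "OK  " if resp.status_code == expected else "FAIL"
            print(f"{mark} {path}: expected {expected}, got {resp.status_code}")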

    Read the article

  • Plugins stopped working in Chromium in Debian Testing

    - by Jan Hudec
    A short time ago, plugins stopped working in Chromium. None of kpartsplugin, mozplugger or flashplugin-nonfree seems to work, and none of them shows up on the chrome://plugins page (only "Chromoting Viewer" does). Was there recently any change that would require reconfiguration? If so, of what? I have Debian Testing (Jessie) amd64, recently updated, with chromium 35.0.1916.114-2, flashplugin-nonfree 1:3.4, kpartsplugin 20120605-1 and mozplugger 1.14.5-2.

    Read the article

  • Library to fake intermittent failures according to tester-defined policy?

    - by crosstalk
    I'm looking for a library that I can use to help mock a program component that works only intermittently - usually it works fine, but sometimes it fails. For example, suppose I need to read data from a file, and my program has to avoid crashing or hanging when a read fails due to a disk head crash. I'd like to model that by having a mock data reader function that returns mock data 90% of the time, but hangs or returns garbage otherwise. Or, if I'm stress-testing my full program, I could turn on debugging code in my real data reader module to make it return real data 90% of the time and hang otherwise. Now, obviously, in this particular example I could just code up my mock manually to test against a random() routine. However, I was looking for a system that allows implementing any failure policy I want, including:

      - Fail randomly 10% of the time
      - Succeed 10 times, fail 4 times, repeat
      - Fail semi-randomly, such that one failure tends to be followed by a burst of more failures
      - Any policy the tester wants to define

    Furthermore, I'd like to be able to change the failure policy at runtime, using either code internal to the program under test, or external knobs or switches (though the latter can be implemented with the former). In pig-Java, I'd envision a FailureFaker interface like so:

        interface FailureFaker {
            /** Return true if and only if the mocked operation succeeded.
                Implementors should override this method with versions
                consistent with their failure policy. */
            public boolean attempt();
        }

    Each failure policy would then be a class implementing FailureFaker; for example, there would be a PatternFailureFaker that would succeed N times, then fail M times, then repeat, and an AlwaysFailFailureFaker that I'd use temporarily when I need to simulate, say, someone removing the external hard drive my data was on. The policy could then be used (and changed) in my mock object code like so:

        class MyMockComponent {
            FailureFaker faker;

            public void doSomething() {
                if (faker.attempt()) {
                    // ...
                } else {
                    throw new RuntimeException();
                }
            }

            void setFailurePolicy(FailureFaker policy) {
                this.faker = policy;
            }
        }

    Now, this seems like something that would be part of a mocking library, so I wouldn't be surprised if it's been done before. (In fact, I got the idea from Steve Maguire's Writing Solid Code, where he discusses this exact idea on pages 228-231, saying that such facilities were common in Microsoft code of that early-90's era.) However, I'm only familiar with EasyMock and jMockit for Java, and neither, AFAIK, has this function or anything similar under a different syntax. Hence, the question: Do such libraries as I've described above exist? If they do, where have you found them useful? If you haven't found them useful, why not?
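
    For what it's worth, the policy objects described above are small enough to hand-roll even without library support. A rough sketch of the idea in Python (the question's pseudo-code is Java, but the shape is the same; all names are illustrative):

        import random

        class RandomFailureFaker:
            """Fail with a fixed probability, e.g. 10% of the time."""
            def __init__(self, failure_rate=0.10):
                self.failure_rate = failure_rate

            def attempt(self) -> bool:
                return random.random() >= self.failure_rate

        class PatternFailureFaker:
            """Succeed N times, then fail M times, then repeat."""
            def __init__(self, succeed=10, fail=4):
                self.pattern = [True] * succeed + [False] * fail
                self.i = 0

            def attempt(self) -> bool:
                result = self.pattern[self.i % len(self.pattern)]
                self.i += 1
                return result

        class MockReader:
            """Mock component whose failure policy can be swapped at runtime."""
            def __init__(self, faker):
                self.faker = faker

            def read(self):
                if self.faker.attempt():
                    return "mock data"
                raise IOError("simulated disk failure")

        reader = MockReader(PatternFailureFaker(succeed=3, fail=1))
        for _ in range(8):
            try:
                print(reader.read())
            except IOError as e:
                print(e)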

    Read the article

  • Is it OK to repeat code for unit tests?

    - by Pete
    I wrote some sorting algorithms for a class assignment and I also wrote a few tests to make sure the algorithms were implemented correctly. My tests are only like 10 lines long and there are 3 of them but only 1 line changes between the 3 so there is a lot of repeated code. Is it better to refactor this code into another method that is then called from each test? Wouldn't I then need to write another test to test the refactoring? Some of the variables can even be moved up to the class level. Should testing classes and methods follow the same rules as regular classes/methods? Here's an example:

        [TestMethod]
        public void MergeSortAssertArrayIsSorted()
        {
            int[] a = new int[1000];
            Random rand = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < a.Length; i++)
            {
                a[i] = rand.Next(Int16.MaxValue);
            }
            int[] b = new int[1000];
            a.CopyTo(b, 0);
            List<int> temp = b.ToList();
            temp.Sort();
            b = temp.ToArray();

            MergeSort merge = new MergeSort();
            merge.mergeSort(a, 0, a.Length - 1);

            CollectionAssert.AreEqual(a, b);
        }

        [TestMethod]
        public void InsertionSortAssertArrayIsSorted()
        {
            int[] a = new int[1000];
            Random rand = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < a.Length; i++)
            {
                a[i] = rand.Next(Int16.MaxValue);
            }
            int[] b = new int[1000];
            a.CopyTo(b, 0);
            List<int> temp = b.ToList();
            temp.Sort();
            b = temp.ToArray();

            InsertionSort merge = new InsertionSort();
            merge.insertionSort(a);

            CollectionAssert.AreEqual(a, b);
        }
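
    One common way to remove the duplication without needing "a test for the refactoring" is to pull the shared arrange/assert code into a helper that stays trivial and parameterize the test over the sort implementation. A sketch of that idea using Python and pytest rather than MSTest, with placeholder sort functions standing in for the real implementations:

        # The duplicated arrange/assert code lives in one helper, and each
        # algorithm becomes a parameter. merge_sort and insertion_sort are
        # stand-ins for the implementations under test.
        import random
        import pytest

        def merge_sort(xs):      # placeholder for the real implementation
            return sorted(xs)

        def insertion_sort(xs):  # placeholder for the real implementation
            return sorted(xs)

        def assert_sorts_correctly(sort_fn):
            data = [random.randrange(32767) for _ in range(1000)]
            assert sort_fn(list(data)) == sorted(data)

        @pytest.mark.parametrize("sort_fn", [merge_sort, insertion_sort])
        def test_sort_produces_sorted_array(sort_fn):
            assert_sorts_correctly(sort_fn)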

    Read the article

  • Is it reasonable to insist on reproducing every defect before diagnosing and fixing it?

    - by amphibient
    I work for a software product company. We have large enterprise customers who implement our product, and we provide support to them. For example, if there is a defect, we provide patches, etc. In other words, it is a fairly typical setup. Recently, a ticket was issued and assigned to me regarding an exception that a customer found in a log file and that has to do with concurrent database access in a clustered implementation of our product. So the specific configuration of this customer may well be critical to the occurrence of this bug. All we got from the customer was their log file. The approach I proposed to my team was to attempt to reproduce the bug in a configuration similar to the customer's and obtain a comparable log. However, they disagree with my approach, saying that I should not need to reproduce the bug (as that is overly time-consuming and would require simulating a server cluster on VMs) and that I should simply "follow the code" to see where the thread- and/or transaction-unsafe code is, making the change against a simple local development setup, which is not a clustered implementation like the environment the bug originated in. To me, working from an abstract blueprint (program code) rather than a concrete, tangible, visible manifestation (runtime reproduction) seems like a difficult way to work (for a person of normal cognitive abilities and attention span), so I wanted to ask a general question: Is it reasonable to insist on reproducing every defect and debugging it before diagnosing and fixing it? Or: if I am a senior developer, should I be able to read (multithreaded) code and create a mental picture of what it does in all use-case scenarios, rather than needing to run the application, test different use-case scenarios hands-on, and step through the code line by line? Or am I a poor developer for demanding that kind of work environment? Is debugging for sissies? In my opinion, any fix submitted in response to an incident ticket should be tested in an environment simulated to be as close to the original environment as possible. How else can you know that it will really remedy the issue? It is like releasing a new model of a vehicle without crash testing it with a dummy to demonstrate that the airbags indeed work. Last but not least, if you agree with me: how should I talk to my team to convince them that my approach is reasonable, conservative and more bulletproof?

    Read the article

  • Speed up MySQL for inserts (for testing purposes)

    - by Alex N
    I have a bit of software that needs to do a lot of INSERTs. In the production environment there will be some serious tweaking and testing and so on, but right now, when I just need to test it, I'd like to speed up the inserts as much as possible. Hence my question: is there a way to tweak MySQL so that it doesn't do much disk I/O but keeps everything in RAM and syncs to disk only rarely (say, once every n seconds)?
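
    On the server side, the usual knobs for this are innodb_flush_log_at_trx_commit = 2 (or 0) and sync_binlog = 0, which relax the per-commit disk syncs, and a larger innodb_buffer_pool_size, which keeps more of the working set in RAM. On the client side, batching rows and committing once also avoids a sync per row. A minimal sketch of the client half, assuming the mysql-connector-python package, with placeholder credentials and table:

        # Batch rows and commit once, instead of autocommitting every INSERT.
        # Host, credentials and table are placeholders.
        import mysql.connector

        conn = mysql.connector.connect(
            host="127.0.0.1", user="test", password="test", database="testdb"
        )
        conn.autocommit = False                 # one flush per batch, not per row
        cur = conn.cursor()

        rows = [(i, f"value-{i}") for i in range(10000)]
        cur.executemany("INSERT INTO items (id, name) VALUES (%s, %s)", rows)
        conn.commit()                           # single commit for the whole batch

        cur.close()
        conn.close()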

    Read the article

  • Using NSpec at various architectural layers

    - by nono
    Having read the quick start at nspec.org, I realized that NSpec might be a useful tool in a scenario which was becoming a bit cumbersome with NUnit alone. I'm adding OAuth (or rather, DotNetOpenAuth) to a website and quickly made a mess of writing test methods such as

        [Test]
        public void UserIsLoggedInLocallyPriorToInvokingExternalLoginAndExternalLoginSucceedsAndExternalProviderIdIsNotAlreadyAssociatedWithUserAccount()
        {
            ...
        }

    ... and I wound up with maybe a dozen permutations of this theme, for the user already being logged in locally or not, the external login succeeding or failing, etc. Not only were the method names unwieldy, but every test needed a setup that contained parts in common with a different set of other tests. I realized that NSpec's incremental setup capabilities would work great for this, and for a while I was trucking along wonderfully, with code like

        act = () => { actionResult = controller.ExternalLoginCallback(returnUrl); };

        context["The user is already logged in"] = () =>
        {
            before = () => identity.Setup(x => x.IsAuthenticated).Returns(true);

            context["The external login succeeds"] = () =>
            {
                before = () => oauth.Setup(x => x.VerifyAuthentication(It.IsAny<string>()))
                    .Returns(new AuthenticationResult(true, providerName, "provideruserid", "username", new Dictionary<string, string>()));

                context["External login already exists for current user"] = () =>
                {
                    before = () => authService.Setup(x => x.ExternalLoginExistsForUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                    it["Should add 'login successful' alert"] = () =>
                    {
                        var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                        alerts[0].Message.should_be_same("Login successful");
                        alerts[0].AlertType.should_be(AlertType.Success);
                    };

                    it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
                };

                context["External login already exists for another user"] = () =>
                {
                    before = () => authService.Setup(x => x.ExternalLoginExistsForAnyOtherUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                    it["Adds an error alert"] = () =>
                    {
                        var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                        alerts[0].Message.should_be_same("The external login you requested is already associated with a different user account");
                        alerts[0].AlertType.should_be(AlertType.Error);
                    };

                    it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
                };
            };
        };

    This approach seemed to work magnificently until I prepared to write test code for my ApplicationServices layer, to which I delegate viewmodel manipulation from my MVC controllers, and which coordinates the operations of the lower data repository layer:

        public void CreateUserAccountFromExternalLogin(RegisterExternalLoginModel model)
        {
            throw new NotImplementedException();
        }

        public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId)
        {
            throw new NotImplementedException();
        }

        public string GetLocalUserName(string provider, string providerUserId)
        {
            throw new NotImplementedException();
        }

    I have no idea what in the world to name the test class or the test methods, or even whether I should fold the testing for this layer into the test class from my large code snippet above, so that a single feature or user action could be tested without regard to architectural layering.
    I can't find any tutorials or blog posts which cover more than simple examples, so I would appreciate any recommendations or pointing in the right direction. I would even welcome "your question is invalid"-type answers, as long as some explanation is provided.

    Read the article

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new, small virtual environment for development and testing, with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small (up to 5) dev team. What are the pros and cons of the different approaches, e.g. splitting the servers across different machines versus packing everything onto one machine per person? In your experience, what are the best practices I should follow in terms of architecture and the placement of the various systems/servers: what to share and what to split per person? I would like to achieve some flexibility for the dev and testing process (so team members would not be stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items and revert changes when something breaks. It's not supposed to be an everyday development working environment; it's more a tier 2 developer testing environment, and not yet an integration or QA testing environment with a formal change process. IMO the two borderline solutions are: creating one all-inclusive machine for each dev team member, giving them the freedom to manage it, or creating a shared environment managed by one person with a somewhat formalized change request process. What golden mean would you recommend, and why?

    Read the article

  • WPF: OnRender and Hit Testing

    - by stefan.at.wpf
    Hello, when using OnRender to draw something on the screen, is there any way to perform hit testing on the drawn graphics? Sample code:

        protected override void OnRender(System.Windows.Media.DrawingContext drawingContext)
        {
            base.OnRender(drawingContext);
            drawingContext.DrawRectangle(Brushes.Black, null, new Rect(50, 50, 100, 100));
        }

    Obviously one has no reference to the drawn rectangle, which would be necessary to perform hit testing, or am I wrong about this? I know I can use DrawingVisual; I'm just curious whether my understanding is correct that when you use OnRender to draw something, you can't perform any hit testing on the drawn content.

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) pointed out 24 best practices for scientific programming. It's worth having a look. I would like to hear opinions about these points from experienced programmers in scientific data analysis. Do you think this advice is helpful and practical, or is it good only in an ideal world?

    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745

    Box 1. Summary of Best Practices

      1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
      2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
      3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
      4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
      5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
      6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
      7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
      8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.

    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs, both in my code and in my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I decided I couldn't put up with this mess any longer; some action had to be taken. Without a sophisticated guide like the article above, I adopted a "defensive style" of programming since then. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validations or assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a, b, c) have not been introduced. Actually, as the authors admit, because all of these practices take some amount of time and effort to introduce, it may be generally hard to persuade reluctant collaborators to comply with them. I'm asking for your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixing may be, or should be, faster than before, but I can't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point at which I should stop, in terms of productivity?

    I've already deployed: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,b.
    I'm about to have a go at: 5b,c.
    Not yet: 2b,c, 4a, 7c, 8a,b,c.
    (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it helps my work with MATLAB?)

    Read the article

  • Books or Articles on Using NUnit to Test Entire Features

    - by INTPnerd
    Are there any books or articles that show you how to use NUnit to test entire features of a program? Is there a name for this type of testing? It is different from the typical use of NUnit for unit testing, where you test individual classes, and similar to acceptance testing, except that it is written by the developer to confirm that the program does what the developer understands the customer to want. I don't need it to be readable by non-programmers or to produce a readable specification for non-programmers. The problem I am having is keeping this feature-testing code maintainable. I need help organizing my feature-testing code, and help organizing the program code so that it can be driven in this way; I am having a hard time issuing commands to the program while still keeping a good code design.

    Read the article

  • Do I have to create a static library to test my application?

    - by Christopher Gateley
    I'm just getting started with TDD and am curious as to what approaches others take to run their tests. For reference, I am using the Google testing framework, but I believe the question is applicable to most other testing frameworks and to languages other than C/C++. My general approach so far has been to do one of three things:

      1) Write the majority of the application in a static library, then create two executables. One executable is the application itself, while the other is the test runner with all of the tests. Both link to the static library.
      2) Embed the testing code directly into the application itself, and enable or disable the testing code using compiler flags. This is probably the best approach I've used so far, but it clutters up the code a bit.
      3) Embed the testing code directly into the application itself and, given certain command-line switches, either run the application itself or run the tests embedded in the application.

    None of these solutions is particularly elegant... How do you do it?

    Read the article
