Search Results

Search found 6281 results on 252 pages for 'automated tests'.

Page 8/252 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Automated BizTalk documentation

    - by Kevin Shyr
    Yay — this should help us get through an old legacy app with no documentation; at least it's some help. http://biztalkdocumenter.codeplex.com/

    Read the article

  • Collision detection doesn't work for automated elements in XNA 4.0

    - by NDraskovic
    I have a really weird problem. I made a 3D simulator of an "assembly line" as part of a college project. Among other things, it needs to detect when a box object passes in front of a sensor. I tried to solve this by making a model of a laser and checking whether the box collides with it. I had some problems with the BoundingSpheres of the models' meshes, so I simply create a BoundingSphere and place it in the same position as the model. I organized them into a list of BoundingSpheres called "spheres", one per model. All models except the box are static, so the box object has its own BoundingSphere (not a member of the "spheres" list). I also implemented a picking algorithm that I use to start the movement. This is the code that checks for collision:

        if (spheres.Count != 0)
        {
            for (int i = 1; i < spheres.Count; i++)
            {
                if (spheres[i].Intersects(PickingRay) != null &&
                    Microsoft.Xna.Framework.Input.ButtonState.Pressed == Mouse.GetState().LeftButton)
                {
                    start = true;
                    break;
                }
                if (BoxSphere.Intersects(spheres[i]) && start)
                {
                    // MoveBox receives the direction (0) and a bool that dictates
                    // whether the box should move (false means stop)
                    MoveBox(0, false);
                    start = false;
                    break;
                }
                if (start /*&& Microsoft.Xna.Framework.Input.ButtonState.Pressed == Mouse.GetState().LeftButton*/
                    && !BoxSphere.Intersects(spheres[i]))
                {
                    MoveBox(0, true);
                    break;
                }
            }
        }

    The problem is this: when I use the mouse to move the box (the commented part in the third if condition), the collision works fine (another part of the code, which I removed to simplify my question, calculates the "address" of the box, and that number tells me the collision is correct). But when I comment it out (like in this example), the box just passes through the lasers and does not detect the collision (the idea is that the box stops at each laser and the user passes it forward by clicking the appropriate "switch"). Can you see the problem? Please help, and if you need more information I will try to provide it. Thanks
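
    A likely culprit, though not confirmed in the thread: the break in the third branch exits the loop at the first laser the box is not touching, so later lasers are never examined once the mouse check is removed. A minimal sketch of a restructured check, assuming the question's own fields (spheres, BoxSphere, PickingRay, start, MoveBox) and the usual Microsoft.Xna.Framework.Input usings:

        // Sketch only: scan every laser first, then decide the movement once,
        // instead of breaking out at the first non-intersecting sphere.
        bool boxOnLaser = false;
        for (int i = 1; i < spheres.Count; i++)   // index 0 skipped, as in the original
        {
            // A click on any sensor sphere starts the box moving.
            if (spheres[i].Intersects(PickingRay) != null &&
                Mouse.GetState().LeftButton == ButtonState.Pressed)
            {
                start = true;
            }
            if (BoxSphere.Intersects(spheres[i]))
            {
                boxOnLaser = true;                 // remember the hit; keep scanning
            }
        }
        if (start && boxOnLaser)
        {
            MoveBox(0, false);                     // stop at the laser
            start = false;
        }
        else if (start)
        {
            MoveBox(0, true);                      // path clear: keep moving
        }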

    Read the article

  • Cross-Platform Automated Mobile Application UI Testing

    - by thetaspark
    My dissertation is about developing a tool for testing mobile applications through the GUI. The primary device is Android, but it should also support BlackBerry, iOS, etc. Using Google I found some frameworks, e.g. MonkeyTalk. I am not so sure, but what I want to develop might be a mini MonkeyTalk with minimal functionality, focused on the GUI of the application(s) under test. My questions: Which framework(s)? I am good with Java. Can I use the xUnit family for this, and how? What should I be reading/studying? Any suggestions, links to tutorials, documentation, how-tos, etc. would be very helpful. Thanks in advance.

    Read the article

  • How to get started with Automated Installer in Oracle Solaris 11

    - by unixman
    Hey all, I am pleased to make this year's Oracle OpenWorld hands-on lab exercises available for your review and use. These steps demonstrate Oracle Solaris 11 deployment technologies and are written to be usable and applicable well beyond the one-hour, on-a-laptop time slot offered at OpenWorld. If you're at OpenWorld and would like to join the session in person, it is at 3:30 in the Yerba Buena 14 conference room at the Marriott Marquis. Please let me know what you think of these instructions!

    Read the article

  • Coping with build order requirements in automated builds

    - by Derecho
    I have three Scala packages being built as separate sbt projects in separate repos, with a dependency graph like this:

        M---->D
        ^     ^
        |     |
        +--+--+
           ^
           |
           S

    S is a service. M is a set of message classes shared between S and another service. D is a DAL used by S and the other service, and some of its model appears in the shared messages. If I make a breaking change to all three and push them up to my Git repo, a build of S will be kicked off in Jenkins. The build will only succeed if, when S is pushed, M and D have already been pushed; otherwise, Jenkins will find it doesn't have the right dependent package versions available. Even pushing them simultaneously wouldn't be enough — the dependencies would have to be built and published before the dependent job even started. Making the jobs dependent in Jenkins isn't enough either, because that would just cause the previous version to be built, resulting in an artifact that doesn't have the needed version. Is there a way to set things up so that I don't have to remember to push things in the right order? The only way I can see it working is if a build could go into a pending state when its dependencies aren't available yet. I feel like there's a simple solution I'm missing. Surely people deal with this a lot?

    Read the article

  • On The Question Of Automated Website Testing

    Almost all webmasters (or at least quite a lot of them) have heard about the importance of testing a website before it goes into production. Having developed a website or a web application, most authors want to publish it immediately and see how people like it. If they skip prior testing, the project may turn out to be unprepared for real Internet traffic and perform terribly.

    Read the article

  • iptables firewall to protect against automated entries

    - by Kenyana
    I am getting an unusually large number of calls on my app. I have implemented a CSRF check over Ajax and it's working, but I'm still getting many calls. My guess is that someone has a script that is 'logged in' and making all these calls. Could someone please share a good iptables script for blocking IPs that make 10 calls to /controller/action in a second? I am using:

        /sbin/iptables -A INPUT -p tcp --syn --dport $port -m connlimit --connlimit-above N -j REJECT --reject-with tcp-reset

    To save the changes (see the iptables-save man page; the following command is specific to Red Hat and friends):

        service iptables save

    That is from cyberciti.

    Read the article

  • JUnit: splitting integration tests and unit tests

    - by jeff porter
    Hello all, I've inherited a load of JUnit tests, but these tests (apart from most not working) are a mixture of actual unit tests and integration tests (requiring external systems, a db, etc.). So I'm trying to think of a way to separate them out, so that I can run the unit tests nice and quickly and the integration tests after that. The options are:

    1. Split them into separate directories.
    2. Move to JUnit 4 and annotate the classes to separate them.
    3. Use a file naming convention to tell what a class is, i.e. AdapterATest and AdapterAIntegrationTest.

    Option 3 has the issue that Eclipse offers "Run all tests in the selected project/package or folder", which would make it very hard to run just the integration tests. Option 2 runs the risk that developers might start writing integration tests in unit test classes and it just gets messy. Option 1 seems like the neatest solution, but my gut says there must be a better one out there. So that is my question: how do you break apart integration tests and proper unit tests?

    Read the article

  • Doing unit and integration tests with the Web API HttpClient

    - by cibrax
    One of the nice things about the new HttpClient in System.Net.Http is the support for mocking responses or handling requests in an HTTP server hosted in-memory. While the first option is useful for scenarios in which we want to test our client code in isolation (unit tests, for example), the second one enables more complete integration-testing scenarios that can also include other components in the stack, such as model binders or message handlers. The HttpClient can receive an HttpMessageHandler as an argument in one of its constructors.

        public class HttpClient : HttpMessageInvoker
        {
            public HttpClient();
            public HttpClient(HttpMessageHandler handler);
            public HttpClient(HttpMessageHandler handler, bool disposeHandler);
        }

    For the first scenario, you can create a new HttpMessageHandler that fakes the response, which you can use in your unit test. The only requirement is that you somehow inject an HttpClient with this custom handler into the client code.

        public class FakeHttpMessageHandler : HttpMessageHandler
        {
            HttpResponseMessage response;

            public FakeHttpMessageHandler(HttpResponseMessage response)
            {
                this.response = response;
            }

            protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
                System.Threading.CancellationToken cancellationToken)
            {
                var tcs = new TaskCompletionSource<HttpResponseMessage>();
                tcs.SetResult(response);
                return tcs.Task;
            }
        }

    In a unit test, you can do something like this:

        var fakeResponse = new HttpResponseMessage();
        var fakeHandler = new FakeHttpMessageHandler(fakeResponse);
        var httpClient = new HttpClient(fakeHandler);
        var customerService = new CustomerService(httpClient);
        // Do something
        // Asserts

    CustomerService in this case is the class under test, and the one that receives an HttpClient initialized with our fake handler. For the second scenario, integration tests, there is an in-memory host, System.Web.Http.HttpServer, that also derives from HttpMessageHandler and that you can use with an HttpClient instance in your test. This has been discussed already in these two great posts from Pedro and Filip.

    Read the article

  • Writing the tests for FluentPath

    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn't designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against a mock file system, which would enable me to verify that my code is doing the expected file system operations without having to manipulate the actual, physical file system: what we are testing here is FluentPath, not System.IO. Unfortunately, that...
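
    The kind of seam the post wishes System.IO offered can be sketched in a few lines — file operations behind an interface, so a test verifies recorded calls instead of touching the disk. The names below are illustrative, not FluentPath's actual API:

        // Illustrative sketch: a test double records the file operations
        // performed through the seam, so no physical file system is needed.
        using System.Collections.Generic;

        public interface IFileSystem
        {
            bool FileExists(string path);
            void CopyFile(string source, string destination);
        }

        public class RecordingFileSystem : IFileSystem
        {
            public List<string> Log { get; } = new List<string>();

            public bool FileExists(string path)
            {
                Log.Add("exists: " + path);
                return true;
            }

            public void CopyFile(string source, string destination)
            {
                Log.Add("copy: " + source + " -> " + destination);
            }
        }

        // A test would run the code under test against a RecordingFileSystem
        // and assert on the contents of Log.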

    Read the article

  • Jasmine BDD vs Integration Tests

    - by lfender6445
    Let's say I need to write a test for the front end: a user visits buysomething.com, saves something to their wishlist, and a saved-item count is updated — the DOM gets manipulated. In my heart I feel this is better suited to an integration test, but my team is currently using Jasmine to load fixtures and test such interactions. This leads to extremely brittle tests, as they rely on a static fixture instead of the actual markup. Are we misusing Jasmine here?

    Read the article

  • Learning a new language using broken unit tests

    - by Brian MacKay
    I was listening to .NET Rocks the other day where they mentioned, almost in passing, a really intriguing tool for learning new languages — I think they were specifically talking about F#. It's a solution you open up, and there is a bunch of broken unit tests; fixing them walks you through the steps of learning the language. I want to check it out, but I was driving in my car and I have no idea what the name of the project is or which .NET Rocks episode it was. Google hasn't helped much. Any idea?

    Read the article

  • mocha testing for the lazies, single key-press for all possible tests

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i   (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. Is there something that does this, or anything like it, automatically — i.e. reads all the files and asks me which file(s) I want to test? A GUI with checkboxes would be ultimate! But I'll take anything. I'm working in node.js.

    Read the article

  • Dynamic tests with mstest and T4

    - by Victor Hurdugaci
    If you have used mstest and NUnit, you might be aware that the former doesn't support dynamic, data-driven test cases. For example, the following scenario cannot be achieved with out-of-the-box mstest: given a dataset, create a distinct test case for each entry in it, using a predefined generic test case. The best result that can be achieved using mstest is a single test case that iterates through the dataset. There is one disadvantage: if the test fails for one entry in the dataset, the whole test case fails. So, in order to overcome the previously mentioned limitation, I decided to create a text template that will generate the test cases for me. As an example, I will write some tests for an integer-multiplication function that has 2 bugs in it: Read more >> [Cross post from victorhurdugaci.com]
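
    To make the limitation concrete, here is a hedged sketch of the two shapes. The names are illustrative, and the generated half is the kind of code such a T4 template could emit — not the author's actual template output:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class MultiplicationTests
        {
            static int Multiply(int a, int b) { return a * b; }

            // Out-of-the-box mstest: one test case iterating the dataset.
            // The first failing entry fails (and ends) the whole test case.
            [TestMethod]
            public void Multiply_AllEntries()
            {
                int[][] cases = { new[] { 2, 3, 6 }, new[] { 0, 5, 0 }, new[] { -2, 4, -8 } };
                foreach (var c in cases)
                {
                    Assert.AreEqual(c[2], Multiply(c[0], c[1]));
                }
            }

            // What generation buys you: one test method per dataset entry,
            // so each entry passes or fails independently.
            [TestMethod]
            public void Multiply_2_By_3() { Assert.AreEqual(6, Multiply(2, 3)); }

            [TestMethod]
            public void Multiply_0_By_5() { Assert.AreEqual(0, Multiply(0, 5)); }
        }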

    Read the article

  • Software, script or a tool to automate managing which tests to run

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i   (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain: I have to update the batch file every time a file is added or changed. Is there a software, script or tool that does this automatically, or makes it easier for me to do so? I basically need it to be aware of my test files and ask me which of them I want to test. A GUI with checkboxes would be ultimate! But I'll take anything. I'm working in node.js.

    Read the article

  • Tip #15: How To Debug Unit Tests During Maven Builds

    - by ByronNevins
    It must be really, really hard to step through unit tests in a debugger during a Maven build. Right? Wrong! Here is how I do it:

    1) Set up these environment variables:

        MAVEN_OPTS=-Xmx1024m -Xms256m -XX:MaxPermSize=512m
        MAVEN_OPTS_DEBUG=-Xmx1024m -Xms256m -XX:MaxPermSize=512m -Xdebug (no line break here!!) -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=9999
        MAVEN_OPTS_REG=-Xmx1024m -Xms256m -XX:MaxPermSize=512m

    2) Create 2 scripts or aliases like so:

        maveny.bat: set MAVEN_OPTS=%MAVEN_OPTS_DEBUG%
        mavenn.bat: set MAVEN_OPTS=%MAVEN_OPTS_REG%

    To debug, do this: run maveny.bat, run mvn install, then attach your debugger to port 9999 (set breakpoints, of course). When Maven gets to the unit test phase, it will hit your breakpoint and wait for you. When done debugging, simply run mavenn.bat.

    Notes: If it takes a while to get to the unit test phase of the build, then you don't really need to set the suspend=y flag. If you set the suspend=n flag then you can just leave it on — but only one Maven build can run at a time because of the debug-port conflict.

    Read the article

  • Google Analytics Content Experiments for non-simultaneous tests

    - by mnort9
    I really like how Google Analytics displays the results of content experiments. However, it seems the tool only works for simultaneous tests. I'd like to use it without implementing the page-variation code on my site. For example, I want to test copy on an ecommerce category page. The original variation would be the current page for the past 2,500 visits; after making the copy changes, the new variation would be for the next 2,500 visits. I realize I can simply record the metrics before and after each variation, but I'd like to take advantage of Google's presentation of the experiment. Is it possible to use Content Experiments in this way?

    Read the article

  • New WebKit tests

    I have updated the WebKit comparison table with data from Safari 5, Chrome 5, and Android 2.1. Improvements throughout! The top five WebKit browsers according to these tests are now: Chrome 5, Safari 5, Safari 4, Samsung WebKit (on bada), Android 2.1. Interesting findings: Chrome and Android now support localStorage (Safari already did). Chrome and Android now support geolocation; Safari does in theory, but it doesn't give the actual coordinates, making the whole exercise a bit pointless. Chrome and Android...

    Read the article

  • What do well-written, readable tests look like?

    - by Industrial
    Doing unit testing for the first time at a large scale, I find myself writing a lot of repetitive unit tests for my business logic. Sure, to create complete test suites I need to test all possibilities, but readability feels compromised doing what I do — as shown in the pseudocode below. What would a well-written, readable test suite look like?

        describe "UserEntity" ->
            it "valid name validates" ...
            it "invalid name doesnt validate" ...
            it "valid list of followers validate" ..

    Read the article

  • Automated testing of a website for IE7 javascript errors?

    - by Andreas Bonini
    This week I decided to add a new element to a JavaScript array by copying a similar one from a previous line; unfortunately, I forgot to remove the comma, so the end result was something like var a = [1, 2, 3,]. The code went live late Friday afternoon, just before everyone left for the weekend, and it completely broke everything in Internet Explorer 7 (and lower, I assume) since it's such a great browser. Since there was no one to read emails (weekend), it went unnoticed for quite a while, and I really don't want something like this to happen again (especially in my code). This is not the first weird IE7 problem; I was wondering if there is a way to automatically test key pages looking for JavaScript or CSS errors, or really anything that IE8 would output in its new developer tools console. If there isn't, what do you usually do? Do you test the website after every change with all the browsers you support? (Something I'll do from now on, at least for IE, if there is no way to run automated tests.)
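
    For what it's worth, one way to script such a check (a hedged sketch, not from the article: it assumes Selenium WebDriver's .NET bindings and a hypothetical list of key pages) is to inject a window.onerror collector and read it back after exercising each page:

        // Hedged sketch: drive IE with Selenium WebDriver (.NET bindings),
        // install a window.onerror collector, and report what it caught.
        // Note: errors thrown during the initial page load happen before the
        // collector exists, so those would need a proxy or server-side include.
        using System;
        using System.Collections.ObjectModel;
        using OpenQA.Selenium;
        using OpenQA.Selenium.IE;

        class JsErrorSmokeTest
        {
            static void Main()
            {
                string[] keyPages = { "http://example.com/" };   // hypothetical list
                using (IWebDriver driver = new InternetExplorerDriver())
                {
                    foreach (var url in keyPages)
                    {
                        driver.Navigate().GoToUrl(url);
                        var js = (IJavaScriptExecutor)driver;
                        js.ExecuteScript(
                            "window.__errors = [];" +
                            "window.onerror = function (m, u, l) {" +
                            "  window.__errors.push(m + ' at ' + u + ':' + l); };");
                        // ... click through the page's key interactions here ...
                        var errors = (ReadOnlyCollection<object>)
                            js.ExecuteScript("return window.__errors;");
                        foreach (var e in errors) Console.WriteLine(url + ": " + e);
                    }
                }
            }
        }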

    Read the article

  • TeamCity sends inadequate responses after Selenium tests

    - by Dmitriy Sukharev
    I have TeamCity 7.0.2 on a CentOS 6.2 server without an X server. I've installed x11-fonts*, xvfb, firefox and xauth, exported the environment variable DISPLAY=localhost:1, and started xvfb. After that I could start the Selenium tests using Maven. The tests are executed, but there's an issue with TeamCity. TeamCity starts behaving absolutely inadequately (it confuses images on the page, sends XML or strange text — ampersands and numbers — in responses, and is a bit slower), and the tests also run 4 times slower on the server (1h 15m) than on the tester's Windows 7-based machine (25m). It is worth noting that the tests launch two Jetty servers for the tested application (one for the REST-services application and another for the client). In TeamCity I set the JVM command-line parameters -Xms256m -Xmx1224m -XX:MaxPermSize=320m, and the additional Maven command-line parameters end with "-DMAVEN_OPTS=-Xmx1024m" (without quotes). Both the web services and TeamCity use the same Oracle server (but different Oracle users). Finally, TeamCity and its build agent are on the same server. The server has only 4GB of RAM, but during testing there are 400MB of RAM and 1.2GB of swap. TeamCity and Firefox use about 65% of the CPU during testing. There is no firefox process after the end of testing. My knowledge of Selenium is weak; I only know that we use version 2.20.0 of the selenium-java Maven dependency. Please help me determine why TeamCity sends wrong responses after the Selenium tests. I've tried to give you all the information I have, but feel free to ask me for more.

    Read the article

  • Ethernet run tests green but won't connect

    - by Simon Gillbee
    I have a single ethernet run at home that I just added. I have a cable tester that tests for pin/pair crossover and miswired pins. The entire line tests green (all 4 LEDs light up green on the tester), but I can't get any PC to connect through the link — no link light on the ethernet connection. Any simple tests/fixes, or do I rip out the wall sockets and do it again?

    Read the article
