Search Results

Search found 11400 results on 456 pages for 'automated testing'.

Page 131/456 | < Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >

  • How do you create a MANIFEST.MF that's available when you're testing and running from a jar in production?

    - by warvair
    I've spent far too much time trying to figure this out. This should be the simplest thing and everyone who distributes Java applications in jars must have to deal with it. I just want to know the proper way to add versioning to my Java app so that I can access the version information when I'm testing, e.g. debugging in Eclipse, and running from a jar. Here's what I have in my build.xml:

    ```xml
    <target name="jar" depends="compile">
      <property name="version.num" value="1.0.0"/>
      <buildnumber file="build.num"/>
      <tstamp>
        <format property="TODAY" pattern="yyyy-MM-dd HH:mm:ss"/>
      </tstamp>
      <manifest file="${build}/META-INF/MANIFEST.MF">
        <attribute name="Built-By" value="${user.name}"/>
        <attribute name="Built-Date" value="${TODAY}"/>
        <attribute name="Implementation-Title" value="MyApp"/>
        <attribute name="Implementation-Vendor" value="MyCompany"/>
        <attribute name="Implementation-Version" value="${version.num}-b${build.number}"/>
      </manifest>
      <jar destfile="${build}/myapp.jar" basedir="${build}" excludes="*.jar"/>
    </target>
    ```

    This creates /META-INF/MANIFEST.MF and I can read the values when I'm debugging in Eclipse thusly:

    ```java
    public MyClass() {
        try {
            InputStream stream = getClass().getResourceAsStream("/META-INF/MANIFEST.MF");
            Manifest manifest = new Manifest(stream);
            Attributes attributes = manifest.getMainAttributes();
            String implementationTitle = attributes.getValue("Implementation-Title");
            String implementationVersion = attributes.getValue("Implementation-Version");
            String builtDate = attributes.getValue("Built-Date");
            String builtBy = attributes.getValue("Built-By");
        } catch (IOException e) {
            logger.error("Couldn't read manifest.");
        }
    }
    ```

    But when I create the jar file, it loads the manifest of another jar (presumably the first jar loaded by the application - in my case, activation.jar). Also, the following code doesn't work either, although all the proper values are in the manifest file:

    ```java
    Package thisPackage = getClass().getPackage();
    String implementationVersion = thisPackage.getImplementationVersion();
    ```

    Any ideas?
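
    One workaround often used for exactly this situation, sketched here as an assumption rather than the canonical answer: since /META-INF/MANIFEST.MF resolves against whichever classpath entry the class loader consults first, you can enumerate every manifest on the classpath and keep the one whose Implementation-Title matches the value written by the build.xml above ("MyApp"):

    ```java
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URL;
    import java.util.Enumeration;
    import java.util.jar.Attributes;
    import java.util.jar.Manifest;

    // Scan all MANIFEST.MF resources visible to the class loader and return
    // the Implementation-Version of the one this build produced.
    public static String findVersion() throws IOException {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        Enumeration<URL> manifests = cl.getResources("META-INF/MANIFEST.MF");
        while (manifests.hasMoreElements()) {
            InputStream stream = manifests.nextElement().openStream();
            try {
                Attributes attrs = new Manifest(stream).getMainAttributes();
                if ("MyApp".equals(attrs.getValue("Implementation-Title"))) {
                    return attrs.getValue("Implementation-Version");
                }
            } finally {
                stream.close();
            }
        }
        return null; // no matching manifest, e.g. running unpackaged in Eclipse
    }
    ```

    This would also explain why getPackage().getImplementationVersion() returns null: it only yields a value when the class was actually loaded from a jar whose manifest carries those attributes.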

    Read the article

  • BizTalk 2009 - Architecture Decisions

    - by StuartBrierley
    In the first step towards implementing a BizTalk 2009 environment, from development through to live, I put forward a proposal that detailed the options available, as well as the costs and benefits associated with these options, to allow an informed discussion to take place with the business drivers and budget holders of the project. This ultimately led to a decision to implement an initial BizTalk Server 2009 environment using the Standard Edition of the product. It is my hope that in the long term, as projects require and allow, we will implement my ideal recommendation of a multi-server enterprise-level environment, but given the differences in cost and the likely initial workload for the environment this was not something that I could fully recommend at this time. However, it must be noted that this decision was made in full awareness of the limits of the Standard Edition, and the business drivers of this project were made fully aware of the risks associated with running without the failover capabilities of the Enterprise Edition.

    When considering the creation of this new BizTalk Server 2009 environment, I also recommended the creation of the following pre-production environments:

    | Environment | Usage | Infrastructure |
    |---|---|---|
    | Development | Development of solutions; unit testing against technical specifications; initial load testing; testing of deployment packages | Visual Studio; BizTalk; SQL; client PCs/laptops; server environment similar to the live implementation |
    | Test | Testing of solutions against business and technical requirements | BizTalk; SQL; server environment similar to the live implementation |
    | Pseudo-Live | "As live" environment to allow testing against the live implementation; acts as back-up hardware in case of failure of the live environment | BizTalk; SQL; server environment identical to the live implementation |

    The creation of these differing environments allows for the separation of the various stages of the development cycle. The development environment is for use when actively developing a solution; it is a potentially volatile environment whose state at any given time cannot be guaranteed. It allows developers to carry out initial tests in an environment that is similar to the live environment and also provides an area for the testing of deployment packages prior to any release to the test environment.

    The test environment is intended to be a semi-volatile environment that is similar to the live environment. It will change periodically through the development of a solution (or solutions) but should be otherwise stable. It allows for the continued testing of a solution against requirements without the worry that the environment is being actively changed by any ongoing development. This separation of development and test is crucial in ensuring the quality and control of the tested solution.

    The pseudo-live environment should be considered an almost static environment. It should mimic the live environment and can act as back-up hardware in the case of live failure. This environment acts as an area for "as live" testing, where the performance and behaviour of the live solutions can be replicated. There should be relatively few changes to this environment, with software releases limited to "release candidate" level releases prior to going live.

    Whereas the pseudo-live environment should always mimic the live environment, to save on costs the development and test servers could be implemented on lower-specification hardware. Consideration can also be given to the use of a virtual server environment to further reduce hardware costs in the development and test environments; indeed, this virtual approach can also be extended to pseudo-live and live, assuming the underlying technology is in place. Although there is no requirement for the development and test server environments to be identical to live, the overriding architecture implemented should be the same as in live, and an understanding must be gained of the performance differences to be expected across the different environments.

    Read the article

  • Git: Create a branch from unstaged/uncommitted changes on master

    - by knoopx
    Context: I'm working on master adding a simple feature. After a few minutes I realize it was not so simple and it would have been better to work in a new branch. This always happens to me and I have no idea how to switch to another branch and take all these uncommitted changes with me, leaving the master branch clean. I supposed git stash && git stash branch new_branch would simply accomplish that, but this is what I get:

    ```
    ~/test $ git status
    # On branch master
    nothing to commit (working directory clean)
    ~/test $ echo "hello!" > testing
    ~/test $ git status
    # On branch master
    # Changed but not updated:
    #   (use "git add <file>..." to update what will be committed)
    #   (use "git checkout -- <file>..." to discard changes in working directory)
    #
    #       modified:   testing
    #
    no changes added to commit (use "git add" and/or "git commit -a")
    ~/test $ git stash
    Saved working directory and index state WIP on master: 4402b8c testing
    HEAD is now at 4402b8c testing
    ~/test $ git status
    # On branch master
    nothing to commit (working directory clean)
    ~/test $ git stash branch new_branch
    Switched to a new branch 'new_branch'
    # On branch new_branch
    # Changed but not updated:
    #   (use "git add <file>..." to update what will be committed)
    #   (use "git checkout -- <file>..." to discard changes in working directory)
    #
    #       modified:   testing
    #
    no changes added to commit (use "git add" and/or "git commit -a")
    Dropped refs/stash@{0} (db1b9a3391a82d86c9fdd26dab095ba9b820e35b)
    ~/test $ git s
    # On branch new_branch
    # Changed but not updated:
    #   (use "git add <file>..." to update what will be committed)
    #   (use "git checkout -- <file>..." to discard changes in working directory)
    #
    #       modified:   testing
    #
    no changes added to commit (use "git add" and/or "git commit -a")
    ~/test $ git checkout master
    M       testing
    Switched to branch 'master'
    ~/test $ git status
    # On branch master
    # Changed but not updated:
    #   (use "git add <file>..." to update what will be committed)
    #   (use "git checkout -- <file>..." to discard changes in working directory)
    #
    #       modified:   testing
    #
    no changes added to commit (use "git add" and/or "git commit -a")
    ```

    Do you know if there is any way of accomplishing this?
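
    For what it's worth, the transcript shows expected behavior rather than a failure: uncommitted changes travel with a checkout whenever they don't conflict, which is also why the modification follows you back to master at the end. A minimal sketch of the usual way to end up with a clean master (branch name is illustrative):

    ```
    # Uncommitted changes come along to the new branch:
    git checkout -b new_branch

    # Commit them there; once this commit exists only on new_branch,
    # master's working tree and history are both clean:
    git add testing
    git commit -m "WIP: feature that was not so simple"
    ```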

    Read the article

  • Maven2 multi-module ejb 3.1 project - deployment error

    - by gerry
    The problem is that I get the following error while deploying my project to Glassfish:

    ```
    java.lang.RuntimeException: Unable to load EJB module. DeploymentContext does not contain any EJB.
    Check archive to ensure correct packaging.
    ```

    But let us start with what the project structure looks like in Maven2. I've built the following scenario:

    ```
    MultiModuleJavaEEProject - parent module
    - model --- packaged as jar
    - ejb1 ---- packaged as ejb
    - ejb2 ---- packaged as ejb
    - web ----- packaged as war
    ```

    So model, ejb1, ejb2 and web are children/modules of the parent MultiModuleJavaEEProject. ejb1 depends on model, ejb2 depends on ejb1, and web depends on ejb2. The POMs look like this.

    parent:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <groupId>org.dyndns.geraldhuber.testing</groupId>
      <artifactId>MultiModuleJavaEEProject</artifactId>
      <packaging>pom</packaging>
      <version>1.0</version>
      <name>MultiModuleJavaEEProject</name>
      <url>http://maven.apache.org</url>
      <modules>
        <module>model</module>
        <module>ejb1</module>
        <module>ejb2</module>
        <module>web</module>
      </modules>
      <dependencies>
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>4.7</version>
          <scope>test</scope>
        </dependency>
      </dependencies>
      <build>
        <pluginManagement>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-compiler-plugin</artifactId>
              <configuration>
                <source>1.6</source>
                <target>1.6</target>
              </configuration>
            </plugin>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-ejb-plugin</artifactId>
              <version>2.2</version>
              <configuration>
                <ejbVersion>3.1</ejbVersion>
                <jarName>${project.groupId}.${project.artifactId}-${project.version}</jarName>
              </configuration>
            </plugin>
          </plugins>
        </pluginManagement>
      </build>
    </project>
    ```

    model:

    ```xml
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>testing</groupId>
        <artifactId>MultiModuleJavaEEProject</artifactId>
        <version>1.0</version>
      </parent>
      <artifactId>model</artifactId>
      <packaging>jar</packaging>
      <version>1.0</version>
      <name>model</name>
      <url>http://maven.apache.org</url>
    </project>
    ```

    ejb1:

    ```xml
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>testing</groupId>
        <artifactId>MultiModuleJavaEEProject</artifactId>
        <version>1.0</version>
      </parent>
      <artifactId>ejb1</artifactId>
      <packaging>ejb</packaging>
      <version>1.0</version>
      <name>ejb1</name>
      <url>http://maven.apache.org</url>
      <dependencies>
        <dependency>
          <groupId>org.glassfish</groupId>
          <artifactId>javax.ejb</artifactId>
          <version>3.0</version>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>testing</groupId>
          <artifactId>model</artifactId>
          <version>1.0</version>
        </dependency>
      </dependencies>
    </project>
    ```

    ejb2:

    ```xml
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>testing</groupId>
        <artifactId>MultiModuleJavaEEProject</artifactId>
        <version>1.0</version>
      </parent>
      <artifactId>ejb2</artifactId>
      <packaging>ejb</packaging>
      <version>1.0</version>
      <name>ejb2</name>
      <url>http://maven.apache.org</url>
      <dependencies>
        <dependency>
          <groupId>org.glassfish</groupId>
          <artifactId>javax.ejb</artifactId>
          <version>3.0</version>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>testing</groupId>
          <artifactId>ejb1</artifactId>
          <version>1.0</version>
        </dependency>
      </dependencies>
    </project>
    ```

    web:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <artifactId>MultiModuleJavaEEProject</artifactId>
        <groupId>testing</groupId>
        <version>1.0</version>
      </parent>
      <groupId>testing</groupId>
      <artifactId>web</artifactId>
      <version>1.0</version>
      <packaging>war</packaging>
      <name>web Maven Webapp</name>
      <url>http://maven.apache.org</url>
      <dependencies>
        <dependency>
          <groupId>javax.servlet</groupId>
          <artifactId>servlet-api</artifactId>
          <version>2.4</version>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>org.glassfish</groupId>
          <artifactId>javax.ejb</artifactId>
          <version>3.0</version>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>testing</groupId>
          <artifactId>ejb2</artifactId>
          <version>1.0</version>
        </dependency>
      </dependencies>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-war-plugin</artifactId>
            <version>2.0</version>
            <configuration>
              <archive>
                <manifest>
                  <addClasspath>true</addClasspath>
                </manifest>
              </archive>
            </configuration>
          </plugin>
        </plugins>
        <finalName>web</finalName>
      </build>
    </project>
    ```

    And the model is just a simple POJO:

    ```java
    package testing.model;

    public class Data {
        private String data;

        public String getData() {
            return data;
        }

        public void setData(String data) {
            this.data = data;
        }
    }
    ```

    And ejb1 contains only one stateless EJB:

    ```java
    package testing.ejb1;

    import javax.ejb.Stateless;
    import testing.model.Data;

    @Stateless
    public class DataService {
        private Data data;

        public DataService() {
            data = new Data();
            data.setData("Hello World!");
        }

        public String getDataText() {
            return data.getData();
        }
    }
    ```

    Likewise, ejb2 is only a stateless EJB:

    ```java
    package testing.ejb2;

    import javax.ejb.EJB;
    import javax.ejb.Stateless;
    import testing.ejb1.DataService;

    @Stateless
    public class Service {
        @EJB DataService service;

        public Service() {
        }

        public String getText() {
            return service.getDataText();
        }
    }
    ```

    And the web module contains only a servlet:

    ```java
    package testing.web;

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.ejb.EJB;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import testing.ejb2.Service;

    public class SimpleServlet extends HttpServlet {
        @EJB Service service;

        public void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            PrintWriter out = response.getWriter();
            out.println("SimpleServlet Executed");
            out.println("Text: " + service.getText());
            out.flush();
            out.close();
        }
    }
    ```

    And the web.xml file in the web module looks like:

    ```xml
    <!DOCTYPE web-app PUBLIC
        "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
        "http://java.sun.com/dtd/web-app_2_3.dtd">
    <web-app>
      <display-name>Archetype Created Web Application</display-name>
      <servlet>
        <servlet-name>simple</servlet-name>
        <servlet-class>testing.web.SimpleServlet</servlet-class>
      </servlet>
      <servlet-mapping>
        <servlet-name>simple</servlet-name>
        <url-pattern>/simple</url-pattern>
      </servlet-mapping>
    </web-app>
    ```

    So no further files are set up by me. There is no ejb-jar.xml in any EJB module, because I'm using EJB 3.1, so I think the ejb-jar.xml descriptors are optional. Is this right? But the problem is the already mentioned error: "Unable to load EJB module. DeploymentContext does not contain any EJB." Can anybody help?
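
    One thing worth checking, offered as an assumption rather than a confirmed fix: EJB 3.1 does make ejb-jar.xml optional, but some container and plugin combinations only recognize an EJB module reliably when the descriptor is present. A minimal EJB 3.1 stub that could be dropped into src/main/resources/META-INF of each EJB module looks like this:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <ejb-jar xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                                 http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd"
             version="3.1"/>
    ```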

    Read the article

  • Syntax error on running a batch file to replace files

    - by Ralph
    I have a batch file intended to replace all instances of tracking.js within a folder/sub folders.

    ```bat
    FOR /R "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2\" %%I IN (tracking.js*) DO COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\tracking.js" %%~fI
    ```

    When this is run I get the following syntax error:

    ```
    C:COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\tracking.js" D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2\SHAPERS_COMBINED\Smarter Communications\WhatisInfluencing\script\Tracking.js
    The syntax of the command is incorrect.
    ```

    Ideas please?
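
    The echoed command in the error output points at the likely cause: the expanded %%~fI path contains spaces and parentheses but is not quoted, so COPY parses it as several arguments. A sketch of the fix is simply to quote the expansion:

    ```bat
    FOR /R "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2\" %%I IN (tracking.js*) DO COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\tracking.js" "%%~fI"
    ```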

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called a Standard Environment. This allows you to set up a full Build Deploy Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps, so I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.

    High Level Steps

    1. Install and configure Visual Studio 2012 Test Controller on target server
    2. Create Standard Environment
    3. Create Test Plan with Test Case
    4. Run Test Case
    5. Create Coded UI Test from Test Case
    6. Associate Coded UI Test with Test Case
    7. Create Build Definition using LabDefaultTemplate

    1. Install and Configure Visual Studio 2012 Test Controller on Target Server

    First of all, note that you do not have to have the Test Controller running on the target server. It can run on another server, as long as the test agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server etc.), the test controller can be installed either on one of those machines or on a dedicated machine. To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration will be done. You might get some warnings at the end, which might or might not cause a problem later, so be sure to read them carefully.

    For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.

    2. Create Standard Environment

    Now you need to create a lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment:

    - Open MTM and go to Lab Center.
    - Click New to create a new environment.
    - Enter a name for the environment. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case).
    - On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com) and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server. You also need to supply an account that is a local administrator on the target server. This is needed in order to automatically install a test agent later on the machine.
    - On the next page, you can add tags to the machine. This is not needed in this scenario, so go to the next page.
    - Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a test agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample). If you can't see it, you might have selected a different TFS project collection.
    - Press Next twice and then Verify to verify all the settings.
    - Press Finish.

    This will now create and prepare the environment, which means that it will remote install a test agent on the machine. As part of this installation, the remote server will be restarted.

    3-5. Create Test Plan, Run Test Case, Create Coded UI Test

    I will not cover steps 3-5 here; there is plenty of information on how to create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests.

    6. Associate Coded UI Test with Test Case

    OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your Coded UI Test already is automated, but the meaning of the term here is that you link your Coded UI Test to an existing test case, thereby making the test case automated. And the test case should be part of the test suite that we will run during the BDT.

    - Open the solution that contains the Coded UI Test method.
    - Open the Test Case work item that you want to automate.
    - Go to the Associated Automation tab and click on the "…" button.
    - Select the Coded UI Test that corresponds to the test case.
    - Press OK and then save the test case.

    For more information about associating an automated test with a test case, see How to: Associate an Automated Test with a Test Case.

    7. Create Build Definition using LabDefaultTemplate

    Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to remote deploy your solution. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application.

    - Go to the Builds hub in Team Explorer and select New Build Definition.
    - Give the build definition a meaningful name; here I called it MyApplication.Deploy.
    - Set the trigger to Manual.
    - Define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build. But TFS doesn't allow you to save a build definition without adding at least one mapping.
    - On Build Defaults, select the build controller. Since this build actually won't produce any output, you can select the "This build does not copy output files to a drop folder" option.
    - On the Process tab, select the LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the … button on the Lab Process Settings property.
    - First, select the environment that you created before.
    - Select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it will basically just run through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition.
    - On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If you have databases, for example, you can use sqlpackage.exe to deploy the database. If you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment, which you can run both locally and on the target server. Then you would just execute the deployment batch file here in one single step (see the sketch at the end of this article).

    The workflow defines some variables that are useful when running the deployments:

    - $(BuildLocation): the full path to where your build files are located
    - $(InternalComputerName_<VM Name>): the computer name of a virtual machine in an SCVMM environment
    - $(ComputerName_<VM Name>): the fully qualified domain name of the virtual machine

    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission in this drop location. You can find more information here on Building your Deployment Scripts.

    On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan. Here I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests.

    We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should look something like this:
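
    As a concrete illustration of a Deploy tab entry, a Web Deploy package produced by the build can be installed with a single line like the following (the file name is hypothetical; /Y tells the generated .deploy.cmd to perform the deployment rather than a trial run):

    ```
    $(BuildLocation)\MyApplication.Deploy.cmd /Y
    ```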

    Read the article

  • How to make a general profile for PHPUnit testing in WebIDE?

    - by Ondrej Slinták
    I'm playing a bit with the beta version of PHP Storm (the PHP version of WebIDE) and its PHPUnit integration. I know how to set up a profile to run the tests in a particular file, directory or class. The problem is, I'd like to create a profile where the Run button would run the tests in the currently opened file. Any idea if there's a way to do it? Or perhaps it isn't implemented in the beta version yet?

    Read the article

  • Taking "do the simplest thing that could possible work" too far in TDD: testing for a file-name kno

    - by Support - multilanguage SO
    For TDD you have to:

    1. Create a test that fails.
    2. Do the simplest thing that could possibly work to pass the test.
    3. Add more variants of the test and repeat.
    4. Refactor when a pattern emerges.

    With this approach you're supposed to cover all the cases (that come to my mind, at least), but I wonder if I'm being too strict here and if it is possible to "think ahead" some scenarios instead of simply discovering them. For instance, I'm processing a file, and if it doesn't conform to a certain format I am to throw an InvalidFormatException. So my first test was:

    ```java
    @Test
    void testFormat() {
        // empty doesn't do anything nor throw anything
        processor.validate("empty.txt");
        try {
            processor.validate("invalid.txt");
            assert false : "Should have thrown InvalidFormatException";
        } catch (InvalidFormatException ife) {
            assert "Invalid format".equals(ife.getMessage());
        }
    }
    ```

    I run it and it fails because it doesn't throw an exception. So the next thing that comes to my mind is "do the simplest thing that could possibly work", so I:

    ```java
    public void validate(String fileName) throws InvalidFormatException {
        if (fileName.equals("invalid.txt")) {
            throw new InvalidFormatException("Invalid format");
        }
    }
    ```

    Doh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.) I know that I have to eventually add another file name and another test that would make this approach impractical and force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns the usage unveils), but:

    Q: Am I taking the "do the simplest thing..." stuff too literally?
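
    For context, the standard name for the move in step 3 is "triangulation": a second failing case is precisely what forces the hard-coded branch to generalize. A hedged sketch of what that next test might look like here (the second fixture file is hypothetical):

    ```java
    // A second malformed file, with different content and a different name,
    // makes the fileName.equals("invalid.txt") fake impossible to sustain.
    @Test
    void testAnotherInvalidFormat() {
        try {
            processor.validate("invalid2.txt");
            assert false : "Should have thrown InvalidFormatException";
        } catch (InvalidFormatException ife) {
            assert "Invalid format".equals(ife.getMessage());
        }
    }
    ```

    With two differently named invalid files, validate() can no longer key off the file name and has to actually inspect the file's content, which is exactly the generalization that step 4 then refactors into shape.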

    Read the article

  • is (StringComparison.)Ordinal the same as InvariantCulture for testing equality?

    - by Tim Lovell-Smith
    From what I've read so far, it sounds like these StringComparison types are meant to differ in how they sort strings. If so, does that mean it doesn't matter which StringComparison you use when doing an equality comparison? string.Equals(a, b, StringComparison....) Extra credit: does it make a difference to the answer if we compare OrdinalIgnoreCase and InvariantCultureIgnoreCase? What is the answer then? Please provide supporting arguments and/or references.
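
    It can matter even for pure equality, and one concrete case is easy to demonstrate: culture-sensitive comparisons (including InvariantCulture) treat canonically equivalent Unicode sequences as equal, while Ordinal compares raw UTF-16 code units. A small sketch:

    ```csharp
    using System;

    class Program
    {
        static void Main()
        {
            string composed = "\u00E9";    // "é" as a single code point
            string decomposed = "e\u0301"; // "e" followed by a combining acute accent

            // Linguistic comparison: equal (canonical equivalence is honored).
            Console.WriteLine(string.Equals(composed, decomposed, StringComparison.InvariantCulture)); // True

            // Ordinal comparison: not equal (different code unit sequences).
            Console.WriteLine(string.Equals(composed, decomposed, StringComparison.Ordinal)); // False
        }
    }
    ```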

    Read the article

  • iPhone app memory leak with UIImage animation? Problem testing on device

    - by user157733
    I have an animation which works fine in the simulator but crashes on the device. I am getting the following error:

    ```
    Program received signal: "0".
    The Debugger has exited due to signal 10 (SIGBUS)
    ```

    A bit of investigating suggests that the UIImages are not getting released and I have a memory leak. I am new to this, so can someone tell me if this is the likely cause? If you could also tell me how to solve it then that would be amazing. The images are 480px x 480px and about 25kb each. My code is below:

    ```objc
    NSArray *rainImages = [NSArray arrayWithObjects:
        [UIImage imageNamed:@"rain-loop0001.png"],
        [UIImage imageNamed:@"rain-loop0002.png"],
        [UIImage imageNamed:@"rain-loop0003.png"],
        [UIImage imageNamed:@"rain-loop0004.png"],
        [UIImage imageNamed:@"rain-loop0005.png"],
        [UIImage imageNamed:@"rain-loop0006.png"],
        // more looping images
        [UIImage imageNamed:@"rain-loop0045.png"],
        [UIImage imageNamed:@"rain-loop0046.png"],
        [UIImage imageNamed:@"rain-loop0047.png"],
        [UIImage imageNamed:@"rain-loop0048.png"],
        [UIImage imageNamed:@"rain-loop0049.png"],
        [UIImage imageNamed:@"rain-loop0050.png"],
        nil];

    rainImage.animationImages = rainImages;
    rainImage.animationDuration = 4.15/2;
    rainImage.animationRepeatCount = 0;
    [rainImage startAnimating];
    [rainImage release];
    ```

    Thanks
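
    Two hedged observations, assuming the code above is representative. First, this looks like memory pressure rather than a classic leak: each 480x480 frame decodes to roughly 480 * 480 * 4 bytes, about 900 KB in RAM regardless of the 25 KB PNG size, so 50 frames approach 45 MB, and imageNamed: caches all of them; the simulator has desktop memory, the device does not. Second, [rainImage release] is only correct if you own rainImage; releasing a view obtained as an IBOutlet or subview reference is an over-release. A sketch that at least avoids the cache:

    ```objc
    // imageWithContentsOfFile: bypasses the imageNamed: cache, so decoded
    // frames are not pinned in memory for the lifetime of the app.
    NSMutableArray *rainImages = [NSMutableArray arrayWithCapacity:50];
    for (int i = 1; i <= 50; i++) {
        NSString *name = [NSString stringWithFormat:@"rain-loop%04d", i];
        NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
        UIImage *frame = [UIImage imageWithContentsOfFile:path];
        if (frame) [rainImages addObject:frame];
    }
    rainImage.animationImages = rainImages;
    ```

    Even so, animationImages retains every frame while animating, so 50 full-screen frames may simply be too much for older devices; reducing the frame count or size may be unavoidable.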

    Read the article

  • Duck type testing with C# 4 for dynamic objects.

    - by Tracker1
    I want a simple duck typing example in C# using dynamic objects. It would seem to me that a dynamic object should have HasValue/HasProperty/HasMethod methods with a single string parameter for the name of the value, property, or method you are looking for before trying to run against it. I'm trying to avoid try/catch blocks, and deeper reflection if possible. It just seems to be a common practice for duck typing in dynamic languages (JS, Ruby, Python etc.) to test for a property/method before trying to use it, then fall back to a default or throw a controlled exception. The example below is basically what I want to accomplish. If the methods described above don't exist, does anyone have premade extension methods for dynamic that will do this?

    Example: In JavaScript I can test for a method on an object fairly easily:

    ```js
    // JavaScript
    function quack(duck) {
        if (duck && typeof duck.quack === "function") {
            return duck.quack();
        }
        return null; // nothing to return, not a duck
    }
    ```

    How would I do the same in C#?

    ```csharp
    // C# 4
    dynamic Quack(dynamic duck)
    {
        // how do I test that the duck is not null,
        // and has a quack method?

        // if it doesn't quack, return null
    }
    ```
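
    There is no built-in HasMethod on dynamic, but for the common case where the dynamic object is backed by ExpandoObject, its members are exposed through IDictionary<string, object>, which allows exactly this kind of probe. A sketch under that assumption:

    ```csharp
    using System;
    using System.Collections.Generic;

    static class DuckTyping
    {
        // Works for ExpandoObject-backed dynamics; other IDynamicMetaObjectProvider
        // implementations would need GetDynamicMemberNames() or reflection instead.
        //
        // Usage sketch:
        //   dynamic duck = new System.Dynamic.ExpandoObject();
        //   duck.quack = (Func<object>)(() => "Quack!");
        //   DuckTyping.Quack(duck);  // -> "Quack!"
        public static object Quack(dynamic duck)
        {
            var members = duck as IDictionary<string, object>;
            if (members != null && members.ContainsKey("quack") && members["quack"] is Delegate)
            {
                return ((dynamic)members["quack"])(); // invoke the stored delegate
            }
            return null; // nothing to return, not a duck
        }
    }
    ```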

    Read the article

  • What's the best way to do cross browser testing?

    - by Doug
    What's the best way for me to check if my website is compatible with IE7, IE8, Safari, Firefox, and Chrome without having to install each and every one? I mainly want to check the CSS, HTML, and JavaScript.

    Update: I put a bounty on this in the hope that there is a more practical solution for someone like myself. I am using Windows 7 Home Premium x64.

    Update 2: I don't mind installing these browsers now, but I can't even if I wanted to: Windows 7 doesn't allow me to install IE7.

    Read the article

  • What is a convenient base for a bignum library & primality testing algorithm?

    - by nn
    Hi, I am to program the Solovay-Strassen primality test presented in the original paper on RSA. Additionally I will need to write a small bignum library, and while searching for a convenient representation for bignum I came across this specification:

    ```c
    struct {
        int sign;
        int size;
        int *tab;
    } bignum;
    ```

    I will also be writing a multiplication routine using the Karatsuba method. So, for my question: what base would be convenient to store integer data in the bignum struct? Note: I am not allowed to use third-party or built-in implementations for bignum, such as GMP. Thank you.
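
    A common rule of thumb, sketched here as an assumption rather than the single right answer: choose a base B small enough that a digit product plus the running carry fits in an integer type you can compute with, and make B a power of two so division and modulo by B are cheap. With int digits as in the struct above, B = 2^16 keeps the schoolbook inner loop (the base case under Karatsuba) overflow-free:

    ```c
    #include <stdint.h>

    #define BASE 65536  /* 2^16: digit products and carries fit comfortably in 64 bits */

    /* Multiply two little-endian digit arrays (schoolbook, for illustration).
       out must have na+nb digits and be zero-initialized by the caller. */
    void mul(const int *a, int na, const int *b, int nb, int *out)
    {
        for (int i = 0; i < na; i++) {
            uint64_t carry = 0;
            for (int j = 0; j < nb; j++) {
                uint64_t t = (uint64_t)a[i] * (uint64_t)b[j] + (uint64_t)out[i + j] + carry;
                out[i + j] = (int)(t % BASE);
                carry = t / BASE;
            }
            out[i + nb] += (int)carry;
        }
    }
    ```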

    Read the article

  • Is there a sample set of web log data available for testing analysis against?

    - by Peter
    Sorry if this isn't strictly speaking a programming question, but I figure my best chance of success would be to ask here. I'm developing some web log file analysis algorithms, but to date I only have access to a fairly small amount of web log data to process. One algorithm I want to use makes some assumptions about 'the shape' of typical web log data, and so I'd like to test it against a larger 'exemplar' - perhaps the logs of a busy site with a good distribution of traffic from different sources etc. Is there a set of such data available somewhere? Thanks for any help.

    Read the article

  • How to simulate a dial-up connection for testing purposes?

    - by mawg
    I have to code a server app where clients open a TCP/IP socket, send some data and close the connection. The data packets are small, < 100 bytes, however there is talk of having them batch their transactions and send multiple packets. How can I best simulate a dial-up connection (using Delphi & Indy components, just FYI)? Is it as simple as:

    1. open connection
    2. wait a while (what is the definition of "a while"?)
    3. close connection

    Read the article

  • What's the best way/event to use when testing if the textbox text has finished a change of text

    - by Spooky2010
    Using WinForms, C# and VS 2008. I have textBox1, textBox2 and textBox3 on a form, where textBox3.Text = textBox1.Text + textBox2.Text. I need textBox3 to be updated whenever the contents of textBox1 or textBox2 have been changed, either manually or programmatically. The problem is that if I use the TextChanged event, it keeps firing as one types in the textbox. I need a way to call my method to fill textBox3 after either tb1 or tb2 has FINISHED changing, programmatically or via key entry, and not fire every time a letter of text is entered. How can I have textBox3 update only when tb1 or tb2 have finished changing?
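
    If "finished" means the user leaves the field, the Leave or Validated events fire once per edit session. If the update should happen while typing but only after the user pauses, a debounce timer is the usual pattern; a sketch (the 400 ms interval is an arbitrary choice):

    ```csharp
    using System;
    using System.Windows.Forms;

    public partial class Form1 : Form
    {
        // Debounce timer: restarted on every change, fires only after a pause.
        private readonly Timer _debounce = new Timer { Interval = 400 };

        public Form1()
        {
            InitializeComponent();
            _debounce.Tick += delegate
            {
                _debounce.Stop();
                textBox3.Text = textBox1.Text + textBox2.Text;
            };
            EventHandler restart = delegate { _debounce.Stop(); _debounce.Start(); };
            textBox1.TextChanged += restart; // fires for typed and programmatic changes
            textBox2.TextChanged += restart;
        }
    }
    ```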

    Read the article

  • Using Kal calendar without doing the initialization (and so on) in the AppDelegate

    - by testing
    I'm using the Kal calendar. For the code shown below I'm referring to the Holiday example. In this example the allocation and initialization of Kal is done in applicationDidFinishLaunching in the AppDelegate. The UITableViewDelegate protocol (e.g. didSelectRowAtIndexPath) is also implemented in the AppDelegate class. The AppDelegate:

    ```objc
    #import "HolidayAppDelegate.h"
    #import "HolidaySqliteDataSource.h"
    #import "HolidaysDetailViewController.h"
    #import "Kal.h"

    @implementation HolidayAppDelegate

    @synthesize window;

    - (void)applicationDidFinishLaunching:(UIApplication *)application {
        kal = [[KalViewController alloc] init];
        kal.navigationItem.rightBarButtonItem =
            [[[UIBarButtonItem alloc] initWithTitle:@"Today"
                                              style:UIBarButtonItemStyleBordered
                                             target:self
                                             action:@selector(showAndSelectToday)] autorelease];
        kal.delegate = self;
        dataSource = [[HolidaySqliteDataSource alloc] init];
        kal.dataSource = dataSource;

        // Setup the navigation stack and display it.
        navController = [[UINavigationController alloc] initWithRootViewController:kal];
        [window addSubview:navController.view];
        [window makeKeyAndVisible];
    }

    // Action handler for the navigation bar's right bar button item.
    - (void)showAndSelectToday {
        [kal showAndSelectDate:[NSDate date]];
    }

    #pragma mark UITableViewDelegate protocol conformance

    // Display a details screen for the selected holiday/row.
    - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
        Holiday *holiday = [dataSource holidayAtIndexPath:indexPath];
        HolidaysDetailViewController *vc =
            [[[HolidaysDetailViewController alloc] initWithHoliday:holiday] autorelease];
        [navController pushViewController:vc animated:YES];
    }

    #pragma mark -

    - (void)dealloc {
        [kal release];
        [dataSource release];
        [window release];
        [navController release];
        [super dealloc];
    }

    @end
    ```

    I don't want to put this into the AppDelegate, because there could be some overlapping code with other views. It should be a separate "component" which I can call and put on the stack. In my navigation-based project I have a main view, the RootViewController. From there I want to push the Kal view onto the stack. Currently I'm pushing an additional ViewController onto the stack, and in the viewWillAppear method of this ViewController I do the things shown in the code above. The following problems appear:

    - Navigating back has to be done twice (once for the Kal calendar, once for my created view)
    - Navigation to my main view is not possible anymore

    At the moment I don't know where to put this code. So the question is where to put the methods for allocation/initialization as well as the methods for the UITableViewDelegate protocol.
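
    One way to restructure this, sketched under the assumption that RootViewController already sits on a UINavigationController: push the KalViewController directly onto the existing navigation stack instead of creating a second UINavigationController inside an intermediate view controller. That would remove the doubled back navigation and restore the path back to the main view. The holidayDataSource property here is hypothetical, a retained property used to keep the data source alive:

    ```objc
    // In RootViewController.m (manual retain/release style, as in the question).
    - (void)showCalendar {
        KalViewController *kal = [[[KalViewController alloc] init] autorelease];
        kal.delegate = self; // RootViewController implements tableView:didSelectRowAtIndexPath:
        self.holidayDataSource = [[[HolidaySqliteDataSource alloc] init] autorelease];
        kal.dataSource = self.holidayDataSource;
        [self.navigationController pushViewController:kal animated:YES];
    }
    ```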

    Read the article

  • Unit testing authorization in a Pylons app fails; cookies aren't being correctly set or recorded

    - by Ian Stevens
    I'm having an issue running unit tests for authorization in a Pylons app. It appears as though certain cookies set in the test case may not be correctly written or parsed. Cookies work fine when hitting the app with a browser. Here is my test case inside a paste-generated TestController:

    ```python
    def test_good_login(self):
        r = self.app.post('/dologin', params={'login': self.user['username'], 'password': self.password})
        r = r.follow()  # Should only be one redirect to root
        assert 'http://localhost/' == r.request.url
        assert 'Dashboard' in r
    ```

    This is supposed to test that a login to an existing account forwards the user to the dashboard page. Instead, what happens is that the user is redirected back to the login. The first POST works, sets the user in the session and returns cookies. Although those cookies are sent in the follow request, they don't seem to be correctly parsed. I start by setting a breakpoint at the beginning of the above method and see what the login response returns:

    ```
    > nosetests --pdb --pdb-failure -s foo.tests.functional.test_account:TestMainController.test_good_login
    Running setup_config() from foo.websetup
    > /Users/istevens/dev/foo/foo/tests/functional/test_account.py(33)test_good_login()
    -> r = self.app.post('/dologin', params={'login': self.user['username'], 'password': self.password})
    (Pdb) n
    > /Users/istevens/dev/foo/foo/tests/functional/test_account.py(34)test_good_login()
    -> r = r.follow() # Should only be one redirect to root
    (Pdb) p r.cookies_set
    {'auth_tkt': '"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!"'}
    (Pdb) p r.request.environ['REMOTE_USER']
    '4bd871833d19ad8a79000000'
    (Pdb) p r.headers['Location']
    'http://localhost/?__logins=0'
    ```

    A session appears to be created and a cookie sent back. The browser is redirected to the root, not the login, which also indicates a successful login. If I step past the follow(), I get:

    ```
    > /Users/istevens/dev/foo/foo/tests/functional/test_account.py(35)test_good_login()
    -> assert 'http://localhost/' == r.request.url
    (Pdb) p r.request.headers
    {'Host': 'localhost:80', 'Cookie': 'auth_tkt=""\\"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!\\"""; '}
    (Pdb) p r.request.environ['REMOTE_USER']
    *** KeyError: KeyError('REMOTE_USER',)
    (Pdb) p r.request.environ['HTTP_COOKIE']
    'auth_tkt=""\\"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!\\"""; '
    (Pdb) p r.request.cookies
    {'auth_tkt': ''}
    (Pdb) p r
    <302 Found text/html location: http://localhost/login?__logins=1&came_from=http%3A%2F%2Flocalhost%2F body='302 Found...y. '/149>
    ```

    This indicates to me that the cookie was passed in on the request, although with dubious escaping. The environ appears to be without the session created on the prior request. The cookie has been copied to the environ from the headers, but the cookies in the request seem incorrectly set. Lastly, the user is redirected to the login page, indicating that the user isn't logged in. Authorization in the app is done via repoze.who and repoze.who.plugins.ldap, with repoze.who_friendlyform performing the challenge. I'm using the stock tests.TestController created by paste:

    ```python
    class TestController(TestCase):
        def __init__(self, *args, **kwargs):
            if pylons.test.pylonsapp:
                wsgiapp = pylons.test.pylonsapp
            else:
                wsgiapp = loadapp('config:%s' % config['__file__'])
            self.app = TestApp(wsgiapp)
            url._push_object(URLGenerator(config['routes.map'], environ))
            TestCase.__init__(self, *args, **kwargs)
    ```

    That's a webtest.TestApp, by the way. The encoding of the cookie is done in webtest.TestApp using Cookie:

    ```python
    >>> from Cookie import _quote
    >>> _quote('"84533cf9f661f97239208fb844a09a6d4bd8552d4bd8550c3d19ad8339000000!"')
    '"\\"84533cf9f661f97239208fb844a09a6d4bd8552d4bd8550c3d19ad8339000000!\\""'
    ```

    I trust that that's correct. My guess is that something on the response side is incorrectly parsing the cookie data into cookies in the server-side request. But what? Any ideas?

    Read the article

  • Where can I find project repositories with continuous testing?

    - by Jenny Smith
    I am interested in studying some test logs from different projects, in order to build and test an application for school. I need to analyze the parts of the code which are tested, the bugs which appeared in those parts and eventually how they were resolved. But for this I need some repositories from different (open source) projects. Can someone please help me with ideas or links or any kind of test logs which might be useful? I really need some resources, so any help is appreciated.

    Read the article

  • How to do some performance testing in asp.net mvc?

    - by chobo2
    Hi, I am using ASP.NET MVC 2.0 and I want to test how long some of my code takes to run. In one scenario I do this:

    1. Load the XML file.
    2. Validate the XML file against the schema and deserialize.
    3. Validate all rows in the XML file with more advanced validation that cannot be done in the schema validation.
    4. Do a bulk insert.

    I want to know how long steps 1 to 3 take and how long step 4 takes. I tried using DateTime.UtcNow in various places and subtracting, but it told me it took about 3 seconds, which I know is not right as steps 1 to 4 take 2 minutes.
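
    A sketch of the usual approach with System.Diagnostics.Stopwatch, which is monotonic and higher-resolution than DateTime.UtcNow; the method names below are placeholders for the steps described above:

    ```csharp
    using System.Diagnostics;

    // Inside the controller action:
    var parseTimer = Stopwatch.StartNew();
    LoadXml();                // step 1 (hypothetical method names)
    ValidateAndDeserialize(); // step 2
    ValidateRows();           // step 3
    parseTimer.Stop();

    var insertTimer = Stopwatch.StartNew();
    BulkInsert();             // step 4
    insertTimer.Stop();

    Debug.WriteLine(string.Format("steps 1-3: {0} ms, step 4: {1} ms",
        parseTimer.ElapsedMilliseconds, insertTimer.ElapsedMilliseconds));
    ```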

    Read the article

  • ms-access: missing operator in query expression

    - by every_answer_gets_a_point
    I have this SQL statement in Access:

    ```sql
    SELECT *
    FROM (
        SELECT [Occurrence Number], [1 0 Preanalytical (Before Testing)], NULL, NULL, NULL
        FROM [Lab Occurrence Form]
        WHERE NOT ([1 0 Preanalytical (Before Testing)] IS NULL)
        UNION
        SELECT [Occurrence Number], NULL, [2 0 Analytical (Testing Phase)], NULL, NULL
        FROM [Lab Occurrence Form]
        WHERE NOT ([2 0 Analytical (Testing Phase)] IS NULL)
        UNION
        SELECT [Occurrence Number], NULL, NULL, [3 0 Postanalytical ( After Testing)], NULL
        FROM [Lab Occurrence Form]
        WHERE NOT ([3 0 Postanalytical ( After Testing)] IS NULL)
        UNION
        SELECT [Occurrence Number], NULL, NULL, NULL [4 0 Other]
        FROM [Lab Occurrence Form]
        WHERE NOT ([4 0 Other] IS NULL)
    ) AS mySubQuery
    ORDER BY mySubQuery.[Occurrence Number];
    ```

    Everything was fine until I added the last branch:

    ```sql
    SELECT [Occurrence Number], NULL, NULL, NULL [4 0 Other]
    FROM [Lab Occurrence Form]
    WHERE NOT ([4 0 Other] IS NULL)
    ```

    Now I get this error:

    ```
    syntax error (missing operator) in query expression 'NULL [4 0 Other]'
    ```

    Does anyone have any clues why I am getting this error?
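
    The error message quotes the exact spot: in the last SELECT, `NULL [4 0 Other]` is missing the comma that every other branch has, so Access tries to read [4 0 Other] as a token following NULL. The corrected branch:

    ```sql
    SELECT [Occurrence Number], NULL, NULL, NULL, [4 0 Other]
    FROM [Lab Occurrence Form]
    WHERE NOT ([4 0 Other] IS NULL)
    ```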

    Read the article

  • How to do 404 link testing through Selenium RC for a complete website?

    - by user1726460
    How can I verify a complete website's links (mostly links that redirect to a 404 page) by using Selenium RC? Previously I tried to do this with Xenu and Web Link Validator, but in their results most of the links showed a 500 internal server error, and the pages for which they reported a 500 internal server error don't actually exist on the website. So how can we crawl through the website using Selenium RC?
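
    Selenium RC itself only drives the browser, so a common pattern is to harvest the hrefs from each page (e.g. from selenium.getHtmlSource() plus an HTML parser) and then check each URL's status code directly; the harvesting step is assumed here. A sketch of the status check with plain JDK classes:

    ```java
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Returns the raw HTTP status for one harvested link; 404 means broken,
    // while 3xx responses would need to be followed to their final target.
    static int statusOf(String href) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(href).openConnection();
        conn.setRequestMethod("HEAD"); // no body needed, just the status line
        conn.setInstanceFollowRedirects(false);
        int status = conn.getResponseCode();
        conn.disconnect();
        return status;
    }
    ```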

    Read the article
