Search Results

Search found 8582 results on 344 pages for 'integration tests'.

Page 77 of 344

  • Problem with SVN filename encoding on Mac OS X

    - by Albert
    I have some filenames with Unicode characters in them. All filenames on Mac OS X are UTF-8 encoded, and $LANG is set to en_US.UTF-8. However, svn seems to have problems with that:

        az@ip212 1054 (Integration) %ls
        Abbildungen  Verbesserungsvorschläge_Applets.odt  AllgemeineAnmerkungen.rtf  Verbesserungsvorschläge_Applets.rtf  Geogebra  Vorlagen  Texte
        az@ip212 1055 (Integration) %svn ls
        Abbildungen/
        AllgemeineAnmerkungen.rtf
        Geogebra/
        Texte/
        Verbesserungsvorschläge_Applets.rtf
        Verbesserungsvorschläge_Applets.odt
        Vorlagen/
        az@ip212 1056 (Integration) %svn del Verb*.odt
        svn: Use --force to override this restriction
        svn: 'Verbesserungsvorschläge_Applets.odt' is not under version control
        az@ip212 1057 (Integration) %svn status
        ?       Verbesserungsvorschläge_Applets.odt
        !       Verbesserungsvorschläge_Applets.odt
        az@ip212 1058 (Integration) %

    As you can see, svn del does not recognize the filename, and even svn status gets confused by it. How can I fix this? I also tried setting LC_CTYPE=$LANG LC_ALL=$LANG LC=$LANG, but that made no difference.
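    A likely culprit here is Unicode normalization rather than the locale: HFS+ stores filenames in decomposed form (NFD), while the name recorded in the Subversion working copy is typically precomposed (NFC), so the two strings differ byte-for-byte even though they render identically. A minimal sketch of the difference, in Java purely for illustration:

        import java.text.Normalizer;

        public class NormalizationCheck {
            public static void main(String[] args) {
                String inWorkingCopy = "Verbesserungsvorschläge_Applets.odt";             // NFC: 'ä' is one code point
                String onDisk = Normalizer.normalize(inWorkingCopy, Normalizer.Form.NFD); // NFD: 'a' + combining diaeresis

                System.out.println(inWorkingCopy.equals(onDisk));                         // false, hence "not under version control"
                System.out.println(inWorkingCopy.length() + " vs " + onDisk.length());    // the NFD form is one character longer
            }
        }

    If that is indeed the cause, getting the working copy and the filesystem to agree on one form (for example by re-adding the file under a freshly typed name, or using a client build that normalizes paths, if one is available) is the usual way out; tweaking LC_* alone will not help.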

    Read the article

  • Benchmarks Using Oracle Solaris 11

    - by Brian
    The following is a list of links to recent benchmarks which used Oracle Solaris 11:

    - Oracle TimesTen In-Memory Database Performance on SPARC T4-2
    - World Record Performance on PeopleSoft Enterprise Financials Benchmark on SPARC T4-2
    - SPARC T4 Servers Running Oracle Solaris 11 and Oracle RAC Deliver World Record on PeopleSoft HRMS 9.1
    - SPEC CPU2006 Results on Oracle's Sun x86 Servers
    - SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark
    - SPARC T4-2 Delivers World Record SPECjvm2008 Result with Oracle Solaris 11
    - SPARC T4-2 Server Beats Intel (Westmere AES-NI) on ZFS Encryption Tests
    - SPARC T4 Processor Beats Intel (Westmere AES-NI) on AES Encryption Tests
    - SPARC T4 Processor Outperforms IBM POWER7 and Intel (Westmere AES-NI) on OpenSSL AES Encryption Test
    - SPARC T4-1 Server Outperforms Intel (Westmere AES-NI) on IPsec Encryption Tests
    - SPARC T4-2 Server Beats Intel (Westmere AES-NI) on SSL Network Tests
    - SPARC T4-2 Server Beats Intel (Westmere AES-NI) on Oracle Database Tablespace Encryption Queries

    Read the article

  • Monitoring C++ applications

    - by Scott A
    We're implementing a new centralized monitoring solution (Zenoss). Incorporating servers, networking, and Java programs is straightforward with SNMP and JMX. The question, however, is: what are the best practices for monitoring and managing custom C++ applications in large, heterogeneous (Solaris x86, RHEL Linux, Windows) environments? The possibilities I see are listed below (a minimal JMX sketch follows the list).

    Net-SNMP
    Advantages:
    - single, central daemon on each server
    - well-known standard
    - easy integration into monitoring solutions
    - we run Net-SNMP daemons on our servers already
    Disadvantages:
    - complex implementation (MIBs, Net-SNMP library)
    - new technology to introduce to the C++ developers

    rsyslog
    Advantages:
    - single, central daemon on each server
    - well-known standard
    - unknown integration into monitoring solutions (I know they can do alerts based on text, but how well would it work for sending telemetry like memory usage, queue depths, thread capacity, etc.?)
    - simple implementation
    Disadvantages:
    - possible integration issues
    - somewhat new technology for C++ developers
    - possible porting issues if we switch monitoring vendors
    - probably involves coming up with an ad-hoc communication protocol (or using RFC 5424 structured data; I don't know if Zenoss supports that without custom ZenPack coding)

    Embedded JMX (embed a JVM and use JNI)
    Advantages:
    - consistent management interface for both Java and C++
    - well-known standard
    - easy integration into monitoring solutions
    - somewhat simple implementation (we already do this today for other purposes)
    Disadvantages:
    - complexity (JNI, thunking layer between native C++ and Java, basically writing the management code twice)
    - possible stability problems
    - requires a JVM in each process, using considerably more memory
    - JMX is new technology for C++ developers
    - each process has its own JMX port (we run a lot of processes on each machine)

    Local JMX daemon that processes connect to
    Advantages:
    - single, central daemon on each server
    - consistent management interface for both Java and C++
    - well-known standard
    - easy integration into monitoring solutions
    Disadvantages:
    - complexity (basically writing the management code twice)
    - need to find or write such a daemon
    - need a protocol between the JMX daemon and the C++ process
    - JMX is new technology for C++ developers

    CodeMesh JunC++ion
    Advantages:
    - consistent management interface for both Java and C++
    - well-known standard
    - easy integration into monitoring solutions
    - single, central daemon on each server when run in shared-JVM mode
    - somewhat simple implementation (requires code generation)
    Disadvantages:
    - complexity (code generation, requires a GUI and several rounds of tweaking to produce the proxied code)
    - possible JNI stability problems
    - requires a JVM in each process, using considerably more memory (in embedded mode)
    - does not support Solaris x86 (deal breaker)
    - even if it did support Solaris x86, there are possible compiler compatibility issues (we use an odd combination of STLport and Forte on Solaris)
    - each process has its own JMX port when run in embedded mode (we run a lot of processes on each machine)
    - possibly precludes a shared JMX server for non-C++ processes (?)

    Is there some reasonably standardized, simple solution I'm missing? Given no other reasonable solutions, which of these is typically used for custom C++ programs? My gut feeling is that Net-SNMP is how people do this, but I'd like others' input and experience before I make a decision.
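    To make the two JMX-based options more concrete, here is a minimal sketch, with purely illustrative names, of the Java side: a standard MBean that either an embedded JVM or a local JMX daemon would register, and that the C++ code would feed via JNI or a small bridge protocol. It is a sketch of the shape of the solution, not a drop-in implementation.

        // ProcessTelemetryMBean.java -- the management interface Zenoss would poll over JMX
        public interface ProcessTelemetryMBean {
            long getQueueDepth();
            long getMemoryUsageBytes();
            int  getThreadCapacity();
        }

        // ProcessTelemetry.java -- standard MBean implementation, updated by the C++ side
        public class ProcessTelemetry implements ProcessTelemetryMBean {
            private volatile long queueDepth;
            private volatile long memoryUsageBytes;
            private volatile int  threadCapacity;

            public void update(long queueDepth, long memoryUsageBytes, int threadCapacity) {
                this.queueDepth = queueDepth;
                this.memoryUsageBytes = memoryUsageBytes;
                this.threadCapacity = threadCapacity;
            }

            public long getQueueDepth()       { return queueDepth; }
            public long getMemoryUsageBytes() { return memoryUsageBytes; }
            public int  getThreadCapacity()   { return threadCapacity; }
        }

        // TelemetryAgent.java -- registration; one MBean per monitored C++ process
        import java.lang.management.ManagementFactory;
        import javax.management.ObjectName;

        public class TelemetryAgent {
            public static void main(String[] args) throws Exception {
                ProcessTelemetry telemetry = new ProcessTelemetry();
                ManagementFactory.getPlatformMBeanServer()
                        .registerMBean(telemetry, new ObjectName("monitoring:type=ProcessTelemetry,proc=orderd"));
                // ... listen on the bridge protocol, call telemetry.update(...) as data arrives ...
                Thread.sleep(Long.MAX_VALUE); // keep the agent alive so the JMX port stays up
            }
        }

    Whatever transport you choose between the C++ processes and this agent is the part you would have to invent; the JMX surface itself is standard and Zenoss can poll it like any other Java process.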

    Read the article

  • How do you test a command object in a grails controller integration test?

    - by egervari
    I'm new to Grails. How do I test a form command object to make sure that it's working? Here's some setup code in a test. When I run it, I get the following exception:

        Error occurred creating command object.
        org.codehaus.groovy.grails.web.servlet.mvc.exceptions.ControllerExecutionException: Error occurred creating command object.
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            ....
        Caused by: groovy.lang.MissingPropertyException: No such property: password for class: project.user.RegistrationForm
        Possible solutions: password

    Here is my test case. As you can see, I set "password" on the params map...

        void testSaveWhenDataIsCorrect() {
            controller.params.emailAddress = "[email protected]"
            controller.params.password = "secret"
            controller.params.confirmPassword = "secret"
            controller.save()
            assertEquals "success", redirectArgs.view
            ...
        }

    Here's the controller action, which takes the command object as a closure parameter:

        def save = { RegistrationForm form ->
            if(form.hasErrors()) {
                render view: "create", model: [form: form]
            } else {
                def user = new User(form.properties)
                user.password = form.encryptedPassword
                if(user.save()) {
                    redirect(action: "success")
                } else {
                    render view: "create", model: [form: form]
                }
            }
        }

    Here's the command object itself... and note that it DOES have a "password" field...

        class RegistrationForm {
            def springSecurityService

            String emailAddress
            String password
            String confirmPassword

            String getEncryptedPassword() {
                springSecurityService.encodePassword(password)
            }

            static constraints = {
                emailAddress(blank: false, email: true)
                password(blank: false, size: 4..10)
                confirmPassword(blank: false, validator: { password != confirmPassword })
            }
        }

    I'm totally lost in this non-intuitive way of doing controllers... Please help.

    Read the article

  • Updated ODI Statement of Direction

    - by Robert Schweighardt
    An updated version of the Oracle Data Integration Statement of Direction is available. This document provides an overview of the strategic product plans for Oracle’s data integration products for bulk data movement and transformation, specifically Oracle Data Integrator (ODI) and Oracle Warehouse Builder (OWB). It is intended solely to help you assess the business benefits of investing in Oracle’s data integration solutions ...

    Read the article

  • Achieving decoupling in Model classes

    - by Guven
    I am trying to test-drive (or at least write unit tests for) my Model classes, but I noticed that my classes end up being too coupled. Since I can't break this coupling, writing unit tests is becoming harder and harder.

    To be more specific:

    Model classes: These are the classes that hold the data in my application. They resemble POJOs (plain old Java objects) pretty closely, but they also have some methods. The application is not too big, so I have around 15 model classes.

    Coupling: Just to give an example, think of a simple case of Order Header - Order Item. The header knows the item and the item knows the header (it needs some information from the header to perform certain operations). Then, let's say there is a relationship between Order Item - Item Report: the item report needs the item as well. At this point, imagine writing tests for Item Report; you need to have an Order Header to carry out the tests. This is a simple case with 3 classes; things get more complicated with more classes. I can come up with decoupled classes when I design algorithms, persistence layers, UI interactions, etc., but with model classes I can't think of a way to separate them. They currently sit as one big chunk of classes that depend on each other.

    Here are some workarounds that I can think of:

    Data generators: I have a package that generates sample data for my model classes. For example, the OrderHeaderGenerator class creates OrderHeaders with some basic data in them. I use the OrderHeaderGenerator from my ItemReport unit tests so that I get an instance of the OrderHeader class. The problem is that these generators get complicated pretty fast, and then I also need to test the generators, defeating the purpose a little bit.

    Interfaces instead of dependencies: I can come up with interfaces to get rid of the hard dependencies. For example, the OrderItem class would depend on an IOrderHeader interface. So, in my unit tests, I can easily mock the behaviour of an OrderHeader with a FakeOrderHeader class that implements the IOrderHeader interface (see the sketch below). The problem with this approach is the complexity that the Model classes would end up having.

    Would you have other ideas on how to break this coupling in the model classes? Or how to make it easier to unit-test the model classes?
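    For what it's worth, here is a minimal sketch of the "interfaces instead of dependencies" workaround using the names from the question; the members on IOrderHeader are invented purely for illustration.

        // The narrow contract OrderItem actually needs from its header.
        public interface IOrderHeader {
            String getCurrency();   // illustrative: whatever "information from the header" an item operation needs
        }

        // Package-private here so the sketch compiles as one file; in practice each type gets its own file.
        class OrderItem {
            private final IOrderHeader header;
            private final java.math.BigDecimal unitPrice;
            private final int quantity;

            OrderItem(IOrderHeader header, java.math.BigDecimal unitPrice, int quantity) {
                this.header = header;
                this.unitPrice = unitPrice;
                this.quantity = quantity;
            }

            // An operation that needs the header, but only through the interface.
            String describeTotal() {
                return unitPrice.multiply(java.math.BigDecimal.valueOf(quantity)) + " " + header.getCurrency();
            }
        }

        // In the Item Report (or OrderItem) tests, a tiny hand-rolled fake replaces the real header.
        class FakeOrderHeader implements IOrderHeader {
            public String getCurrency() { return "EUR"; }
        }

    The tests can then exercise OrderItem and ItemReport against FakeOrderHeader without ever building a real OrderHeader; whether that is worth the extra interfaces is exactly the trade-off the question is weighing.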

    Read the article

  • Make Sure To Learn About Oracle GoldenGate 12c

    - by Markus Weber
    Whether you use, or are interested in using, Oracle GoldenGate for real-time data integration, database upgrades or migrations, or heterogeneous database replication, the recently launched GoldenGate 12c release will certainly prove very interesting for you. To learn more about it, make sure to attend the upcoming webcast. In addition, there are several great blog entries over at the Oracle Data Integration blog:

    - Oracle GoldenGate 12c - Leading Enterprise Replication
    - Replicating between Cloud and On-Premises using Oracle GoldenGate
    - Welcome Oracle Data Integration 12c: Simplified, Future-Ready Solutions with Extreme Performance

    Read the article

  • Stylecop 4.7.36.0 is out!

    - by TATWORTH
    Stylecop 4.7.36.0 has been released at http://stylecop.codeplex.com/releases/view/79972
    This is an update to coincide with the latest ReSharper. The full fix list is:

        4.7.36.0 (508dbac00ffc)
        =======================
        Fix for 7344. Don't throw 1126 inside default expressions.
        Fix for 7371. Compare Namespace parts using the CurrentCulture and not InvariantCulture.
        Fix for 7386. Don't throw casing violations for field names in languages that do not support case (like Chinese). Added new tests.
        Fix for 7380. Catch Exception caused by CRM Toolkit.
        Update ReSharper 7.0 dependency to 7.0.1 (7.0.1098.2760).
        Fix for 7358. Use the RuleId in the call to MSBuild Logging.
        Fix for 7348. Update suggestion text for constructors.
        Fix for 7364. Don't throw 1126 for New Array Expressions.
        Fix for 7372. Throw 1126 inside catch blocks wasn't working. Add new tests.
        Fix for 7369. Await is allowed to be inside parentheses. Add new tests.
        Fix tests.
        Correct styling issues.
        Fix for 7373. Typeparam violations were not being thrown in all cases. Added new tests.
        Fix for 7361. Rule 1120 was logging against the root element and so Suppressions wouldn't work. Fixed and added tests.
        Updating de-DE resources - from Michael Diermeier - thank you.
        Change for 7368. Add the violation count into the Task outputs.
        Fix for 7383. Fix for memory leak in plugins.
        Update environment to detect ReSharper 7.
        Fix for 7378. Null reference exception from command line run in message output.
        Update release history.

    Read the article

  • Is deserializing complex objects instead of creating them a good idea, in test setup?

    - by Chris Bye
    I'm writing tests for a component that takes very complex objects as input. These tests are a mix of tests against already existing components and test-first tests for new features. Instead of re-creating my input objects (this would be a large chunk of code) or reading one from our data store, I had the thought of serializing a live instance of one of these objects and just deserializing it in test setup. I can't decide whether this is a reasonable idea that will save effort in the long run, or the worst idea I've ever had, one that will cause those who maintain this code to hunt me down as soon as they read it. Is deserialization of inputs a valid means of test setup in some cases? To give a sense of the scale of what I'm dealing with, the serialization output for one of these input objects is 93 KB, obtained in C# by:

        new BinaryFormatter().Serialize((Stream)fileStream, myObject);
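    As a point of reference, the same pattern sketched in Java (purely illustrative; the question's code is C#): a tiny fixture helper that captures a live instance once and rehydrates it in test setup, assuming the input type is Serializable.

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;

        public final class Fixtures {
            private Fixtures() {}

            // Capture a live instance once, e.g. from a scratch test or a debugging session.
            public static void save(String path, Object liveInstance) throws Exception {
                try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(path))) {
                    out.writeObject(liveInstance);
                }
            }

            // Rehydrate it in test setup instead of hand-building the whole object graph.
            @SuppressWarnings("unchecked")
            public static <T> T load(String path) throws Exception {
                try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(path))) {
                    return (T) in.readObject();
                }
            }
        }

    The trade-off is the same in either language: the fixture is cheap to create but drifts silently whenever the class shape changes, which is exactly the maintenance worry the question raises.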

    Read the article

  • How to run SpecFlow tests in Visual Studio 2010?

    - by testerboy
    Trying to get SpecFlow running with a fresh VS2010 Professional install. I created a new console application, added references to NUnit and SpecFlow, and created a SpecFlow feature; the .feature file with the default template code is created. Now I try to run this test, but I don't understand how. When I right-click the project (at the top level), there is no "Run test(s)" option in the context menu. Did SpecFlow not install correctly, or am I missing some references or another tool I need to install?

    Read the article

  • How to unit test models in MVC / MVR app?

    - by BBnyc
    I'm building a node.js web app and am trying to do so for the first time in a test-driven fashion. I'm using nodeunit for testing, which I find allows me to write tests quickly and painlessly. In this particular app, the heavy lifting primarily involves translating SQL data into complex JavaScript objects and serving them to the front-end as JSON. Likewise, the app also spends a great deal of code validating and translating complex, multidimensional JavaScript objects it receives from the front-end into SQL rows. Hence I have used a fat-model design for the app -- most of the real code resides in the models, where the data translation happens. What's the best approach to testing such models with unit tests? I mean in particular the methods that create JavaScript objects from the SQL rows and serve them to the front-end. Right now what I'm doing is making particular requests of my models in the unit tests and checking the returned data for all of the fields that should be there. However, I have a suspicion that this is not the most robust kind of testing I could be doing. My current testing design also means I have to package my app code with some dummy data so that my tests can anticipate the kind of data the app should be returning when tests run.

    Read the article

  • Should developers be involved in testing phases?

    - by LudoMC
    Hi, we are using a classical V-model development process. We then have requirements, architecture, design, implementation, integration tests, system tests, and acceptance. Testers prepare test cases during the first phases of the project. The issue is that, due to resource issues (*), the test phases are too long and are often shortened due to time constraints (you know project managers... ;)). So my question is simple: should developers be involved in the test phases, and isn't it too 'dangerous'? I'm afraid it will give the project managers a false feeling of better quality because the work has been done, but would the added man-days be of any value? I'm not really confident in developers doing tests (no offense here, but we all know it's quite hard to break in a few clicks what you have built over several days). Thanks for sharing your thoughts. (*) For obscure reasons, increasing the number of testers is not an option as of today. (Just upfront: this is not a duplicate of "Should programmers help testers in designing tests?", which talks about test preparation and not test execution, where we currently avoid involving developers.)

    Read the article

  • Groovy support in Java EE projects

    - by Martin Janicek
    As requested in issue 144038, I've implemented support for Groovy in Java enterprise projects. You should be able to combine Java/Groovy files and run them, and thanks to the new Groovy JUnit test support you can also run Groovy tests together with your existing Java tests. I hope it will make your enterprise development (and especially enterprise testing) easier and more productive. Note: the changes will be propagated to the NetBeans daily build in a few days, so please stay tuned!

    Read the article

  • Can I configure Eclipse / JBoss integration so it does not rely on deploying Jars to the "server/default/deploy" folder?

    - by kellyfj
    I am using MyEclipse 7.5 with JBoss 4.2.3 GA. When I define my local development JBoss server in MyEclipse it always wants to deploy jars, wars etc. to the "server/default/deploy" directory. Unfortunately our JBoss directory structure for production is "server/XYZ/deploy/abc" (driven by a third party). As a result our Dev JBoss instances are different from our QA/Staging/Production JBoss instances. Is there a way to configure Eclipse to use JBoss but deploy to that specific folder path "server/XYZ/deploy/abc" rather than the default one "server/default/deploy"?

    Read the article

  • Is there a Java unit-test framework that auto-tests getters and setters?

    - by Michael Easter
    There is a well-known debate in Java (and other communities, I'm sure) about whether or not trivial getter/setter methods should be tested. Usually, this is with respect to code coverage. Let's agree that this is an open debate and not try to answer it here. There have been several blog posts on using Java reflection to auto-test such methods. Does any framework (e.g. JUnit) provide such a feature? For example, an annotation that says "this test T should auto-test all the getters/setters on class C, because I assert that they are standard". It seems to me that it would add value, and if it were configurable, the 'debate' would be left as an option for the user.
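    The reflection approach those blog posts describe is small enough to sketch even without framework support; this is an illustrative JUnit 4 test (class and property names assumed) that round-trips every getter/setter pair exposed by a bean:

        import static org.junit.Assert.assertEquals;

        import java.beans.BeanInfo;
        import java.beans.Introspector;
        import java.beans.PropertyDescriptor;
        import org.junit.Test;

        public class BeanPropertyTest {

            // Illustrative bean with "standard" accessors.
            static class Customer {
                private String name;
                private int age;
                public String getName() { return name; }
                public void setName(String name) { this.name = name; }
                public int getAge() { return age; }
                public void setAge(int age) { this.age = age; }
            }

            @Test
            public void gettersReturnWhatSettersStored() throws Exception {
                Customer bean = new Customer();
                BeanInfo info = Introspector.getBeanInfo(Customer.class, Object.class);
                for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                    if (pd.getReadMethod() == null || pd.getWriteMethod() == null) {
                        continue; // read-only or write-only property: nothing to round-trip
                    }
                    Object sample = sampleValue(pd.getPropertyType());
                    pd.getWriteMethod().invoke(bean, sample);
                    assertEquals(pd.getName(), sample, pd.getReadMethod().invoke(bean));
                }
            }

            // Minimal sample values; extend for the types your beans actually use.
            private static Object sampleValue(Class<?> type) {
                if (type == int.class || type == Integer.class) return 42;
                if (type == boolean.class || type == Boolean.class) return Boolean.TRUE;
                if (type == String.class) return "sample";
                throw new IllegalArgumentException("no sample value for " + type);
            }
        }

    Wrapping that loop behind an annotation or a base class is mostly plumbing, which is presumably what the blog posts do; as far as I know, small libraries such as OpenPojo have since grown up around the same idea.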

    Read the article

  • Make audible Ding! sound, or growl notification, when `rake test` finishes!

    - by Jordan Feldstein
    I lose a ton of productivity by getting distracted while waiting for my tests to run. Usually, I'll start to look at something while they're running, and 15-20 minutes later I realize my tests are long done and I've spent 10 minutes reading online. Make a small change... rerun tests... another 10-15 minutes wasted! How can I make my computer play some kind of alert (a sound or a Growl notification) when my tests finish, so I can snap back to what I was doing?

    Read the article

  • Agile team with no dedicated Tester members. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the Software Testers I've worked with so far in my career. In fact, I've been thinking a lot about the Software Tester role in general and have reached a potentially contentious conclusion: non-developer software testing staff are less efficient at software testing than developers.

    Now, before everyone gets upset, hear me out. This isn't mere opinion. Software Testing and Software Development both require a lot of skills in common:
    - problem solving
    - thinking about corner cases
    - analytical skills
    - the ability to define clear and concise step-by-step scenarios

    What developers have in addition to this is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often then becomes a test maintenance issue. Because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter: copy-pasta, lack of code reusability/maintainability, etc.

    So, I was wondering: why not replace all non-dev roles with developer roles? Developers have the skills required to perform Software Testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work?

    Hire a bunch of developers and split them into 2 roles:
    - Software developers
    - Software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc.)
    - Software developers doing application support (I've removed this as it is probably a separate question altogether)

    And, in our case, since we're doing Agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their developer stints and testing stints on different projects. Ideally you would want to reduce the turnover rate per rotation, so maybe you could have 2 groups and make sure the rotation cycles of the groups are staggered. So, for example, if each rotation was two sprints long, the two groups would have their rotations 1 sprint apart. That way there's only a 50% turnover rate per sprint.

    Am I crazy, or could this work? (Obviously a key component of this working is that all devs want to rotate through these roles. Let's assume I'm starting a new company and I can hire these ideal people.)

    Edit: I've removed the phrase "QA", as apparently we are using it incorrectly where I work.

    Read the article

  • How best to organize projects folders for unit tests in .NET?

    - by Dan Bailiff
    So I'm trying to introduce unit testing to my group. I've successfully upgraded a VS'05 web site project to a VS'08 web application, and now have a solution with the web app project and a unit test project. The issue now is how to fit this back into the source repository such that we don't break the build system and the unit test projects are persisted as well. Right now we have something like this:

        c:\root
        c:\root\projectA
        c:\root\projectB
        c:\root\projectC

    where projectA contains the sln file and all other related files/folders for the project. Now I have this new solution that looks like this:

        c:\root\projectA (parent folder)
        c:\root\projectA\projectA (the production code project)
        c:\root\projectA\projectA_Test (the unit test project)
        c:\root\projectA\TestResults
        c:\root\projecta\projectA.sln

    How do I integrate this new structure back into the code repository? I'd really prefer to keep the production code folder where it was in the source repository for the sake of the build, but is this necessary? If I keep the production code project in its usual place then where do I keep my unit test projects and how do I connect them with a sln file? Is it better to use this new structure and adjust the build process? I'd love to hear how other people are dealing with this issue of upgrading legacy projects to unit testing.

    Read the article
