Search Results

Search found 28325 results on 1133 pages for 'test cases'.

  • What is Test Driven Development? Does it require initial designs?

    - by Nirajan Singh
    Hello everybody, I am very new to TDD and have not started using it yet. But I know that we write a test first, then the actual code to make the test pass, and refactor until the design is good. My concern with TDD is where it fits in our SDLC. Suppose I get a requirement to build an order processing system. Without having any model or design of this system, how can I start writing tests? Don't we need to define the entities and their attributes first? If not, is it possible to develop a big system without any design? I am really confused about this. Can anyone help me get started with TDD? Thanks in advance.
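
    To make the red-green rhythm concrete, here is a minimal sketch in C# with NUnit; the Order and OrderProcessor names are hypothetical, invented only for illustration. The test is written first (it will not even compile at that point), then the simplest code that makes it pass; the entities and their attributes emerge as more tests are added:

        using NUnit.Framework;

        [TestFixture]
        public class OrderProcessorTests
        {
            [Test]
            public void Process_ValidOrder_MarksOrderAsProcessed()
            {
                // Written before OrderProcessor exists: the test drives out the design.
                var order = new Order();
                var processor = new OrderProcessor();

                processor.Process(order);

                Assert.IsTrue(order.IsProcessed);
            }
        }

        // The simplest code that makes the test pass; refactor as further tests demand.
        public class Order
        {
            public bool IsProcessed { get; set; }
        }

        public class OrderProcessor
        {
            public void Process(Order order) { order.IsProcessed = true; }
        }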

  • How do we name test methods where we are checking for more than one condition?

    - by Sandbox
    I follow the technique specified in Roy Osherove's The Art of Unit Testing when naming test methods: MethodName_Scenario_Expectation. It suits my 'unit' tests perfectly well. But for tests that I write against a 'controller' or 'coordinator' class, there isn't necessarily a method I want to test. For these tests, I set up multiple conditions that make up one scenario and then verify the expectation. For example, I may set some properties on different instances, raise an event, and then verify that my expectation of the controller/coordinator is met. Now, my controller handles events using a private event handler. Here my scenario is that I set some properties, say condition1, condition2 and condition3, and the scenario also includes an event being raised. I don't have a method name to use, because my event handler is private. How do I name such a test method?
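
    One common adaptation (a sketch, not something prescribed by the book) is to let the scenario's trigger take the method-name slot, so the name still reads Trigger_Scenario_Expectation even though no public method is named:

        using NUnit.Framework;

        [TestFixture]
        public class OrderCoordinatorTests
        {
            // The event name stands in for the (private) handler under test;
            // condition1-3 and OrderReceived are hypothetical stand-ins for
            // the properties and event described above.
            [Test]
            public void OrderReceived_ThreeConditionsSet_CoordinatorEntersReadyState()
            {
                // Arrange: set condition1, condition2 and condition3 on the instances.
                // Act:     raise the OrderReceived event (handled by the private handler).
                // Assert:  verify the expectation on the coordinator.
            }
        }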

  • Is there a Java unit-test framework that auto-tests getters and setters?

    - by Michael Easter
    There is a well-known debate in Java (and other communities, I'm sure) about whether trivial getter/setter methods should be tested, usually with respect to code coverage. Let's agree that this is an open debate and not try to answer it here. There have been several blog posts on using Java reflection to auto-test such methods. Does any framework (e.g. JUnit) provide such a feature? For example, an annotation that says "this test T should auto-test all the getters/setters on class C, because I assert that they are standard". It seems to me that it would add value, and if it were configurable, the 'debate' would be left as an option to the user.
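
    As far as I know, no such annotation ships with JUnit itself, but the reflection approach from those blog posts is short enough to hand-roll. A sketch using java.beans introspection (BeanPropertyTester and its sample values are invented for illustration):

        import java.beans.BeanInfo;
        import java.beans.Introspector;
        import java.beans.PropertyDescriptor;
        import java.lang.reflect.Method;

        public class BeanPropertyTester {

            // Round-trips every standard read/write property pair with a sample value.
            public static void assertGettersAndSetters(Object bean) throws Exception {
                BeanInfo info = Introspector.getBeanInfo(bean.getClass(), Object.class);
                for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                    Method getter = pd.getReadMethod();
                    Method setter = pd.getWriteMethod();
                    if (getter == null || setter == null) continue; // not a getter/setter pair
                    Object sample = sampleValue(pd.getPropertyType());
                    if (sample == null) continue; // type we cannot easily instantiate
                    setter.invoke(bean, sample);
                    if (!sample.equals(getter.invoke(bean))) {
                        throw new AssertionError(pd.getName() + " does not round-trip");
                    }
                }
            }

            private static Object sampleValue(Class<?> type) {
                if (type == String.class) return "sample";
                if (type == int.class || type == Integer.class) return 42;
                if (type == boolean.class || type == Boolean.class) return Boolean.TRUE;
                return null;
            }
        }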

  • How to configure .NET test assembly to use website web.config?

    - by Morten Christiansen
    I've run into a problem setting up Selenium tests for an ASP.NET MVC project in cases where I need the settings provided in the web.config of the site under test. The problem is that I want to create a dummy user before running the test and this causes an error saying that the password-answer supplied is invalid. This is due to the test assembly not using the web.config, instead using default values for membership configuration. I've tried to copy the relevant section (membership configuration) into the app.config of the assembly without luck, but I admit I'm just grasping at straws here.
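
    For reference, the kind of section to carry over looks roughly like this (a sketch; the provider name, type and attributes are placeholders, not the real site's values). One easy thing to miss is that the provider's connectionStringName must also resolve inside the test assembly's app.config, not just in web.config:

        <configuration>
          <connectionStrings>
            <add name="MembershipDb"
                 connectionString="Data Source=.;Initial Catalog=AppDb;Integrated Security=True" />
          </connectionStrings>
          <system.web>
            <membership defaultProvider="SqlProvider">
              <providers>
                <clear />
                <add name="SqlProvider"
                     type="System.Web.Security.SqlMembershipProvider"
                     connectionStringName="MembershipDb"
                     requiresQuestionAndAnswer="false" />
              </providers>
            </membership>
          </system.web>
        </configuration>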

  • Java instanceof: supertypes and subtypes seem to be equal? How do I test for an exact type?

    - by jens
    I need to test whether an instance is of exactly a given type. But it seems that instanceof also returns true when a subtype is tested against its supertype (case 3). I never knew this before and I am quite surprised. Am I doing something wrong here? How do I test for an exact type?

        // class DataSourceEmailAttachment extends EmailAttachment

        EmailAttachment emailAttachment = new EmailAttachment();
        DataSourceEmailAttachment emailAttachmentDS = new DataSourceEmailAttachment();

        if (emailAttachment instanceof EmailAttachment)             { System.out.println(" 1"); }
        if (emailAttachment instanceof DataSourceEmailAttachment)   { System.out.println(" 2"); }
        if (emailAttachmentDS instanceof EmailAttachment)           { System.out.println(" 3"); }
        if (emailAttachmentDS instanceof DataSourceEmailAttachment) { System.out.println(" 4"); }

    RESULT: 1 3 4

    I want to avoid case 3; I only want "exact matches" (cases 1 and 4). How do I test for them?
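
    For what it's worth, instanceof is true for subtypes by design; the usual way to demand an exact match is to compare runtime classes, e.g. (a sketch continuing the example above):

        if (emailAttachment.getClass() == EmailAttachment.class) {
            System.out.println("exact EmailAttachment"); // case 1 still matches
        }
        if (emailAttachmentDS.getClass() == EmailAttachment.class) {
            System.out.println("never reached");         // case 3 no longer matches
        }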

  • StructureMap: How can I unit test the registry class?

    - by Marius
    I have a registry class like this:

        public class StructureMapRegistry : Registry
        {
            public StructureMapRegistry()
            {
                For<IDateTimeProvider>().Singleton().Use<DateTimeProviderReturningDateTimeNow>();
            }
        }

    I want to test that the configuration is according to my intent, so I start writing a test:

        public class WhenConfiguringIOCContainer : Scenario
        {
            private TfsTimeMachine.Domain.StructureMapRegistry registry;
            private Container container;

            protected override void Given()
            {
                registry = new TfsTimeMachine.Domain.StructureMapRegistry();
                container = new Container();
            }

            protected override void When()
            {
                container.Configure(i => i.AddRegistry(registry));
            }

            [Then]
            public void DateTimeProviderIsRegisteredAsSingleton()
            {
                // I want to say "verify that the container contains the expected type
                // and that the expected type is registered as a singleton"
            }
        }

    How can I verify that the registry is according to my expectations? Note: I introduced the container because I didn't see any sort of verification methods available on the Registry class. Ideally, I want to test the registry class directly.
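
    One behavioural way to express that assertion, without depending on any registry introspection API, is to resolve the service twice and check both the concrete type and instance identity (a sketch; NUnit-style asserts assumed):

        [Then]
        public void DateTimeProviderIsRegisteredAsSingleton()
        {
            var first = container.GetInstance<IDateTimeProvider>();
            var second = container.GetInstance<IDateTimeProvider>();

            Assert.IsInstanceOf<DateTimeProviderReturningDateTimeNow>(first);
            Assert.AreSame(first, second); // a singleton must hand back the same instance
        }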

  • Is it a bad idea to create tests that rely on each other within a test fixture?

    - by nbolton
    For example:

        // NUnit-like pseudo code (within a TestFixture)
        Ctor() { m_globalVar = getFoo(); }

        [Test] Create() { a(m_globalVar) }

        [Test] Delete() {
            // depends on Create being run
            b(m_globalVar)
        }

    ... or ...

        // NUnit-like pseudo code (within a TestFixture)
        [Test] CreateAndDelete() {
            Foo foo = getFoo();
            a(foo);
            // depends on Create being run
            b(foo);
        }

    ... I'm going with the latter, and assuming that the answer to my question is: no, at least not with NUnit, because according to the NUnit manual: "The constructor should not have any side effects, since NUnit may construct the class multiple times in the course of a session." ... Also, can I assume it's bad practice in general? Since tests can usually be run separately, the result of Create may never be cleaned up by Delete.
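
    If the underlying goal is shared setup plus guaranteed cleanup rather than ordering, NUnit's per-test hooks keep each test independent; a sketch reusing the hypothetical getFoo/a/b calls above:

        [TestFixture]
        public class FooTests
        {
            private Foo m_foo;

            [SetUp]
            public void CreateState() { m_foo = getFoo(); a(m_foo); }

            [TearDown]
            public void DeleteState() { b(m_foo); } // runs after every test, pass or fail

            [Test]
            public void CreatedFooIsValid() { /* assertions against m_foo */ }
        }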

  • Is there a way to test my nonce validation fails when it should?

    - by MrsLannister
    I'm using nonce validation in a WordPress plugin. When I submit the form from the admin menu it processes correctly, so I believe the nonce validation is working. What I'm not sure about is whether the validation will fail when it is supposed to, and I don't know the best way to test this. I tried putting the URL for the PHP file in directly, but all it does is take me to a WordPress "not found" page. Is there some recommended way to test this? Here is my code. Again, the test passes when it is supposed to; I just don't know if it fails when it is supposed to.

        if ( !wp_verify_nonce( $ecbs_post_data['_wpnonce'], 'ecbs-edit-templates' ) ) {
            wp_die( __( 'You do not have permission to update this page.' ) );
        }
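
    One low-tech way to force the failure path (a temporary sketch, not production code) is to corrupt the nonce before verifying it and confirm that wp_die fires; alternatively, submit the form, wait past the nonce lifetime (24 hours by default), and resubmit:

        <?php
        // Temporarily sabotage the nonce so wp_verify_nonce() must return false.
        $ecbs_post_data['_wpnonce'] = 'deliberately-invalid-nonce';

        if ( ! wp_verify_nonce( $ecbs_post_data['_wpnonce'], 'ecbs-edit-templates' ) ) {
            wp_die( __( 'You do not have permission to update this page.' ) );
        }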

  • Linking a NASM program on Mac OS X

    - by Fry Constantine
    I have some problems linking a NASM program on Mac OS:

        GLOBAL _start

        SEGMENT .text
        _start:
                mov ax, 5
                mov bx, ax
                mov [a], ebx

        SEGMENT .data
        a       DW 0
        t2      DW 0

        fry$ nasm -f elf test.asm
        fry$ ld -o test test.o -arch i386
        ld: warning: in test.o, file was built for unsupported file format which is not the architecture being linked (i386)
        ld: could not find entry point "start" (perhaps missing crt1.o)
        fry$ nasm -f macho test.asm
        fry$ ld -o test test.o -arch i386
        ld: could not find entry point "start" (perhaps missing crt1.o)

    Can anyone help me?
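
    Two things stand out: -f elf produces the wrong object format for Mac OS X (-f macho, as in the second attempt, is right), and when linking without crt1.o the system linker looks for an entry symbol named start, not _start. A sketch of a version that should link (the exit sequence is the common 32-bit BSD syscall convention, added so the program terminates cleanly; a is widened to DD because the store writes 32 bits):

        GLOBAL start                ; Mach-O ld expects "start", not "_start"

        SEGMENT .text
        start:
                mov     eax, 5
                mov     ebx, eax
                mov     [a], ebx

                push    dword 0     ; exit status
                mov     eax, 1      ; BSD exit syscall number
                sub     esp, 4      ; padding expected by the BSD syscall convention
                int     0x80

        SEGMENT .data
        a       DD 0                ; DD so the 32-bit store to [a] fits
        t2      DW 0

    Then assemble and link:

        nasm -f macho test.asm
        ld -o test test.o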

  • How to get HTML elements with Python lxml

    - by Damiano
    Hello! I have this HTML code:

        <table>
          <tr>
            <td class="test"><b><a href="">aaa</a></b></td>
            <td class="test">bbb</td>
            <td class="test">ccc</td>
            <td class="test"><small>ddd</small></td>
          </tr>
          <tr>
            <td class="test"><b><a href="">eee</a></b></td>
            <td class="test">fff</td>
            <td class="test">ggg</td>
            <td class="test"><small>hhh</small></td>
          </tr>
        </table>

    I use this Python code to extract all <td class="test"> elements with the lxml module:

        import urllib2
        import lxml.html

        code = urllib2.urlopen("http://www.example.com/page.html").read()
        html = lxml.html.fromstring(code)
        result = html.xpath('//td[@class="test"][position() = 1 or position() = 4]')

    It works well! The result is:

        <td class="test"><b><a href="">aaa</a></b></td>
        <td class="test"><small>ddd</small></td>
        <td class="test"><b><a href="">eee</a></b></td>
        <td class="test"><small>hhh</small></td>

    (so the first and the fourth column of each <tr>). Now I have to extract: aaa (the text of the link), ddd (the text inside the <small> tag), eee (the text of the link), and hhh (the text inside the <small> tag). How could I extract these values? (The problem is that I have to strip the <b> tag and get the anchor text in the first column, and strip the <small> tag in the fourth column.) Thank you!
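
    Since result already holds the element nodes, lxml's text_content() strips nested tags such as <b> and <small> in one call; a sketch continuing from the code above:

        values = [cell.text_content().strip() for cell in result]
        print values  # ['aaa', 'ddd', 'eee', 'hhh']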

  • Ignoring folders in Mercurial

    - by damian
    Caveat: I tried all the possibilities listed here: http://stackoverflow.com/questions/254002/how-can-i-ignore-everything-under-a-folder-in-mercurial. None works as I hoped. I want to ignore everything under the folder test, but not ignore srcProject\test\TestManager. I tried:

        syntax: glob
        test/**

    and it ignores both test and srcProject\test\TestManager. With:

        syntax: regexp
        ^/test/

    it's the same thing. Also with:

        syntax: regexp
        test\\*

    I have installed TortoiseHG 0.4rc2 with Mercurial-626cb86a6523+tortoisehg, Python-2.5.1, PyGTK-2.10.6, GTK-2.10.11 on Windows.
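
    Mercurial matches regexp patterns anywhere in the repository-relative path, and those paths do not start with a slash, so the anchor belongs directly before test. A sketch of an .hgignore that ignores only the top-level test directory (leaving srcProject/test alone):

        syntax: regexp
        ^test/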

  • Django url tag multiple parameters

    - by Overdose
    I have two similar pieces of code. The first one works as expected:

        urlpatterns = patterns('',
            (r'^(?P<n1>\d)/test/', test),
            (r'', test2),
        )

        {% url testapp.views.test n1=5 %}

    But adding a second parameter makes the result an empty string:

        urlpatterns = patterns('',
            (r'^(?P<n1>\d)/test(?P<n2>\d)/', test),
            (r'', test2),
        )

        {% url testapp.views.test n1=5, n2=2 %}

    View signature:

        def test(request, n1, n2=1):
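
    The likely culprit is the comma: depending on the Django version, the {% url %} tag takes space-separated arguments (newer syntax) or comma-separated arguments with no spaces (legacy syntax), and "n1=5, n2=2" is neither. A sketch of both forms:

        {% url testapp.views.test n1=5 n2=2 %}

    or, in the older comma style:

        {% url testapp.views.test n1=5,n2=2 %}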

  • How to do a case sensitive GROUP BY?

    - by Abe Miessler
    If I execute the code below:

        with temp as (
            select 'Test' as name UNION ALL
            select 'TEST' UNION ALL
            select 'test' UNION ALL
            select 'tester' UNION ALL
            select 'tester'
        )
        SELECT name, COUNT(name)
        FROM temp
        group by name

    it returns the results:

        TEST    3
        tester  2

    Is there a way to make the GROUP BY case sensitive so that the results would be:

        Test    1
        TEST    1
        test    1
        tester  2
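
    Assuming SQL Server (whose default collations are case-insensitive), one approach is to apply a case-sensitive collation in the grouping; a sketch:

        with temp as (
            select 'Test' as name UNION ALL
            select 'TEST' UNION ALL
            select 'test' UNION ALL
            select 'tester' UNION ALL
            select 'tester'
        )
        SELECT name COLLATE Latin1_General_CS_AS AS name, COUNT(*) AS cnt
        FROM temp
        GROUP BY name COLLATE Latin1_General_CS_AS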

  • Find all occurrences of a substring in Python

    - by cru3l
    Python has string.find() and string.rfind() to get the index of a substring in a string. I wonder whether there is something like string.find_all() which can return all found indexes (not only the first from the beginning or the first from the end)? For example:

        string = "test test test test"

        print string.find('test')   # 0
        print string.rfind('test')  # 15

        # that's the goal:
        print string.find_all('test')  # [0, 5, 10, 15]
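
    There is no built-in find_all on strings as far as I know, but it is a short generator over string.find; a sketch (step by index + 1 instead if overlapping occurrences should be reported too):

        def find_all(haystack, needle):
            """Yield each index at which needle occurs in haystack (non-overlapping)."""
            index = haystack.find(needle)
            while index != -1:
                yield index
                index = haystack.find(needle, index + len(needle))

        print list(find_all("test test test test", "test"))  # [0, 5, 10, 15]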

  • Generate data in Excel using Macros?

    - by RD
    I need to create a table with the following structure:

        Applicant | Test 1 | Test 2 | Test 3 | Test 4 | Test 5 | Test 6 |
        1         | A      | C      | D      | E      | F      | B      |
        2         | C      | B      | A      | E      | D      | F      |
        3         | C      | A      | F      | G      | B      | D      |
        ....      |        |        |        |        |        |        |

    Basically, Test 1 - 6 can be any letter between A and F. I want a macro (or some other method) by which I can generate this table, with 200 applicants, where the tests are completely randomised. Anyone know how to do this?
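
    A VBA sketch (assumes a blank active worksheet; adjust the ranges as needed). It fills each test cell with an independent random letter A-F; if instead each row must be a permutation of six distinct letters, shuffle an array per row rather than drawing letters independently:

        Sub GenerateApplicantTable()
            Dim r As Long, c As Long
            Randomize
            Cells(1, 1).Value = "Applicant"
            For c = 1 To 6
                Cells(1, c + 1).Value = "Test " & c
            Next c
            For r = 1 To 200                    ' 200 applicants
                Cells(r + 1, 1).Value = r
                For c = 1 To 6
                    ' Chr(65) = "A"; Int(Rnd * 6) yields 0-5, giving letters A-F
                    Cells(r + 1, c + 1).Value = Chr(65 + Int(Rnd * 6))
                Next c
            Next r
        End Sub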

  • Measuring Usability with Common Industry Format (CIF) Usability Tests

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience

    A User-centered Research and Design Process

    The Oracle Fusion Applications user experience was five years in the making. The development of this suite included an extensive and comprehensive user experience design process: ethnographic research, low-fidelity workflow prototyping, high fidelity user interface (UI) prototyping, iterative formative usability testing, development feedback and iteration, and sales and customer evaluation throughout the design cycle. However, this process does not stop when our products are released. We conduct summative usability testing using the ISO 25062 Common Industry Format (CIF) for usability test reports as an organizational framework. CIF tests allow us to measure the overall usability of our released products. These studies provide benchmarks that allow for comparisons of a specific product release against previous versions of our product and against other products in the marketplace.

    What Is a CIF Usability Test?

    CIF refers to the internationally standardized method for reporting usability test findings used by the software industry. The CIF is based on a formal, lab-based test that is used to benchmark the usability of a product in terms of human performance and subjective data. The CIF was developed and is endorsed by more than 375 software customer and vendor organizations led by the National Institute for Standards and Technology (NIST), a US government entity. NIST sponsored the CIF through the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standards-making processes. Oracle played a key role in developing the CIF. The CIF report format and metrics are consistent with the ISO 9241-11 definition of usability: "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." Our goal in conducting CIF tests is to measure performance and satisfaction of a representative sample of users on a set of core tasks and to help predict how usable a product will be with the larger population of customers.

    Why Do We Perform CIF Testing?

    The overarching purpose of the CIF for usability test reports is to promote incorporation of usability as part of the procurement decision-making process for interactive products. CIF provides a common format for vendors to report the methods and results of usability tests to customer organizations, and enables customers to compare the usability of our software to that of other suppliers. CIF also enables us to compare our current software with previous versions of our software.

    CIF Testing for Fusion Applications

    Oracle Fusion Applications comprises more than 100 modules in seven different product families. These modules encompass more than 400 task flows and 400 user roles. Due to resource constraints, we cannot perform comprehensive CIF testing across the entire product suite. Therefore, we had to develop meaningful inclusion criteria and work with other stakeholders across the applications development organization to prioritize product areas for testing. Ultimately, we want to test the product areas for which customers might be most interested in seeing CIF data. We also want to build credibility with customers; we need to be able to make the case to current and prospective customers that the product areas tested are representative of the product suite as a whole.

    Our goal is to test the top use cases for each product. The primary activity in the scoping process was to work with the individual product teams to identify the key products and business process task flows in each product to test. We prioritized these products and flows through a series of negotiations among the user experience managers, product strategy, and product management directors for each of the primary product families within the Oracle Fusion Applications suite (Human Capital Management, Supply Chain Management, Customer Relationship Management, Financials, Projects, and Procurement). The end result of the scoping exercise was a list of 47 proposed CIF tests for the Fusion Applications product suite.

    Figure 1. A participant completes tasks during a usability test in Oracle's Usability Labs

    Fusion Supplier Portal CIF Test

    The first Fusion CIF test was completed on the Supplier Portal application in July of 2011. Fusion Supplier Portal is part of an integrated suite of Procurement applications that helps supplier companies manage orders, schedules, shipments, invoices, negotiations and payments. The user roles targeted for the usability study were Supplier Account Receivables Specialists and Supplier Sales Representatives, including both experienced and inexperienced users across a wide demographic range. The test specifically focused on the following functionality and features:

    • Manage payments – view payments
    • Manage invoices – view invoice status and create invoices
    • Manage account information – create new contact, review bank account information
    • Manage agreements – find and view agreement, upload agreement lines, confirm status of agreement lines upload
    • Manage purchase orders (PO) – view history of PO, request change to PO, find orders
    • Manage negotiations – respond to request for a quote, check the status of a negotiation response

    These product areas were selected to represent the most important subset of features and functionality of the flow, in terms of frequency and criticality of use by customers. A total of 20 users participated in the usability study. The results of the Supplier Portal evaluation were favorable and exceeded our expectations.

    Figure 2. Fusion Supplier Portal

    Next Studies

    We plan to conduct two Fusion CIF usability studies per product family over the next nine months. The next product to be tested will be Self-service Procurement. End users are currently being recruited to participate in this usability study, and the test sessions are scheduled to begin during the last week of November.

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggest use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs

    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:

    • In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up or even developed yet for the BizTalk developers to work with.
    • By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    • If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    • Learning how to use the source\target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort.
    • By the time the data is visible in a UI it may have undergone further transformations.
    • In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    • It is harder to write BizUnit tests that clean up the data\logs after each test run.
    • What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams.
    • There may be licensing costs in setting up an instance of the external system.

    The stubs I like to use are generic stubs that can accept\return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and to select a response message (or error response). Once built, they can be re-used for many integration tests and from project to project.

    I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works, or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.

    Tests

    Automated "Stubbed Integration Tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub, and ensure that all of the BizTalk components are configured together correctly to meet all the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides much the easiest way to test some component types (e.g. Orchestrations).

    Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson (source: http://biztalkbddsample.codeplex.com – Video 1) delivers the following benefits; a sketch of what such a feature file looks like follows this list:

    • Requirements can be easily defined using Given/When/Then
    • Requirements are close to the code, so they are easier to manage as features and scenarios
    • Requirements are defined in domain language
    • The feature files can be used as part of the documentation
    • The documentation is accurate to the build of code and can be published with a release
    • The scenarios are effective at documenting the scenarios without being excessive
    • The scenarios are maintained with the code
    • There's an abstraction between the intention and implementation of tests, making them easier to understand
    • The requirements drive the testing

    These same tests can also be used to drive load testing as described here.

    If you don't do this ...

    If you don't follow the above "Stubbed Integration Tests" approach, the developer will need to manually trigger the tests. This has the following risks:

    • Developers are unlikely to check all the scenarios each time, and all the expected conditions each time.
    • After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    • There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment through the triggering of tests from a source system UI. If this is a replacement for BizUnit tests, then this carries the following risks:

    • It moves the tests downstream, so problems will be found later in the process.
    • Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    • These automated tests may also get in the way of manual tests run on these environments.
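
    To make the Given/When/Then point concrete, a feature file for a stubbed BizTalk flow might read like this (a sketch; the feature, scenario and step names are invented):

        Feature: Order routing
          Scenario: A valid order reaches the ERP stub
            Given the generic ERP stub is listening
            When a valid order message is dropped on the receive location
            Then the ERP stub should receive exactly one order message
            And no messages should be suspended in BizTalk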

  • Can't run Cucumber scenarios due to test-unit version issue on Rails 2.3.5, Ruby 1.9.1

    - by Jeff D
    I've been trying to follow along in the RSpec book (I'm new to all of this), and I have what appears to be some kind of versioning issue. If I try to run some simple scenarios, I get this error:

        can't activate test-unit (= 1.2.3, runtime) for [], already activated test-unit-2.0.7 for [] (Gem::LoadError)
        /Users/jeffdeville/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/site_ruby/1.9.1/rubygems.rb:230:in `activate'
        /Users/jeffdeville/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/site_ruby/1.9.1/rubygems.rb:1056:in `gem'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/rspec-1.3.0/lib/spec/interop/test.rb:4:in `<top (required)>'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/rspec-1.3.0/lib/spec/test/unit.rb:1:in `<top (required)>'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/rspec-rails-1.3.2/lib/spec/rails.rb:13:in `<top (required)>'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-rails-0.3.0/lib/cucumber/rails/rspec.rb:15:in `rescue in <top (required)>'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-rails-0.3.0/lib/cucumber/rails/rspec.rb:3:in `<top (required)>'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
        /Users/jeffdeville/code/showtime/Features/support/env.rb:11:in `<top (required)>'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/step_mother.rb:85:in `load_code_file'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/step_mother.rb:77:in `block in load_code_files'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/step_mother.rb:76:in `each'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/step_mother.rb:76:in `load_code_files'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/cli/main.rb:48:in `execute!'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/cli/main.rb:20:in `execute'
        /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/bin/cucumber:8:in `<top (required)>'
        script/cucumber:9:in `load'
        script/cucumber:9:in `<main>'

    However, uninstalling 2.0.7 yields the error:

        Missing these required gems:
          test-unit = 2.0.7

        You're running:
          ruby 1.9.1.378 at /Users/jeffdeville/.rvm/rubies/ruby-1.9.1-p378/bin/ruby
          rubygems 1.3.6 at /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378, /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378@global

        Run "rake gems:install" to install the missing gems.

    Sorry, this is probably something easy, but I just don't know Ruby or Rails well enough yet.
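
    The "Missing these required gems: test-unit = 2.0.7" message suggests a config.gem declaration (typically in config/environment.rb) is pinning test-unit to 2.0.7, while rspec's test/unit interop wants exactly 1.2.3. If so, one thing to try (a sketch; adjust to whatever line is actually present in that file) is pinning the version rspec expects and reinstalling:

        # config/environment.rb -- hypothetical existing line, version changed to 1.2.3
        config.gem "test-unit", :version => "1.2.3", :lib => "test/unit"

    followed by rake gems:install.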

  • Java: how to do a fast copy of a BufferedImage's pixels? (unit test included)

    - by WizardOfOdds
    I want to copy (a rectangular area of) the ARGB values from a source BufferedImage into a destination BufferedImage. No compositing should be done: if I copy a pixel with an ARGB value of 0x8000BE50 (alpha value at 128), then the destination pixel must be exactly 0x8000BE50, totally overriding the destination pixel. I've got a very precise question and I made a unit test to show what I need. The unit test is fully functional and self-contained, is passing fine, and is doing precisely what I want. However, I want a faster and more memory-efficient method to replace copySrcIntoDstAt(...). That's the whole point of my question: I'm not after how to "fill" the image in a faster way (what I did is just an example to have a unit test). All I want to know is what would be a fast and memory-efficient way to do it (i.e. fast and not creating needless objects). The proof-of-concept implementation I've made is obviously very memory efficient, but it is slow (doing one getRGB and one setRGB for every pixel). Schematically, I've got this (where A indicates corresponding pixels from the destination image before the copy):

        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA

    And I want to have this:

        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAABBBBAAA
        AAAAAAAAAAAAABBBBAAA
        AAAAAAAAAAAAAAAAAAAA

    where 'B' represents the pixels from the src image. I'm looking for an exact replacement of the method, not for an API link/quote.

        import org.junit.Test;
        import java.awt.image.BufferedImage;
        import static org.junit.Assert.*;

        public class TestCopy {

            private static final int COL1 = 0x8000BE50; // alpha at 128
            private static final int COL2 = 0x1732FE87; // alpha at 23

            @Test
            public void testPixelsCopy() {
                final BufferedImage src = new BufferedImage(5, 5, BufferedImage.TYPE_INT_ARGB);
                final BufferedImage dst = new BufferedImage(20, 20, BufferedImage.TYPE_INT_ARGB);
                convenienceFill(src, COL1);
                convenienceFill(dst, COL2);
                copySrcIntoDstAt(src, dst, 3, 4);
                for (int x = 0; x < dst.getWidth(); x++) {
                    for (int y = 0; y < dst.getHeight(); y++) {
                        if (x >= 3 && x <= 7 && y >= 4 && y <= 8) {
                            assertEquals(COL1, dst.getRGB(x, y));
                        } else {
                            assertEquals(COL2, dst.getRGB(x, y));
                        }
                    }
                }
            }

            // clipping is unnecessary
            private static void copySrcIntoDstAt(final BufferedImage src,
                                                 final BufferedImage dst,
                                                 final int dx, final int dy) {
                // TODO: replace this by a much more efficient method
                for (int x = 0; x < src.getWidth(); x++) {
                    for (int y = 0; y < src.getHeight(); y++) {
                        dst.setRGB(dx + x, dy + y, src.getRGB(x, y));
                    }
                }
            }

            // This method is just a convenience method, there's
            // no point in optimizing this method, this is not what
            // this question is about
            private static void convenienceFill(final BufferedImage bi,
                                                final int color) {
                for (int x = 0; x < bi.getWidth(); x++) {
                    for (int y = 0; y < bi.getHeight(); y++) {
                        bi.setRGB(x, y, color);
                    }
                }
            }
        }
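
    For what it's worth, one drop-in candidate (a sketch; same signature as above) uses the bulk getRGB/setRGB overloads, which move a whole rectangle per call and, like the per-pixel setRGB, do no compositing:

        // Bulk variant: one read and one write per rectangle instead of per pixel.
        private static void copySrcIntoDstAt(final BufferedImage src,
                                             final BufferedImage dst,
                                             final int dx, final int dy) {
            final int w = src.getWidth();
            final int h = src.getHeight();
            final int[] pixels = src.getRGB(0, 0, w, h, null, 0, w); // one int[] per call
            dst.setRGB(dx, dy, w, h, pixels, 0, w); // exact ARGB values, no compositing
        }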

  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by setting all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2, there are 28 kinds of activities, which fall into three categories: Control activities, OWB-specific activities and Utility activities. For more information about Process Flow activities, please refer to the OWB online doc. Most of those activities are pre-defined for some specific use. For example, the Mapping activity allows execution of an OWB mapping in Process Flow, and the FTP activity allows an interaction between the local host and a remote FTP server. Besides those activities for specific purposes, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder. So the User Defined activity brings flexibility and extensibility to Process Flow. In this article, we will take an amazing tour of using the User Defined activity. Let's start.

    Enable execution of User Defined activity

    Let's start this section by creating a very simple Process Flow, which contains a Start activity, a User Defined activity and an End Success activity. Leave all parameters of activity USER_DEFINED unchanged except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in the /tmp directory. Here is the content of /tmp/test.sh (this article is demonstrating a scenario on a Linux system, and /tmp/test.sh is a Bash shell script):

        echo Hello World! > /tmp/test.txt

    Note: don't forget to grant the execution privilege on /tmp/test.sh to the OS Oracle user. For simplicity, we just use the following command:

        chmod +x /tmp/test.sh

    OK, it's so simple that we've almost done it. Now deploy the Process Flow and run it. For a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". That's because, by default, the User Defined activity is DISABLED. Configuration about this can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED

    The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISABLED. NATIVE_JAVA uses the Java 'Runtime.exec' interface, SCHEDULER uses a DBMS Scheduler external job submitted by the Control Center repository owner which is executed by the default operating system user configured by the DBA, and DISABLED prevents execution via these operators. We enable the execution of the User Defined activity by setting:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=NATIVE_JAVA

    Restart the Control Center service for the change of setting to take effect:

        cd <ORACLE_HOME>/owb/rtp/sql
        sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql
        sqlplus OWBSYS/<password of OWBSYS> @start_service.sql

    And then run the Process Flow again. We will see that the Process Flow completes successfully. The execution of /tmp/test.sh successfully generated a file /tmp/test.txt, containing the line "Hello World!".

    Pass parameters to User Defined Activity

    The Process Flow created in the above section has a drawback: the User Defined activity doesn't accept any information from OWB, nor does it give any meaningful results back to OWB. That's to say, it lacks interaction. Maybe sometimes such a Process Flow can fulfill the business requirement, but most of the time we need the User Defined activity to execute according to some information from the steps prior to it. In this section, we will see how to pass parameters to the User Defined activity and pass them into the to-be-executed shell script.

    First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST. This is a list of parameters that will be passed to the command. Parameters are separated from one another by a token. The token is taken as the first character of the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc', 'def' and 'ghi' you can use any of the following equivalents:

        ?abc?def?ghi?
        !abc!def!ghi!
        |abc|def|ghi|

    If the token character or '\' needs to be included as part of the parameter, then it must be preceded with '\', for example '\\'. If '\' is the token character, then '/' becomes the escape character.

    Let's configure the PARAMETER_LIST parameter accordingly and modify the shell script /tmp/test.sh as below:

        echo $1 is saying hello to $2! > /tmp/test.txt

    Re-deploy the Process Flow and run it. We will see that the generated /tmp/test.txt contains the following line:

        Bob is saying hello to Alice!

    In the example above, the parameters passed into the shell script are static. This case is not so useful because, instead of passing parameters, we could directly write the values in the shell script. To make the case more meaningful, we can pass two dynamic parameters, obtained from the previous activity, to the shell script. Prepare the Process Flow as follows: the Mapping activity MAPPING_1 has two output parameters, FROM_USER and TO_USER, and the User Defined activity has two input parameters, FROM_USER and TO_USER. All four parameters are of String type. Additionally, the Process Flow has two string variables: VARIABLE_FOR_FROM_USER and VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets its value from the output parameter FROM_USER of MAPPING_1; we achieve this by binding both parameters to VARIABLE_FOR_FROM_USER. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets its value from the output parameter TO_USER of MAPPING_1. Also, we need to change the PARAMETER_LIST of the User Defined activity accordingly. Now the shell script is getting input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependees, then run the Process Flow. We see that the generated /tmp/test.txt contains the following line:

        USER B is saying hello to USER A!

    'USER B' and 'USER A' are two outputs of the Mapping execution.

    Write the shell script within Oracle Warehouse Builder

    In the previous section, the shell script is located in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by configuring these three parameters of a User Defined activity properly:

    • COMMAND: set the path of the interpreter by which the shell script will be interpreted.
    • PARAMETER_LIST: set it blank.
    • SCRIPT: enter the shell script content.

    Note that in Linux the shell script content is passed into the interpreter as standard input at runtime. To actually pass parameters to the shell script, we can utilize variable substitutions: ${FROM_USER} will be replaced by the value of the FROM_USER input parameter of the User Defined activity, and so will the ${TO_USER} symbol. Besides the custom substitution variables, OWB also provides some system pre-defined substitution variables; you can refer to the online document for those. Deploy the Process Flow and run it. We see that the generated /tmp/test.txt contains the following line:

        USER B is saying hello to USER A!

    Leverage the return value of User Defined activity

    All of the previous sections connect the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results?

    1. The simplest way is to add three simple-conditioned outgoing transitions to the User Defined activity. To simplify the scenario, we connect the User Defined activity to three End activities: if the shell script ends successfully, the whole Process Flow ends at END_SUCCESS; otherwise it ends at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can add more complex and meaningful subsequent business logic.

    2. Or we can utilize complex conditions to work with different results of the User Defined activity. Previously, our script only had this line:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt

    We can add more logic to it and return different values accordingly:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt
        if CONDITION_1 ; then
            ......
            exit 0
        fi
        if CONDITION_2 ; then
            ......
            exit 2
        fi
        if CONDITION_3 ; then
            ......
            exit 3
        fi

    After that we can leverage the result by checking RESULT_CODE in the condition expressions of those outgoing transitions. Suppose we have a Process Flow in which SUB_PROCESS_n stands for different further processes: we can set a complex condition for the transition from USER_DEFINED to SUB_PROCESS_1, and the other transitions can be set in the same way. Note that in our shell script we return 0, 2 and 3, but not 1, because on a Linux system, if the shell script comes across a system error such as an IO error, the return value will be 1. We can explicitly handle such a return value.

    Summary

    Let's summarize what has been discussed in this article:

    • How to create a Process Flow with a User Defined activity in it
    • How to pass parameters from the prior activity to the User Defined activity and finally into the shell script
    • How to write the shell script within Oracle Warehouse Builder
    • How to do variable substitutions
    • How to let the User Defined activity return different values, and in what ways we can leverage them

  • How do I create a Linked Server in SQL Server 2005 to a password protected Access 95 database?

    - by Brad Knowles
    I need to create a linked server with SQL Server Management Studio 2005 to an Access 95 database, which happens to be password-protected at the database level. User-level security has not been implemented. I cannot convert the Access database to a newer version; it is being used by a 3rd-party application, so modifying it in any way is not allowed. I've tried using the Jet 4.0 OLE DB Provider and the ODBC OLE DB Provider. The 3rd-party application creates a System DSN (with the proper database password), but I've not had any luck with either method. If I were using a standard connection string, I think it would look something like this:

        Provider=Microsoft.Jet.OLEDB.4.0;Data Source='C:\Test.mdb';Jet OLEDB:Database Password=####;

    I'm fairly certain I need to somehow incorporate Jet OLEDB:Database Password into the linked server setup, but haven't figured out how. I've posted the scripts I'm using along with the associated error messages below. Any help is greatly appreciated. I'll provide more details if needed, just ask. Thanks!

    Method #1 - Using the Jet 4.0 Provider

    When I try to run these statements to create the linked server:

        sp_dropserver 'Test', 'droplogins';
        EXEC sp_addlinkedserver
            @server = N'Test',
            @provider = N'Microsoft.Jet.OLEDB.4.0',
            @srvproduct = N'Access DB',
            @datasrc = N'C:\Test.mdb'
        GO
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'Test',
            @useself = N'False',
            @locallogin = NULL,
            @rmtuser = N'Admin',
            @rmtpassword = '####'
        GO

    I get this error when testing the connection:

        TITLE: Microsoft SQL Server Management Studio
        ------------------------------
        "The test connection to the linked server failed."
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        ------------------------------
        The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" reported an error. Authentication failed.
        Cannot initialize the data source object of OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test".
        OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" returned message "Cannot start your application. The workgroup information file is missing or opened exclusively by another user.". (Microsoft SQL Server, Error: 7399)

    Method #2 - Using the ODBC Provider

        sp_dropserver 'Test', 'droplogins';
        EXEC sp_addlinkedserver
            @server = N'Test',
            @provider = N'MSDASQL',
            @srvproduct = N'ODBC',
            @datasrc = N'Test:DSN'
        GO
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'Test',
            @useself = N'False',
            @locallogin = NULL,
            @rmtuser = N'Admin',
            @rmtpassword = '####'
        GO

    I get this error:

        TITLE: Microsoft SQL Server Management Studio
        ------------------------------
        "The test connection to the linked server failed."
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        ------------------------------
        Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "Test".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Microsoft Access Driver] Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt.". (Microsoft SQL Server, Error: 7303)
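
    For what it's worth, sp_addlinkedserver also accepts a @provstr argument, which is where provider-specific settings such as Jet OLEDB:Database Password would normally go; a sketch (untested against an Access 95 file specifically):

        EXEC sp_addlinkedserver
            @server = N'Test',
            @provider = N'Microsoft.Jet.OLEDB.4.0',
            @srvproduct = N'Access DB',
            @datasrc = N'C:\Test.mdb',
            @provstr = N'Jet OLEDB:Database Password=####;'
        GO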

  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I've finally noticed the problem with making changes directly to the live server I've been working on: now that I have a couple of users on it, I bring it down far too often. I created an EC2 image of my live server and set up a separate instance, so now I have two EC2 instances, Stage and Production. I set up GitHub and push changes to Stage, test my code there, and when it's all done and working, I push it to the production branch, and everything is good. There is a slight wrinkle here: I name my files config_stage.js and config_production.js, set up .gitignore on each server, and have my code read the ENV flags and load the appropriate config (see the sketch below) — is this the correct approach? And my main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB and several other things onto the Stage server for testing, and now that it's all working and good, how do I deploy them to Production? Right now, I'm just keeping track of everything I installed and copying configuration files over, which is very tedious, and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes over from my test server to my live server?
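
    The ENV-flag config pattern described above might look like this in Node (a sketch; the file names match the ones mentioned, everything else is hypothetical):

        // config.js -- pick the per-environment settings file by NODE_ENV.
        var env = process.env.NODE_ENV || 'stage';        // 'stage' or 'production'
        var config = require('./config_' + env + '.js'); // config_stage.js / config_production.js
        module.exports = config;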

  • Nginx config rewriting subdomain name to 1st URI segment

    - by tim peterson
    I'm unable to get the following nginx.conf rewrite to work. I want to rewrite:

        test.mysite.info

    to:

        mysite.info/test

    Here's what I've tried:

        server {
            server_name test.mysite.info;
            rewrite ^ https://mysite.info/test/$request_uri;
        }

    I know my DNS (Route 53 on AWS) is correct because test.mysite.info redirects to mysite.info (just not to mysite.info/test). I have an Apache server handling mysite.com, and using .htaccess I can rewrite test.mysite.com to mysite.com/test. I haven't changed anything else from the default nginx.conf installation, so I'm totally confused as to why such a simple thing isn't working. Here is my full nginx.conf file if that is helpful.
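
    One detail worth checking (a sketch, not a guaranteed fix for this setup): $request_uri already begins with a slash, so appending it after /test/ produces /test//...; an explicit redirect also makes the intent clearer:

        server {
            listen 80;
            server_name test.mysite.info;
            # $request_uri already starts with "/", so no trailing slash after /test
            return 301 https://mysite.info/test$request_uri;
        }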

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output:

        $ echo -n | openssl s_client -connect www.google.com:443
        CONNECTED(00000003)
        depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        verify error:num=20:unable to get local issuer certificate
        verify return:0
        ---
        Certificate chain
         0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
           i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
         1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
           i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM
        MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg
        THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x
        MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh
        MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw
        FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
        gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN
        gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L
        05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM
        BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl
        LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF
        BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw
        Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0
        ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF
        AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5
        u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6
        z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw==
        -----END CERTIFICATE-----
        subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
        issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1777 bytes and written 316 bytes
        ---
        New, TLSv1/SSLv3, Cipher is AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : AES256-SHA
            Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C
            Session-ID-ctx:
            Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5
            Key-Arg   : None
            Start Time: 1266259321
            Timeout   : 300 (sec)
            Verify return code: 20 (unable to get local issuer certificate)
        ---

    it just shows that the cipher suite is something with AES256-SHA. I know I could grep through a hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or others) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing. Update: GregS points out below that the SSL server picks from the client's cipher suites. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does precisely this?
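
    In the absence of a dedicated tool, a rough shell loop over OpenSSL's own cipher list can enumerate what the server accepts (a sketch; slow, and limited to the suites the local OpenSSL build supports):

        #!/usr/bin/env bash
        HOST=www.google.com:443
        # Ask the local OpenSSL for every cipher it knows, then try each one.
        for cipher in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
            if echo -n | openssl s_client -connect "$HOST" -cipher "$cipher" >/dev/null 2>&1; then
                echo "Accepted: $cipher"
            else
                echo "Rejected: $cipher"
            fi
        done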
