Search Results

Search found 41053 results on 1643 pages for 'database unit testing'.

Page 105 of 1643

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    One of the talks I did at QCon London was about a subject I've come across fairly recently, while building SilverUnit, a pure unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of "cogs in the machine": when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be: your custom control running inside Silverlight...

    Read the article

  • TDD/Tests too much an overhead/maintenance burden?

    - by MeshMan
    So you've heard it many times from those who do not truly understand the value of testing. Just to start things out, I'm a follower of Agile and testing... I recently had a discussion about performing TDD on a product rewrite where the current team does not practice unit testing on any level, and has probably never heard of dependency injection or test patterns/design, etc. (we won't even get on to clean code). Now, I am fully responsible for the rewrite of this product, and I'm told that attempting it TDD-style will merely make it a maintenance nightmare and impossible for the team to maintain. Furthermore, as it's a front-end application (not web-based), adding tests is supposedly pointless: as the business drives changes (by changes they mean improvements, of course), the tests will become out of date, and other developers who come onto the project in the future will not maintain them; they will just become more of a burden to fix. I can understand that TDD in a team that does not currently hold any testing experience doesn't sound good, but my argument in this case is that I can teach my practice to those around me, and furthermore, I know that TDD makes BETTER software. Even if I were to produce the software using TDD and throw all the tests away on handing it over to a maintenance team, it would surely be a better approach than not using TDD at all from the start? I've been shot down whenever I've mentioned doing TDD on most projects, for a team that has never heard of it. The thought of "interfaces" and strange-looking DI constructors scares them off... Can anyone please help me with what is normally a very short conversation of trying to sell TDD and my approach to people? I usually have a very short window of argument before falling at the knees to the company/team.

    Read the article

  • Telerik Introduces New Developer Tool Designed to Simplify Unit Test Mocking

    JustMock extends Telerik's commitment to providing Visual Studio developer productivity. Waltham, MA, April 13, 2010 - Telerik, the leading vendor of development tools and user interface components for .NET, today announced the introduction of JustMock, a productivity add-in for Microsoft Visual Studio 2008 and 2010. JustMock helps developers easily create object mocks in unit tests, saving time and improving the quality of software testing. JustMock is being introduced as a Beta and is scheduled...

    Read the article

  • How does TDD address interaction between objects?

    - by Gigi
    TDD proponents claim that it results in better design and decoupled objects. I can understand that writing tests first enforces the use of things like dependency injection, resulting in loosely coupled objects. However, TDD is based on unit tests - which test individual methods and not the integration between objects. And yet, TDD expects design to evolve from the tests themselves. So how can TDD possibly result in a better design at the integration (i.e. inter-object) level when the granularity it addresses is finer than that (individual methods)?
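    (For illustration only, here is a minimal sketch - in Python rather than any particular stack, all names hypothetical - of the kind of interaction test that TDD practitioners lean on: the test forces an explicit collaborator seam into the design, which is one way tests end up shaping inter-object structure, not just individual methods.)
        import unittest
        from unittest.mock import Mock

        class OrderService:
            # The notifier collaborator exists because the test below demanded a seam.
            def __init__(self, notifier):
                self.notifier = notifier

            def place(self, order_id):
                # ...domain logic elided...
                self.notifier.send("order %d placed" % order_id)

        class OrderServiceTest(unittest.TestCase):
            def test_placing_an_order_notifies_the_customer(self):
                notifier = Mock()
                OrderService(notifier).place(42)
                notifier.send.assert_called_once_with("order 42 placed")

        if __name__ == "__main__":
            unittest.main()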

    Read the article

  • Returning JsonResult From ASP.NET MVC 2.0 Controller and Unit Testing

    This post will show how to return a simple JSON result from an ASP.NET MVC 2.0 web project. It will show how to test that result inside a unit test and essentially pick apart the JSON, just... This site is a resource for ASP.NET web programming. It has examples by Peter Kellner of techniques for high-performance programming...

    Read the article

  • What can you do to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation for you. The team of programmers that I belong to has metrics associated with our code. Over the last several months, the errors in our live system have increased by a large amount. We require that our updates to applications be tested by at least one other programmer prior to going live. I personally am completely against this, as I think applications should be tested by end users: end users are much better testers than programmers. I am not against programmers testing (obviously programmers need to test code), but most of the time they are too close to the code. The reason I say end users should test in our scenario is that we don't have business analysts; we just have programmers. I come from a background where BAs took care of all the testing once programmers checked off that it was ready to go live. We do have a staging environment in place that is a clone of the live environment, which we use to ensure that we don't have issues between the development and live environments; this does catch some bugs. We don't really do end-user testing at all - I should say we don't really have anyone testing our code except programmers, which I think is what gets us into this mess (ideally, we would have BAs, QA, or professional testers test). We don't have a QA team or anything of that nature. We don't have fully laid-out test cases for our projects. OK, I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them, so I don't have the ability to tell them "you are doing it all wrong"; I have tried gentle pushes in the correct direction. Any advice or suggestions on how to alleviate this issue would be greatly appreciated. Thanks.

    Read the article

  • What is the best way to find a Python Google App Engine coach?

    - by David Haddad
    I'm a software engineer and have been building Google App Engine apps with Python for about a year. I have a pretty good familiarity with the main concepts: web app framework, modeling, queues, memcache, Django templates, etc. Where I think I'm lacking is in methodology: architecting the app, using Git for versioning, designing and writing unit tests. I'm totally convinced to incorporate these practices into my development style, and have started reading up on them. However, I've learned that I'm a much faster learner when I have someone experienced to ask questions of and interact with. IRC channels and forums like Stack Overflow are great, but sometimes you want something more dynamic that produces results faster. So my question is: how can a person find an experienced engineer who is familiar with the technologies he uses and who is willing to give a couple of hours of Skype coaching sessions per week in return for an hourly fee...

    Read the article

  • How can I test a parser for a bespoke XML schema?

    - by Greg B
    I'm parsing a bespoke XML format into an object graph using .NET 4.0. My parser uses the System.Xml namespace internally; I then interrogate the relevant properties of the XmlNodes to create my object graph. I've got a first cut of the parser working on a basic input file, and I want to put some unit tests around it before I progress on to more complex input files. Is there a pattern for how to test a parser such as this? When I started looking at this, my first move was to new up an XmlDocument and XmlNamespaceManager and create an XmlElement, but it occurs to me that this is quite lengthy and prone to human error. My parser is quite recursive, as you can imagine, and this might lead to testing the full system rather than the individual units (methods) of the system. So a second question might be: what refactoring might make a recursive parser more testable?
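    (The question is about .NET, but the pattern of feeding the parser small literal XML strings and testing one node-level method at a time, rather than the whole recursive walk, is language-neutral. A minimal sketch in Python with hypothetical names, offered only as an illustration of the shape of such a test:)
        import unittest
        import xml.etree.ElementTree as ET

        def parse_person(xml_text):
            # Hypothetical "unit": turns a single <person> element into a dict.
            root = ET.fromstring(xml_text)
            return {"name": root.findtext("name"), "age": int(root.findtext("age"))}

        class ParsePersonTest(unittest.TestCase):
            def test_minimal_person_element(self):
                doc = "<person><name>Ada</name><age>36</age></person>"
                self.assertEqual(parse_person(doc), {"name": "Ada", "age": 36})
    Splitting the recursive walk into small element-level functions like this is also one answer to the refactoring question: each function can then be tested from a one-line XML string.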

    Read the article

  • Test driven development - convince me!

    - by Casebash
    I know some people are massive proponents of test-driven development. I have used unit tests in the past, but only to test operations that can be tested easily or which I believe will quite possibly be correct. Complete or near-complete code coverage sounds like it would take a lot of time. What projects do you use test-driven development for? Do you only use it for projects above a certain size? Should I be using it or not? Convince me!

    Read the article

  • migrating product and team from startup race to quality development

    - by thevikas
    This is year 3 and the product is selling well enough. Now we need to enforce good software development practices. The goal is to monitor incoming bug reports and reduce them, allow never-ending features, and get ready for scaling 10x. The phrases "test-driven development" and "continuous integration" are not even understood by the team, because they were all in the first two-year product race. The tech team size is 5. The questions are: how to sell/convince the team and management on TDD/unit testing/coding standards/documentation - with economics; how to train the team to do more than just feature coding and start writing unit tests alongside - which looks like more work and therefore needs more time; and how to plan for creating unit tests for all the backlog production code.

    Read the article

  • In an online questionnaire, what is the best way to design a database for keeping track of all of a user's attempts?

    - by user1990525
    We have a web app where users can take online exams. An exam admin will create a questionnaire, and a questionnaire can have many questions. Each question is a multiple-choice question (MCQ). Let's say an admin creates a questionnaire with 10 questions and users attempt those questions. Now, unlike real exams, users can attempt a single questionnaire multiple times, and we have to keep track of all of their attempts, e.g.:
    User_id | Questionnaire_id | Question_id | Answer | Attempt_date | Attempt_no
    1       | 1                | 1           | a      | 1 June 2013  | 1
    1       | 1                | 2           | b      | 1 June 2013  | 1
    1       | 1                | 1           | c      | 2 June 2013  | 2
    1       | 1                | 2           | d      | 2 June 2013  | 2
    Now it can also happen that, after a user has attempted the same questionnaire twice, the admin deletes a question from that questionnaire; the user's attempt history should still reference it, so that the user can see that question in his attempt history in spite of the admin having deleted it. If the user now attempts the changed questionnaire, he should see only 1 question:
    User_id | Questionnaire_id | Question_id | Answer | Attempt_date | Attempt_no
    1       | 1                | 1           | a      | 3 June 2013  | 3
    Also, if the admin later modifies some part of a question, the user's attempt history should show the question as it was before the modification, while any new attempt should show the modified question. How do we manage this at the database level? My first gut feeling was: for deletes, do not do a physical delete, just mark the question inactive so that the history can still keep track of the user's attempts; for modifications, create versions of questions, with each new attempt referring to the latest version of each question and the history keeping a reference to the version of the question at attempt time.
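    (One possible schema along the lines the poster sketches - soft-deleted, versioned questions, with each attempt row pinned to the question version the user actually saw. Table and column names are illustrative only; shown with SQLite via Python so the sketch is runnable as-is.)
        import sqlite3

        ddl = """
        CREATE TABLE question (
            question_id      INTEGER,
            version          INTEGER,
            questionnaire_id INTEGER NOT NULL,
            body             TEXT NOT NULL,
            is_active        INTEGER NOT NULL DEFAULT 1,  -- 0 = soft-deleted, kept for history
            PRIMARY KEY (question_id, version)
        );
        CREATE TABLE attempt (
            user_id          INTEGER NOT NULL,
            questionnaire_id INTEGER NOT NULL,
            attempt_no       INTEGER NOT NULL,
            question_id      INTEGER NOT NULL,
            question_version INTEGER NOT NULL,  -- freezes the wording the user answered
            answer           TEXT,
            attempt_date     TEXT NOT NULL,
            FOREIGN KEY (question_id, question_version)
                REFERENCES question (question_id, version)
        );
        """

        conn = sqlite3.connect(":memory:")
        conn.executescript(ddl)
    New attempts would join against the highest active version per question, while history always reads through (question_id, question_version).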

    Read the article

  • matrix to transform unit cube to space defined by 8 arbitrary points

    - by aadster
    I already asked a question similar to this, but I think this is a clearer statement of what I'm trying to achieve... or of whether it's possible at all! I'm trying to find a transformation (ideally a matrix) which would transform the 8 corner points of a 3D unit cube to 8 arbitrary points in space. The 8 target points have no known structure. My gut feeling is that a matrix is unable to provide this transform, since the resulting faces can be concave... but are there any other methods of transformation? Thanks!
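    (For what it is worth: a single 3D affine matrix has only 12 free parameters, while 8 arbitrary target points impose 24 constraints, so one matrix generally cannot do this. A standard alternative, not taken from the original thread, is trilinear interpolation over the cube's parameter space, where p_{ijk} are the 8 target points:)
        T(u,v,w) = \sum_{i,j,k \in \{0,1\}} p_{ijk}\, u^{i}(1-u)^{1-i}\, v^{j}(1-v)^{1-j}\, w^{k}(1-w)^{1-k}, \qquad (u,v,w) \in [0,1]^{3}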

    Read the article

  • Would this data requirement suit a Document-Oriented database?

    - by codecowboy
    I have a requirement to allow users to fill in journal/diary entries per day. I want to provide a handful of known journal templates with x columns to fill in. An example might be a thought diary: a user has to record a thought in one column, describe the situation, rate how they felt, etc. The other requirement is that a user should be able to create their own diary templates. They might need a 10-column diary entry per day, and might need to rate some aspect out of 50 instead of 10. In an RDBMS, I can see this getting quite complicated. I could have individual tables for my known templates, as their fields will be fixed. But for custom diary templates I imagine I would need a table storing custom_field_types (the diary columns), a table storing entries referencing their field types (custom_entries), and then a third custom_diary table which would store rows matching custom_entries to diaries. Leaving performance/scaling aside, would it be any simpler or make more sense to use a document-oriented database like MongoDB to store this data? This is for a web application which might later need an API for mobile devices.
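    (A sketch of how a custom template and one entry might look as documents - field names invented for illustration. The point is that each template carries its own column definitions, so no per-user schema changes are needed.)
        import json

        template = {
            "_id": "thought-diary-v2",
            "owner": "user-123",
            "columns": [
                {"name": "thought", "type": "text"},
                {"name": "situation", "type": "text"},
                {"name": "feeling_rating", "type": "int", "max": 50},
            ],
        }

        entry = {
            "template_id": "thought-diary-v2",
            "user": "user-123",
            "date": "2013-06-01",
            "values": {"thought": "...", "situation": "...", "feeling_rating": 37},
        }

        print(json.dumps(entry, indent=2))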

    Read the article

  • Should I pass an object into a constructor, or instantiate in class?

    - by Prisoner
    Consider these two examples. Passing an object to a constructor:
        class ExampleA {
            private $config;
            public function __construct($config) {
                $this->config = $config;
            }
        }
        $config = new Config;
        $exampleA = new ExampleA($config);
    Instantiating the dependency inside the class:
        class ExampleB {
            private $config;
            public function __construct() {
                $this->config = new Config;
            }
        }
        $exampleB = new ExampleB();
    Which is the correct way to handle adding an object as a property? When should I use one over the other? Does unit testing affect what I should use?
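    (A Python analog of the same two shapes, offered only as an illustration: with constructor injection a unit test can hand in a fake without any patching, which is usually the deciding factor for testability.)
        class Config:
            def get(self, key):
                return "value-from-real-source"

        class ExampleA:
            # Constructor injection: the collaborator is passed in.
            def __init__(self, config):
                self.config = config

        class ExampleB:
            # Self-instantiation: the dependency is hard-wired.
            def __init__(self):
                self.config = Config()

        class FakeConfig:
            def get(self, key):
                return "canned-test-value"

        # Only ExampleA lets a test substitute the fake directly.
        assert ExampleA(FakeConfig()).config.get("anything") == "canned-test-value"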

    Read the article

  • Database development/admin - What exactly should I be trying to learn? [closed]

    - by Sauron
    I've been a bit wary about approaching learning databases. I've dabbled in them before, and "DATA" in itself appeals to me a lot: maintaining/searching/moving - everything about it in the abstract sense I love. (This isn't a career question; this is a learning question.) But as RDBMSs become easier to maintain, and with tools that "anyone" can use to manage data, I feared for database admin jobs (yet I see jobs for SQL everywhere!). I know "NoSQL" was a big deal, but it kind of has its niche. That leaves me here... unsure of really "what" to study. What tools should I have in my tool belt? I'm sure RDBMSs will be around, both in active use and in maintaining legacy code. But what else? Obviously data will always be around, but is this a secure field or a dying one? And what should I be concentrating on?

    Read the article

  • What is the best testing pattern for checking that parameters are being used properly?

    - by Joseph
    I'm using Rhino Mocks to try to verify that when I call a certain method, the method in turn will properly group items and then call another method. Something like this:
        //Arrange
        var bucketsOfFun = new BucketGame();
        var balls = new List<IBall>
        {
            new Ball { Color = Color.Red },
            new Ball { Color = Color.Blue },
            new Ball { Color = Color.Yellow },
            new Ball { Color = Color.Orange },
            new Ball { Color = Color.Orange }
        };
        //Act
        bucketsOfFun.HaveFunWithBucketsAndBalls(balls);
        //Assert ???
    Here is where the trouble begins for me. My method is doing something like this:
        public void HaveFunWithBucketsAndBalls(IList<IBall> balls)
        {
            // group all the balls together according to color
            var blueBalls = GetBlueBalls(balls);
            var redBalls = GetRedBalls(balls);
            // you get the idea
            HaveFunWithABucketOfBalls(blueBalls);
            HaveFunWithABucketOfBalls(redBalls);
            // etc. etc. with all the different colors
        }

        public void HaveFunWithABucketOfBalls(IList<IBall> colorSpecificBalls)
        {
            // doing some stuff here that I don't care about
            // for the test I'm writing right now
        }
    What I want to assert is that each time I call HaveFunWithABucketOfBalls, I'm calling it with a group of 1 red ball, then 1 blue ball, then 1 yellow ball, then 2 orange balls. If I can assert that behavior then I can verify that the method is doing what I want it to do, which is grouping the balls properly. Any ideas of what the best testing pattern for this would be?
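    (Not Rhino Mocks, but the same general idea sketched in Python with unittest.mock: capture every call made to the per-bucket step and assert on the captured groups. Here the per-bucket step is pulled out as an injected callable - a hypothetical restructuring, not the poster's code.)
        from unittest.mock import Mock

        class BucketGame:
            def have_fun_with_buckets_and_balls(self, balls, play_bucket):
                # Group by color, then hand each group to the collaborator.
                groups = {}
                for ball in balls:
                    groups.setdefault(ball["color"], []).append(ball)
                for bucket in groups.values():
                    play_bucket(bucket)

        balls = [{"color": c} for c in ["red", "blue", "yellow", "orange", "orange"]]
        play_bucket = Mock()
        BucketGame().have_fun_with_buckets_and_balls(balls, play_bucket)

        # One red, one blue, one yellow, two orange: assert on the captured call arguments.
        sizes = sorted(len(args[0]) for args, kwargs in play_bucket.call_args_list)
        assert sizes == [1, 1, 1, 2]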

    Read the article

  • Testing Hibernate DAO, without building the universe around it.

    - by Varun Mehta
    We have an application built using Spring/Hibernate/MySQL; now we want to test the DAO layer, but here are a few shortcomings we face. Consider the use case of multiple objects connected to one another, e.g. a Book has Pages. The Page object cannot exist without the Book, as book_id is a mandatory FK in Page. So for testing a Page I have to create a Book. This simple use case is easy to manage, but if you start building a Library, then until you create the whole universe surrounding the Book and Page, you cannot test them! So to test Page: create Library, create Section, create Genre, create Author, create Book, create Page - and now test Page. Is there an easy way to bypass this "universe creation" and just test the Page object in isolation? I also want to be able to test HQL related to Page, e.g. SELECT new com.test.BookPage(book.id, page.name) FROM Book book, Page page. JUnit is supposed to run in isolation, so I have to write the whole test case to create the Page. Any tips will be useful.

    Read the article

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday's blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more and get you familiar with how it works and which important areas of NuoDB you should learn. As we have already installed NuoDB, we will now quickly start with two of the important areas in NuoDB: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin section of the NuoDB Console works; in the next blog post we will learn how the Explorer section works.
    Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the following screen. On this screen you can see a big Start QuickStart button. Click on the button and it will bring you to the following screen, where you will find very important information about Domain and Database Settings. It is our habit that we do not read what is written on the screen and keep clicking Continue without reading. While we are familiar with most wizards, we can easily miss a very important message on the screen. Please note the Domain Settings and Database Settings from the following screen before clicking on Create Database. Domain Settings - User: quickstart, Password: quickstart. Database Settings - User: dba, Password: goalie, Database: test, Schema: HOCKEY.
    Once you click on the Create Database button, it will immediately start creating the sample database. First it will start a Storage Manager, and right after that it will start a Transaction Engine. Once the engine is up, it will create a schema and sample data. On successfully creating the sample database, it will show the following screen.
    Now is the time when we can explore the NuoDB Admin or NuoDB Explorer. If you click on Admin, it will first show the following login screen. Enter "domain" for the username and "bird" for the password; alternatively, you can enter "quickstart" twice for username and password - that works as well. Once you enter the Admin section, on the left side you can see information about NuoDB and the Admin Console, and on the right side you can see the domain overview area. From this administrative section you can do any of the following tasks: create a view of the entire domain; add and remove databases; start and stop NuoDB Transaction Engines and Storage Managers; and monitor transactions across all the NuoDB databases.
    On the right side of the Admin section we can see various information about a particular NuoDB domain. You can quickly view various alerts, find out the number of host machines that are provisioned for the domain, and see the number of databases and processes that are running in the domain. If you click on the "1 host" link you will be able to see the various processes, CPU usage, and other information. In the Processes section you can see that there are two different types of processes. The first process (where you can see the floppy drive icon) represents a running Storage Manager process, and the second a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of data. There are various charts available for analysis as well. I think the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases.
    I hope today's tutorial gives you enough confidence to try out NuoDB and check out various administrative activities with the database. I am personally impressed with their dashboard of various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor. In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • SQL SERVER – master Database Log File Grew Too Big

    - by pinaldave
    A couple of days ago I received the following email. I found it very interesting and feel like sharing it with all of you. Note: please read the whole email before offering your suggestions.
    "Hi Pinal, if you can share these details on your blog, it will help many. We understand the value of the master database and we take a regular backup of it (every day at midnight). Yesterday we noticed that our master database log file has grown very large. This is the very first time we have encountered such an issue. The master database is in the simple recovery model, so we assumed it would never grow big; however, we now have a big log file. We ran the following command:
        USE [master]
        GO
        DBCC SHRINKFILE (N'mastlog', 0, TRUNCATEONLY)
        GO
    We know this command will break the chain of LSNs, but as per our understanding it should not matter, as we are in the simple recovery model. After running this, the log file became very small. Just to be cautious, we took a full backup of the master database right away. We totally understand that this is not normal practice, so if you are going to tell us the same, we are aware of it. However, here is the question for you: what operation in the master database could have caused our log file to grow so large? Thanks, [name and company name removed as per request]"
    Here was my response to them: "Hi [name removed], it is great that you are aware of all the right steps and methods. Taking a full backup when you are not sure is always a good practice. Regarding your question about what could have caused your master database log to grow larger, let me try to guess what could have happened. Do you have any user tables in the master database? If yes, this is not recommended and NOT a good practice. If you have user tables in the master database and you are doing any long operations on them (maybe lots of inserts, updates, deletes, or rebuilds), then that can cause this situation. You have made me curious about your scenario; do revert back. Kind Regards, Pinal"
    Within a few minutes I received a reply: "That was it, Pinal. We had one of the maintenance-task log tables created in the master database, and it had many long transactions during the night. We moved it to a newly created database named 'maintenance', and we will keep you updated."
    I was very glad to receive the email. I do not suggest that any user table be created in the master database; it should be left alone from user objects. Now here is the question for you - can you think of any other reason for master log file growth?
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
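    (A quick, hedged way to check the usual suspect described above - user-created tables that have ended up in master - shown here from Python via pyodbc; the connection string is a placeholder, not from the original post.)
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
        )
        # is_ms_shipped = 0 filters out system objects, leaving only user tables.
        for (table_name,) in conn.execute(
            "SELECT name FROM sys.tables WHERE is_ms_shipped = 0 ORDER BY name"
        ):
            print(table_name)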

    Read the article

  • Migrating SQL Server Compact Edition (SQL CE) database to SQL Server using Web Matrix

    - by Harish Ranganathan
    One of the things keeping us busy is the Web Camps we are delivering across 5 cities. If you are a reader of this blog and have also attended one of these web camps, there is a good chance that you have seen me, since I have been at all of them so far. The topics we cover include Visual Studio 2010 SP1, SQL CE, ASP.NET MVC & HTML5. Whenever I talk about SQL CE, the immediate response is that people are wowed that Microsoft has shipped a FREE compact edition database - an embedded database that can be xcopy-deployed. You might think: well, didn't Microsoft ship SQL Express, which is FREE? The difference is that SQL Express runs as a service on the machine (if you open SQL Server Configuration Manager, you can see that SQL Express runs as a service alongside your SQL Server engine, if you have installed it). This means that even if you are willing to use SQL Express, when you deploy your application it needs to be installed on the production machine (hosting provider) and it needs to run as a service, and many hosters don't allow such services to run on their space.
    SQL CE comes as an xcopy-deployable database with just a few DLLs required to run it on the machine, and they don't even need to be installed in the GAC on the production machine. In fact, if you have Visual Studio 2010 SP1 installed, you can use the "Add Deployable Dependencies" option in Project Properties, and it will detect that SQL CE is something you would probably want to add as a deployable dependency for your project. With that, it bundles the required DLLs as part of the "_bin_deployableAssemblies" folder, so your project can be xcopy-deployed and just works.
    However, SQL CE has a limit of 4GB of storage space. Real-world applications often require more than 4GB of data storage, and it often turns out that people would like to use SQL CE for the development/ramp-up stages but migrate to full-fledged SQL Server after a while. So it's only natural that the question arises: "How do I move my SQL CE database to SQL Server?" And honestly, there isn't straightforward support for it. I was talking to Ambrish Mishra (PM in the SQL CE team, Hyderabad), since I got this question in almost all the places where we talked about SQL CE. He was kind enough to demonstrate how this can be accomplished using Web Matrix.
    Open Web Matrix (it can be installed for free from www.microsoft.com/web) and click on "Site from Template". Click on the "Bakery" template (since by default it uses a SQL CE database and has all the required sample data) and click "Ok". In the project, navigate to the Database tab and you will find that the Bakery site uses a SQL CE database, "bakery.sdf". Select "bakery.sdf" and you will see the "Migrate" button on the top right. Once you click the "Migrate" button, a popup wizard opens; by default it is configured for SQL Express. You can edit it to point to your local SQL Server instance, or to a remote server. Upon filling in the server name, username, and password, when you click "Ok", a couple of things happen:
    1. The database is migrated to SQL Server (local or remote - subject to permissions on the remote server). You can open SQL Server Management Studio and connect to the server to verify that the "bakery" database exists under the "Databases" node.
    2. You can also notice in Web Matrix that, when you navigate to the "Files" tab and open the web.config file, the connection string now points to the SQL Server instance (yes, the Migrate button was smart enough to make this change too).
    And there it is: your SQL Server Compact Edition database, now migrated to SQL Server!! In a future post, I will explain the steps involved when using Visual Studio. Cheers !!!

    Read the article

  • Attend Onsite Product Usability Testing or Tour Oracle HQ Usability Labs during Oracle OpenWorld 2014

    - by gaamoth-Oracle
    By Gozel Aamoth, Oracle Applications User Experience
    Oracle OpenWorld is the world's largest business and technology event, featuring thousands of sessions, including keynotes, technical sessions, demos, and hands-on labs. Hundreds of exhibitors will be sharing what they're bringing to Oracle technology at this year's conference, held in downtown San Francisco from Sept. 29-Oct. 2. If you are an Oracle customer or partner planning to attend this annual event, there are several ways to meet face-to-face with members of the Oracle Applications User Experience (UX) team. We'd like to invite you to sign up for a usability feedback session, or hop on one of our special chartered buses to tour Oracle HQ's usability labs. Here's more information about these exclusive events.
    Onsite product usability testing: Give us your feedback! Product usability testing in progress at Oracle OpenWorld 2013. The Oracle Applications User Experience team will host an onsite usability lab, where Oracle customers and partners can participate in a usability feedback session, at Oracle OpenWorld 2014. Usability experts, product managers, and user interface designers have teamed up to provide Oracle customers and partners with the opportunity to contribute to and influence application design and direction while test-driving Oracle's next-generation applications. Your feedback will affect the existing and future usability of Oracle applications, and help us develop applications that are intuitive and easy to use.
    What will we test? Participants will get a preview of proposed Oracle product designs for Oracle Human Capital Management Cloud and Oracle Sales Cloud, Oracle Fusion applications for Procurement and Supply Chain, Oracle E-Business Suite, PeopleSoft applications, Social Relationship Management, BI applications, Fusion Middleware, and more.
    Who can participate*? Regardless of your current job title, we have a session that might interest you. These one-on-one feedback sessions are popular, and space is very limited, so contact us today to learn more. Dates: Sept. 29 - Oct. 1, 2014. Location: InterContinental Hotel, San Francisco, CA. Time: Advance sign-up is required for this event. RSVP now. If you have questions about this event, please contact Angela Johnston.
    Take a tour of the Oracle HQ Usability Lab during OpenWorld 2014. Members of the Applications UX team lead Oracle OpenWorld lab tour attendees to the usability labs at Oracle headquarters in Redwood City, CA. The Applications User Experience team will be offering a limited number of usability lab tours at Oracle headquarters in Redwood City, Calif., during Oracle OpenWorld 2014. Come take a look behind the scenes of Oracle's research and development work on Thursday, Oct. 2, or Friday, Oct. 3. Receive an exclusive look into how Oracle tests application designs, and see the direction that Oracle's enterprise applications are heading, including demos of designs for devices such as the tablet and smartphone. Round-trip transportation will be provided; pick-up and drop-off is at the InterContinental Hotel in San Francisco, next to Moscone West. Spots are limited, so sign up today!
    How to reserve your spot: To RSVP, sign up here. For additional questions, send an e-mail to Jeannette Chadwick. To learn more about our team's presence at Oracle OpenWorld this year, please visit our website, UsableApps.
    *Participation requires that your company or organization has a Customer Participation Confidentiality Agreement (CPCA) on file.
If your company or organization does not have a CPCA on file, we will start this process.

    Read the article

  • JUnit (3.8.1) testing that an exception is thrown (works in unit test, fails when added to a testSuite)

    - by Mike Cargal
    I'm trying to test that I'm throwing an exception when appropriate. In my test class I have a method similar to the following:
        public void testParseException() {
            try {
                ClientEntitySingleton.getInstance();
                fail("should have thrown exception.");
            } catch (RuntimeException re) {
                assertEquals("<exception message>", re.getMessage());
            }
        }
    This works fine (green bar) whenever I run that single unit test class. However, when I add that test to a test suite, I get a red bar - a unit test failure reported on the exception. One more thing... it works in the test suite if it's the first test in the suite. Actually, I'm doing two of these tests, and I just figured out that if I make them the first two tests in the suite, all is good, but I get this failure if a "regular" test precedes them. So I have a workaround, but no real answer. Any ideas? Here's a stack trace of the "failure":
        java.lang.RuntimeException: ProcEntity client dn="Xxxxxx/Xxxx/XXX" is defined multiple times.
            at com.someco.someprod.clientEntityManagement.ClientEntitySingleton.addClientEntity(ClientEntitySingleton.java:247)
            at com.someco.someprod.clientEntityManagement.ClientEntitySingleton.startElement(ClientEntitySingleton.java:264)
            at org.apache.xerces.parsers.AbstractSAXParser.startElement(Unknown Source)
            at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanStartElement(Unknown Source)
            at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
            at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
            at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
            at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
            at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
            at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
            at com.someco.someprod.clientEntityManagement.ClientEntitySingleton.parse(ClientEntitySingleton.java:216)
            at com.someco.someprod.clientEntityManagement.ClientEntitySingleton.reload(ClientEntitySingleton.java:303)
            at com.someco.someprod.clientEntityManagement.ClientEntitySingleton.setInputSourceProvider(ClientEntitySingleton.java:88)
            at com.someco.someprod.clientEntityManagement.test.TestClientBase.setUp(TestClientBase.java:17)
            at com.someco.someprod.clientEntityManagement.test.TestClientEntityDup.setUp(TestClientEntityDup.java:8)
            at junit.framework.TestCase.runBare(TestCase.java:125)
            at junit.framework.TestResult$1.protect(TestResult.java:106)
            at junit.framework.TestResult.runProtected(TestResult.java:124)
            at junit.framework.TestResult.run(TestResult.java:109)
            at junit.framework.TestCase.run(TestCase.java:118)
            at junit.framework.TestSuite.runTest(TestSuite.java:208)
            at junit.framework.TestSuite.run(TestSuite.java:203)
            at junit.framework.TestSuite.runTest(TestSuite.java:208)
            at junit.framework.TestSuite.run(TestSuite.java:203)
            at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:128)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)

    Read the article

  • Blueprints for Oracle NoSQL Database

    - by dan.mcclary
    I think that some of the most interesting analytic problems are graph problems. I'm always interested in new ways to store and access graphs. As such, I really like the work being done by Tinkerpop to create open source software to make property graphs more accessible over a wide variety of datastores. Since key-value stores like Oracle NoSQL Database are well-suited to storing property graphs, I decided to extend the Blueprints API to work with it. Below I'll discuss some of the implementation details, but you can check out the finished product here: http://github.com/dwmclary/blueprints-oracle-nosqldb.
    What's in a Property Graph? In the most general sense, a graph is just a collection of vertices and edges. Vertices and edges can have properties: weights, names, or any number of other traits. In an undirected graph, edges connect vertices without direction. A directed graph specifies that all edges have a head and a tail - a direction. A multigraph allows multiple edges to connect two vertices. A "property graph" encompasses all of these traits.
    Key-Value Stores for Property Graphs. Key-value stores like Oracle NoSQL Database tend to be ideal for implementing property graphs. First, if any vertex or edge can have any number of traits, we can treat it as a hash map. For example: Vertex["name"] = "Mary", Vertex["age"] = 28, Vertex["ID"] = 12345, and so on. This is a natural key-value relationship: the key "name" maps to the value "Mary." Moreover, if we maintain two hash maps, one for vertex objects and one for edge objects, we've essentially captured the graph. As such, any scalable key-value store is fertile ground for planting graphs.
    Oracle NoSQL Database as a Scalable Graph Database. While Oracle NoSQL Database offers useful features like tunable consistency, what lends it to storing property graphs is the storage guarantee around its key structure. Keys in Oracle NoSQL Database are divided into two parts: a major key and a minor key. The storage guarantee is simple: major keys will be distributed across storage nodes, which could encompass a large number of servers. However, all minor keys which are children of a given major key are guaranteed to be stored on the same storage node. For example, the vertices /Personnel/Vertex/1 and /Personnel/Vertex/2 may be stored on different servers, but /Personnel/Vertex/1-/name and /Personnel/Vertex/1-/age will always be on the same server. This means that we can structure our graph database such that retrieving all the properties for a vertex or edge requires I/O from only a single storage node. Moreover, Oracle NoSQL Database provides a storeIterator which allows us to store a huge number of vertices and edges in a scalable fashion. By storing the vertices and edges as major keys, we guarantee that they are distributed evenly across all storage nodes. At the same time we can use a partial major key to iterate over all the vertices or edges (e.g. we search over /Personnel/Vertex to iterate over all vertices).
    Fork It! The Blueprints API and Oracle NoSQL Database present a great way to get started using a scalable key-value database to store and access graph data. However, a graph store isn't useful without a good graph to work on. I encourage you to fork or pull the repository, store some data, and try using Gremlin or any other language to explore.
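    (A toy, in-memory illustration of the major/minor key layout described above - not the Oracle NoSQL Database API itself: every property (minor key) of a vertex hangs off that vertex's major key, and a partial major key can be scanned to iterate all vertices.)
        store = {}

        def put(major, minor, value):
            # All minor keys for one major key live together, mirroring the storage guarantee.
            store.setdefault(tuple(major), {})[minor] = value

        put(["Personnel", "Vertex", "1"], "name", "Mary")
        put(["Personnel", "Vertex", "1"], "age", 28)
        put(["Personnel", "Vertex", "2"], "name", "Raj")

        # Analog of iterating over the partial major key /Personnel/Vertex:
        for major, props in store.items():
            if major[:2] == ("Personnel", "Vertex"):
                print("/".join(major), props)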

    Read the article

  • UAT Testing for SOA 10G Clusters

    - by [email protected]
    A lot of customers ask how to verify their SOA clusters and make them production ready. Here is a list that I recommend using for 10G SOA clusters.
    Test cases for each component - Oracle Application Server 10G
    General Application Server test cases. This section covers very general test cases to make sure that the Application Server cluster has been set up correctly and that you can start and stop all the components in the server via opmnctl and the AS Console.
    Test Case 1 - Check if you can see the AS instances in the console. Implementation: Log on to the AS Console and check whether you can see all the nodes in your AS cluster. You should be able to see all the Oracle AS instances that are part of the cluster. This means that the OPMN clustering worked and the AS instances successfully joined the AS cluster. Result: You should be able to see that all the instances in the AS cluster are listed in the EM console. If the instances are not listed, here are the files to check to see if OPMN joined the cluster properly: $ORACLE_HOME\opmn\logs\opmn.log and $ORACLE_HOME\opmn\logs\opmn.dbg. If OPMN did not join the cluster properly, please check the opmn.xml file to make sure the discovery multicast address and port are correct (see this link for the OPMN documentation). Restart the whole instance using opmnctl stopall followed by opmnctl startall, then log on to the AS Console to see if the instance is listed as part of the cluster.
    Test Case 2 - Check to see if you can start/stop each component. Implementation: Check each OC4J component on each AS instance; start each and every component through the AS Console to see if they will start and stop; do that for each and every instance. Result: Each component should start and stop through the AS Console. You can also verify whether the component started by checking opmnctl status, logging onto each box associated with the cluster.
    Test Case 3 - Add/modify a datasource entry through the AS Console on a remote AS instance (not on the instance where EM is physically running). Implementation: Pick an OC4J instance; create a new data source through the AS Console; modify an existing data source or connection pool (optional). Result: Open $ORACLE_HOME\j2ee\<oc4j_name>\config\data-sources.xml to see if the new (and/or the modified) connection details and data source exist. If they do, then the AS Console has successfully updated a remote file and MBeans are communicating correctly.
    Test Case 4 - Start and stop AS instances using the opmnctl @cluster command. Implementation: Go to $ORACLE_HOME\opmn\bin and use opmnctl @cluster to start and stop the AS instances. Result: Use opmnctl @cluster status to check for start and stop statuses.
    HTTP server test cases. This section deals with use cases to test HTTP server failover scenarios.
    In these examples the HTTP server will be talking to the BPEL Console (or any other web application the client wants), so the URL will be http://hostname:port/BPELConsole.
    Test Case 1 - Shut down one of the HTTP servers while accessing the BPEL Console and see the request routed to the second HTTP server in the cluster. Implementation: Access the BPEL Console. Check $ORACLE_HOME\Apache\Apache\logs\access_log for the timestamp and the URL that was accessed by the user; the entry would look like this: 1xx.2x.2xx.xxx [24/Mar/2009:16:04:38 -0500] "GET /BPELConsole=System HTTP/1.1" 200 15. After you have figured out which HTTP server this is running on, shut down this HTTP server using opmnctl stopproc - this is a graceful shutdown. Access the BPEL Console again (please note that you should have a load balancer in front of the HTTP servers and have configured the Apache virtual host; see the EDG for steps), then check $ORACLE_HOME\Apache\Apache\logs\access_log again for the timestamp and the URL that was accessed by the user. Result: Even though you are shutting down the HTTP server, the request is routed to the surviving HTTP server, which is then able to route the request to the BPEL Console, and you are able to access the console. By checking the access log file you can confirm that the request is being picked up by the surviving node.
    Test Case 2 - Repeat the same test as above, but instead of calling opmnctl stopproc, pull the network cord of one of the HTTP servers, so that the LBR routes the request to the surviving HTTP node - this simulates a network failure.
    Test Case 3 - In Test Case 1 we simulated a graceful shutdown; in this case we will simulate an Apache crash. Implementation: Use opmnctl status -l to get the PID of the HTTP server that you would like to forcefully bring down; on Linux, use kill -9 <PID> to kill the HTTP server; then access the BPEL Console. Result: As you shut down the HTTP server, OPMN will restart it. The restart may be so quick that the LBR may still route the request to the same server. One way to check whether the HTTP server restarted is to check the new PID and the timestamp in the access log for the BPEL Console.
    BPEL test cases. This section covers scenarios dealing with BPEL clustering using jGroups, BPEL deployment, and testing related to BPEL failover.
    Test Case 1 - Verify that jGroups has initialized correctly. There is no real testing in this use case, just a visual verification, by looking at log files, that jGroups has initialized correctly. Check the opmn log for the BPEL container for all nodes at $ORACLE_HOME/opmn/logs/<group name><container name><group name>~1.log. This logfile will contain jGroups-related information during startup and steady-state operation.
    Soon after startup you should find log entries for UDP or TCP.
    Example jGroups log entries for UDP:
        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
        INFO: sockets will use interface 144.25.142.172
        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
        INFO: socket information:
        local_addr=144.25.142.172:1127, mcast_addr=228.8.15.75:45788, bind_addr=/144.25.142.172, ttl=32
        sock: bound to 144.25.142.172:1127, receive buffer size=64000, send buffer size=32000
        mcast_recv_sock: bound to 144.25.142.172:45788, send buffer size=32000, receive buffer size=64000
        mcast_send_sock: bound to 144.25.142.172:1128, send buffer size=32000, receive buffer size=64000
        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.172:1127
        -------------------------------------------------------
    Example jGroups log entries for TCP:
        Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
        INFO: server socket created on 144.25.142.172:7900
        Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.172:7900
        -------------------------------------------------------
    In the log below, "server socket created on" indicates that the TCP socket is established on the node's own IP address and port, and "created socket to" shows that the second node has connected to the first node, matching the log above by IP address and port:
        Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
        INFO: server socket created on 144.25.142.173:7901
        Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.173:7901
        -------------------------------------------------------
        Apr 3, 2008 6:25:41 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable getConnection
        INFO: created socket to 144.25.142.172:7900
    Result: By reviewing the log files, you can confirm that BPEL clustering at the jGroups level is working and that the jGroups channel is communicating.
    Test Case 2 - Test connectivity between BPEL nodes. Implementation: Test connections between different cluster nodes using ping, telnet, and traceroute. The presence of firewalls and the number of hops between cluster nodes can affect performance, as they have a tendency to take down connections after some time or simply block them. Also reference Metalink Note 413783.1: "How to Test Whether Multicast is Enabled on the Network." Result: Using the above tools you can confirm whether multicast is working and whether the BPEL nodes are communicating.
    Test Case 3 - Test deployment of a BPEL suitcase to one BPEL node.
    Implementation: Deploy a HelloWorld BPEL suitcase (or any other client-specific BPEL suitcase) to only one BPEL instance, using ant, JDeveloper, or the BPEL Console; then log on to the second BPEL Console to check whether the BPEL suitcase has been deployed. Result: If jGroups has been configured and is communicating correctly, BPEL clustering will allow you to deploy a suitcase to a single node, and jGroups will notify the second instance of the deployment. The second BPEL instance will go to the DB and pick up the new deployment after receiving the notification. The result is that the new deployment is "deployed" to each node by deploying to only a single BPEL instance in the BPEL cluster.
    Test Case 4 - Test whether the BPEL server fails over and whether all asynchronous processes are picked up by the secondary BPEL instance. Implementation: Deploy two asynchronous processes: a ParentAsynch process which calls a ChildAsynchProcess with a variable telling it how many times to loop or how many seconds to sleep, and a ChildAsynchProcess that loops, sleeps, or has an onAlarm. Make sure that the processes are deployed to both servers. Shut down one BPEL server. On the active BPEL server, call ParentAsynch a few times (use the load generation page). When you have enough ParentAsynch instances, shut down this BPEL instance and start the other one; please wait until this BPEL instance shuts down fully before starting up the second one. Log on to the BPEL Console and see that the instances were picked up by the second BPEL node and completed. Result: The BPEL instances will fail over to the secondary node and complete the flow.
    ESB test cases. This section covers the use cases involved with testing an ESB cluster. For this section, please follow Metalink Note 470267.1, which covers the basic tests to verify your ESB cluster.

    Read the article
