Search Results

Search found 27621 results on 1105 pages for 'test plan'.


  • How many tasks to plan beforehand [closed]

    - by no__seriously
    Here's my daily routine: every morning when I come to work, I look at the items in my todo-list inbox (noted down the previous day). For each task I think about which day I should start it on, and then group the tasks accordingly. Once that's finished, I get started on my actual schedule for the day. This pre-planning for each task (which could be anything from user interface work to compiler programming) is mostly pretty sketchy; serious thought about design and implementation comes only when the task is about to be tackled. This approach works for me and I can't really complain. But I'm wondering: since I'm personally most productive during the morning, would it make sense to go into a deeper level of planning right away for each task? Or is that unproductive, and would it rather confuse than clarify? I think the latter. How do you handle your task management for each task or project, and how far do you go with planning before even getting started on that item?


  • Templated function with two type parameters fails compile when used with an error-checking macro

    - by SirPentor
    Because someone in our group hates exceptions (let's not discuss that here), we tend to use error-checking macros in our C++ projects. I have encountered an odd compilation failure when using a templated function with two type parameters. There are a few errors (below), but I think the root cause is a warning:

        warning C4002: too many actual parameters for macro 'BOOL_CHECK_BOOL_RETURN'

    Probably best explained in code:

        #include "stdafx.h"

        template<class A, class B>
        bool DoubleTemplated(B & value) { return true; }

        template<class A>
        bool SingleTemplated(A & value) { return true; }

        bool NotTemplated(bool & value) { return true; }

        #define BOOL_CHECK_BOOL_RETURN(expr) \
            do                               \
            {                                \
                bool __b = (expr);           \
                if (!__b)                    \
                {                            \
                    return false;            \
                }                            \
            } while (false)

        bool call()
        {
            bool thing = true;

            // BOOL_CHECK_BOOL_RETURN(DoubleTemplated<int, bool>(thing));
            // Above line doesn't compile.

            BOOL_CHECK_BOOL_RETURN((DoubleTemplated<int, bool>(thing)));
            // Above line compiles just fine.

            bool temp = DoubleTemplated<int, bool>(thing);
            // Above line compiles just fine.

            BOOL_CHECK_BOOL_RETURN(SingleTemplated<bool>(thing));
            BOOL_CHECK_BOOL_RETURN(NotTemplated(thing));
            return true;
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            call();
            return 0;
        }

    Here are the errors when the offending line is not commented out:

        1>------ Build started: Project: test, Configuration: Debug Win32 ------
        1>Compiling...
        1>test.cpp
        1>c:\junk\temp\test\test\test.cpp(38) : warning C4002: too many actual parameters for macro 'BOOL_CHECK_BOOL_RETURN'
        1>c:\junk\temp\test\test\test.cpp(38) : error C2143: syntax error : missing ',' before ')'
        1>c:\junk\temp\test\test\test.cpp(38) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(41) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(48) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(49) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(52) : error C2143: syntax error : missing ';' before '}'
        1>c:\junk\temp\test\test\test.cpp(54) : error C2065: 'argv' : undeclared identifier
        1>c:\junk\temp\test\test\test.cpp(54) : error C2059: syntax error : ']'
        1>c:\junk\temp\test\test\test.cpp(55) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(58) : error C2143: syntax error : missing ';' before '}'
        1>c:\junk\temp\test\test\test.cpp(60) : error C2143: syntax error : missing ';' before '}'
        1>c:\junk\temp\test\test\test.cpp(60) : fatal error C1004: unexpected end-of-file found
        1>Build log was saved at "file://c:\junk\temp\test\test\Debug\BuildLog.htm"
        1>test - 12 error(s), 1 warning(s)
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Any ideas? Thanks!
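    For what it's worth, one possible fix beyond the extra parentheses (my own sketch, not the poster's solution): with a compiler that supports variadic macros (C99/C++11, and Visual C++ since VS2005), the macro can swallow the comma inside the template argument list instead of treating it as an argument separator:

        // Variadic form: everything between the outer parentheses arrives as
        // __VA_ARGS__, so the comma in DoubleTemplated<int, bool>(thing) no
        // longer splits the invocation into two macro arguments. (b_ also
        // avoids the reserved double-underscore identifier __b.)
        #define BOOL_CHECK_BOOL_RETURN(...) \
            do                              \
            {                               \
                bool b_ = (__VA_ARGS__);    \
                if (!b_)                    \
                {                           \
                    return false;           \
                }                           \
            } while (false)

        // The previously failing call now compiles without extra parentheses:
        // BOOL_CHECK_BOOL_RETURN(DoubleTemplated<int, bool>(thing));

    The workaround already shown in the question works for the same reason: the extra parentheses hide the comma from the preprocessor, which only splits macro arguments on commas at the top nesting level.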


  • Image storage social network (Host plan)

    - by Samir
    I'm wondering what the best way is to host images on a social network site. Let's say I expect my social network to reach 500,000 users in two years' time. That would mean that if every user uploaded about 100 images, and every image is 1 MB, I would need: 500,000 * 100 * 1 MB = 50,000,000 MB, which is 50 terabytes. I'm not sure how best to set up my hosting plan in order to have a solid basis for storing my images, and eventually video files as well. Which hosting plan would you recommend I start with, and how can I grow the plan from there?


  • asterisk Dial-plan?

    - by Rev
    Hi, I want to make an Asterisk dialplan that does this: for an incoming call, check the caller ID, and if the caller ID equals a specific number (for example 666), hang up that call. (This kind of dialplan is also known as the "anti-ex-girlfriend" dialplan.) I wrote the dialplan below to do this, but it doesn't work well: it doesn't hang up an incoming call from 666, and the call still goes to the queue macro.

        [macro-queue]
        exten => s, 2, Queue(${ARG1})

        [default]
        exten => s, 1, Answer
        exten => s/666, 2 ,Hangup
        exten => s, 2, BackGround(welcome)
        exten => s, 3, Macro(queue,operator)
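    As an aside, here is a sketch of one more predictable way to express this (my own assumption of intent, not a tested config): branch on the caller ID explicitly with GotoIf, so the hangup doesn't depend on how your Asterisk version resolves the competing s and s/666 entries at priority 2:

        [default]
        exten => s,1,Answer()
        ; jump to the reject label when the caller ID matches the blocked number
        exten => s,n,GotoIf($["${CALLERID(num)}" = "666"]?reject)
        exten => s,n,Background(welcome)
        exten => s,n,Macro(queue,operator)
        exten => s,n(reject),Hangup()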


  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process:

    - Putting your database into a source control system, and
    - Running a continuous integration process each time changes are checked in.

    However, there is of course a lot more to Continuous Delivery than that. Specifically, in addition to the above:

    - Putting some actual integration tests into the CI process (otherwise it doesn't really do much, does it!?),
    - Deploying the database changes with a managed, automated approach,
    - Monitoring what you've just put live, to make sure you haven't broken anything.

    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: a lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item: monitoring changes on the live system.

    In the previous post I used a mixture of Red Gate products and other third-party software, GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. In this post, however, I'll be mostly using Red Gate products only (other than tSQLt). I do this firstly because I work for Red Gate, but also because I think that in the area of Database Delivery processes nobody else has the offerings to implement this process fully, so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles they describe can be equally applied to database development and, I would argue, should be. As I say, though, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:

    - Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
    - Versioning Databases – Branching and Merging, by Scott Allen

    In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: "Continuous Delivery is about keeping your application in a state where it is always able to deploy into production." I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
    That is possible (and is what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or developers to Proceed with Caution!

    Integration Testing

    Back to something practical. The next stage, building on our setup from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/, or it can be downloaded separately from www.tsqlt.org - though I'll provide a step-by-step guide below for setting this up.

    Getting tSQLt set up via SQL Test

    1. Go to http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed.
    2. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed.
    3. Open SSMS. You should now see SQL Test under the Tools menu; clicking this link will give you the basic SQL Test dialogue.
    4. So far we've installed the SQL Test product, but we haven't yet installed the tSQLt test framework on any particular database. To do this, we add our RedGateApp database using this dialogue, by clicking on the "+ Add Database to SQL Test…" link, selecting the RedGateApp database and clicking the Add Database link.
    5. In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option. SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database, but we won't be using them in this particular simple example.
    6. Once you've clicked the OK button, the changes described in the dialogue will be made to your database.

    We've now installed the framework. However, we haven't actually created any tests, so that will be the next step. But before we proceed, we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that your build still runs with the new additions (and a quick check of the RedGateAppCI database shows that the changes have been made).

    Creating and Testing a Unit Test

    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse: our database table is going to include some email addresses as reference data, and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

    Adding Reference Data to our Database

    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
    First of all I need to add some data to my solitary table. This can be done a number of ways, but I'll do it in SSMS for simplicity. Once the reference data is added, it just exists in the database; for proper integration testing, it needs to form part of the source-controlled version of the database, and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table: right-click the table, select Design, then right-click on the first "id" row and click on "Set Primary Key". NB: once this change is made, click Save to save the change to the table.

    Then, to source control this reference data, right-click on the table (dbo.Email) and select the option for linking static data. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close". We should at this point re-commit the changes (both the addition of the Primary Key and the data) to the Git repo. NB: from here on, I won't show screenshots for the GitHub side of things; it's the same each time. Whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).

    An interesting point to note here: when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control.."), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mentioned, the test I'm going to use here is a very simple one: are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four), but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article.

    In SSMS, select "SQL Test" from the Tools menu, then click on "+ New Test". In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct - RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            Declare @Output VarChar(max)
            Set @Output = ''

            SELECT @Output = @Output + Email + Char(13) + Char(10)
            FROM dbo.Email
            WHERE email NOT LIKE '%_@__%.__%'

            If @Output > ''
            Begin
                Set @Output = Char(13) + Char(10) + @Output
                EXEC tSQLt.Fail @Output
            End

        END;

    Once this script is entered, hit execute to add the stored procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output with a green tick to indicate success!

    But of course, what we also need to do is test that this is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before, and this time it fails. Great - we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.)

    The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).
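    (A quick aside before wiring the tests into the build: assuming only that the tSQLt framework is installed on the database as described above, the same tests can also be run from a plain query window, without the SQL Test GUI. A minimal sketch, handy for a fast local check:)

        -- Run every test in the MyChecks test class
        EXEC tSQLt.Run 'MyChecks';

        -- Or run every test class registered in this database
        EXEC tSQLt.RunAll;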
    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for RedGateAppCI shows that the SPROC for the new test has been moved over to that database. However, this is not exactly what we were after. We didn't want to just copy objects from one database to another, but to actually run the tests as part of the build/integration test process, i.e. to continuously check any changes we make (in this case, to the reference data emails) to ensure we're not breaking a test that we've set up.

    The behaviour we want to see is that, if we check in static data that is incorrect (as we did above, when we edited an email address to an invalid format) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build gives - sadly - a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log is towards the bottom. Pulling out this part:

        21-Jun-2013 11:35:19  Build FAILED.
        21-Jun-2013 11:35:19
        21-Jun-2013 11:35:19  "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
        21-Jun-2013 11:35:19  (sqlCI target) ->
        21-Jun-2013 11:35:19  EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19  EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19  EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19  EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19  EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19  EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build - it worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment: we've tested our database changes, so how can we now deploy them to target sites?


  • mocha testing for the lazies, single key-press for all possible tests

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i          (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. Is there something that does this, or anything like this, automatically? Something that reads all the files and asks me which file(s) I want to test. A GUI with checkboxes would be ultimate! But I'll take anything. I'm working in node.js.


  • Software, script or a tool to automate managing which tests to run

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i          (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain: I have to update the batch file every time a new file is added or changed. Is there a piece of software, a script or a tool that does this automatically, or that makes it easier for me to do so? I basically need it to be aware of the test files and ask me which file(s) I want to test. A GUI with checkboxes would be ultimate! But I'll take anything. I'm working in node.js.
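    For illustration, here is a rough sketch of such a script (a hypothetical helper, written in node.js since that's the poster's stack, and assuming mocha is on the PATH and tests live under ./test): it rebuilds the menu from the directory contents on every run, so nothing needs hand-editing when files change:

        // pick-test.js - list test files, ask which to run, then hand off to mocha.
        const fs = require('fs');
        const path = require('path');
        const readline = require('readline');
        const { spawn } = require('child_process');

        const dir = path.join(__dirname, 'test');                    // assumed layout
        const files = fs.readdirSync(dir).filter(f => f.endsWith('.js'));

        files.forEach((f, i) => console.log(`[${i}] ${f}`));

        const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
        rl.question('Run which test? (number, or "a" for all): ', answer => {
          rl.close();
          const args = answer === 'a' ? [dir] : [path.join(dir, files[Number(answer)])];
          spawn('mocha', args, { stdio: 'inherit', shell: true });   // assumes mocha on PATH
        });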


  • VerifyError When Running jUnit Test on Android 1.6

    - by DKnowles
    Here's what I'm trying to run on Android 1.6:

        package com.healthlogger.test;

        public class AllTests extends TestSuite {
            public static Test suite() {
                return new TestSuiteBuilder(AllTests.class).includeAllPackagesUnderHere().build();
            }
        }

    and:

        package com.healthlogger.test;

        public class RecordTest extends AndroidTestCase {
            /**
             * Ensures that the constructor will not take a null data tag.
             */
            @Test(expected=AssertionFailedError.class)
            public void testNullDataTagInConstructor() {
                Record r = new Record(null, Calendar.getInstance(), "Data");
                fail("Failed to catch null data tag.");
            }
        }

    The main project is HealthLogger. These are run from a separate test project (HealthLoggerTest). HealthLogger and jUnit4 are in HealthLoggerTest's build path. jUnit4 is also in HealthLogger's build path. The class "Record" is located in com.healthlogger. Commenting out the "@Test..." and "Record r..." lines allows this test to run. When they are uncommented, I get a VerifyError exception. I am severely blocked by this; why is it happening?

    EDIT: some info from logcat after the crash:

        E/AndroidRuntime( 3723): Uncaught handler: thread main exiting due to uncaught exception
        E/AndroidRuntime( 3723): java.lang.VerifyError: com.healthlogger.test.RecordTest
        E/AndroidRuntime( 3723):   at java.lang.Class.getDeclaredConstructors(Native Method)
        E/AndroidRuntime( 3723):   at java.lang.Class.getConstructors(Class.java:507)
        E/AndroidRuntime( 3723):   at android.test.suitebuilder.TestGrouping$TestCasePredicate.hasValidConstructor(TestGrouping.java:226)
        E/AndroidRuntime( 3723):   at android.test.suitebuilder.TestGrouping$TestCasePredicate.apply(TestGrouping.java:215)
        E/AndroidRuntime( 3723):   at android.test.suitebuilder.TestGrouping$TestCasePredicate.apply(TestGrouping.java:211)
        E/AndroidRuntime( 3723):   at android.test.suitebuilder.TestGrouping.select(TestGrouping.java:170)
        E/AndroidRuntime( 3723):   at android.test.suitebuilder.TestGrouping.selectTestClasses(TestGrouping.java:160)
        E/AndroidRuntime( 3723):   at android.test.suitebuilder.TestGrouping.testCaseClassesInPackage(TestGrouping.java:154)
        E/AndroidRuntime( 3723):   at android.test.suitebuilder.TestGrouping.addPackagesRecursive(TestGrouping.java:115)
        E/AndroidRuntime( 3723):   at android.test.suitebuilder.TestSuiteBuilder.includePackages(TestSuiteBuilder.java:103)
        E/AndroidRuntime( 3723):   at android.test.InstrumentationTestRunner.onCreate(InstrumentationTestRunner.java:321)
        E/AndroidRuntime( 3723):   at android.app.ActivityThread.handleBindApplication(ActivityThread.java:3848)
        E/AndroidRuntime( 3723):   at android.app.ActivityThread.access$2800(ActivityThread.java:116)
        E/AndroidRuntime( 3723):   at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1831)
        E/AndroidRuntime( 3723):   at android.os.Handler.dispatchMessage(Handler.java:99)
        E/AndroidRuntime( 3723):   at android.os.Looper.loop(Looper.java:123)
        E/AndroidRuntime( 3723):   at android.app.ActivityThread.main(ActivityThread.java:4203)
        E/AndroidRuntime( 3723):   at java.lang.reflect.Method.invokeNative(Native Method)
        E/AndroidRuntime( 3723):   at java.lang.reflect.Method.invoke(Method.java:521)
        E/AndroidRuntime( 3723):   at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791)
        E/AndroidRuntime( 3723):   at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549)
        E/AndroidRuntime( 3723):   at dalvik.system.NativeStart.main(Native Method)
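    For context, a guess at the cause plus a sketch of a workaround (my assumption, not a confirmed diagnosis): the test runner on Android 1.6 is JUnit 3 based, and loading a class that references JUnit 4 types such as @Test can fail bytecode verification on the device. The same test rewritten in JUnit 3 style, with no JUnit 4 imports, might look like this:

        package com.healthlogger.test;

        import java.util.Calendar;
        import junit.framework.AssertionFailedError;
        import android.test.AndroidTestCase;
        import com.healthlogger.Record;

        public class RecordTest extends AndroidTestCase {
            // JUnit 3 discovers tests by the "test" name prefix; the expected
            // failure is caught by hand instead of using @Test(expected=...).
            public void testNullDataTagInConstructor() {
                boolean threw = false;
                try {
                    new Record(null, Calendar.getInstance(), "Data");
                } catch (AssertionFailedError expected) {
                    threw = true; // the constructor rejected the null data tag
                }
                assertTrue("Failed to catch null data tag.", threw);
            }
        }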


  • EBS 12.1.1 Test Starter Kit now Available for Oracle Application Testing Suite

    - by Steven Chan
    We've discussed automated testing tools for the E-Business Suite several times on this blog, since testing is such a key part of everyone's implementation lifecycle. An important part of our testing arsenal in E-Business Suite Development is the Oracle Application Testing Suite. The Oracle Application Testing Suite (OATS) is built on the foundation of the e-TEST suite of products acquired from Empirix in 2008. The testing suite comprises:

    1. Oracle Load Testing, for scalability, performance, and load testing
    2. Oracle Functional Testing, for automated functional and regression testing
    3. Oracle Test Manager, for test process management, test execution, and defect tracking

    Oracle Application Testing Suite 9.0 has been supported for use with the E-Business Suite since 2009. I'm very pleased to let you know that our E-Business Suite Release 12.1.1 Test Starter Kit is now available for Oracle Application Testing Suite 9.1. You can download it here: Oracle Application Testing Suite Downloads


  • How to Test and Deploy Applications Faster

    - by rickramsey
    (photo courtesy of mtoleric via Flickr)

    If you want to test and deploy your applications much faster than you could before, take a look at these OTN resources. They won't disappoint.

    Developer Webinar: How to Test and Deploy Applications Faster - April 10

    Our second developer webinar, conducted by engineers Eric Reid and Stephan Schneider, will focus on how the zones and ZFS filesystem in Oracle Solaris 11 can simplify your development environment. This is a cool topic because it will show you how to test and deploy apps in their likely real-world environments much quicker than you could before. April 10 at 9:00 am PT.

    Video Interview: Tips for Developing Faster Applications with Oracle Solaris 11 Express

    We recorded this a while ago, and it talks about the Express version of Oracle Solaris 11, but most of it applies to the production release. George Drapeau, who manages a group of engineers whose sole mission is to help customers develop better, faster applications for Oracle Solaris, shares some tips and tricks for improving your applications:

    - How ZFS and zones create the perfect developer sandbox.
    - What's the best way for a developer to use DTrace.
    - How Crossbow's network bandwidth controls can improve an application's performance.

    To borrow the classic Ed Sullivan accolade, it's a "really good show."

    White Paper: What's New For Application Developers

    An excellent in-depth analysis of exactly how the capabilities of Oracle Solaris 11 help you test and deploy applications faster. It covers the tools in Oracle Solaris Studio and what you can do with each of them, plus source code management, scripting, and shells; how to replicate your development, test, and production environments, and how to make sure your application runs as it should in those different environments; how to migrate Oracle Solaris 10 applications to Oracle Solaris 11; how to find and diagnose faults in your application; and lots, lots more.

    - Rick


  • Announcing SO-Aware Test Workbench

    - by gsusx
    Yesterday was a big day for Tellago Studios. After a few months of heads-down work, we announced the release of the SO-Aware Test Workbench, a tool which brings sophisticated performance testing and test visualization capabilities to the WCF world. This work is the result of the feedback received from many of our SO-Aware and Tellago customers about how to improve WCF testing. More importantly, with the SO-Aware Test Workbench we are trying to address what has been one of the biggest challenges...(read more)


  • Programming test for ASP.NET C# developer job - Opinions please!

    - by Indy
    Hi all, we are hiring a .NET C# developer, and I have developed a technical test for the candidates to complete. They have an hour, and the test has two parts: some knowledge-based questions covering ASP.NET, C# and SQL, and a small practical test. I'd appreciate feedback on the test: is it sufficient to test a programmer's ability? What would you change, if anything?

    Part One:

    1. What are the events fired as part of the ASP.NET page lifecycle? What interesting things can you do at each?
    2. How does ViewState work, and why is it either useful or bad?
    3. What is a common way to create web services in ASP.NET 2.0?
    4. What is the GAC?
    5. What is boxing?
    6. What is a delegate?
    7. The C# keyword 'int' maps to which .NET type?
    8. Explain the difference between a Stored Procedure and a Trigger.
    9. What is an OUTER JOIN?
    10. What is @@IDENTITY?

    Part Two:

    You are provided with the Northwind database and the attached DB relationship diagram. Please create a page which provides users with the following functionality (you don't need to be too concerned with the presentation detail of the page):

    - Select a customer from a list, and see all the orders placed by that customer.
    - For the same customer, find all their orders which are Beverages and where the quantity is more than 5.

    I was aware of setting the right balance of difficulty, as this is an hour's test. I was able to complete the practical test in under 30 minutes using SqlDataSource and the query designer in Visual Studio; with the questions, I am looking to see how candidates approach the problem logically and whether they use the tools available. Many thanks!
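    In case it helps calibrate the difficulty, here is a sketch of the data-access half of Part Two against the standard Northwind schema (one possible answer, not the only one; @CustomerID stands in for the customer selected on the page):

        -- All orders placed by the selected customer
        SELECT o.OrderID, o.OrderDate
        FROM Orders o
        WHERE o.CustomerID = @CustomerID;

        -- The same customer's orders containing Beverages with quantity over 5
        SELECT o.OrderID, p.ProductName, od.Quantity
        FROM Orders o
        JOIN [Order Details] od ON od.OrderID = o.OrderID
        JOIN Products p ON p.ProductID = od.ProductID
        JOIN Categories c ON c.CategoryID = p.CategoryID
        WHERE o.CustomerID = @CustomerID
          AND c.CategoryName = 'Beverages'
          AND od.Quantity > 5;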


  • App Engine Hangout - chat with an App Engine Software Engineer in Test

    We'll be chatting with Robert Schuppenies, an App Engine Software Engineer in Test. He'll describe a bit about what he does, and talk about and demo some App Engine test frameworks, like the testbed module.


  • Failed to spawn test

    - by Lost
    Running a simple test on Ubuntu 12.04:

        sudo lxc-execute -n test /bin/bash -l debug -o outout

    I got this error message:

        lxc-execute: failed to spawn 'test'

    cat outout:

        lxc-execute 1347053658.113 DEBUG lxc_start - sigchild handler set
        lxc-execute 1347053658.113 INFO lxc_start - 'test' is initialized
        lxc-execute 1347053658.366 DEBUG lxc_start - Dropping cap_sys_boot and watching utmp
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/' (rootfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/sys' (sysfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/proc' (proc)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev' (devtmpfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev/pts' (devpts)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/' (ext3)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/fs/fuse/connections' (fusectl)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/debug' (debugfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/security' (securityfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/lock' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/shm' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/rpc_pipefs' (rpc_pipefs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/scratch/WAMC-Simulation' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/share' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/proj/WAMC-Simulation' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/users/bhu' (nfs)
        lxc-execute 1347053658.367 ERROR lxc_start - failed to spawn 'test'

    Then I ran the command:

        sudo lxc-checkconfig

        Kernel config /proc/config.gz not found, looking in other places...
        Found kernel config file /boot/config-2.6.38.7-1.0emulab
        --- Namespaces ---
        Namespaces: enabled
        Utsname namespace: enabled
        Ipc namespace: enabled
        Pid namespace: enabled
        User namespace: enabled
        Network namespace: enabled
        Multiple /dev/pts instances: enabled
        --- Control groups ---
        Cgroup: enabled
        Cgroup namespace: enabled
        Cgroup device: enabled
        Cgroup sched: enabled
        Cgroup cpu account: enabled
        Cgroup memory controller: enabled
        Cgroup cpuset: enabled
        --- Misc ---
        Veth pair device: enabled
        Macvlan: enabled
        Vlan: enabled
        File capabilities: enabled

        Note : Before booting a new kernel, you can check its configuration
        usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

    What's the problem? Thanks a lot.
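    One observation, offered as a hedged guess rather than a confirmed diagnosis: the lxc_cgroup DEBUG lines above walk the entire mount table without ever finding a cgroup mount, and lxc cannot spawn a container without one. A quick check, and the manual mount to try if it's missing:

        # Is any cgroup hierarchy mounted?
        grep cgroup /proc/mounts

        # If not, mount one (on Ubuntu 12.04 the cgroup-lite package normally does this)
        sudo mkdir -p /sys/fs/cgroup
        sudo mount -t cgroup cgroup /sys/fs/cgroup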


  • How to drastically improve code coverage?

    - by Peter Kofler
    I'm tasked with getting a legacy application under unit test. First, some background about the application: it's a 600k LOC Java RCP code base with these major problems:

    - massive code duplication
    - no encapsulation: most private data is accessible from outside, and some of the business data is also made into singletons, so it's not just changeable from outside but from everywhere
    - no business model: business data is stored in Object[] and double[][], so no OO.

    There is a good regression test suite, and an efficient QA team is testing and finding bugs. I know the techniques for getting it under test from the classic books, e.g. Michael Feathers' Working Effectively with Legacy Code, but that's too slow. As there is a working regression test system, I'm not afraid to aggressively refactor the system to allow unit tests to be written. How should I start to attack the problem to get some coverage quickly, so that I'm able to show progress to management (and in fact start benefiting from the safety net of JUnit tests)? I do not want to employ tools that generate regression test suites, e.g. AgitarOne, because these tests do not check whether something is correct.


  • Mock Objects for Testing - Test Automation Engineer Perspective

    - by user9009
    Hello. How often are QA engineers responsible for developing mock objects for unit testing, and is dealing with mock objects just a developer's job? The reason I ask is that I'm interested in QA as a career and am learning tools like JUnit and TestNG and a couple of frameworks. I just want to know to what level unit testing is done by developers, and at what point a QA engineer takes over testing for better test coverage. Thanks.

    Edit: Based on the answers below, I'm providing more details about the kind of QA I was referring to. I'm interested in test automation rather than simple QA involving record-and-playback scripting. So are test automation engineers responsible for developing frameworks, or do they have a team of developers dedicated to framework development? And yes, I was asking about the use of mock objects for testing from a test automation engineer's perspective.
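    For a concrete picture of what's being discussed, here is a minimal mock-object sketch in JUnit with Mockito (hypothetical Gateway/PaymentService names, since the question names no code): the mock stands in for a collaborator so the class under test can be exercised in isolation, which is why this work usually sits with whoever writes the unit tests.

        import static org.junit.Assert.assertTrue;
        import static org.mockito.Mockito.*;

        import org.junit.Test;

        public class PaymentServiceTest {
            // Hypothetical collaborator and class under test
            public interface Gateway { boolean charge(int cents); }

            public static class PaymentService {
                private final Gateway gateway;
                public PaymentService(Gateway gateway) { this.gateway = gateway; }
                public boolean pay(int cents) { return gateway.charge(cents); }
            }

            @Test
            public void paysThroughTheGateway() {
                Gateway gateway = mock(Gateway.class);      // the mock object
                when(gateway.charge(500)).thenReturn(true); // stub its behaviour

                assertTrue(new PaymentService(gateway).pay(500));
                verify(gateway).charge(500);                // assert the interaction
            }
        }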


  • TDD: Write a separate test for object initialization or relying on other tests exercising it

    - by DXM
    This seems to be a common pattern that's emerging in some of the tests I've worked on lately. We have a class, and quite often this is legacy code whose design can't easily be altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function which puts an object into a valid state. Only after it is initialized/loaded are the members in the proper state, so that other methods can be exercised.

    So when we start writing tests, the first test is "TestLoad", and all we put in there is an exercise of the initialization logic. Then we might add one (or a few) TestLoadFailureXXX tests, and those are definitely valuable. Then we start writing tests to verify other behaviours, but all of them require the object to be loaded, so they all start by running exactly the same code as "TestLoad".

    So my question: is TestLoad even necessary? Do you remove it and let the other tests simply exercise the loading? Or leave it so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems like in the case of loading this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here?

    Thank you for the responses. It definitely makes sense that you want to see an "InitializationTest", and if that fails you know where to start looking. In case it matters, this question is mostly about C++, and we use the CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
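    One pattern that keeps an explicit initialization test without repeating the load everywhere (a sketch with hypothetical names, in CppUnit since that's the framework in question): let the fixture's setUp() perform and assert the load, so TestLoad shrinks to documentation while every other test starts from a loaded object.

        #include <cppunit/extensions/HelperMacros.h>

        class LegacyObjectTest : public CppUnit::TestFixture {
            CPPUNIT_TEST_SUITE(LegacyObjectTest);
            CPPUNIT_TEST(testLoad);
            CPPUNIT_TEST(testTotalsAfterLoad);
            CPPUNIT_TEST_SUITE_END();

            LegacyObject obj; // hypothetical class under test

        public:
            void setUp() {
                // Every test (including testLoad) starts from a valid, loaded state.
                CPPUNIT_ASSERT(obj.Load("fixture.dat"));
            }

            void testLoad() {
                // Intentionally empty: setUp() already asserted the load succeeded.
                // Keeping the named test makes an initialization break easy to spot.
            }

            void testTotalsAfterLoad() {
                CPPUNIT_ASSERT_EQUAL(42.0, obj.Total()); // hypothetical behaviour check
            }
        };
        CPPUNIT_TEST_SUITE_REGISTRATION(LegacyObjectTest);

    A load failure will still fail the whole fixture, as the question anticipates, but the failure lands in setUp() and the surviving testLoad name points straight at the cause.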


  • Test-Driven Development with plain C: manage multiple modules

    - by Angelo
    I am new to test-driven development, but I'm loving it. There is, however, a main problem that prevents me from using it effectively. I work on embedded medical applications, in plain C, with safety issues.

    Suppose you have module A with a function A_function() that I want to test. This function calls a function B_function(), implemented in module B. I want to decouple the modules, so, as James Grenning teaches, I create a mock module B that implements a mock version of B_function(). However, the day comes when I have to implement module B with the real version of B_function(). Of course the two B_function() implementations cannot live in the same executable, so I don't know how to have a unique "launcher" to test both modules.

    James Grenning's way out is to replace, in module A, the call to B_function() with a function pointer that can hold the value of the mock or the real function according to the need. However, I work in a team, and I cannot justify this decision, which would make no sense if it were not for the tests, and no one asked me explicitly to use a test-driven approach. Maybe the only way out is to generate a different executable for each module. Any smarter solution? Thank you.
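    For what it's worth, the function-pointer indirection isn't the only option: the conventional alternative in plain C is a link-time seam, where the test build links the mock translation unit and the production build links the real one, so no production source changes are needed at all. A sketch using the post's own names (the int signature and file layout are invented for illustration):

        /* b.h - the interface module A compiles against */
        #ifndef B_H
        #define B_H
        int B_function(int input);
        #endif

        /* b.c - real implementation; linked into the production executable only */
        #include "b.h"
        int B_function(int input) {
            return input * 2; /* placeholder for the real behaviour */
        }

        /* b_mock.c - mock implementation; linked into the test executable only */
        #include "b.h"
        int B_function(int input) {
            (void)input;      /* unused in the canned version */
            return 42;        /* fixed answer so module A's tests are deterministic */
        }

    Module A's test runner links a.o with b_mock.o, while the production build links a.o with b.o. The two definitions never meet in one executable, which is exactly why C test suites are commonly built as one test executable per module, as you suspected.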


  • ORACLE UK TECHNOLOGY “TEST FEST”

    - by mseika
    ORACLE UK TECHNOLOGY "TEST FEST"

    Join us at the UKOUG Conference at the ICC in Birmingham and take your OPN Implementation Specialist exam for free! 3-5 December 2012, ICC Birmingham (UK)

    Dear Oracle Partner,

    As a priority partner, we are sending you advance notice of these exclusive "Technology Test Fest" free examination sessions. Please note that this communication will be sent out to the wider community one week from today, so please register immediately to secure your place!

    We are delighted to offer you the exclusive opportunity to register for and attend the Oracle UK "Technology Test Fest" being held as part of the UKOUG Conference at the ICC in Birmingham, in the Drawing Room at the Hyatt Regency hotel adjacent to the ICC venue, from 3rd to 5th December 2012. This is your opportunity to sit your chosen Oracle Technology Specialist Implementation Exam free of charge. Four sessions are being run (10:00 AM and 2:00 PM), with just 15 places at each session, so register now to avoid disappointment! (Exams take about 1.5 hours to complete.)

    REGISTER - 3 December, Afternoon Session - 2:00pm
    REGISTER - 4 December, Morning Session - 10:00am
    REGISTER - 4 December, Afternoon Session - 2:00pm
    REGISTER - 5 December, Morning Session - 10:00am

    Price: FREE

    Address:
    The Drawing Room
    Hyatt Regency Hotel Birmingham
    2 Bridge Street
    Birmingham B1 2JZ
    3-5 December 2012

    Which Implementation Specialist Exams are available to take?
    Click here to see the list of exams available for you to sit for free at the Oracle UKOUG "Technology Test Fest". The links also include the study guide for each particular exam. Please review the Specialization Guide as well.

    How do I register for the Oracle UK "Technology Test Fest"?
    1. Fill out the Pearson VUE profile HERE and complete it with your OPN Company ID. NB: instructions on how to create/update the profile can be found HERE.
    2. Register for one of the four sessions using the registration links at the top of this page.
    3. You will need to bring your own laptop (with a Windows OS) and a form of identification to be able to take any of the exams.

    Need help or advice?
    For more information about the tests and the Get Specialized programme, please contact: [email protected]
    For issues with your profile or any other OPN-related problems, please contact our Oracle Partner Business Centre: [email protected] or call 08705 194 194.

    We look forward to welcoming you to the Oracle UK "Technology Test Fest" on the 3rd-5th December 2012! Book early to avoid disappointment.


  • Test Driven Development Code Order

    - by Bobby Kostadinov
    I am developing my first project using test-driven development. I am using Zend Framework and PHPUnit. Currently my project is at 100% code coverage, but I am not sure I understand in what order I am supposed to write my code. Am I supposed to write my tests FIRST, describing what my objects are expected to do, or write my objects and then test them? I've been working on completing a controller/model and then writing a test for it, but I am not sure this is what TDD is about. Any advice? For example, I wrote my Auth plugin and my Auth controller and tested that they worked properly in my browser, and then I sat down to write the tests for them, which proved that there were some logical errors in the code that did work in the browser.
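    To make the order concrete, here is a minimal red-green sketch in PHPUnit (hypothetical Auth class, PHPUnit 3.x naming to match the Zend Framework 1 era): the test is written first and fails because the behaviour doesn't exist yet, and only then is the production code written to make it pass.

        <?php
        // tests/AuthTest.php - written BEFORE Auth::login() exists (red);
        // Auth is then implemented until this passes (green), then refactored.
        class AuthTest extends PHPUnit_Framework_TestCase
        {
            public function testRejectsEmptyCredentials()
            {
                $auth = new Auth();
                $this->assertFalse($auth->login('', ''));
            }
        }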

