Search Results

Search found 25304 results on 1013 pages for 'test environments'.


  • Visual Studio 2008 unit test keeps failing

    - by Gerbrand
    I've created a method that calculates the harmonic mean based on a list of doubles. But when I'm running the test it keeps failing even though the output results are the same. My harmonic mean method:

        public static double GetHarmonicMean(List<double> parameters)
        {
            var cumReciprocal = 0.0d;
            var countN = parameters.Count;
            foreach (var param in parameters)
            {
                cumReciprocal += 1.0d / param;
            }
            return 1.0d / (cumReciprocal / countN);
        }

    My test method:

        [TestMethod()]
        public void GetHarmonicMeanTest()
        {
            var parameters = new List<double> { 1.5d, 2.3d, 2.9d, 1.9d, 5.6d };
            const double expected = 2.32432293165495;
            var actual = OwnFunctions.GetHarmonicMean(parameters);
            Assert.AreEqual(expected, actual);
        }

    After running the test, the following message is shown:

        Assert.AreEqual failed. Expected:<2.32432293165495>. Actual:<2.32432293165495>.

    To me those are both the same value. Can somebody explain this? Or am I doing something wrong?
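
    For context, the likely cause: the two doubles differ somewhere past the digits that the default double formatting prints, so the exact-equality assertion fails even though both values display identically. MSTest's Assert.AreEqual has an overload that takes a tolerance; a minimal sketch (the delta of 1e-12 is an arbitrary choice, not a recommendation):

        // Compare doubles with an explicit tolerance instead of exact equality.
        Assert.AreEqual(expected, actual, 1e-12);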

    Read the article

  • Android AsyncTask testing problem with Android Test Framework

    - by Vlad
    I have a very simple AsyncTask implementation example and have a problem testing it using the Android JUnit framework. It works just fine when I instantiate and execute it in a normal application. However, when it's executed from any of the Android testing framework classes (i.e. AndroidTestCase, ActivityUnitTestCase, ActivityInstrumentationTestCase2, etc.) it behaves strangely:

    - it executes the doInBackground() method correctly;
    - however, it doesn't invoke any of its notification methods (onPostExecute(), onProgressUpdate(), etc.) -- it just silently ignores them without showing any errors.

    This is the very simple AsyncTask example:

        package kroz.andcookbook.threads.asynctask;

        import android.os.AsyncTask;
        import android.util.Log;
        import android.widget.ProgressBar;
        import android.widget.Toast;

        public class AsyncTaskDemo extends AsyncTask<Integer, Integer, String> {
            AsyncTaskDemoActivity _parentActivity;
            int _counter;
            int _maxCount;

            public AsyncTaskDemo(AsyncTaskDemoActivity asyncTaskDemoActivity) {
                _parentActivity = asyncTaskDemoActivity;
            }

            @Override
            protected void onPreExecute() {
                super.onPreExecute();
                _parentActivity._progressBar.setVisibility(ProgressBar.VISIBLE);
                _parentActivity._progressBar.invalidate();
            }

            @Override
            protected String doInBackground(Integer... params) {
                _maxCount = params[0];
                for (_counter = 0; _counter <= _maxCount; _counter++) {
                    try {
                        Thread.sleep(1000);
                        publishProgress(_counter);
                    } catch (InterruptedException e) {
                        // Ignore
                    }
                }
                return null; // doInBackground is declared to return a String
            }

            @Override
            protected void onProgressUpdate(Integer... values) {
                super.onProgressUpdate(values);
                int progress = values[0];
                String progressStr = "Counting " + progress + " out of " + _maxCount;
                _parentActivity._textView.setText(progressStr);
                _parentActivity._textView.invalidate();
            }

            @Override
            protected void onPostExecute(String result) {
                super.onPostExecute(result);
                _parentActivity._progressBar.setVisibility(ProgressBar.INVISIBLE);
                _parentActivity._progressBar.invalidate();
            }

            @Override
            protected void onCancelled() {
                super.onCancelled();
                _parentActivity._textView.setText("Request to cancel AsyncTask");
            }
        }

    This is the test case. Here AsyncTaskDemoActivity is a very simple Activity providing the UI for testing the AsyncTask:

        package kroz.andcookbook.test.threads.asynctask;

        import java.util.concurrent.ExecutionException;

        import kroz.andcookbook.R;
        import kroz.andcookbook.threads.asynctask.AsyncTaskDemo;
        import kroz.andcookbook.threads.asynctask.AsyncTaskDemoActivity;

        import android.content.Intent;
        import android.test.ActivityUnitTestCase;
        import android.widget.Button;

        public class AsyncTaskDemoTest2 extends ActivityUnitTestCase<AsyncTaskDemoActivity> {
            AsyncTaskDemo _atask;
            private Intent _startIntent;

            public AsyncTaskDemoTest2() {
                super(AsyncTaskDemoActivity.class);
            }

            protected void setUp() throws Exception {
                super.setUp();
                _startIntent = new Intent(Intent.ACTION_MAIN);
            }

            protected void tearDown() throws Exception {
                super.tearDown();
            }

            public final void testExecute() {
                startActivity(_startIntent, null, null);
                Button btnStart = (Button) getActivity().findViewById(R.id.Button01);
                btnStart.performClick();
                assertNotNull(getActivity());
            }
        }

    All this code works just fine, except that the AsyncTask doesn't invoke its notification methods when executed from within the Android testing framework. Any ideas?
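
    One plausible explanation and workaround, for context (a sketch, not a guaranteed fix): onPostExecute() and onProgressUpdate() are delivered through the main thread's message queue, and an instrumentation test runs on its own thread, so those callbacks never get dispatched unless the test lets the main looper run. Something along these lines is a common workaround (the argument of 3 and the timeout are arbitrary):

        public final void testExecuteInvokesCallbacks() throws Throwable {
            startActivity(_startIntent, null, null);
            // Start the task on the UI thread so its callbacks bind to the main looper.
            runTestOnUiThread(new Runnable() {
                public void run() {
                    _atask = new AsyncTaskDemo(getActivity());
                    _atask.execute(3);
                }
            });
            // Block the instrumentation thread until doInBackground() finishes...
            _atask.get(30, java.util.concurrent.TimeUnit.SECONDS);
            // ...then let the queued onProgressUpdate()/onPostExecute() messages run.
            getInstrumentation().waitForIdleSync();
        }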

    Read the article

  • How to mock/stub calls to message taglib in Grails controller

    - by Dave
    I've got a Grails controller which relies on the message taglib to resolve an i18n message:

        class TokenController {
            def passwordReset = {
                def token = DatedToken.findById(params.id);
                if (!isValidToken(token, params)) {
                    flash.message = message(code: "forgotPassword.reset.invalidToken")
                    redirect controller: 'forgotPassword', action: 'index'
                    return
                }
                render view: '/forgotPassword/reset', model: [token: token.token]
            }
        }

    I've written a unit test for the controller:

        class TokenControllerTests extends ControllerUnitTestCase {
            void testPasswordResetInvalidTokenRedirect() {
                controller.passwordReset()
                assert...
            }
        }

    Since the message taglib is called in the controller, I get a MissingMethodException:

        groovy.lang.MissingMethodException: No signature of method: TokenController.message() is applicable for argument types: (java.util.LinkedHashMap) values: [[code:forgotPassword.reset.invalidToken]]

    Does anyone know the best way to get around this issue in a unit test? Ideally I would like to perform assertions on the message, but right now I'd be happy if the test just ran! Thanks
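
    A common way past the MissingMethodException (a sketch; it assumes ControllerUnitTestCase's mocked flash scope behaves as usual): stub message() on the controller's metaClass so the call resolves, and have the stub echo the i18n code back so it can be asserted on:

        void testPasswordResetInvalidTokenRedirect() {
            // Stub the taglib call; the closure simply returns the message code.
            controller.metaClass.message = { Map args -> args.code }
            controller.passwordReset()
            assertEquals 'forgotPassword.reset.invalidToken', controller.flash.message
        }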

    Read the article

  • How do I run JUnit tests from inside my Java application?

    - by corgrath
    Is it possible to run JUnit tests from inside my Java application? Are there test frameworks I can use (such as JUnit.jar?), or am I forced to find the test files, invoke the methods and track the exceptions myself? The reason I am asking is that my application requires a lot of work to launch (lots of dependencies and configurations, etc.), and using an external testing tool (like the JUnit Ant task) would require a lot of work to set up. It is easier to start the application and then run my tests inside it. Is there an easy test framework that runs tests and outputs results from inside a Java application, or am I forced to write my own framework?
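
    JUnit itself can do this: the JUnitCore facade (part of junit.jar since JUnit 4) runs test classes programmatically and reports results. A minimal sketch, with MyFirstTest and MySecondTest standing in for your own test classes:

        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;
        import org.junit.runner.notification.Failure;

        public class InAppTestRunner {
            public static void run() {
                // Runs the listed test classes in-process and collects results.
                Result result = JUnitCore.runClasses(MyFirstTest.class, MySecondTest.class);
                for (Failure failure : result.getFailures()) {
                    System.out.println(failure);
                }
                System.out.println("Ran " + result.getRunCount()
                        + " tests, " + result.getFailureCount() + " failed.");
            }
        }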

    Read the article

  • Django: test fixture fails to load

    - by Esteban Feldman
    Hi all, I did a dumpdata of my project, then in my new test I added it to fixtures:

        from django.test import TestCase

        class TestGoal(TestCase):
            fixtures = ['test_data.json']

            def test_goal(self):
                """ Tests that 1 + 1 always equals 2. """
                self.failUnlessEqual(1 + 1, 2)

    When running the test I get:

        Problem installing fixture 'XXX/fixtures/test_data.json': DoesNotExist: XXX matching query does not exist.

    Manually running loaddata against the existing database works fine, but it does not when the db is empty: if I do a dropdb, createdb and a simple syncdb, then try loaddata, it fails with the same error. Any clue? Python version 2.6.5, Django 1.1.1
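
    A frequent culprit with this symptom, for what it's worth: syncdb creates the django.contrib.contenttypes rows itself, so a fixture dumped with content types included can reference rows that do not match a freshly created database. If your Django version's dumpdata supports the --exclude flag (an assumption to check against the 1.1 docs), re-dumping without them is a common fix:

        # Hypothetical re-dump that leaves out the auto-generated content types:
        python manage.py dumpdata --exclude contenttypes > test_data.json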

    Read the article

  • How to make pytest display a custom string representation for fixture parameters?

    - by Björn Pollex
    When using builtin types as fixture parameters, pytest prints out the value of the parameters in the test report. For example:

        @fixture(params=['hello', 'world'])
        def data(request):
            return request.param

        def test_something(data):
            pass

    Running this with py.test --verbose will print something like:

        test_example.py:7: test_something[hello] PASSED
        test_example.py:7: test_something[world] PASSED

    Note that the value of the parameter is printed in square brackets after the test name. Now, when using an object of a user-defined class as a parameter, like so:

        class Param(object):
            def __init__(self, text):
                self.text = text

        @fixture(params=[Param('hello'), Param('world')])
        def data(request):
            return request.param

        def test_something(data):
            pass

    pytest will simply enumerate the values (p0, p1, etc.):

        test_example.py:7: test_something[p0] PASSED
        test_example.py:7: test_something[p1] PASSED

    This behavior does not change even when the user-defined class provides custom __str__ and __repr__ implementations. Is there any way to make pytest display something more useful than just p0 here? I am using pytest 2.5.2 on Python 2.7.6 on Windows 7.
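
    For reference, the fixture decorator also accepts an ids argument alongside params, which controls exactly this bracketed label (worth verifying against your 2.5.2 install, as fixture ids is a relatively recent addition at that point). A minimal sketch:

        @fixture(params=[Param('hello'), Param('world')],
                 ids=['hello', 'world'])
        def data(request):
            return request.param

    With that, the report shows test_something[hello] and test_something[world] again.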

    Read the article

  • Maven/TestNG reports "Failures: 0" but then "There are test failures.", what's wrong?

    - by JohnS
    I'm using Maven 2.2.1 r801777, Surefire 2.7.1, TestNG 5.14.6, Java 1.6.0_11 on Win XP. I have only one test class with one empty test method, and in my pom I have just added the TestNG dependency. When I execute mvn test it prints out:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running TestSuite
        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.301 sec

        Results :

        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] There are test failures.
        Please refer to [...]\target\surefire-reports for the individual test results.

    There is no error in the test reports, and with the -e switch:

        [INFO] Trace
        org.apache.maven.BuildFailureException: There are test failures.
        Please refer to [...]\target\surefire-reports for the individual test results.
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:715)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:556)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:535)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
            at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: org.apache.maven.plugin.MojoFailureException: There are test failures.
        Please refer to [...]\target\surefire-reports for the individual test results.
            at org.apache.maven.plugin.surefire.SurefirePlugin.execute(SurefirePlugin.java:575)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
            ... 17 more

    Any idea?

    EDIT My pom:

        <?xml version="1.0" encoding="UTF-8"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
            <modelVersion>4.0.0</modelVersion>
            <groupId>com.sample</groupId>
            <artifactId>sample</artifactId>
            <name>sample</name>
            <packaging>jar</packaging>
            <version>0.0.1-SNAPSHOT</version>
            <description />
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-compiler-plugin</artifactId>
                        <configuration>
                            <source>1.6</source>
                            <target>1.6</target>
                            <encoding>UTF-8</encoding>
                        </configuration>
                    </plugin>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-surefire-plugin</artifactId>
                    </plugin>
                </plugins>
            </build>
            <dependencies>
                <dependency>
                    <groupId>org.testng</groupId>
                    <artifactId>testng</artifactId>
                    <version>5.14.6</version>
                    <scope>test</scope>
                </dependency>
            </dependencies>
        </project>

    The only class that I have:

        import org.testng.Assert;
        import org.testng.annotations.Test;

        @Test
        public class MyTest {
            @Test
            public void test() {
                Assert.assertEquals("a", "a");
            }
        }

    Read the article

  • Why is there a /etc/init.d/mysql file on this Slackware machine? How could it have gotten there?

    - by jasonspiro
    A client of my IT-consulting service owns a web-development shop. He's been having problems with a Slackware 12.0 server running MySQL 5.0.67. The machine was set up by the client's sysadmin, who left on bad terms. My client no longer employs a sysadmin. As far as I can tell, the only copy of MySQL that's installed is the one described in /var/log/packages/mysql-5.0.67-i486-1: PACKAGE NAME: mysql-5.0.67-i486-1 COMPRESSED PACKAGE SIZE: 16828 K UNCOMPRESSED PACKAGE SIZE: 33840 K PACKAGE LOCATION: /var/slapt-get/archives/./slackware/ap/mysql-5.0.67-i486-1.tgz PACKAGE DESCRIPTION: mysql: mysql (SQL-based relational database server) mysql: mysql: MySQL is a fast, multi-threaded, multi-user, and robust SQL mysql: (Structured Query Language) database server. It comes with a nice API mysql: which makes it easy to integrate into other applications. mysql: mysql: The home page for MySQL is http://www.mysql.com/ mysql: mysql: mysql: mysql: FILE LIST: ./ var/ var/lib/ var/lib/mysql/ var/run/ var/run/mysql/ install/ install/doinst.sh install/slack-desc usr/ usr/include/ usr/include/mysql/ usr/include/mysql/my_alloc.h usr/include/mysql/sql_common.h usr/include/mysql/my_dbug.h usr/include/mysql/errmsg.h usr/include/mysql/my_pthread.h usr/include/mysql/my_list.h usr/include/mysql/mysql.h usr/include/mysql/sslopt-vars.h usr/include/mysql/my_config.h usr/include/mysql/mysql_com.h usr/include/mysql/m_string.h usr/include/mysql/sslopt-case.h usr/include/mysql/my_xml.h usr/include/mysql/sql_state.h usr/include/mysql/my_global.h usr/include/mysql/my_sys.h usr/include/mysql/mysqld_ername.h usr/include/mysql/mysqld_error.h usr/include/mysql/sslopt-longopts.h usr/include/mysql/keycache.h usr/include/mysql/my_net.h usr/include/mysql/mysql_version.h usr/include/mysql/my_no_pthread.h usr/include/mysql/decimal.h usr/include/mysql/readline.h usr/include/mysql/my_attribute.h usr/include/mysql/typelib.h usr/include/mysql/my_dir.h usr/include/mysql/raid.h usr/include/mysql/m_ctype.h usr/include/mysql/mysql_embed.h usr/include/mysql/mysql_time.h usr/include/mysql/my_getopt.h usr/lib/ usr/lib/mysql/ usr/lib/mysql/libmysqlclient_r.so.15.0.0 usr/lib/mysql/libmysqlclient_r.la usr/lib/mysql/libmyisammrg.a usr/lib/mysql/libmystrings.a usr/lib/mysql/libmyisam.a usr/lib/mysql/libmysqlclient.so.15.0.0 usr/lib/mysql/libmysqlclient_r.a usr/lib/mysql/libmysqlclient.a usr/lib/mysql/libheap.a usr/lib/mysql/libvio.a usr/lib/mysql/libmysqlclient.la usr/lib/mysql/libmysys.a usr/lib/mysql/libdbug.a usr/bin/ usr/bin/comp_err usr/bin/my_print_defaults usr/bin/resolve_stack_dump usr/bin/msql2mysql usr/bin/mysqltestmanager-pwgen usr/bin/myisampack usr/bin/replace usr/bin/mysqld_multi usr/bin/mysqlaccess usr/bin/mysql_install_db usr/bin/innochecksum usr/bin/myisam_ftdump usr/bin/mysqlcheck usr/bin/mysqltest usr/bin/mysql_upgrade_shell usr/bin/mysql_secure_installation usr/bin/mysql_fix_extensions usr/bin/mysqld_safe usr/bin/mysql_explain_log usr/bin/mysqlimport usr/bin/myisamlog usr/bin/mysql_tzinfo_to_sql usr/bin/mysql_upgrade usr/bin/mysqltestmanager usr/bin/mysql_fix_privilege_tables usr/bin/mysql_find_rows usr/bin/mysql_convert_table_format usr/bin/mysqltestmanagerc usr/bin/mysqlhotcopy usr/bin/mysqldump usr/bin/mysqlshow usr/bin/mysqlbug usr/bin/mysql_config usr/bin/mysqldumpslow usr/bin/mysql_waitpid usr/bin/mysqlbinlog usr/bin/mysql_client_test usr/bin/perror usr/bin/mysql usr/bin/myisamchk usr/bin/mysql_setpermission usr/bin/mysqladmin usr/bin/mysql_zap usr/bin/mysql_tableinfo usr/bin/resolveip usr/share/ usr/share/mysql/ 
usr/share/mysql/errmsg.txt usr/share/mysql/swedish/ usr/share/mysql/swedish/errmsg.sys usr/share/mysql/mysql_system_tables_data.sql usr/share/mysql/mysql.server usr/share/mysql/hungarian/ usr/share/mysql/hungarian/errmsg.sys usr/share/mysql/norwegian/ usr/share/mysql/norwegian/errmsg.sys usr/share/mysql/slovak/ usr/share/mysql/slovak/errmsg.sys usr/share/mysql/spanish/ usr/share/mysql/spanish/errmsg.sys usr/share/mysql/polish/ usr/share/mysql/polish/errmsg.sys usr/share/mysql/ukrainian/ usr/share/mysql/ukrainian/errmsg.sys usr/share/mysql/danish/ usr/share/mysql/danish/errmsg.sys usr/share/mysql/romanian/ usr/share/mysql/romanian/errmsg.sys usr/share/mysql/english/ usr/share/mysql/english/errmsg.sys usr/share/mysql/charsets/ usr/share/mysql/charsets/latin2.xml usr/share/mysql/charsets/greek.xml usr/share/mysql/charsets/koi8r.xml usr/share/mysql/charsets/latin1.xml usr/share/mysql/charsets/cp866.xml usr/share/mysql/charsets/geostd8.xml usr/share/mysql/charsets/cp1250.xml usr/share/mysql/charsets/koi8u.xml usr/share/mysql/charsets/cp852.xml usr/share/mysql/charsets/hebrew.xml usr/share/mysql/charsets/latin7.xml usr/share/mysql/charsets/README usr/share/mysql/charsets/ascii.xml usr/share/mysql/charsets/cp1251.xml usr/share/mysql/charsets/macce.xml usr/share/mysql/charsets/latin5.xml usr/share/mysql/charsets/Index.xml usr/share/mysql/charsets/macroman.xml usr/share/mysql/charsets/cp1256.xml usr/share/mysql/charsets/keybcs2.xml usr/share/mysql/charsets/swe7.xml usr/share/mysql/charsets/armscii8.xml usr/share/mysql/charsets/dec8.xml usr/share/mysql/charsets/cp1257.xml usr/share/mysql/charsets/hp8.xml usr/share/mysql/charsets/cp850.xml usr/share/mysql/korean/ usr/share/mysql/korean/errmsg.sys usr/share/mysql/german/ usr/share/mysql/german/errmsg.sys usr/share/mysql/mi_test_all.res usr/share/mysql/greek/ usr/share/mysql/greek/errmsg.sys usr/share/mysql/french/ usr/share/mysql/french/errmsg.sys usr/share/mysql/mysql_fix_privilege_tables.sql usr/share/mysql/dutch/ usr/share/mysql/dutch/errmsg.sys usr/share/mysql/serbian/ usr/share/mysql/serbian/errmsg.sys usr/share/mysql/mysql_system_tables.sql usr/share/mysql/my-huge.cnf usr/share/mysql/portuguese/ usr/share/mysql/portuguese/errmsg.sys usr/share/mysql/japanese/ usr/share/mysql/japanese/errmsg.sys usr/share/mysql/mysql_test_data_timezone.sql usr/share/mysql/russian/ usr/share/mysql/russian/errmsg.sys usr/share/mysql/czech/ usr/share/mysql/czech/errmsg.sys usr/share/mysql/fill_help_tables.sql usr/share/mysql/estonian/ usr/share/mysql/estonian/errmsg.sys usr/share/mysql/my-medium.cnf usr/share/mysql/norwegian-ny/ usr/share/mysql/norwegian-ny/errmsg.sys usr/share/mysql/my-small.cnf usr/share/mysql/mysql-log-rotate usr/share/mysql/italian/ usr/share/mysql/italian/errmsg.sys usr/share/mysql/my-large.cnf usr/share/mysql/ndb-config-2-node.ini usr/share/mysql/binary-configure usr/share/mysql/mi_test_all usr/share/mysql/mysqld_multi.server usr/share/mysql/my-innodb-heavy-4G.cnf usr/doc/ usr/doc/mysql-5.0.67/ usr/doc/mysql-5.0.67/README usr/doc/mysql-5.0.67/Docs/ usr/doc/mysql-5.0.67/Docs/INSTALL-BINARY usr/doc/mysql-5.0.67/COPYING usr/info/ usr/info/mysql.info.gz usr/libexec/ usr/libexec/mysqld usr/libexec/mysqlmanager usr/man/ usr/man/man8/ usr/man/man8/mysqlmanager.8.gz usr/man/man8/mysqld.8.gz usr/man/man1/ usr/man/man1/mysql_zap.1.gz usr/man/man1/mysql_setpermission.1.gz usr/man/man1/mysql_tzinfo_to_sql.1.gz usr/man/man1/msql2mysql.1.gz usr/man/man1/mysql_tableinfo.1.gz usr/man/man1/mysql_explain_log.1.gz usr/man/man1/mysqlcheck.1.gz 
usr/man/man1/comp_err.1.gz usr/man/man1/my_print_defaults.1.gz usr/man/man1/mysqlbinlog.1.gz usr/man/man1/myisam_ftdump.1.gz usr/man/man1/mysql_upgrade.1.gz usr/man/man1/mysql.1.gz usr/man/man1/mysql_client_test.1.gz usr/man/man1/resolve_stack_dump.1.gz usr/man/man1/mysql_fix_extensions.1.gz usr/man/man1/mysqlmanagerc.1.gz usr/man/man1/mysql_config.1.gz usr/man/man1/mysqlshow.1.gz usr/man/man1/myisamlog.1.gz usr/man/man1/replace.1.gz usr/man/man1/mysqlmanager-pwgen.1.gz usr/man/man1/mysqltest.1.gz usr/man/man1/innochecksum.1.gz usr/man/man1/mysqladmin.1.gz usr/man/man1/perror.1.gz usr/man/man1/mysql_waitpid.1.gz usr/man/man1/mysql_convert_table_format.1.gz usr/man/man1/mysqlman.1.gz usr/man/man1/mysqlimport.1.gz usr/man/man1/mysqlbug.1.gz usr/man/man1/mysql_find_rows.1.gz usr/man/man1/myisampack.1.gz usr/man/man1/myisamchk.1.gz usr/man/man1/mysql_fix_privilege_tables.1.gz usr/man/man1/mysql-stress-test.pl.1.gz usr/man/man1/resolveip.1.gz usr/man/man1/make_win_bin_dist.1.gz usr/man/man1/mysqlhotcopy.1.gz usr/man/man1/mysqld_multi.1.gz usr/man/man1/safe_mysqld.1.gz usr/man/man1/mysql_secure_installation.1.gz usr/man/man1/mysql_install_db.1.gz usr/man/man1/mysqldump.1.gz usr/man/man1/mysql-test-run.pl.1.gz usr/man/man1/mysqld_safe.1.gz usr/man/man1/mysqlaccess.1.gz usr/man/man1/mysql.server.1.gz usr/man/man1/make_win_src_distribution.1.gz etc/ etc/rc.d/ etc/rc.d/rc.mysqld.new etc/my-huge.cnf etc/my-medium.cnf etc/my-small.cnf etc/my-large.cnf /etc/rc.d/rc.mysqld is an ordinary Slackware-type start/stop script: #!/bin/sh # Start/stop/restart mysqld. # # Copyright 2003 Patrick J. Volkerding, Concord, CA # Copyright 2003 Slackware Linux, Inc., Concord, CA # # This program comes with NO WARRANTY, to the extent permitted by law. # You may redistribute copies of this program under the terms of the # GNU General Public License. # To start MySQL automatically at boot, be sure this script is executable: # chmod 755 /etc/rc.d/rc.mysqld # Before you can run MySQL, you must have a database. To install an initial # database, do this as root: # # su - mysql # mysql_install_db # # Note that step one is becoming the mysql user. It's important to do this # before making any changes to the database, or mysqld won't be able to write # to it later (this can be fixed with 'chown -R mysql.mysql /var/lib/mysql'). # To allow outside connections to the database comment out the next line. # If you don't need incoming network connections, then leave the line # uncommented to improve system security. #SKIP="--skip-networking" # Start mysqld: mysqld_start() { if [ -x /usr/bin/mysqld_safe ]; then # If there is an old PID file (no mysqld running), clean it up: if [ -r /var/run/mysql/mysql.pid ]; then if ! ps axc | grep mysqld 1> /dev/null 2> /dev/null ; then echo "Cleaning up old /var/run/mysql/mysql.pid." rm -f /var/run/mysql/mysql.pid fi fi /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/run/mysql/mysql.pid $SKIP & fi } # Stop mysqld: mysqld_stop() { # If there is no PID file, ignore this request... if [ -r /var/run/mysql/mysql.pid ]; then killall mysqld # Wait at least one minute for it to exit, as we don't know how big the DB is... for second in 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 \ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 60 ; do if [ ! -r /var/run/mysql/mysql.pid ]; then break; fi sleep 1 done if [ "$second" = "60" ]; then echo "WARNING: Gave up waiting for mysqld to exit!" 
sleep 15 fi fi } # Restart mysqld: mysqld_restart() { mysqld_stop mysqld_start } case "$1" in 'start') mysqld_start ;; 'stop') mysqld_stop ;; 'restart') mysqld_restart ;; *) echo "usage $0 start|stop|restart" esac But there's also an unexpected init script on the machine, named /etc/init.d/mysql: #!/bin/sh # Copyright Abandoned 1996 TCX DataKonsult AB & Monty Program KB & Detron HB # This file is public domain and comes with NO WARRANTY of any kind # MySQL daemon start/stop script. # Usually this is put in /etc/init.d (at least on machines SYSV R4 based # systems) and linked to /etc/rc3.d/S99mysql and /etc/rc0.d/K01mysql. # When this is done the mysql server will be started when the machine is # started and shut down when the systems goes down. # Comments to support chkconfig on RedHat Linux # chkconfig: 2345 64 36 # description: A very fast and reliable SQL database engine. # Comments to support LSB init script conventions ### BEGIN INIT INFO # Provides: mysql # Required-Start: $local_fs $network $remote_fs # Should-Start: ypbind nscd ldap ntpd xntpd # Required-Stop: $local_fs $network $remote_fs # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: start and stop MySQL # Description: MySQL is a very fast and reliable SQL database engine. ### END INIT INFO # If you install MySQL on some other places than /usr, then you # have to do one of the following things for this script to work: # # - Run this script from within the MySQL installation directory # - Create a /etc/my.cnf file with the following information: # [mysqld] # basedir=<path-to-mysql-installation-directory> # - Add the above to any other configuration file (for example ~/.my.ini) # and copy my_print_defaults to /usr/bin # - Add the path to the mysql-installation-directory to the basedir variable # below. # # If you want to affect other MySQL variables, you should make your changes # in the /etc/my.cnf, ~/.my.cnf or other MySQL configuration files. # If you change base dir, you must also change datadir. These may get # overwritten by settings in the MySQL configuration files. #basedir= #datadir= # Default value, in seconds, afterwhich the script should timeout waiting # for server start. # Value here is overriden by value in my.cnf. # 0 means don't wait at all # Negative numbers mean to wait indefinitely service_startup_timeout=900 # The following variables are only set for letting mysql.server find things. # Set some defaults pid_file=/var/run/mysql/mysql.pid server_pid_file=/var/run/mysql/mysql.pid use_mysqld_safe=1 user=mysql if test -z "$basedir" then basedir=/usr bindir=/usr/bin if test -z "$datadir" then datadir=/var/lib/mysql fi sbindir=/usr/sbin libexecdir=/usr/libexec else bindir="$basedir/bin" if test -z "$datadir" then datadir="$basedir/data" fi sbindir="$basedir/sbin" libexecdir="$basedir/libexec" fi # datadir_set is used to determine if datadir was set (and so should be # *not* set inside of the --basedir= handler.) datadir_set= # # Use LSB init script functions for printing messages, if possible # lsb_functions="/lib/lsb/init-functions" if test -f $lsb_functions ; then . $lsb_functions else log_success_msg() { echo " SUCCESS! $@" } log_failure_msg() { echo " ERROR! 
$@" } fi PATH=/sbin:/usr/sbin:/bin:/usr/bin:$basedir/bin export PATH mode=$1 # start or stop shift other_args="$*" # uncommon, but needed when called from an RPM upgrade action # Expected: "--skip-networking --skip-grant-tables" # They are not checked here, intentionally, as it is the resposibility # of the "spec" file author to give correct arguments only. case `echo "testing\c"`,`echo -n testing` in *c*,-n*) echo_n= echo_c= ;; *c*,*) echo_n=-n echo_c= ;; *) echo_n= echo_c='\c' ;; esac parse_server_arguments() { for arg do case "$arg" in --basedir=*) basedir=`echo "$arg" | sed -e 's/^[^=]*=//'` bindir="$basedir/bin" if test -z "$datadir_set"; then datadir="$basedir/data" fi sbindir="$basedir/sbin" libexecdir="$basedir/libexec" ;; --datadir=*) datadir=`echo "$arg" | sed -e 's/^[^=]*=//'` datadir_set=1 ;; --user=*) user=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --pid-file=*) server_pid_file=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --service-startup-timeout=*) service_startup_timeout=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --use-mysqld_safe) use_mysqld_safe=1;; --use-manager) use_mysqld_safe=0;; esac done } parse_manager_arguments() { for arg do case "$arg" in --pid-file=*) pid_file=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --user=*) user=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; esac done } wait_for_pid () { verb="$1" manager_pid="$2" # process ID of the program operating on the pid-file i=0 avoid_race_condition="by checking again" while test $i -ne $service_startup_timeout ; do case "$verb" in 'created') # wait for a PID-file to pop into existence. test -s $pid_file && i='' && break ;; 'removed') # wait for this PID-file to disappear test ! -s $pid_file && i='' && break ;; *) echo "wait_for_pid () usage: wait_for_pid created|removed manager_pid" exit 1 ;; esac # if manager isn't running, then pid-file will never be updated if test -n "$manager_pid"; then if kill -0 "$manager_pid" 2>/dev/null; then : # the manager still runs else # The manager may have exited between the last pid-file check and now. if test -n "$avoid_race_condition"; then avoid_race_condition="" continue # Check again. fi # there's nothing that will affect the file. log_failure_msg "Manager of pid-file quit without updating file." return 1 # not waiting any more. fi fi echo $echo_n ".$echo_c" i=`expr $i + 1` sleep 1 done if test -z "$i" ; then log_success_msg return 0 else log_failure_msg return 1 fi } # Get arguments from the my.cnf file, # the only group, which is read from now on is [mysqld] if test -x ./bin/my_print_defaults then print_defaults="./bin/my_print_defaults" elif test -x $bindir/my_print_defaults then print_defaults="$bindir/my_print_defaults" elif test -x $bindir/mysql_print_defaults then print_defaults="$bindir/mysql_print_defaults" else # Try to find basedir in /etc/my.cnf conf=/etc/my.cnf print_defaults= if test -r $conf then subpat='^[^=]*basedir[^=]*=\(.*\)$' dirs=`sed -e "/$subpat/!d" -e 's//\1/' $conf` for d in $dirs do d=`echo $d | sed -e 's/[ ]//g'` if test -x "$d/bin/my_print_defaults" then print_defaults="$d/bin/my_print_defaults" break fi if test -x "$d/bin/mysql_print_defaults" then print_defaults="$d/bin/mysql_print_defaults" break fi done fi # Hope it's in the PATH ... but I doubt it test -z "$print_defaults" && print_defaults="my_print_defaults" fi # # Read defaults file from 'basedir'. 
If there is no defaults file there # check if it's in the old (depricated) place (datadir) and read it from there # extra_args="" if test -r "$basedir/my.cnf" then extra_args="-e $basedir/my.cnf" else if test -r "$datadir/my.cnf" then extra_args="-e $datadir/my.cnf" fi fi parse_server_arguments `$print_defaults $extra_args mysqld server mysql_server mysql.server` # Look for the pidfile parse_manager_arguments `$print_defaults $extra_args manager` # # Set pid file if not given # if test -z "$pid_file" then pid_file=$datadir/mysqlmanager-`/bin/hostname`.pid else case "$pid_file" in /* ) ;; * ) pid_file="$datadir/$pid_file" ;; esac fi if test -z "$server_pid_file" then server_pid_file=$datadir/`/bin/hostname`.pid else case "$server_pid_file" in /* ) ;; * ) server_pid_file="$datadir/$server_pid_file" ;; esac fi case "$mode" in 'start') # Start daemon # Safeguard (relative paths, core dumps..) cd $basedir manager=$bindir/mysqlmanager if test -x $libexecdir/mysqlmanager then manager=$libexecdir/mysqlmanager elif test -x $sbindir/mysqlmanager then manager=$sbindir/mysqlmanager fi echo $echo_n "Starting MySQL" if test -x $manager -a "$use_mysqld_safe" = "0" then if test -n "$other_args" then log_failure_msg "MySQL manager does not support options '$other_args'" exit 1 fi # Give extra arguments to mysqld with the my.cnf file. This script may # be overwritten at next upgrade. $manager --user=$user --pid-file=$pid_file >/dev/null 2>&1 & wait_for_pid created $!; return_value=$? # Make lock for RedHat / SuSE if test -w /var/lock/subsys then touch /var/lock/subsys/mysqlmanager fi exit $return_value elif test -x $bindir/mysqld_safe then # Give extra arguments to mysqld with the my.cnf file. This script # may be overwritten at next upgrade. pid_file=$server_pid_file $bindir/mysqld_safe --datadir=$datadir --pid-file=$server_pid_file $other_args >/dev/null 2>&1 & wait_for_pid created $!; return_value=$? # Make lock for RedHat / SuSE if test -w /var/lock/subsys then touch /var/lock/subsys/mysql fi exit $return_value else log_failure_msg "Couldn't find MySQL manager ($manager) or server ($bindir/mysqld_safe)" fi ;; 'stop') # Stop daemon. We use a signal here to avoid having to know the # root password. # The RedHat / SuSE lock directory to remove lock_dir=/var/lock/subsys/mysqlmanager # If the manager pid_file doesn't exist, try the server's if test ! -s "$pid_file" then pid_file=$server_pid_file lock_dir=/var/lock/subsys/mysql fi if test -s "$pid_file" then mysqlmanager_pid=`cat $pid_file` echo $echo_n "Shutting down MySQL" kill $mysqlmanager_pid # mysqlmanager should remove the pid_file when it exits, so wait for it. wait_for_pid removed "$mysqlmanager_pid"; return_value=$? # delete lock for RedHat / SuSE if test -f $lock_dir then rm -f $lock_dir fi exit $return_value else log_failure_msg "MySQL manager or server PID file could not be found!" fi ;; 'restart') # Stop the service and regardless of whether it was # running or not, start it again. if $0 stop $other_args; then $0 start $other_args else log_failure_msg "Failed to stop running server, so refusing to try to start." exit 1 fi ;; 'reload'|'force-reload') if test -s "$server_pid_file" ; then read mysqld_pid < $server_pid_file kill -HUP $mysqld_pid && log_success_msg "Reloading service MySQL" touch $server_pid_file else log_failure_msg "MySQL PID file could not be found!" 
exit 1 fi ;; 'status') # First, check to see if pid file exists if test -s "$server_pid_file" ; then read mysqld_pid < $server_pid_file if kill -0 $mysqld_pid 2>/dev/null ; then log_success_msg "MySQL running ($mysqld_pid)" exit 0 else log_failure_msg "MySQL is not running, but PID file exists" exit 1 fi else # Try to find appropriate mysqld process mysqld_pid=`pidof $sbindir/mysqld` if test -z $mysqld_pid ; then if test "$use_mysqld_safe" = "0" ; then lockfile=/var/lock/subsys/mysqlmanager else lockfile=/var/lock/subsys/mysql fi if test -f $lockfile ; then log_failure_msg "MySQL is not running, but lock exists" exit 2 fi log_failure_msg "MySQL is not running" exit 3 else log_failure_msg "MySQL is running but PID file could not be found" exit 4 fi fi ;; *) # usage echo "Usage: $0 {start|stop|restart|reload|force-reload|status} [ MySQL server options ]" exit 1 ;; esac exit 0 An unimportant aside: The previous users of the machine kept a messy home directory. Their home directory was /root. I've pasted a copy at http://www.pastebin.ca/2167496. My question: Why is there a /etc/init.d/mysql file on this Slackware machine? How could it have gotten there? P.S. This question is far from perfect. Please feel free to edit it.

    Read the article

  • DENY select on sys.dm_db_index_physical_stats

    - by steveh99999
    I recently saw an interesting blog article by Paul Randal about the performance overhead of querying sys.dm_db_index_physical_stats. So I was thinking: would it be possible to let non-sysadmin users query DMVs on a SQL Server but stop them querying this I/O-intensive DMV? Yes it is, here's how...

    1. Create a new login for test purposes, with permissions to access the AdventureWorks database only:

        CREATE LOGIN [test] WITH PASSWORD='xxxx', DEFAULT_DATABASE=[AdventureWorks]
        GO
        USE [AdventureWorks]
        GO
        CREATE USER [test] FOR LOGIN [test] WITH DEFAULT_SCHEMA=[dbo]
        GO

    2. Log in as user test and issue the command:

        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED')

    This gets the error:

        Msg 297, Level 16, State 12, Line 1
        The user does not have permission to perform this action.

    3. As a sysadmin, issue the command:

        USE AdventureWorks
        GRANT VIEW DATABASE STATE TO [test]

    or GRANT VIEW SERVER STATE TO [test] if all databases can be queried via DMVs.

    4. Try again as user test to issue the command:

        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED')

    This now produces valid results from the DMV.

    5. Now create the test user in the master database, public role only:

        USE master
        CREATE USER [test] FOR LOGIN [test]

    6. Issue the command:

        USE master
        DENY SELECT ON sys.dm_db_index_physical_stats TO [test]

    7. Now go back to AdventureWorks using the test login and try:

        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED')

    This now gets the error:

        Msg 229, Level 14, State 5, Line 1
        The SELECT permission was denied on the object 'dm_db_index_physical_stats', database 'mssqlsystemresource', schema 'sys'.

    But the user is still able to query all other non-I/O-intensive DMVs. If the user attempts to view the index physical stats via a built-in Management Studio report (see a recent blog post by Pinal Dave) they get an error also.

    Read the article

  • Which tools should be used for data migration between environments?

    - by Paula Speranza-Hadley
    With the Oracle Utilities Application Framework based products, there are a number of tools provided that can be used to transfer data from one environment to another. There are three main tools that implementations use:

    - ConfigLab - A configurable copy facility that is metadata aware and therefore understands the relationships between objects, and by invoking the relevant maintenance objects validates the data copied. This utility uses the object validation to help ensure data integrity. Basically it is a set of configuration tables and a set of batch jobs to perform the migration of data.
    - Bundling - A configurable release management tool that allows exporting of Advanced Configuration Environment based objects (business services, business objects, UI Maps etc.) from one environment to another.
    - Blueprint - An Oracle Utilities Software Development Kit (SDK) based tool to import metadata from the development environment to your initial testing environment. The utility is command line based and basically uses a text-based configuration file to drive the utility on the source and target sides.

    Each tool has a role in an implementation, but you must be careful to use the right tool for the right job within an implementation. The suggestions are as follows:

    - Only use the Blueprint tool for migrating data from your development platform to your initial test environment. The Blueprint tool is not designed to move large amounts of data; it is risky if not used correctly and can potentially break the integrity of your data.
    - The SDK should only be used for the configuration data it is designed for (mainly metadata). It should not be extended: while it can perform data migration on any data, it is not efficient, and it is risky for certain types of configuration data.

    Additional information can be found in the following whitepaper: Oracle Utilities Application Framework - Release Management - Software Configuration Management on MyOracle.com

    Read the article

  • Is it OK to have multiple asserts in a single unit test?

    - by Restuta
    I think that there are some cases when multiple assertions are needed (e.g. Guard Assertion), but in general I try to avoid this. What is your opinion? Please provide real-world examples where multiple asserts are really needed. Thanks!

    Edit: In a comment to this great post, Roy Osherove pointed to the OAPT project, which is designed to run each assert as a single test. This is written on the project's home page:

        Proper unit tests should fail for exactly one reason, that's why you should be using one assert per unit test.

    And Roy also wrote in the comments:

        My guideline is usually that you test one logical CONCEPT per test. You can have multiple asserts on the same object. They will usually be the same concept being tested.
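
    To make the "one logical concept" guideline concrete, a small illustration (NUnit-style, with a hypothetical Order class): all three asserts probe the single concept "a freshly created order is empty", so splitting them into three tests would add noise rather than clarity:

        [Test]
        public void NewOrder_IsEmpty()
        {
            var order = new Order();
            Assert.AreEqual(0, order.LineItems.Count);
            Assert.AreEqual(0m, order.Total);
            Assert.IsFalse(order.IsSubmitted); // same concept, one test
        }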

    Read the article

  • Have unit test generators helped you when working with legacy code?

    - by Duncan Bayne
    I am looking at a small (~70kLOC including generated) C# (.NET 4.0, some Silverlight) code-base that has very low test coverage. The code itself works in that it has passed user acceptance testing, but it is brittle and in some areas not very well factored. I would like to add solid unit test coverage around the legacy code using the usual suspects (NMock, NUnit, StatLight for the Silverlight bits). My normal approach is to start working through the project, unit testing & refactoring, until I am satisfied with the state of the code. I've done this many times in the past, and it's worked well. However, this time I'm thinking of using a test generator (in particular Pex) to create the test framework, then manually fleshing it out. My question is: have you used unit test generators in the past when commencing work on a legacy codebase, and if so, would you recommend them? My fear is that the generated tests will miss the semantic nuances of the code-base, leading to the dreaded situation of having tests for the sake of the coverage metric, rather than tests which clearly express the intended behaviour in code.

    Read the article

  • Does TDD's "Obvious Implementation" mean code first, test after?

    - by natasky
    My friend and I are relatively new to TDD and have a dispute about the "Obvious Implementation" technique (from "TDD By Example" by Kent Beck). My friend says it means that if the implementation is obvious, you should go ahead and write it - before any test for that new behavior. And indeed the book says:

        How do you implement simple operations? Just implement them.

    Also:

        Sometimes you are sure you know how to implement an operation. Go ahead.

    I think what the author means is you should test first, and then "just implement" it - as opposed to the "Fake It ('Till You Make It)" and other techniques, which require smaller steps in the implementation stage. Also, after these quotes the author talks about getting "red bars" (failing tests) when doing "Obvious Implementation" - how can you get a red bar without a test? Yet I couldn't find any quote from the book saying "obvious" still means test first. What do you think? Should we test first or after when the implementation is "obvious" (according to TDD, of course)? Do you know a book or blog post saying just that?

    Read the article

  • What's My Problem? What's Your Problem?

    - by Jacek Ziabicki
    Software installers are not made for building demo environments. I can say this much after 12 years (on and off) of supporting my fellow sales consultants with environments for software demonstrations. When we release software, we include installation programs and procedures that are designed for use by our clients – to build a production environment and a limited number of testing, training and development environments.

    Different Objectives

    Your priorities when building an environment for client use vs. building a demo environment are very different. In a production environment, security, stability, and performance concerns are paramount. These environments are built on a specific server and rarely, if ever, moved to a different server or different network address. There is typically just one application running on a particular server (physical or virtual). Once built, the environment will be used for months or years at a time. Because of security considerations, the installation program wants to make these environments very specific to the organization using the software and the use case, encoding a fully qualified name of the server, or even the IP address on the network, in the configuration. So you either go through the installation procedure for each environment, or learn how to clone and reconfigure the software as a separate instance to build all your non-production environments.

    This may not matter much if the installation is as simple as clicking on the Setup program. But for enterprise applications, you have a number of configuration settings that you need to get just right – so whether you are installing from scratch or reconfiguring an existing installation, this requires both time and expertise in the particular piece of software. If you need a setup of several applications that are integrated to talk to one another, it is a whole new level of complexity. Now you need the expertise in all of the applications involved (plus the supporting technology products), and in addition to making each application work, you also have to configure the integration endpoints. Each application needs the URLs and credentials to call the integration layer, and the integration must be able to call each application. Then you have to make sure that each app has the right data so a business process initiated in one application can continue in the next. And, you will need to check that each application has the correct version and patch level for the integration to work.

    When building demo environments, your #1 concern is agility. If you can get away with a small number of long-running environments, you are lucky. More likely, you may get requests like these:

    - A dedicated environment for a demonstration that is two weeks away: how quickly can you make this available so we still have the time to build the client-specific data?
    - We are running a hands-on workshop next month, and we'll need 15 instances of the application X environment so each student can have a separate server for the exercises.
    - We cannot connect to our data center from the client site, the client's security policy won't allow our VPN to go through – so we need a portable environment that we can bring with us. Our consultants need to be able to work at the hotel, airport, and the airplane, so we really want an environment that can run on a laptop.
    - The client will need two playpen environments running in the cloud, accessible from their network, for a series of workshops that start two weeks from now.

    We have seen all of these scenarios and more. Here you would be much better served by a generic installation that would be easy to clone.

    Welcome to the Wonder Machine

    The reason I started this blog is to share a particular design of a demo environment, a special way to install software, that can address the above requirements, even for integrated setups. This design was created by a team at Oracle Utilities Global Business Unit, and we are using this setup for most of our demo environments. In a bout of modesty we called it the Wonder Machine. Over the next few posts – think of it as a novel in parts – I will tell you about the big idea, how it was implemented and what you can do with it. After we have laid down the groundwork, I would like to share some tips and tricks for users of our Wonder Machine implementation, as well as things I am learning about building portable, cloneable environments. The Wonder Machine is by no means a closed specification, it is under active development!

    I am hoping this blog will be of interest to two groups of readers – the users of the Wonder Machine we have built at Oracle Utilities, who want to get the most out of their demo environments and be able to reconfigure it to their needs – and to people who need to build environments for demonstration, testing, training, development and would like to make them cloneable and portable to maximize the reuse of their effort. Surely we are not the only ones facing this problem? If you can think of a better way to solve it, or if you can help us improve on our concept, I will appreciate your comments!

    Read the article

  • Should I make a separate unit test for a method, if it only modifies the parent state?

    - by Dante
    Should classes that modify the state of a parent class, but not their own, be unit tested separately? And by separately, I mean putting the test in the corresponding unit test class that tests that specific class. I'm developing a library based on chained methods, where calling a chained method in most cases returns a new instance of a new type. The returned instances only modify the root parent state, not their own. An overly simplified example, to get the point across:

        public class BoxedRabbits
        {
            private readonly Box _box;

            public BoxedRabbits(Box box)
            {
                _box = box;
            }

            public void SetCount(int count)
            {
                _box.Items += count;
            }
        }

        public class Box
        {
            public int Items { get; set; }

            public BoxedRabbits AddRabbits()
            {
                return new BoxedRabbits(this);
            }
        }

        var box = new Box();
        box.AddRabbits().SetCount(14);

    Say, if I write a unit test under the Box class unit tests:

        box.AddRabbits().SetCount(14)

    I could effectively say that I've already tested the BoxedRabbits class as well. Is this the wrong way of approaching this, even though it's far simpler to first write a test for the above call than to first write a unit test for BoxedRabbits separately?
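
    For contrast, a direct test of BoxedRabbits stays short, because the class's only observable effect is on the Box handed to it (an NUnit-style sketch):

        [Test]
        public void SetCount_AddsRabbitsToTheBox()
        {
            var box = new Box();
            new BoxedRabbits(box).SetCount(14);
            Assert.AreEqual(14, box.Items); // the effect lands on the root parent
        }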

    Read the article

  • protected abstract override Foo(); – er... what?

    - by Muljadi Budiman
    A couple of weeks back, a co-worker was pondering a situation he was facing. He was looking at the following class hierarchy:

        abstract class OriginalBase
        {
            protected virtual void Test() { }
        }

        abstract class SecondaryBase : OriginalBase { }

        class FirstConcrete : SecondaryBase { }

        class SecondConcrete : SecondaryBase { }

    Basically, the first 2 classes are abstract classes, but the OriginalBase class has Test implemented as a virtual method. What he needed was to force concrete class implementations to provide a proper body for the Test method, but he can't mark the method as abstract since it is already implemented in the OriginalBase class.

    One way to solve this is to hide the original implementation and then force further derived classes to properly implement another method that will replace it. The code will look like the following:

        abstract class OriginalBase
        {
            protected virtual void Test() { }
        }

        abstract class SecondaryBase : OriginalBase
        {
            protected sealed override void Test()
            {
                Test2();
            }

            protected abstract void Test2();
        }

        class FirstConcrete : SecondaryBase
        {
            // Have to override Test2 here
        }

        class SecondConcrete : SecondaryBase
        {
            // Have to override Test2 here
        }

    With the above code, the SecondaryBase class seals the Test method so it can no longer be overridden. It also makes an abstract method Test2 available, which forces the concrete classes to override it and provide the proper implementation. Calling Test will properly call the Test2 implementation in each respective concrete class.

    I was wondering if there's a way to tell the compiler to treat the Test method in SecondaryBase as abstract, and apparently you can, by combining the abstract and override keywords. The code looks like the following:

        abstract class OriginalBase
        {
            protected virtual void Test() { }
        }

        abstract class SecondaryBase : OriginalBase
        {
            protected abstract override void Test();
        }

        class FirstConcrete : SecondaryBase
        {
            // Have to override Test here
        }

        class SecondConcrete : SecondaryBase
        {
            // Have to override Test here
        }

    The method signature makes it look a bit funky, because most people will take the override keyword to mean you then need to provide the implementation as well, but the effect is exactly as we desired. The concepts are still valid: you're overriding the Test method from its original implementation in the OriginalBase class, but you don't want to implement it; rather, you want classes that derive from SecondaryBase to provide the proper implementation, so you also mark it as an abstract method.

    I don't think I've ever seen this before in the wild, so it was pretty neat to find that the compiler does support this case.

    Read the article

  • A computer passes the Turing test for the first time by posing as a 13-year-old boy

    For the first time, a computer running a software program has convinced researchers that it was a 13-year-old child, thereby becoming the first machine to pass the Turing test. The feat achieved by this machine marks a date that will probably be written into the annals of computing, and more precisely of artificial intelligence. The Turing test was established...

    Read the article

  • Is a test which touches the filenames under a directory a kind of unit test? [on hold]

    - by Chen OT
    I was told that unit tests are fast, and that tests which touch the DB, go across the network, or touch the file system are not unit tests. In one of my test cases, the inputs are the file names (roughly 300-400 of them) under a specific folder. Although these inputs are part of the file system, the execution time of this test is very fast. Should I move this test, which is fast but touches the file system, to a higher-level test suite?

    Read the article

  • How best to construct our test subjects in unit tests?

    - by Liath
    Some of our business logic classes require quite a few dependencies (in our case 7-10). As such, when we come to unit test these, the creation becomes quite complex. In most tests these dependencies are often not required (only some dependencies are required for particular methods). As a result, unit tests often require a significant number of lines of code to mock up these useless dependencies (which can't be null because of null checks). For example:

        [Test]
        public void TestMethodA()
        {
            var dependency5 = new Mock<IDependency5>();
            dependency5.Setup(x => x. // some setup

            var sut = new Sut(new Mock<IDependency1>().Object,
                              new Mock<IDependency2>().Object,
                              new Mock<IDependency3>().Object,
                              new Mock<IDependency4>().Object,
                              dependency5.Object);

            Assert.SomeAssert(sut.MethodA());
        }

    In this example almost half the test is taken up creating dependencies which aren't used. I've investigated an approach where I have a helper method:

        [Test]
        public void TestMethodA()
        {
            var dependency5 = new Mock<IDependency5>();
            dependency5.Setup(x => x. // some setup

            var sut = CreateSut(null, null, null, null, dependency5.Object);

            Assert.SomeAssert(sut.MethodA());
        }

        private Sut CreateSut(IDependency1 d1, IDependency2 d2...)
        {
            return new Sut(d1 ?? new Mock<IDependency1>().Object,
                           d2 ?? new Mock<IDependency2>().Object,
                           ...
        }

    But these often grow very complicated very quickly. What is the best way to create these BLL classes in test classes to reduce complexity and simplify tests?
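
    One pattern worth comparing against the helper method (a sketch; SutBuilder is hypothetical, and the dependency count is trimmed to three for brevity): a test data builder that defaults every dependency to a fresh mock and lets each test override only what it cares about, avoiding both the null arguments and the long constructor calls:

        private class SutBuilder
        {
            private IDependency1 _d1 = new Mock<IDependency1>().Object;
            private IDependency2 _d2 = new Mock<IDependency2>().Object;
            private IDependency3 _d3 = new Mock<IDependency3>().Object;

            public SutBuilder With(IDependency1 d1) { _d1 = d1; return this; }
            public SutBuilder With(IDependency2 d2) { _d2 = d2; return this; }
            public SutBuilder With(IDependency3 d3) { _d3 = d3; return this; }

            public Sut Build() { return new Sut(_d1, _d2, _d3); }
        }

        [Test]
        public void TestMethodA()
        {
            var dependency3 = new Mock<IDependency3>();
            // ... setup on dependency3 only ...

            var sut = new SutBuilder().With(dependency3.Object).Build();

            Assert.SomeAssert(sut.MethodA());
        }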

    Read the article

  • How do I use MediaRecorder to record video without causing a segmentation fault?

    - by rabidsnail
    I'm trying to use android.media.MediaRecorder to record video, and no matter what I do the Android runtime segmentation faults when I call prepare(). Here's an example:

        public void onCreate(Bundle savedInstanceState) {
            Log.i("video test", "making recorder");
            MediaRecorder recorder = new MediaRecorder();
            contentResolver = getContentResolver();
            try {
                super.onCreate(savedInstanceState);
                Log.i("video test", "--------------START----------------");
                SurfaceView target_view = new SurfaceView(this);
                Log.i("video test", "making surface");
                Surface target = target_view.getHolder().getSurface();
                Log.i("video test", target.toString());
                Log.i("video test", "new recorder");
                recorder = new MediaRecorder();
                Log.i("video test", "set display");
                recorder.setPreviewDisplay(target);
                Log.i("video test", "pushing surface");
                setContentView(target_view);
                Log.i("video test", "set audio source");
                recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
                Log.i("video test", "set video source");
                recorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
                Log.i("video test", "set output format");
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                Log.i("video test", "set audio encoder");
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                Log.i("video test", "set video encoder");
                recorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP);
                Log.i("video test", "set max duration");
                recorder.setMaxDuration(3600);
                Log.i("video test", "set on info listener");
                recorder.setOnInfoListener(new listener());
                Log.i("video test", "set video size");
                recorder.setVideoSize(320, 240);
                Log.i("video test", "set video frame rate");
                recorder.setVideoFrameRate(15);
                Log.i("video test", "set output file");
                recorder.setOutputFile(get_path(this, "foo.3gp"));
                Log.i("video test", "prepare");
                recorder.prepare();
                Log.i("video test", "start");
                recorder.start();
                Log.i("video test", "sleep");
                Thread.sleep(3600);
                Log.i("video test", "stop");
                recorder.stop();
                Log.i("video test", "release");
                recorder.release();
                Log.i("video test", "-----------------SUCCESS------------------");
                finish();
            } catch (Exception e) {
                Log.i("video test", e.toString());
                recorder.reset();
                recorder.release();
                Log.i("video test", "-------------------FAIL-------------------");
                finish();
            }
        }

        public static String get_path(Context context, String fname) {
            String path = context.getFileStreamPath("foo").getParentFile().getAbsolutePath();
            String res = path + "/" + fname;
            Log.i("video test", "path: " + res);
            return res;
        }

        class listener implements MediaRecorder.OnInfoListener {
            public void onInfo(MediaRecorder recorder, int what, int extra) {
                Log.i("video test", "Video Info: " + what + ", " + extra);
            }
        }

    Read the article

  • AssemblyCleanup() after test fail/exception

    - by Tommy Jakobsen
    Hello, I'm running a few unit tests that require a connection to the database. When my test project gets initialized, a snapshot of the database is created, and when the tests are done the database gets restored back to the snapshot. Here is the implementation:

        [TestClass]
        public static class AssemblyInitializer
        {
            [AssemblyInitialize()]
            public static void AssemblyInit(TestContext context)
            {
                var dbss = new DatabaseSnapshot(...);
                dbss.CreateSnapshot();
            }

            [AssemblyCleanup()]
            public static void AssemblyCleanup()
            {
                var dbss = new DatabaseSnapshot(...);
                dbss.RevertDatabase();
            }
        }

    Now this all works, but my problem arises when I have a failing test or some exception. AssemblyCleanup is of course not invoked, so how can I solve this problem? No matter what happens, the snapshot has to be restored. Is this possible?

    Read the article

  • Visual Studio 2010 and TFS 2008: Building unit test projects

    - by Peter
    Hi, we are currently taking VS 2010 for a test drive and so far we are a little stumped by how it just won't cooperate with our existing Team Foundation Server 2008. We still have all our projects on .NET 3.5, and whenever we now build a solution that contains a unit test project (which automatically targets .NET 4.0), TFS won't build it. The .NET 4.0 framework is installed on the TFS 2008 machine. The error we're receiving is:

        [Any CPU/Release] c:\WINDOWS\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(0,0): warning MSB3245: Could not resolve this reference. Could not locate the assembly "Microsoft.VisualStudio.QualityTools.UnitTestFramework, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.

    As a temporary workaround we are now forced to remove all our test projects in order for our solutions to build.

    Read the article

  • Identify in a unit test if Jetbrains IntelliJ IDEA 8 or 9 is running

    - by Ran Biron
    I need to know, in the context of a unit test, whether JetBrains IntelliJ IDEA is the test initiator, and if it is, which version is running (or at least whether it's "9" or earlier). I don't mind doing dirty tricks (reflection, filesystem, environment...) but I do need this to work "out of the box" (without each developer having to set up something special). I've tried some reflection but couldn't find a unique identifier I could latch onto. Any idea? Oh - the language is Java.
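
    One heuristic worth trying (an assumption based on how IDEA launches JUnit runs, not an official API): the IDE prepends its runtime jar to the classpath and runs tests through its own starter class, both visible via system properties. The jar's path normally contains the installation directory name, which may be enough to tell 8 from 9:

        public final class IdeaDetector {
            /** True if the JVM looks like it was launched by IntelliJ IDEA's test runner. */
            public static boolean launchedFromIdea() {
                String classPath = System.getProperty("java.class.path", "");
                String command = System.getProperty("sun.java.command", "");
                return classPath.contains("idea_rt.jar")
                        || command.contains("com.intellij.rt.execution");
            }

            /** Best-effort version hint: the classpath entry holding idea_rt.jar. */
            public static String ideaInstallHint() {
                for (String entry : System.getProperty("java.class.path", "")
                        .split(java.io.File.pathSeparator)) {
                    if (entry.contains("idea_rt.jar")) {
                        return entry; // e.g. ".../IntelliJ IDEA 9.0/lib/idea_rt.jar" on a default install
                    }
                }
                return null;
            }
        }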

    Read the article

  • Making ehcache read-write for test code and read-only for production code

    - by Rick
    I would like to annotate many of my Hibernate entities that contain reference data and/or configuration data with:

        @Cache(usage = CacheConcurrencyStrategy.READ_ONLY)

    However, my JUnit tests are setting up and tearing down some of this reference/configuration data using the Hibernate entities. Is there a recommended way of having entities be read-write during test setup and teardown but read-only for production code? Two of my immediate thoughts for non-ideal workarounds are:

    1. Using NONSTRICT_READ_WRITE, but I am not sure what the hidden downsides are.
    2. Creating subclassed entities in my test code to override the read-only cache annotation.

    Any recommendations on the cleanest way to handle this? (Note: Project uses maven.)

    Read the article

  • Best way to unit-test WCF REST/SOAP service while dynamically generating stubs

    - by James Black
    I have a web service written with WCF 4.0 that exposes REST and SOAP functions, and I want to set up my unit tests so that as I work on my web services I can quickly test, by having the test framework start up the service outside of IIS and then run the tests. I want the stubs to be dynamically generated, as I am not certain what the interface will look like, and it is easier not to worry about having to generate the stubs before I start the tests. But I couldn't get Groovy to work with my web service, so I am curious whether IronPython or IronRuby would work well for this, or is there another .NET language that may work well for this?
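
    On the hosting half of the problem, WCF can self-host without IIS; a minimal sketch of a test fixture doing this (MyService and the base address are placeholders, and a WebServiceHost from System.ServiceModel.Web would be the analogous choice for the REST endpoints):

        using System;
        using System.ServiceModel;

        public class ServiceFixture : IDisposable
        {
            private readonly ServiceHost _host;

            public ServiceFixture()
            {
                // Spin the service up in-process before the tests run.
                _host = new ServiceHost(typeof(MyService),
                                        new Uri("http://localhost:8731/MyService"));
                _host.Open();
            }

            public void Dispose()
            {
                _host.Close(); // tear the service down after the tests
            }
        }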

    Read the article
