Search Results

Search found 25284 results on 1012 pages for 'test driven'.

  • Create static instances of a class inside said class in Python

    - by Samir Talwar
    Apologies if I've got the terminology wrong here—I can't think what this particular idiom would be called. I've been trying to create a Python 3 class that statically declares instances of itself inside itself—sort of like an enum would work. Here's a simplified version of the code I wrote: class Test: A = Test("A") B = Test("B") def __init__(self, value): self.value = value def __str__(self): return "Test: " + self.value print(str(Test.A)) print(str(Test.B)) Writing this, I got an exception on line 2 (A = Test("A")). I assume line 3 would also error if it had made it that far. Using __class__ instead of Test gives the same error. File "<stdin>", line 1, in <module> File "<stdin>", line 2, in Test NameError: name 'Test' is not defined Is there any way to refer to the current class in a static context in Python? I could declare these particular variables outside the class or in a separate class, but for clarity's sake, I'd rather not if I can help it. To better demonstrate what I'm trying to do, here's the same example in Java: public class Test { private static final Test A = new Test("A"); private static final Test B = new Test("B"); private final String value; public Test(String value) { this.value = value; } public String toString() { return "Test: " + value; } public static void main(String[] args) { System.out.println(A); System.out.println(B); } } This works as you would expect: it prints: Test: A Test: B How can I do the same thing in Python?
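
    A common workaround (a sketch, not from the original post): the class name is only bound after the class body has finished executing, so the "static" instances can be attached to the class immediately afterwards:

        class Test:
            def __init__(self, value):
                self.value = value

            def __str__(self):
                return "Test: " + self.value

        # Attach the instances once the class object exists.
        Test.A = Test("A")
        Test.B = Test("B")

        print(Test.A)  # Test: A
        print(Test.B)  # Test: B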

  • Javascript: How to test JSONP on localhost

    - by hqt
    Here is the way I test JSONP: I run XAMPP, and in the htdocs folder I create a javascript folder. I create a json1.json file containing some data to process. After that, I run this HTML file locally, and it will call a function on an "other machine" (in this case, localhost). Here is my code : <head> <script> function updateSales(sales) { alert('test jsonp'); // real code here } </script> </head> <body> <h1>JSONP Example</h1> <script src="http://localhost:85/javascript/json1.json?callback=updateSales"></script> </body> But when I run it, nothing happens. But if I change it to another real JSONP source on the internet, it works. That is, changing the line: <script src="http://localhost:85/javascript/json1.json?callback=updateSales"></script> to the line: <script src="http://gumball.wickedlysmart.com/?callback=updateSales"></script> makes it work smoothly. So I don't know how to test JSONP using localhost; please help me. Thanks :)
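
    One way to see why the static file fails: JSONP requires the server to wrap the JSON payload in the callback named by the query string, and a plain .json file served as-is never does that. Below is a hedged illustration, using Python's standard library rather than the asker's PHP/XAMPP setup (the port and file name are assumptions), of a local endpoint that does honor the callback parameter:

        # Minimal local JSONP endpoint: wraps json1.json in the requested callback.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs

        class JsonpHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                query = parse_qs(urlparse(self.path).query)
                callback = query.get("callback", ["updateSales"])[0]
                with open("json1.json", encoding="utf-8") as f:
                    payload = f.read()
                body = "{}({});".format(callback, payload).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/javascript")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), JsonpHandler).serve_forever()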

  • About Interview structure for test automation lab developers

    - by Ikaso
    Hi, I am interviewing new applicants for a team that is doing test automation on our company product(s). The team is composed of junior software developers and a team leader. The product runs on Windows and has both managed and unmanaged parts. The test automation is done on both the client side (user mode and kernel mode) and the server side (IIS, Windows Services, backend). We are doing mainly integration tests and black box tests. I am trying to figure out how to organize my interview. My overall idea is to ask about a project they have done, then ask some technical questions (multithreading, GC, design patterns) and one programming question. Please note that there is another interview done before me with 2 programming questions. My programming question is rather simple (for example: reversing a singly-linked list). My coworkers think that my questions will not find good developers since my questions are rather simple and well known, but so far most of the applicants fail those questions. My questions are: Should I change the structure of my interview for this kind of job? What questions do you ask to figure out if the applicant is test oriented? (Maybe I should provide a buggy implementation of a problem and let them find the bugs and then ask them about what tests they would have done) Regards,

  • How to map a test onto a list of numbers

    - by Arthur Ulfeldt
    I have a function with a bug: user> (-> 42 int-to-bytes bytes-to-int) 42 user> (-> 128 int-to-bytes bytes-to-int) -128 user> looks like I need to handle overflow when converting back... Better write a test to make sure this never happens again. This project is using clojure.contrib.test-is so I write: (deftest int-to-bytes-to-int (let [lots-of-big-numbers (big-test-numbers)] (map #(is (= (-> % int-to-bytes bytes-to-int) %)) lots-of-big-numbers))) This should test that converting to a seq of bytes and back again produces the original result on a list of 10000 random numbers. Looks OK in theory, except none of the tests ever run. Testing com.cryptovide.miscTest Ran 23 tests containing 34 assertions. 0 failures, 0 errors. Why don't the tests run? What can I do to make them run?
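
    The likely culprit is that map returns a lazy seq, so the (is ...) calls inside the deftest are never forced; wrapping the loop in doseq or dorun is the usual Clojure fix. The same pitfall can be sketched with Python 3's equally lazy map (an illustration only, not the project's code):

        import traceback

        def check(n):
            assert n >= 0
            return n

        results = map(check, [1, 2, -3])   # lazy: nothing has run yet
        try:
            list(results)                  # forcing the map finally runs the asserts
        except AssertionError:
            traceback.print_exc()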

  • How to unit test generic classes

    - by Rowland Shaw
    I'm trying to set up some unit tests for an existing compact framework class library. However, I've fallen at the first hurdle, where it appears that the test framework is unable to load the types involved (even though they're both in the class library being tested) Test method MyLibrary.Tests.MyGenericClassTest.MyMethodTest threw exception: System.MissingMethodException: Could not load type 'MyLibrary.MyType' from assembly 'MyLibrary, Version=1.0.3778.36113, Culture=neutral, PublicKeyToken=null'.. My code is loosely: public class MyGenericClass<T> : List<T> where T : MyType, new() { public bool MyMethod(T foo) { throw new NotImplementedException(); } } With test methods: public void MyMethodTestHelper<T>() where T : MyType, new() { MyGenericClass<T> target = new MyGenericClass<T>(); foo = new T(); expected = true; actual = target.MyMethod(foo); Assert.AreEqual(expected, actual); } [TestMethod()] public void MyMethodTest() { MyMethodTestHelper<MyType>(); } I'm a bit stumped though, as I can't even get it to break in the debugger to get to the inner exception, so what else do I check? EDIT this does seem to be something specific to the Compact Framework - recompiling the class libraries and the unit tests for the full framework, gives the expected output (i.e. the debugger stops when I'm going to throw a NotImplementedException).

  • How do you unit-test a method with complex input-output

    - by Dan
    When you have a simple method, like for example sum(int x, int y), it is easy to write unit tests. You can check that the method correctly sums two sample integers, for example 2 + 3 should return 5, and then check the same for some "extraordinary" numbers, for example negative values and zero. Each of these should be a separate unit test, as a single unit test should contain a single assert. What do you do when you have a complex input-output? Take an XML parser for example. You can have a single method parse(String xml) that receives the String and returns a Dom object. You can write separate tests that will check that a certain text node is parsed correctly, that attributes are parsed OK, that a child node belongs to its parent, etc. For all these I can write a simple input, for example <root><child/></root> that will be used to check parent-child relationships between nodes, and so on for the rest of the expectations. Now, take a look at the following XML: <root> <child1 attribute11="attribute 11 value" attribute12="attribute 12 value">Text 1</child1> <child2 attribute21="attribute 21 value" attribute22="attribute 22 value">Text 2</child2> </root> In order to check that the method worked correctly, I need to check many complex conditions, like that attribute11 and attribute12 belong to child1, that Text 1 belongs to child1, etc. I do not want to put more than one assert in my unit-test. How can I accomplish that?
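
    One common way to keep a single assert per test (a sketch under assumed names; the question's own parse() is stood in for by Python's ElementTree): collapse the parsed result into a plain data structure and compare the whole thing to one expected value:

        import unittest
        import xml.etree.ElementTree as ET

        def summarize(element):
            # Reduce an element to (tag, attributes, text, children), recursively.
            return (element.tag,
                    dict(element.attrib),
                    (element.text or "").strip(),
                    [summarize(child) for child in element])

        class ParserTest(unittest.TestCase):
            def test_child_with_attributes_and_text(self):
                xml = ('<root>'
                       '<child1 attribute11="attribute 11 value">Text 1</child1>'
                       '</root>')
                expected = ("root", {}, "", [
                    ("child1", {"attribute11": "attribute 11 value"}, "Text 1", []),
                ])
                self.assertEqual(expected, summarize(ET.fromstring(xml)))

        if __name__ == "__main__":
            unittest.main()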

  • How to test routes that don't include controller?

    - by Darren Green
    I'm using minitest in Rails to do testing, but I'm running into a problem that I hope a more seasoned tester can help me out with because I've tried looking everywhere for the answer, but it doesn't seem that anyone has run into this problem or if they have, they opted for an integration test. Let's say I have a controller called Foo and action in it called bar. So the foo_controller.rb file looks like this: class FooController < ApplicationController def bar render 'bar', :layout => 'application' end end The thing is that I don't want people to access the "foo/bar" route directly. So I have a route that is get 'baz' => 'foo#bar'. Now I want to test the FooController: require 'minitest_helper' class FooControllerTest < ActionController::TestCase def test_should_get_index get '/baz' end end But the test results in an error that No route matches {:controller=>"foo", :action=>"/baz"}. How do I specify the controller for the GET request? Sorry if this is a dumb question. It's been very hard for me to find the answer.

  • In ActionScript, is there a way to test for existence of variable with datatype "Function"

    - by Robusto
    So I have a class where I instantiate a variable callback like so: public var callback:Function; So far so good. Now, I want to add an event listener to this class and test for existence of the callback. I'm doing like so: this.addEventListener(MouseEvent.MOUSE_OVER, function(event:MouseEvent) : void { if (callback) { // do some things } }); This works great, doesn't throw any errors, but everywhere I test for callback I get the following warning: 3553: Function value used where type Boolean was expected. Possibly the parentheses () are missing after this function reference. That bugged me, so I tried to get rid of the warning by testing for null and undefined. Those caused errors. I can't instantiate a Function as null, either. I know, I know, real programmers only care about errors, not warnings. I will survive if this situation is not resolved. But it bothers me! :) Am I just being neurotic, or is there actually some way to test whether a real Function has been created without the IDE bitching about it?

  • Unit Testing Interfaces in Python

    - by Nicholas Mancuso
    I am currently learning Python in preparation for a class over the summer and have gotten started by implementing different types of heaps and priority-based data structures. I began to write a unit test suite for the project but ran into difficulties creating a generic unit test that only tests the interface and is oblivious to the actual implementation. I am wondering if it is possible to do something like this: suite = HeapTestSuite(BinaryHeap()) suite.run() suite = HeapTestSuite(BinomialHeap()) suite.run() What I am currently doing just feels... wrong (multiple inheritance? ACK!).. class TestHeap: def reset_heap(self): self.heap = None def test_insert(self): self.reset_heap() #test that insert doesnt throw an exception... for x in self.inseq: self.heap.insert(x) def test_delete(self): #assert we get the first value we put in self.reset_heap() self.heap.insert(5) self.assertEquals(5, self.heap.delete_min()) #harder test. put in sequence in and check that it comes out right self.reset_heap() for x in self.inseq: self.heap.insert(x) for x in xrange(len(self.inseq)): val = self.heap.delete_min() self.assertEquals(val, x) class BinaryHeapTest(TestHeap, unittest.TestCase): def setUp(self): self.inseq = range(99, -1, -1) self.heap = BinaryHeap() def reset_heap(self): self.heap = BinaryHeap() class BinomialHeapTest(TestHeap, unittest.TestCase): def setUp(self): self.inseq = range(99, -1, -1) self.heap = BinomialHeap() def reset_heap(self): self.heap = BinomialHeap() if __name__ == '__main__': unittest.main()
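
    The mixin in the question is essentially the standard pattern for this; a slightly tidier sketch (assuming a hypothetical module holding the asker's heap classes) keeps all tests in one non-TestCase base and lets each subclass supply only a factory, so unittest never tries to run the base by itself:

        import unittest
        from heaps import BinaryHeap, BinomialHeap  # hypothetical module with the asker's classes

        class HeapContract(object):
            heap_factory = None  # set by each concrete subclass

            def setUp(self):
                self.inseq = range(99, -1, -1)
                self.heap = self.heap_factory()

            def test_insert_then_delete_min_in_order(self):
                for x in self.inseq:
                    self.heap.insert(x)
                for expected in range(len(self.inseq)):
                    self.assertEqual(expected, self.heap.delete_min())

        class BinaryHeapTest(HeapContract, unittest.TestCase):
            heap_factory = staticmethod(BinaryHeap)

        class BinomialHeapTest(HeapContract, unittest.TestCase):
            heap_factory = staticmethod(BinomialHeap)

        if __name__ == '__main__':
            unittest.main()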

  • Test for `point` within an attachment in `mail-mode`

    - by lawlist
    I'm looking for a better test to determine when point is within a hidden attachment in mail-mode (which is used by wl-draft-mode). The attachments are mostly hidden and look like this: --[[application/xls Content-Disposition: attachment; filename="hello-world.xls"][base64]] The test of invisible-p yields a result of nil. I am currently using the following test, but it seems rather poor: (save-excursion (goto-char (point-max)) (goto-char (previous-char-property-change (point))) (goto-char (previous-char-property-change (point))) (re-search-backward "]]" (point-at-bol) t))) Any suggestions would be greatly appreciated. Here is the full snippet: (goto-char (point-max)) (cond ((= (save-excursion (abs (skip-chars-backward "\n\t"))) 0) (insert "\n\n")) ((and (= (save-excursion (abs (skip-chars-backward "\n\t"))) 1) (not (save-excursion (goto-char (previous-char-property-change (point))) (goto-char (previous-char-property-change (point))) (re-search-backward "]]" (point-at-bol) t)))) (insert "\n"))) GOAL:  If there are no attachments and no new lines at the end of the buffer, then insert \n\n and then insert the attachment thereafter. If there is just one new line at the end of the buffer, then insert \n and then insert the attachment thereafter. If there is an attachment at the end of the buffer, then do not insert any new lines.

  • Building a J2EE dev/test setup on a single PC

    - by John
    It's been a while since I did Java work, and even then I was never responsible for starting a large project from the very start... there were test/staging/production systems already running, etc, etc. Now I am looking to start a J2EE project from scratch on my trusty workstation, which has never been used for Java development and runs Windows 7 64bit. First of all, I'll be getting Eclipse. As far as writing the code goes I'm pretty happy. And running it through Eclipse is OK, but what I'd really want is to have a VM running MySQL and Tomcat on which I can properly deploy my project and run/debug it 'remotely' from my dev PC. And I guess this should be done using Ant instead of letting Eclipse build the WAR for me, so that I don't end up with a dependence on Eclipse. I'm certain Eclipse can do this, so you hit a button and it runs Ant scripts, deploys and debugs for instance, but I'm very hazy on it. Are there any good guides on this? I don't want to be taught Java, or even Ant, but rather the 'glue' parts like getting my test VM up and running under Windows, getting a build/test/deploy/run pipeline running through Eclipse, etc. One point, I only plan to use Windows... hosting a Windows VM on my Windows desktop. And while I can use command-line tools like ant/svn, I'm much more a GUI person who loves IDE integration... I'd rather this didn't end up an argument about Linux or Vi, etc! I am looking for free options, but am a MAPS subscriber, and run Win7 Ultimate in case that makes a difference as far as free VM solutions go.

  • Visual studio 2008 unit test keeps failing

    - by Gerbrand
    I've created a method that calculates the harmonic mean based on a list of doubles. But when I'm running the test it keeps failing even though the output results are the same. My harmonic mean method: public static double GetHarmonicMean(List<double> parameters) { var cumReciprocal = 0.0d; var countN = parameters.Count; foreach( var param in parameters) { cumReciprocal += 1.0d/param; } return 1.0d/(cumReciprocal/countN); } My test method: [TestMethod()] public void GetHarmonicMeanTest() { var parameters = new List<double> { 1.5d, 2.3d, 2.9d, 1.9d, 5.6d }; const double expected = 2.32432293165495; var actual = OwnFunctions.GetHarmonicMean(parameters); Assert.AreEqual(expected, actual); } After running the test the following message is shown: Assert.AreEqual failed. Expected:<2.32432293165495>. Actual:<2.32432293165495>. To me those are both the same value. Can somebody explain this? Or am I doing something wrong?
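
    A frequent cause of this symptom is binary floating point: the two doubles differ only in their last bits even though they print identically, and MSTest's Assert.AreEqual(double, double, double) overload with a delta exists for exactly this case. The idea, sketched in Python rather than C# (the tolerance value is an assumption):

        import math

        def harmonic_mean(values):
            return len(values) / sum(1.0 / v for v in values)

        expected = 2.32432293165495
        actual = harmonic_mean([1.5, 2.3, 2.9, 1.9, 5.6])
        # Compare with a tolerance instead of exact equality.
        assert math.isclose(expected, actual, rel_tol=1e-9), (expected, actual)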

  • Android AsyncTask testing problem with Android Test Framework

    - by Vlad
    I have a very simple AsyncTask implementation example and have a problem testing it using the Android JUnit framework. It works just fine when I instantiate and execute it in a normal application. However when it's executed from any of the Android Testing framework classes (i.e. AndroidTestCase, ActivityUnitTestCase, ActivityInstrumentationTestCase2 etc) it behaves strangely: - It executes the doInBackground() method correctly - However it doesn't invoke any of its notification methods (onPostExecute(), onProgressUpdate(), etc) -- just silently ignores them without showing any errors. This is a very simple AsyncTask example package kroz.andcookbook.threads.asynctask; import android.os.AsyncTask; import android.util.Log; import android.widget.ProgressBar; import android.widget.Toast; public class AsyncTaskDemo extends AsyncTask<Integer, Integer, String> { AsyncTaskDemoActivity _parentActivity; int _counter; int _maxCount; public AsyncTaskDemo(AsyncTaskDemoActivity asyncTaskDemoActivity) { _parentActivity = asyncTaskDemoActivity; } @Override protected void onPreExecute() { super.onPreExecute(); _parentActivity._progressBar.setVisibility(ProgressBar.VISIBLE); _parentActivity._progressBar.invalidate(); } @Override protected String doInBackground(Integer... params) { _maxCount = params[0]; for (_counter = 0; _counter <= _maxCount; _counter++) { try { Thread.sleep(1000); publishProgress(_counter); } catch (InterruptedException e) { // Ignore } } } @Override protected void onProgressUpdate(Integer... values) { super.onProgressUpdate(values); int progress = values[0]; String progressStr = "Counting " + progress + " out of " + _maxCount; _parentActivity._textView.setText(progressStr); _parentActivity._textView.invalidate(); } @Override protected void onPostExecute(String result) { super.onPostExecute(result); _parentActivity._progressBar.setVisibility(ProgressBar.INVISIBLE); _parentActivity._progressBar.invalidate(); } @Override protected void onCancelled() { super.onCancelled(); _parentActivity._textView.setText("Request to cancel AsyncTask"); } } This is a test case. Here AsyncTaskDemoActivity is a very simple Activity providing a UI for testing AsyncTask: package kroz.andcookbook.test.threads.asynctask; import java.util.concurrent.ExecutionException; import kroz.andcookbook.R; import kroz.andcookbook.threads.asynctask.AsyncTaskDemo; import kroz.andcookbook.threads.asynctask.AsyncTaskDemoActivity; import android.content.Intent; import android.test.ActivityUnitTestCase; import android.widget.Button; public class AsyncTaskDemoTest2 extends ActivityUnitTestCase<AsyncTaskDemoActivity> { AsyncTaskDemo _atask; private Intent _startIntent; public AsyncTaskDemoTest2() { super(AsyncTaskDemoActivity.class); } protected void setUp() throws Exception { super.setUp(); _startIntent = new Intent(Intent.ACTION_MAIN); } protected void tearDown() throws Exception { super.tearDown(); } public final void testExecute() { startActivity(_startIntent, null, null); Button btnStart = (Button) getActivity().findViewById(R.id.Button01); btnStart.performClick(); assertNotNull(getActivity()); } } All this code is working just fine, except for the fact that AsyncTask doesn't invoke its notification methods when executed within the Android Testing Framework. Any ideas?

  • How to mock/stub calls to message taglib in Grails controller

    - by Dave
    I've got a Grails controller which relies on the message taglib to resolve an i18n message: class TokenController { def passwordReset = { def token = DatedToken.findById(params.id); if (!isValidToken(token, params)) { flash.message = message(code: "forgotPassword.reset.invalidToken") redirect controller: 'forgotPassword', action: 'index' return } render view:'/forgotPassword/reset', model: [token: token.token] } } I've written a unit test for the controller: class TokenControllerTests extends ControllerUnitTestCase { void testPasswordResetInvalidTokenRedirect() { controller.passwordReset() assert... } } Since the message taglib is called in the controller I get a MissingMethodException: groovy.lang.MissingMethodException: No signature of method: TokenController.message() is applicable for argument types: (java.util.LinkedHashMap) values: [[code:forgotPassword.reset.invalidToken]] Does anyone know the best way to get around this issue in a unit test? Ideally I would like to perform assertions on the message but right now I'd be happy if the test just ran! Thanks

  • How do I run JUnit tests from inside my java application?

    - by corgrath
    Is it possible to run JUnit tests from inside my Java application? Are there test frameworks I can use (such as JUnit.jar?), or am I forced to find the test files, invoke the methods and track the exceptions myself? The reason why I am asking is that my application requires a lot of work to launch (lots of dependencies and configurations, etc) and using an external testing tool (like the JUnit Ant task) would require a lot of work to set up. It is easier to start the application and then run my tests inside the application. Is there an easy test framework that runs tests and outputs results from inside a Java application, or am I forced to write my own framework?

  • Django: test fixture fails to load

    - by Esteban Feldman
    Hi all, I did a dumpdata of my project, then in my new test I added it to the fixtures. from django.test import TestCase class TestGoal(TestCase): fixtures = ['test_data.json'] def test_goal(self): """ Tests that 1 + 1 always equals 2. """ self.failUnlessEqual(1 + 1, 2) When running the test I get: Problem installing fixture 'XXX/fixtures/test_data.json': DoesNotExist: XXX matching query does not exist. But manually doing loaddata works fine; it does not when the db is empty. I do a dropdb, createdb, a simple syncdb, then try loaddata and it fails with the same error. Any clue? Python version 2.6.5, Django 1.1.1

  • How to make pytest display a custom string representation for fixture parameters?

    - by Björn Pollex
    When using builtin types as fixture parameters, pytest prints out the value of the parameters in the test report. For example: @fixture(params=['hello', 'world']) def data(request): return request.param def test_something(data): pass Running this with py.test --verbose will print something like: test_example.py:7: test_something[hello] PASSED test_example.py:7: test_something[world] PASSED Note that the value of the parameter is printed in square brackets after the test name. Now, when using an object of a user-defined class as parameter, like so: class Param(object): def __init__(self, text): self.text = text @fixture(params=[Param('hello'), Param('world')]) def data(request): return request.param def test_something(data): pass pytest will simply enumerate the number of values (p0, p1, etc.): test_example.py:7: test_something[p0] PASSED test_example.py:7: test_something[p1] PASSED This behavior does not change even when the user-defined class provides custom __str__ and __repr__ implementations. Is there any way to make pytest display something more useful than just p0 here? I am using pytest 2.5.2 on Python 2.7.6 on Windows 7.
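
    One approach (a sketch; it assumes a pytest version whose fixture decorator accepts an ids argument): pass explicit ids alongside params so the report shows them instead of p0/p1:

        from pytest import fixture

        class Param(object):
            def __init__(self, text):
                self.text = text

        PARAMS = [Param('hello'), Param('world')]

        @fixture(params=PARAMS, ids=[p.text for p in PARAMS])
        def data(request):
            return request.param

        def test_something(data):
            pass

    Run with py.test --verbose and the report should then show test_something[hello] and test_something[world].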

  • Maven/TestNG reports "Failures: 0" but then "There are test failures.", what's wrong?

    - by JohnS
    I'm using Maven 2.2.1 r801777, Surefire 2.7.1, TestNG 5.14.6, Java 1.6.0_11 on Win XP. I have only one test class with one empty test method and in my pom I have just added TestNG dependency. When I execute mvn test it prints out: ------------------------------------------------------- T E S T S ------------------------------------------------------- Running TestSuite Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.301 sec Results : Tests run: 1, Failures: 0, Errors: 0, Skipped: 0 [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] There are test failures. Please refer to [...]\target\surefire-reports for the individual test results. There is no error in test reports and with -e switch: [INFO] Trace org.apache.maven.BuildFailureException: There are test failures. Please refer to [...]\target\surefire-reports for the individual test results. at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:715) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:556) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:535) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoFailureException: There are test failures. Please refer to [...]\target\surefire-reports for the individual test results. at org.apache.maven.plugin.surefire.SurefirePlugin.execute(SurefirePlugin.java:575) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more Any idea? 
EDIT My pom: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.sample</groupId> <artifactId>sample</artifactId> <name>sample</name> <packaging>jar</packaging> <version>0.0.1-SNAPSHOT</version> <description /> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.6</source> <target>1.6</target> <encoding>UTF-8</encoding> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>org.testng</groupId> <artifactId>testng</artifactId> <version>5.14.6</version> <scope>test</scope> </dependency> </dependencies> </project> The only class that I have: import org.testng.Assert; import org.testng.annotations.Test; @Test public class MyTest { @Test public void test() { Assert.assertEquals("a", "a"); } }

  • Why is there a /etc/init.d/mysql file on this Slackware machine? How could it have gotten there?

    - by jasonspiro
    A client of my IT-consulting service owns a web-development shop. He's been having problems with a Slackware 12.0 server running MySQL 5.0.67. The machine was set up by the client's sysadmin, who left on bad terms. My client no longer employs a sysadmin. As far as I can tell, the only copy of MySQL that's installed is the one described in /var/log/packages/mysql-5.0.67-i486-1: PACKAGE NAME: mysql-5.0.67-i486-1 COMPRESSED PACKAGE SIZE: 16828 K UNCOMPRESSED PACKAGE SIZE: 33840 K PACKAGE LOCATION: /var/slapt-get/archives/./slackware/ap/mysql-5.0.67-i486-1.tgz PACKAGE DESCRIPTION: mysql: mysql (SQL-based relational database server) mysql: mysql: MySQL is a fast, multi-threaded, multi-user, and robust SQL mysql: (Structured Query Language) database server. It comes with a nice API mysql: which makes it easy to integrate into other applications. mysql: mysql: The home page for MySQL is http://www.mysql.com/ mysql: mysql: mysql: mysql: FILE LIST: ./ var/ var/lib/ var/lib/mysql/ var/run/ var/run/mysql/ install/ install/doinst.sh install/slack-desc usr/ usr/include/ usr/include/mysql/ usr/include/mysql/my_alloc.h usr/include/mysql/sql_common.h usr/include/mysql/my_dbug.h usr/include/mysql/errmsg.h usr/include/mysql/my_pthread.h usr/include/mysql/my_list.h usr/include/mysql/mysql.h usr/include/mysql/sslopt-vars.h usr/include/mysql/my_config.h usr/include/mysql/mysql_com.h usr/include/mysql/m_string.h usr/include/mysql/sslopt-case.h usr/include/mysql/my_xml.h usr/include/mysql/sql_state.h usr/include/mysql/my_global.h usr/include/mysql/my_sys.h usr/include/mysql/mysqld_ername.h usr/include/mysql/mysqld_error.h usr/include/mysql/sslopt-longopts.h usr/include/mysql/keycache.h usr/include/mysql/my_net.h usr/include/mysql/mysql_version.h usr/include/mysql/my_no_pthread.h usr/include/mysql/decimal.h usr/include/mysql/readline.h usr/include/mysql/my_attribute.h usr/include/mysql/typelib.h usr/include/mysql/my_dir.h usr/include/mysql/raid.h usr/include/mysql/m_ctype.h usr/include/mysql/mysql_embed.h usr/include/mysql/mysql_time.h usr/include/mysql/my_getopt.h usr/lib/ usr/lib/mysql/ usr/lib/mysql/libmysqlclient_r.so.15.0.0 usr/lib/mysql/libmysqlclient_r.la usr/lib/mysql/libmyisammrg.a usr/lib/mysql/libmystrings.a usr/lib/mysql/libmyisam.a usr/lib/mysql/libmysqlclient.so.15.0.0 usr/lib/mysql/libmysqlclient_r.a usr/lib/mysql/libmysqlclient.a usr/lib/mysql/libheap.a usr/lib/mysql/libvio.a usr/lib/mysql/libmysqlclient.la usr/lib/mysql/libmysys.a usr/lib/mysql/libdbug.a usr/bin/ usr/bin/comp_err usr/bin/my_print_defaults usr/bin/resolve_stack_dump usr/bin/msql2mysql usr/bin/mysqltestmanager-pwgen usr/bin/myisampack usr/bin/replace usr/bin/mysqld_multi usr/bin/mysqlaccess usr/bin/mysql_install_db usr/bin/innochecksum usr/bin/myisam_ftdump usr/bin/mysqlcheck usr/bin/mysqltest usr/bin/mysql_upgrade_shell usr/bin/mysql_secure_installation usr/bin/mysql_fix_extensions usr/bin/mysqld_safe usr/bin/mysql_explain_log usr/bin/mysqlimport usr/bin/myisamlog usr/bin/mysql_tzinfo_to_sql usr/bin/mysql_upgrade usr/bin/mysqltestmanager usr/bin/mysql_fix_privilege_tables usr/bin/mysql_find_rows usr/bin/mysql_convert_table_format usr/bin/mysqltestmanagerc usr/bin/mysqlhotcopy usr/bin/mysqldump usr/bin/mysqlshow usr/bin/mysqlbug usr/bin/mysql_config usr/bin/mysqldumpslow usr/bin/mysql_waitpid usr/bin/mysqlbinlog usr/bin/mysql_client_test usr/bin/perror usr/bin/mysql usr/bin/myisamchk usr/bin/mysql_setpermission usr/bin/mysqladmin usr/bin/mysql_zap usr/bin/mysql_tableinfo usr/bin/resolveip usr/share/ usr/share/mysql/ 
usr/share/mysql/errmsg.txt usr/share/mysql/swedish/ usr/share/mysql/swedish/errmsg.sys usr/share/mysql/mysql_system_tables_data.sql usr/share/mysql/mysql.server usr/share/mysql/hungarian/ usr/share/mysql/hungarian/errmsg.sys usr/share/mysql/norwegian/ usr/share/mysql/norwegian/errmsg.sys usr/share/mysql/slovak/ usr/share/mysql/slovak/errmsg.sys usr/share/mysql/spanish/ usr/share/mysql/spanish/errmsg.sys usr/share/mysql/polish/ usr/share/mysql/polish/errmsg.sys usr/share/mysql/ukrainian/ usr/share/mysql/ukrainian/errmsg.sys usr/share/mysql/danish/ usr/share/mysql/danish/errmsg.sys usr/share/mysql/romanian/ usr/share/mysql/romanian/errmsg.sys usr/share/mysql/english/ usr/share/mysql/english/errmsg.sys usr/share/mysql/charsets/ usr/share/mysql/charsets/latin2.xml usr/share/mysql/charsets/greek.xml usr/share/mysql/charsets/koi8r.xml usr/share/mysql/charsets/latin1.xml usr/share/mysql/charsets/cp866.xml usr/share/mysql/charsets/geostd8.xml usr/share/mysql/charsets/cp1250.xml usr/share/mysql/charsets/koi8u.xml usr/share/mysql/charsets/cp852.xml usr/share/mysql/charsets/hebrew.xml usr/share/mysql/charsets/latin7.xml usr/share/mysql/charsets/README usr/share/mysql/charsets/ascii.xml usr/share/mysql/charsets/cp1251.xml usr/share/mysql/charsets/macce.xml usr/share/mysql/charsets/latin5.xml usr/share/mysql/charsets/Index.xml usr/share/mysql/charsets/macroman.xml usr/share/mysql/charsets/cp1256.xml usr/share/mysql/charsets/keybcs2.xml usr/share/mysql/charsets/swe7.xml usr/share/mysql/charsets/armscii8.xml usr/share/mysql/charsets/dec8.xml usr/share/mysql/charsets/cp1257.xml usr/share/mysql/charsets/hp8.xml usr/share/mysql/charsets/cp850.xml usr/share/mysql/korean/ usr/share/mysql/korean/errmsg.sys usr/share/mysql/german/ usr/share/mysql/german/errmsg.sys usr/share/mysql/mi_test_all.res usr/share/mysql/greek/ usr/share/mysql/greek/errmsg.sys usr/share/mysql/french/ usr/share/mysql/french/errmsg.sys usr/share/mysql/mysql_fix_privilege_tables.sql usr/share/mysql/dutch/ usr/share/mysql/dutch/errmsg.sys usr/share/mysql/serbian/ usr/share/mysql/serbian/errmsg.sys usr/share/mysql/mysql_system_tables.sql usr/share/mysql/my-huge.cnf usr/share/mysql/portuguese/ usr/share/mysql/portuguese/errmsg.sys usr/share/mysql/japanese/ usr/share/mysql/japanese/errmsg.sys usr/share/mysql/mysql_test_data_timezone.sql usr/share/mysql/russian/ usr/share/mysql/russian/errmsg.sys usr/share/mysql/czech/ usr/share/mysql/czech/errmsg.sys usr/share/mysql/fill_help_tables.sql usr/share/mysql/estonian/ usr/share/mysql/estonian/errmsg.sys usr/share/mysql/my-medium.cnf usr/share/mysql/norwegian-ny/ usr/share/mysql/norwegian-ny/errmsg.sys usr/share/mysql/my-small.cnf usr/share/mysql/mysql-log-rotate usr/share/mysql/italian/ usr/share/mysql/italian/errmsg.sys usr/share/mysql/my-large.cnf usr/share/mysql/ndb-config-2-node.ini usr/share/mysql/binary-configure usr/share/mysql/mi_test_all usr/share/mysql/mysqld_multi.server usr/share/mysql/my-innodb-heavy-4G.cnf usr/doc/ usr/doc/mysql-5.0.67/ usr/doc/mysql-5.0.67/README usr/doc/mysql-5.0.67/Docs/ usr/doc/mysql-5.0.67/Docs/INSTALL-BINARY usr/doc/mysql-5.0.67/COPYING usr/info/ usr/info/mysql.info.gz usr/libexec/ usr/libexec/mysqld usr/libexec/mysqlmanager usr/man/ usr/man/man8/ usr/man/man8/mysqlmanager.8.gz usr/man/man8/mysqld.8.gz usr/man/man1/ usr/man/man1/mysql_zap.1.gz usr/man/man1/mysql_setpermission.1.gz usr/man/man1/mysql_tzinfo_to_sql.1.gz usr/man/man1/msql2mysql.1.gz usr/man/man1/mysql_tableinfo.1.gz usr/man/man1/mysql_explain_log.1.gz usr/man/man1/mysqlcheck.1.gz 
usr/man/man1/comp_err.1.gz usr/man/man1/my_print_defaults.1.gz usr/man/man1/mysqlbinlog.1.gz usr/man/man1/myisam_ftdump.1.gz usr/man/man1/mysql_upgrade.1.gz usr/man/man1/mysql.1.gz usr/man/man1/mysql_client_test.1.gz usr/man/man1/resolve_stack_dump.1.gz usr/man/man1/mysql_fix_extensions.1.gz usr/man/man1/mysqlmanagerc.1.gz usr/man/man1/mysql_config.1.gz usr/man/man1/mysqlshow.1.gz usr/man/man1/myisamlog.1.gz usr/man/man1/replace.1.gz usr/man/man1/mysqlmanager-pwgen.1.gz usr/man/man1/mysqltest.1.gz usr/man/man1/innochecksum.1.gz usr/man/man1/mysqladmin.1.gz usr/man/man1/perror.1.gz usr/man/man1/mysql_waitpid.1.gz usr/man/man1/mysql_convert_table_format.1.gz usr/man/man1/mysqlman.1.gz usr/man/man1/mysqlimport.1.gz usr/man/man1/mysqlbug.1.gz usr/man/man1/mysql_find_rows.1.gz usr/man/man1/myisampack.1.gz usr/man/man1/myisamchk.1.gz usr/man/man1/mysql_fix_privilege_tables.1.gz usr/man/man1/mysql-stress-test.pl.1.gz usr/man/man1/resolveip.1.gz usr/man/man1/make_win_bin_dist.1.gz usr/man/man1/mysqlhotcopy.1.gz usr/man/man1/mysqld_multi.1.gz usr/man/man1/safe_mysqld.1.gz usr/man/man1/mysql_secure_installation.1.gz usr/man/man1/mysql_install_db.1.gz usr/man/man1/mysqldump.1.gz usr/man/man1/mysql-test-run.pl.1.gz usr/man/man1/mysqld_safe.1.gz usr/man/man1/mysqlaccess.1.gz usr/man/man1/mysql.server.1.gz usr/man/man1/make_win_src_distribution.1.gz etc/ etc/rc.d/ etc/rc.d/rc.mysqld.new etc/my-huge.cnf etc/my-medium.cnf etc/my-small.cnf etc/my-large.cnf /etc/rc.d/rc.mysqld is an ordinary Slackware-type start/stop script: #!/bin/sh # Start/stop/restart mysqld. # # Copyright 2003 Patrick J. Volkerding, Concord, CA # Copyright 2003 Slackware Linux, Inc., Concord, CA # # This program comes with NO WARRANTY, to the extent permitted by law. # You may redistribute copies of this program under the terms of the # GNU General Public License. # To start MySQL automatically at boot, be sure this script is executable: # chmod 755 /etc/rc.d/rc.mysqld # Before you can run MySQL, you must have a database. To install an initial # database, do this as root: # # su - mysql # mysql_install_db # # Note that step one is becoming the mysql user. It's important to do this # before making any changes to the database, or mysqld won't be able to write # to it later (this can be fixed with 'chown -R mysql.mysql /var/lib/mysql'). # To allow outside connections to the database comment out the next line. # If you don't need incoming network connections, then leave the line # uncommented to improve system security. #SKIP="--skip-networking" # Start mysqld: mysqld_start() { if [ -x /usr/bin/mysqld_safe ]; then # If there is an old PID file (no mysqld running), clean it up: if [ -r /var/run/mysql/mysql.pid ]; then if ! ps axc | grep mysqld 1> /dev/null 2> /dev/null ; then echo "Cleaning up old /var/run/mysql/mysql.pid." rm -f /var/run/mysql/mysql.pid fi fi /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/run/mysql/mysql.pid $SKIP & fi } # Stop mysqld: mysqld_stop() { # If there is no PID file, ignore this request... if [ -r /var/run/mysql/mysql.pid ]; then killall mysqld # Wait at least one minute for it to exit, as we don't know how big the DB is... for second in 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 \ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 60 ; do if [ ! -r /var/run/mysql/mysql.pid ]; then break; fi sleep 1 done if [ "$second" = "60" ]; then echo "WARNING: Gave up waiting for mysqld to exit!" 
sleep 15 fi fi } # Restart mysqld: mysqld_restart() { mysqld_stop mysqld_start } case "$1" in 'start') mysqld_start ;; 'stop') mysqld_stop ;; 'restart') mysqld_restart ;; *) echo "usage $0 start|stop|restart" esac But there's also an unexpected init script on the machine, named /etc/init.d/mysql: #!/bin/sh # Copyright Abandoned 1996 TCX DataKonsult AB & Monty Program KB & Detron HB # This file is public domain and comes with NO WARRANTY of any kind # MySQL daemon start/stop script. # Usually this is put in /etc/init.d (at least on machines SYSV R4 based # systems) and linked to /etc/rc3.d/S99mysql and /etc/rc0.d/K01mysql. # When this is done the mysql server will be started when the machine is # started and shut down when the systems goes down. # Comments to support chkconfig on RedHat Linux # chkconfig: 2345 64 36 # description: A very fast and reliable SQL database engine. # Comments to support LSB init script conventions ### BEGIN INIT INFO # Provides: mysql # Required-Start: $local_fs $network $remote_fs # Should-Start: ypbind nscd ldap ntpd xntpd # Required-Stop: $local_fs $network $remote_fs # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: start and stop MySQL # Description: MySQL is a very fast and reliable SQL database engine. ### END INIT INFO # If you install MySQL on some other places than /usr, then you # have to do one of the following things for this script to work: # # - Run this script from within the MySQL installation directory # - Create a /etc/my.cnf file with the following information: # [mysqld] # basedir=<path-to-mysql-installation-directory> # - Add the above to any other configuration file (for example ~/.my.ini) # and copy my_print_defaults to /usr/bin # - Add the path to the mysql-installation-directory to the basedir variable # below. # # If you want to affect other MySQL variables, you should make your changes # in the /etc/my.cnf, ~/.my.cnf or other MySQL configuration files. # If you change base dir, you must also change datadir. These may get # overwritten by settings in the MySQL configuration files. #basedir= #datadir= # Default value, in seconds, afterwhich the script should timeout waiting # for server start. # Value here is overriden by value in my.cnf. # 0 means don't wait at all # Negative numbers mean to wait indefinitely service_startup_timeout=900 # The following variables are only set for letting mysql.server find things. # Set some defaults pid_file=/var/run/mysql/mysql.pid server_pid_file=/var/run/mysql/mysql.pid use_mysqld_safe=1 user=mysql if test -z "$basedir" then basedir=/usr bindir=/usr/bin if test -z "$datadir" then datadir=/var/lib/mysql fi sbindir=/usr/sbin libexecdir=/usr/libexec else bindir="$basedir/bin" if test -z "$datadir" then datadir="$basedir/data" fi sbindir="$basedir/sbin" libexecdir="$basedir/libexec" fi # datadir_set is used to determine if datadir was set (and so should be # *not* set inside of the --basedir= handler.) datadir_set= # # Use LSB init script functions for printing messages, if possible # lsb_functions="/lib/lsb/init-functions" if test -f $lsb_functions ; then . $lsb_functions else log_success_msg() { echo " SUCCESS! $@" } log_failure_msg() { echo " ERROR! 
$@" } fi PATH=/sbin:/usr/sbin:/bin:/usr/bin:$basedir/bin export PATH mode=$1 # start or stop shift other_args="$*" # uncommon, but needed when called from an RPM upgrade action # Expected: "--skip-networking --skip-grant-tables" # They are not checked here, intentionally, as it is the resposibility # of the "spec" file author to give correct arguments only. case `echo "testing\c"`,`echo -n testing` in *c*,-n*) echo_n= echo_c= ;; *c*,*) echo_n=-n echo_c= ;; *) echo_n= echo_c='\c' ;; esac parse_server_arguments() { for arg do case "$arg" in --basedir=*) basedir=`echo "$arg" | sed -e 's/^[^=]*=//'` bindir="$basedir/bin" if test -z "$datadir_set"; then datadir="$basedir/data" fi sbindir="$basedir/sbin" libexecdir="$basedir/libexec" ;; --datadir=*) datadir=`echo "$arg" | sed -e 's/^[^=]*=//'` datadir_set=1 ;; --user=*) user=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --pid-file=*) server_pid_file=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --service-startup-timeout=*) service_startup_timeout=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --use-mysqld_safe) use_mysqld_safe=1;; --use-manager) use_mysqld_safe=0;; esac done } parse_manager_arguments() { for arg do case "$arg" in --pid-file=*) pid_file=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --user=*) user=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; esac done } wait_for_pid () { verb="$1" manager_pid="$2" # process ID of the program operating on the pid-file i=0 avoid_race_condition="by checking again" while test $i -ne $service_startup_timeout ; do case "$verb" in 'created') # wait for a PID-file to pop into existence. test -s $pid_file && i='' && break ;; 'removed') # wait for this PID-file to disappear test ! -s $pid_file && i='' && break ;; *) echo "wait_for_pid () usage: wait_for_pid created|removed manager_pid" exit 1 ;; esac # if manager isn't running, then pid-file will never be updated if test -n "$manager_pid"; then if kill -0 "$manager_pid" 2>/dev/null; then : # the manager still runs else # The manager may have exited between the last pid-file check and now. if test -n "$avoid_race_condition"; then avoid_race_condition="" continue # Check again. fi # there's nothing that will affect the file. log_failure_msg "Manager of pid-file quit without updating file." return 1 # not waiting any more. fi fi echo $echo_n ".$echo_c" i=`expr $i + 1` sleep 1 done if test -z "$i" ; then log_success_msg return 0 else log_failure_msg return 1 fi } # Get arguments from the my.cnf file, # the only group, which is read from now on is [mysqld] if test -x ./bin/my_print_defaults then print_defaults="./bin/my_print_defaults" elif test -x $bindir/my_print_defaults then print_defaults="$bindir/my_print_defaults" elif test -x $bindir/mysql_print_defaults then print_defaults="$bindir/mysql_print_defaults" else # Try to find basedir in /etc/my.cnf conf=/etc/my.cnf print_defaults= if test -r $conf then subpat='^[^=]*basedir[^=]*=\(.*\)$' dirs=`sed -e "/$subpat/!d" -e 's//\1/' $conf` for d in $dirs do d=`echo $d | sed -e 's/[ ]//g'` if test -x "$d/bin/my_print_defaults" then print_defaults="$d/bin/my_print_defaults" break fi if test -x "$d/bin/mysql_print_defaults" then print_defaults="$d/bin/mysql_print_defaults" break fi done fi # Hope it's in the PATH ... but I doubt it test -z "$print_defaults" && print_defaults="my_print_defaults" fi # # Read defaults file from 'basedir'. 
If there is no defaults file there # check if it's in the old (depricated) place (datadir) and read it from there # extra_args="" if test -r "$basedir/my.cnf" then extra_args="-e $basedir/my.cnf" else if test -r "$datadir/my.cnf" then extra_args="-e $datadir/my.cnf" fi fi parse_server_arguments `$print_defaults $extra_args mysqld server mysql_server mysql.server` # Look for the pidfile parse_manager_arguments `$print_defaults $extra_args manager` # # Set pid file if not given # if test -z "$pid_file" then pid_file=$datadir/mysqlmanager-`/bin/hostname`.pid else case "$pid_file" in /* ) ;; * ) pid_file="$datadir/$pid_file" ;; esac fi if test -z "$server_pid_file" then server_pid_file=$datadir/`/bin/hostname`.pid else case "$server_pid_file" in /* ) ;; * ) server_pid_file="$datadir/$server_pid_file" ;; esac fi case "$mode" in 'start') # Start daemon # Safeguard (relative paths, core dumps..) cd $basedir manager=$bindir/mysqlmanager if test -x $libexecdir/mysqlmanager then manager=$libexecdir/mysqlmanager elif test -x $sbindir/mysqlmanager then manager=$sbindir/mysqlmanager fi echo $echo_n "Starting MySQL" if test -x $manager -a "$use_mysqld_safe" = "0" then if test -n "$other_args" then log_failure_msg "MySQL manager does not support options '$other_args'" exit 1 fi # Give extra arguments to mysqld with the my.cnf file. This script may # be overwritten at next upgrade. $manager --user=$user --pid-file=$pid_file >/dev/null 2>&1 & wait_for_pid created $!; return_value=$? # Make lock for RedHat / SuSE if test -w /var/lock/subsys then touch /var/lock/subsys/mysqlmanager fi exit $return_value elif test -x $bindir/mysqld_safe then # Give extra arguments to mysqld with the my.cnf file. This script # may be overwritten at next upgrade. pid_file=$server_pid_file $bindir/mysqld_safe --datadir=$datadir --pid-file=$server_pid_file $other_args >/dev/null 2>&1 & wait_for_pid created $!; return_value=$? # Make lock for RedHat / SuSE if test -w /var/lock/subsys then touch /var/lock/subsys/mysql fi exit $return_value else log_failure_msg "Couldn't find MySQL manager ($manager) or server ($bindir/mysqld_safe)" fi ;; 'stop') # Stop daemon. We use a signal here to avoid having to know the # root password. # The RedHat / SuSE lock directory to remove lock_dir=/var/lock/subsys/mysqlmanager # If the manager pid_file doesn't exist, try the server's if test ! -s "$pid_file" then pid_file=$server_pid_file lock_dir=/var/lock/subsys/mysql fi if test -s "$pid_file" then mysqlmanager_pid=`cat $pid_file` echo $echo_n "Shutting down MySQL" kill $mysqlmanager_pid # mysqlmanager should remove the pid_file when it exits, so wait for it. wait_for_pid removed "$mysqlmanager_pid"; return_value=$? # delete lock for RedHat / SuSE if test -f $lock_dir then rm -f $lock_dir fi exit $return_value else log_failure_msg "MySQL manager or server PID file could not be found!" fi ;; 'restart') # Stop the service and regardless of whether it was # running or not, start it again. if $0 stop $other_args; then $0 start $other_args else log_failure_msg "Failed to stop running server, so refusing to try to start." exit 1 fi ;; 'reload'|'force-reload') if test -s "$server_pid_file" ; then read mysqld_pid < $server_pid_file kill -HUP $mysqld_pid && log_success_msg "Reloading service MySQL" touch $server_pid_file else log_failure_msg "MySQL PID file could not be found!" 
exit 1 fi ;; 'status') # First, check to see if pid file exists if test -s "$server_pid_file" ; then read mysqld_pid < $server_pid_file if kill -0 $mysqld_pid 2>/dev/null ; then log_success_msg "MySQL running ($mysqld_pid)" exit 0 else log_failure_msg "MySQL is not running, but PID file exists" exit 1 fi else # Try to find appropriate mysqld process mysqld_pid=`pidof $sbindir/mysqld` if test -z $mysqld_pid ; then if test "$use_mysqld_safe" = "0" ; then lockfile=/var/lock/subsys/mysqlmanager else lockfile=/var/lock/subsys/mysql fi if test -f $lockfile ; then log_failure_msg "MySQL is not running, but lock exists" exit 2 fi log_failure_msg "MySQL is not running" exit 3 else log_failure_msg "MySQL is running but PID file could not be found" exit 4 fi fi ;; *) # usage echo "Usage: $0 {start|stop|restart|reload|force-reload|status} [ MySQL server options ]" exit 1 ;; esac exit 0 An unimportant aside: The previous users of the machine kept a messy home directory. Their home directory was /root. I've pasted a copy at http://www.pastebin.ca/2167496. My question: Why is there a /etc/init.d/mysql file on this Slackware machine? How could it have gotten there? P.S. This question is far from perfect. Please feel free to edit it.

  • DENY select on sys.dm_db_index_physical_stats

    - by steveh99999
    Technorati Tags: security,DMV,permission,sys.dm_db_index_physical_stats I recently saw an interesting blog article by Paul Randal about the performance overhead of querying sys.dm_db_index_physical_stats. So I was thinking, would it be possible to let non-sysadmin users query DMVs on a SQL server but stop them querying this I/O-intensive DMV? Yes it is, here's how... 1. Create a new login for test purposes, with permissions to access the AdventureWorks database only: CREATE LOGIN [test] WITH PASSWORD='xxxx', DEFAULT_DATABASE=[AdventureWorks] GO USE [AdventureWorks] GO CREATE USER [test] FOR LOGIN [test] WITH DEFAULT_SCHEMA=[dbo] GO 2. Log in as user test and issue the command SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED') This gets the error: Msg 297, Level 16, State 12, Line 1 The user does not have permission to perform this action. 3. As a sysadmin, issue the command: USE AdventureWorks GRANT VIEW DATABASE STATE TO [test] or GRANT VIEW SERVER STATE TO [test] if all databases can be queried via DMV. 4. Try again as user test to issue the command SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED') -- now produces valid results from the DMV. 5. Now create the test user in the master database, public role only: USE master CREATE USER [test] FOR LOGIN [test] 6. Issue the command: USE master DENY SELECT ON sys.dm_db_index_physical_stats TO [test] 7. Now go back to AdventureWorks using the test login and try SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED') This now gets the error: Msg 229, Level 14, State 5, Line 1 The SELECT permission was denied on the object 'dm_db_index_physical_stats', database 'mssqlsystemresource', schema 'sys'. But the user is still able to query all other non-IO-intensive DMVs. If the user attempts to view the index physical stats via a built-in Management Studio report – see the recent blog post by Pinal Dave – they get an error also.

  • Is it OK to have multiple asserts in a single unit test?

    - by Restuta
    I think that there are some cases when multiple assertions are needed (e.g. Guard Assertion), but in general I try to avoid this. What is your opinion? Please provide real-world examples where multiple asserts are really needed. Thanks! Edit: In a comment to this great post, Roy Osherove pointed to the OAPT project that is designed to run each assert in a single test. This is written on the project's home page: Proper unit tests should fail for exactly one reason, that's why you should be using one assert per unit test. And also Roy wrote in comments: My guideline is usually that you test one logical CONCEPT per test. you can have multiple asserts on the same object. they will usually be the same concept being tested.
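
    A small illustration of the "one logical concept per test" guideline quoted above (the Order class is hypothetical): two asserts, but both describe the same outcome, so a failure still points at exactly one reason:

        import unittest

        class Order(object):
            def __init__(self):
                self.lines = []

            def add(self, name, price):
                self.lines.append((name, price))

            @property
            def total(self):
                return sum(price for _, price in self.lines)

        class OrderTest(unittest.TestCase):
            def test_adding_a_line_records_it(self):
                order = Order()
                order.add("book", 10)
                # Same concept ("the line was recorded"), checked from two angles.
                self.assertEqual(1, len(order.lines))
                self.assertEqual(10, order.total)

        if __name__ == "__main__":
            unittest.main()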

  • Have unit test generators helped you when working with legacy code?

    - by Duncan Bayne
    I am looking at a small (~70kLOC including generated) C# (.NET 4.0, some Silverlight) code-base that has very low test coverage. The code itself works in that it has passed user acceptance testing, but it is brittle and in some areas not very well factored. I would like to add solid unit test coverage around the legacy code using the usual suspects (NMock, NUnit, StatLight for the Silverlight bits). My normal approach is to start working through the project, unit testing & refactoring, until I am satisfied with the state of the code. I've done this many times in the past, and it's worked well. However, this time I'm thinking of using a test generator (in particular Pex) to create the test framework, then manually fleshing it out. My question is: have you used unit test generators in the past when commencing work on a legacy codebase, and if so, would you recommend them? My fear is that the generated tests will miss the semantic nuances of the code-base, leading to the dreaded situation of having tests for the sake of the coverage metric, rather than tests which clearly express the intended behaviour in code.

  • Event Driven Communication in Game Engine - Yes or No?

    - by Bunkai.Satori
    As I am reading the book Game Coding Complete (http://www.amazon.com/Game-Coding-Complete-Third-McShaffry/dp/1584506806/ref=sr_1_1?ie=UTF8&qid=1295978774&sr=8-1), the author recommends event-driven communication among all the game objects and modules. Basically, all the living game actors and objects should communicate with the key modules (Physics, AI, Game Logic, Game View, etc.) via an internal event messaging system. This would mean designing an efficient event manager as well. My question is whether this is a proven and recommended approach. If it is not properly designed, it might mean consuming a lot of CPU cycles, which could be used elsewhere. This is especially true if the game is targeted at a mobile platform. What is your opinion and recommendation, please?
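
    For concreteness, a minimal sketch of the kind of internal event manager the book describes (not the book's code; the names are made up): modules subscribe to event types and post events without holding references to each other, so the per-event overhead is essentially one dictionary lookup plus the listener calls:

        from collections import defaultdict

        class EventManager(object):
            def __init__(self):
                self._listeners = defaultdict(list)

            def subscribe(self, event_type, listener):
                self._listeners[event_type].append(listener)

            def post(self, event_type, **payload):
                # Deliver to everyone registered for this event type.
                for listener in list(self._listeners[event_type]):
                    listener(**payload)

        # Usage: physics posts an event; game logic reacts with no direct coupling.
        events = EventManager()
        events.subscribe("actor_collided", lambda actor_id: print("apply damage to", actor_id))
        events.post("actor_collided", actor_id=42)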

  • Does TDD's "Obvious Implementation" mean code first, test after?

    - by natasky
    My friend and I are relatively new to TDD and have a dispute about the "Obvious Implementation" technique (from "TDD By Example" by Kent Beck). My friend says it means that if the implementation is obvious, you should go ahead and write it - before any test for that new behavior. And indeed the book says: How do you implement simple operations? Just implement them. Also: Sometimes you are sure you know how to implement an operation. Go ahead. I think what the author means is you should test first, and then "just implement" it - as opposed to the "Fake It ('Till You Make It)" and other techniques, which require smaller steps in the implementation stage. Also, after these quotes the author talks about getting "red bars" (failing tests) when doing "Obvious Implementation" - how can you get a red bar without a test? Yet I couldn't find any quote from the book saying "obvious" still means test first. What do you think? Should we test first or after when the implementation is "obvious" (according to TDD, of course)? Do you know a book or blog post saying just that?
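
    A tiny sketch of how "Obvious Implementation" is usually read as still being test-first (a hypothetical example, not from the book): the failing test is written and run first; "just implement" only refers to taking the green step in one stride instead of faking it:

        import unittest

        def add(x, y):
            # The "obvious" implementation, typed in one step after the test below
            # had been written and seen to fail (e.g. while add still returned 0).
            return x + y

        class AddTest(unittest.TestCase):
            def test_adds_two_ints(self):
                self.assertEqual(5, add(2, 3))

        if __name__ == "__main__":
            unittest.main()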

  • Is it a good idea to simplify a character-driven game engine to the point where it's unnecessary to learn scripting/programming?

    - by jokoon
    I remember, and I still think, that one cannot even make a prototype 3D game to test just simple behaviors without using gigantic tools like Unity or knowing extensive C++ programming, design patterns, a decent or basic 3D engine, etc. Now I'm wondering, since I know programming, whether I'm still luckier than the ones who need to learn programming before they can make anything: even scripted engines such as Unity are not for kids, and to my mind they tend to dictate their way of doing things, which is not the case with engines like Ogre or Irrlicht. I remember toying a little with the Blender game engine; it was possible to link states or something, I don't remember very well. Now I'm thinking that character-driven games occupy a big part of the game market. Do you think it is a good idea to make a character-control-oriented game engine which only allows building AI, instead of anything else?
