Search Results

Search found 2736 results on 110 pages for 'skeleton animation'.

Page 79/110

  • Attempting to set up xampp and zend server on the same machine

    - by umbregachoong
    I am attempting to set up Zend Server and XAMPP on the same machine, but I am running into problems. I came across documentation on the Zend site that said you cannot do this; however, the folks over at Apache Friends said you can. I have since discovered that I can run some of the Zend Framework examples within XAMPP by downloading the Zend Framework 2 library and the skeleton app from Git, and I am doing this right now. However, I would like to know how to set them both up without having any conflicts, both for the Apache 2 server and for phpMyAdmin. (One of the frustrating things is trying to load phpMyAdmin in the deployment dialog by using the zpk tool in Zend.)

    What I did in trying to set up both servers on Windows 7 is as follows: first I tried to set up the httpd.conf files separately for each server, XAMPP running on port 8082 and Zend running on port 8088. At that point XAMPP would work, but Zend Server would not. This was after setting up the virtual host files separately for each server. Question 1: where are the Zend Server error logs?

    Earlier, I was able to get both of them running by configuring the XAMPP httpd.conf file alone; however, I experienced problems with phpMyAdmin even after configuring phpMyAdmin on XAMPP to use a port other than 3306. Second question: how do I set up the two MySQL/phpMyAdmin instances so they do not conflict with each other?

    Here is the XAMPP virtual host section:

        ##ServerAdmin [email protected]
        DocumentRoot "C:/xampp/htdocs/"
        ServerName localhost 8082
        ##ServerAlias www.dummy-host.example.com
        ##ErrorLog "logs/dummy-host.example.com-error.log"
        ##CustomLog "logs/dummy-host.example.com-access.log" common

    Here is the Zend virtual host section:

        DocumentRoot "C:\Program Files (x86)\Zend\Apache2/htdocs"
        ServerName localhost:8088
        </VirtualHost>

    I have looked at httpd.apache.org/docs/2.2/vhosts/ and at http://survivethedeepend.com/zendframeworkbook/en/1.0/creating.a.local.domain.using.apache.virtual.hosts, but I am obviously doing something wrong here. I also have the Java SDK running on this machine with Tomcat and Apache and I have no conflicts there - too bad this is not the case for Zend Server and XAMPP. Thanks, umbre gachoong
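
    The two VirtualHost fragments above hint at the underlying issue: each Apache instance needs its own Listen port and a matching <VirtualHost> block, and "ServerName localhost 8082" (with a space) is not a valid port specification. As a rough illustration only - the paths and ports are taken from the post, everything else is an assumption rather than a verified Zend Server layout:

        # XAMPP's httpd.conf (assumed location: C:/xampp/apache/conf/httpd.conf)
        Listen 8082
        <VirtualHost *:8082>
            DocumentRoot "C:/xampp/htdocs"
            ServerName localhost:8082
        </VirtualHost>

        # Zend Server's bundled Apache httpd.conf (assumed location under C:\Program Files (x86)\Zend\Apache2\conf)
        Listen 8088
        <VirtualHost *:8088>
            DocumentRoot "C:/Program Files (x86)/Zend/Apache2/htdocs"
            ServerName localhost:8088
        </VirtualHost>

    With a split like this, the Zend-side Apache error log is whatever file the ErrorLog directive in its own httpd.conf points to (typically a logs directory next to that Apache2 installation). For the MySQL side, the usual approach is to leave one MySQL instance on 3306, move the other to a different port such as 3307, and point each phpMyAdmin at its own instance via the port setting in its config.inc.php - again, an assumption about this particular setup rather than a guaranteed fix.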

    Read the article

  • Upstart Script on CentOS 6

    - by MarcusMaximus
    I'm trying to create an upstart script to run a Python script on startup. In theory it looks simple enough, but I just can't seem to get it to work. I'm using a skeleton script I found here and altered:

        description "Used to start python script as a service"
        author "Me <[email protected]>"

        # Stanzas
        #
        # Stanzas control when and how a process is started and stopped
        # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn

        # When to start the service
        start on runlevel [2345]

        # When to stop the service
        stop on runlevel [016]

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        expect fork

        # Start the process
        script
        exec su nonrootuser -c "python /usr/local/scripts/script.py"
        end script

    The test script I want it to run is currently a simple Python script that runs without any issue when run from a terminal:

        #!/usr/bin/python2
        import os, sys, time

        if __name__ == "__main__":
            for i in range(10000):
                message = "shotgunUpstartTest ", i, time.asctime(), " - Username: ", os.getenv("USERNAME")
                #print message
                time.sleep(60)
                out = open("/var/log/scripts/scriptlogfile", "a")
                print >> out, message
                out.close()

    The location /var/log/scripts has permissions 777. The file /usr/local/scripts/script.py has permissions 775. The upstart script /etc/init.d/pythonupstart.conf has permissions 755.
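
    Two details in job files like this one commonly cause exactly this kind of silent failure, so here is a minimal sketch of the same job written the way Upstart on CentOS 6 normally expects it (the file name, script path and user are taken from the post; the rest is an assumption, not a tested fix). Upstart reads jobs from /etc/init/*.conf rather than /etc/init.d/, and "expect fork" should only be used when the process really forks exactly once - for a script that stays in the foreground it makes Upstart track the wrong PID:

        # /etc/init/pythonupstart.conf  (note: /etc/init, not /etc/init.d)
        description "Used to start python script as a service"

        start on runlevel [2345]
        stop on runlevel [016]
        respawn

        # No 'expect fork' here: the python process below stays in the foreground,
        # so Upstart can supervise it directly.
        exec su nonrootuser -c "python /usr/local/scripts/script.py"

    After moving and renaming the file, something like "initctl reload-configuration" followed by "initctl start pythonupstart", plus a look at /var/log/messages, usually shows whether the job is being picked up at all.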

    Read the article

  • Run Python script at startup using upstart

    - by MarcusMaximus
    I'm trying to create an upstart script to run a Python script on startup. In theory it looks simple enough, but I just can't seem to get it to work. I'm using a skeleton script I found here and altered:

        description "Used to start python script as a service"
        author "Me <[email protected]>"

        # Stanzas
        #
        # Stanzas control when and how a process is started and stopped
        # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn

        # When to start the service
        start on runlevel [2345]

        # When to stop the service
        stop on runlevel [016]

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        expect fork

        # Start the process
        script
        exec python /usr/local/scripts/script.py
        end script

    The test script I want it to run is currently a simple Python script that runs without any issue when run from a terminal:

        #!/usr/bin/python2
        import os, sys, time

        if __name__ == "__main__":
            for i in range(10000):
                message = "UpstartTest ", i, time.asctime(), " - Username: ", os.getenv("USERNAME")
                #print message
                time.sleep(60)
                out = open("/var/log/scripts/scriptlogfile", "a")
                print >> out, message
                out.close()

    The location /var/log/scripts has permissions 777. The file /usr/local/scripts/script.py has permissions 775. The upstart script /etc/init.d/pythonupstart.conf has permissions 755.
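
    Separately from the Upstart side, the comma-separated assignment in the test script builds a tuple, so what lands in the log file is a Python tuple repr rather than a plain line of text. A small sketch of the same loop with explicit string formatting (paths and message text are the ones from the post; this is an illustration, not a drop-in replacement):

        #!/usr/bin/python2
        import os
        import time

        LOGFILE = "/var/log/scripts/scriptlogfile"  # path from the post

        if __name__ == "__main__":
            for i in range(10000):
                # Build one plain line; os.getenv() returns None if USERNAME is unset,
                # which is common on Linux (USER/LOGNAME are the usual variables).
                message = "UpstartTest %d %s - Username: %s" % (
                    i, time.asctime(), os.getenv("USERNAME"))
                with open(LOGFILE, "a") as out:  # reopen each pass, as the original does
                    out.write(message + "\n")
                time.sleep(60)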

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Why doesn't this code work correctly?

    - by MisterSir
    I'm working on a website that displays galleries, using jCarousel. But no matter what I try, I can't get it to work, and I need to finish this by today. I have a very urgent schedule. My code basically takes image URLs from a database and sends them to AJAX, which passes it to jCarousel which makes the gallery. But there are a few problems: It doesn't display correctly! I can only get the last item pulled from the database, and it displays on the bottom-most row. After the item pulled from the database is displayed, the first time I click on "prev" there's no scroll effect, and the item just disappears! Only if I click on "next" 2-3 times there's a scroll effect and the item remains visible. My items are always displayed at the end of the carousel! This is urgent.. Please help me fix this. about.html: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-us"> <head> <script type="text/javascript" src="jquery-1.4.4.min.js"></script> <script type="text/javascript" src="/lib/jquery.jcarousel.min.js"></script> <link rel="stylesheet" type="text/css" href="/skins/tango/skin.css" /> <!--<style type="text/css"> #wrapper { width: 700px; margin-left: auto; margin-right: auto; } #carousel { margin-top: 120px; padding-left: 120px; } #side { padding-left: 550px; position: absolute; padding-top: 120px; } #hidden { color: #FFFFFF; } </style>--> <script type="text/javascript"> jQuery.easing['BounceEaseOut'] = function(p, t, b, c, d) { if ((t/=d) < (1/2.75)) { return c*(7.5625*t*t) + b; } else if (t < (2/2.75)) { return c*(7.5625*(t-=(1.5/2.75))*t + .75) + b; } else if (t < (2.5/2.75)) { return c*(7.5625*(t-=(2.25/2.75))*t + .9375) + b; } else { return c*(7.5625*(t-=(2.625/2.75))*t + .984375) + b; } }; function mycarousel_initCallback(carousel) { jQuery('#mycarousel-next').bind('click', function() { carousel.next(); return false; }); jQuery('#mycarousel-prev').bind('click', function() { carousel.prev(); return false; }); }; jQuery(document).ready(function() { jQuery('#mycarousel').jcarousel({ easing: 'BounceEaseOut', wrap: "first", initCallback: mycarousel_initCallback, animation: 1000, scroll: 3, visible: 3, buttonNextHTML: null, buttonPrevHTML: null }); jQuery('#mycarousel2').jcarousel({ easing: 'BounceEaseOut', animation: 1000, wrap: "first", initCallback: mycarousel_initCallback, scroll: 3, visible: 3, buttonNextHTML: null, buttonPrevHTML: null }); jQuery('#mycarousel3').jcarousel({ easing: 'BounceEaseOut', animation: 1000, scroll: 3, wrap: "first", initCallback: mycarousel_initCallback, visible: 3, buttonNextHTML: null, buttonPrevHTML: null }); }); var prevButton = null; function getObject(b, el) { var currbutton = b; var http; var url = "about.php"; var parameters = "d=carousel&cat=" + currbutton; try { http = new XMLHttpRequest(); } catch(e) { try { http = new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) { http = new ActiveXObject("Microsoft.XMLHTTP"); } } function getServer() { if (http.readyState == 4) { var i = 0; var liArr = http.responseText; var built = liArr.split(", "); var li = document.createElement("li"); var ul1 = document.getElementById("mycarousel"); var ul2 = document.getElementById("mycarousel2"); var ul3 = document.getElementById("mycarousel3"); if (el != prevButton) { prevButton = el; while (ul1.hasChildNodes() ) {ul1.removeChild(ul1.lastChild);} while (ul2.hasChildNodes() ) {ul2.removeChild(ul2.lastChild);} while (ul3.hasChildNodes() ) 
{ul3.removeChild(ul3.lastChild);} } else return 0; while (i < (built.length) / 3) { li.innerHTML = built[i]; ul1.appendChild(li); i++; } while (i < ((built.length) / 3)*2) { li.innerHTML = built[i]; ul2.appendChild(li); i++; } while (i < (built.length)) { li.innerHTML = built[i]; ul3.appendChild(li); i++; } } } http.open("POST", url, true); http.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); http.setRequestHeader("Content-length", parameters.length); http.setRequestHeader("Connection", "close"); http.onreadystatechange = getServer; http.send(parameters); } </script> </head> <body> <span id="hidden"> </span> <div id="wrapper"> <div id="side"> <form name="cats"> <input type="button" value="Hats" onclick="getObject('hats', this);"/><br /> <input type="button" value="Pants" onclick="getObject('pants', this);"/><br /> <input type="button" value="Shirts" onclick="getObject('shirts', this);"/><br /> </form> </div> <div id="carousel"> <ul id="mycarousel" class="jcarousel-skin-tango"> </ul> <ul id="mycarousel2" class="jcarousel-skin-tango"> </ul> <ul id="mycarousel3" class="jcarousel-skin-tango"> </ul> <input type="button" id="mycarousel-prev" value="prev" /> <input type="button" id="mycarousel-next" value="next" /> </div> </div> </body> </html> I commented the CSS because I thought it was giving me trouble, but honestly I have no idea what the hell's going on with jCarousel. about.php: <?php echo "<img width='75' height='75' src='http://static.flickr.com/66/199481236_dc98b5abb3_s.jpg' />, hi, hi, hi, hi, hi, hi, hi, hi"; ?> Also, even if there are no other items than what is displayed, I'm still able to scroll back, but not forward, assumingly because my item is always placed at the end of the carousel. I know it looks like a lot of code but it's really not! My formatting takes a lot of lines, the commented CSS takes a lot, and a lot of the code is HTML and jCarousel configuration, and there's also the BounceEasing effect which takes a few lines. There's not much actual code! So as I said, this is urgent and I need this fixed. But I can't get it to work. Please help me! Thanks for your time! EDIT: I changed the code a bit, but it still does not work. I really need help on this one!! EDIT: I added document.createElement("li"); to each while loop. Now all my items are displayed, but they are displayed vertically and not horizontally on each row. Other than that all other problems are the same. EDIT: Oh and also, in the row my image displays, only the image is there. Maybe jCarousel doesn't accept img and text, I don't know.
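
    For what it's worth, one concrete bug is visible in the posted getServer() callback: document.createElement("li") is called once, before the while loops, so every appendChild() call just moves that single node around and only the last item survives - which matches the "only the last item is displayed" symptom (the poster's later edit touches on this). A minimal sketch of the fill loop with a fresh element per item (variable names follow the post; whether the carousels also need to be re-initialised after the lists change is a separate jCarousel question):

        // inside getServer(), after: var built = http.responseText.split(", ");
        var third = built.length / 3;
        for (var i = 0; i < built.length; i++) {
            var li = document.createElement("li");   // new node on every pass
            li.innerHTML = built[i];
            if (i < third) {
                ul1.appendChild(li);                 // first carousel
            } else if (i < third * 2) {
                ul2.appendChild(li);                 // second carousel
            } else {
                ul3.appendChild(li);                 // third carousel
            }
        }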

    Read the article

  • Disable drop shadows around windows or the menu bar on OS X

    - by Lri
    Nocturne has an option for disabling shadows around windows. But it's only available in night mode, and changing the mode (like when opening the application) causes an annoying screen flash animation. There's no way to disable the shadow under the menu bar either. MacThemes Forum / Removing the menubar dropshadow has a link to a .psd for making special desktop backgrounds that cancel out the shadow under the menu bar. But it only works if that area of the desktop picture has a low enough brightness. Some applications that cover the desktop (like DeskShade) also cover the menu bar's shadow. That's not a real solution though. Unsanity's ShadowKiller stopped working in either 10.5 or 10.6. (It does still work on 10.7.2, but the website says "NOT compatible with Mac OS X 10.6 Leopard", and I couldn't get it to work on a 10.6 installation.) Related: How do I decrease the window shadow in Mac OS X? - Super User

    Read the article

  • Suse 12.3 cannot boot after a forced shutdown

    - by David Dai
    I was doing a system update using zypper update. After a while the screen filled with the message "failed to start system logging service" and the system stopped responding, so I had to shut it down by holding the power button. Then I started the machine again and selected SUSE at boot. I saw the fancy boot animation (some shiny big dots gathering at the center of the screen), then the screen just turned black and the monitor said "no signal". I then tried to boot into SUSE failsafe mode, which worked fine. How can I investigate this problem?

    Read the article

  • Mac OS X: easiest (free, non-QuickTime Pro) application for converting numbered folder of images to a movie

    - by Jared Updike
    I'd like to convert a folder of PNGs into a quicktime .mov with PNG compression (it's a folder of fractals in an animation; PNG compression works great here and the losslessness is important). What programs will do this with minimal fuss? (I don't have or want to pay for a full license of QuickTime Pro.) UPDATE: Let me make this more clear: minimal fuss means: I download some EncoderMagic.app (for example), I double click it to launch it. I select the folder with my numbered images, and out pops my movie. No mess. No resizing. ... Perhaps this doesn't exist (or is called QuickTime Pro?)

    Read the article

  • Monitor for HD video editing

    - by Kato
    I have been researching for days and nights for a good monitor to buy for a Mac Pro with an ATI Radeon 2600 XT (256 MB). It will be used extensively for HD video editing (1080p) and photo editing, and likely also digital/3D animation next year (a lot of FCP + CS4). I am a student, so money is a little bit of an issue, but I want something that I'll be able to use semi-professionally after I'm done school, and I am willing to finance something if it is worth the cost. I'm HOPING for something under $1000 though. The IPS UltraSharps from Dell seem to be getting good reviews from other video editors. Accurate colour correction is a concern for me (hopefully something that covers the Adobe RGB gamut), as well as a decent response time, HD resolutions, and a DVI port. Also something with good gradient/definition in black areas, as this is difficult for editing on most LCDs. 1:1 pixel mapping, brightness, good DVD playback, etc. Hopefully this is not impossible to find for under $2000!

    Read the article

  • Can't Use Switchable Graphics

    - by Rev
    I can't get the switchable graphics cards working on my Acer 4820TG laptop. It came with Windows 7 installed, but I've installed the RTM build of Windows 8. I have to keep the "Graphics" setting in the BIOS set to "Discrete"; if it's set to "Switchable", the screen turns black after the boot animation completes. The machine has an Intel graphics chip and an AMD Mobility Radeon HD 5650, and I know that the Intel card is working. I've seen it work a couple of times with Windows 8 installed, but for some reason it just doesn't anymore. I also know that it's not broken, because when I plug in a secondary monitor and move the cursor to the right, I can see it on that monitor. Are there drivers or something else I need to install?

    Read the article

  • Windows Vista stuck on crcdisk.sys

    - by hristo
    Hello everyone, I'm here with some Vista-trouble. After a cigarette-break I returned to my desk to find my PC with a black screen, and after failing to wake it up did a forced restart. I haven't been installing any hardware or software for quite a while. Ever since, Vista gets stuck on the animated 'Microsoft Corporation' progress bar with the animation still running and no hard disk activity. With an F8 and safe-mode + prompt I saw that the boot gets stuck at crcdisk.sys. I'm running Vista64 SP1. What could the problem be? Any advice?

    Read the article

  • Windows 7 won't restart with SSD

    - by JL
    I have this annoying problem that I just can't seem to fix. Recently I upgraded my main boot drive to an OCZ Vertex Turbo (60 GB), which is an SSD, and installed Windows 7 64-bit. The drive works and is really quick... blah blah blah. My problem is when I do a restart - for example, click Start, then the arrow next to Shutdown, then Restart. The computer shuts down and does a warm reboot, but it gets stuck where it says loading Windows, just before the animation starts. So what I have to do is physically press the reset switch on the computer; then it restarts and brings up a menu which offers to launch Windows Startup Repair, where I select "Start Windows Normally". After this it boots in normally, but it's damn annoying. Any ideas why this is happening and what I can do to fix it?

    Read the article

  • BSOD on windows 7 with SSD during boot after improper shutdown

    - by Bob
    I get a BSOD on Windows 7 with an SSD during boot after an improper shutdown (while the Windows logo animation is moving). The computer restarts immediately after the BSOD, and Windows proposes to launch Startup Repair (if I do it, it takes about 5 minutes and fixes the problem: the computer starts normally). However, after any new improper shutdown I get the same problem again.

    Remarks:
    - If I unplug and re-plug the SSD while the system is shut down, I have the same problem.
    - If I reproduce the situation with the old HDD, I don't have the problem.
    - Previously I had a different problem (BSOD when waking up after sleep), which was fixed by installing drivers (ethernet, USB, graphics card).
    - I have run a RAM check and an SSD check and found no problems.
    - Starting in safe mode after an improper shutdown causes a BSOD while loading classpnp.sys.

    Configuration:
    - System: HP Compaq 8510p
    - SSD: OCZ Vertex 2 2.5"
    - Boot options: SATA native mode - Enabled, HDD translation mode - LBA-assisted

    Read the article

  • Why can't I install ANY operating system?

    - by Ranhiru
    I tested the memory and I tested the hard disk and everything is fine! Mini XP booted from Hiren's Boot Disk works. I tried Windows 7, Windows XP SP3 and even Ubuntu 10.04. All operating systems boot up to the point where they have loaded the files necessary to start the OS, and then the laptop resets:

    - Windows 7: only up to the point where the Windows loading animation is playing.
    - Ubuntu: only up to the point where loading is done; it shows a small screen, then goes to a blinking cursor, and that's it... it keeps blinking and sometimes resets the computer.
    - Windows XP SP3: loads all the drivers and everything, and then at the point where I should be able to install the OS, it simply resets the laptop.

    I have used the word "reset" instead of "restart" because the laptop completely turns off and only then turns back on. Any solutions would be greatly appreciated.

    Read the article

  • Having trouble with EPSON Stylus SX130 scanner

    - by pinouchon
    I am trying to scan files with an EPSON Stylus SX130 on Windows 7 x64. When I plug in the printer, Windows automatically finds drivers for printing, but not for scanning. So I went to the manufacturer's website, selected Windows 7 64-bit and downloaded the drivers for EPSON Scan. The install works fine, but when I try to scan a file (e.g. from Paint or Windows Fax and Scan), a message pops up and the application freezes: the progress bar plays its animation forever and the application does not respond. I then have no choice but to kill the application with the Task Manager. Do you have an idea of what's going on? How can I fix the problem, i.e. how do I get the scanner to actually scan files without freezing? I tried installing the driver from the CD that came in the printer package and got the same problem. The only help I have found so far (the error seems somewhat related) is this: install an XP virtual machine and run it in there.

    Read the article

  • Macbook Pro (SantaRosa) internal display not detected by graphics card, external monitor OK

    - by BLAU
    My MacBook Pro (2.4 GHz, Santa Rosa with the infamous NVIDIA card) acts strangely. It shows the normal gray screen with the Apple logo and animation flawlessly during startup, but the internal display goes black without rendering anything at all once everything has loaded (shining a light on the display shows nothing). If an external monitor is connected through the DVI port, it remains black during startup and then shows the desktop while the internal display goes black. This happens both when booting Mountain Lion and Windows XP. I have checked "About This Mac" and only the external display is listed. The same is the case if I use the NVIDIA control panel in Windows XP. My questions: Is this a hardware problem, or is it related to software, maybe even firmware? What controls the display during startup - the graphics card or something else?

    Read the article

  • Extract part of an image from a big image

    - by rajat
    I have 6 images, and each image has a certain section that I want to save as a separate image. The problem is that it has to be accurate, because I am doing some animation using the sub-images, so they should match exactly. So I want to accurately extract that part from each of the 6 images. I can't do it using an image editor in which I have to draw the bounding box myself, because that will not be accurate. Is there any program that lets me do this by defining a box using numerical values? PS: I don't want to write a MATLAB or OpenCV program for this.

    Read the article

  • Tool to make screenshot-based "screencast"

    - by liori
    Hello, I'd like to make a simple animation: some screenshots with added text. Something like screencast, but simpler, no audio... and hopefully very quick to produce--this is my main requirement. Googling for "screencasts" gives me full-blown tools to record video and audio, and I don't need them. I can make screenshots manually, then add text in GIMP... but maybe there is something easier, quicker? Very preferably something that works on Linux.

    Read the article

  • Looking for WYSIWYG tool to create and edit HTML5 based presentations (slides)

    - by peterp
    There are a lot of different implementations of HTML5-based slide presentations out there, like Google Slides or S5. But all that I have seen so far seem to need a person who is able (and willing) to read and write HTML code. My company still uses PowerPoint, but some people are quite unhappy about its limitations, e.g. the lack of possibilities to embed animation (other than just appear/disappear) without using Flash. I'd love to suggest a state-of-the-art solution based on HTML5, but I don't even need to think about suggesting a solution where the project people need a techie to add or edit the content of a slide. I am not looking for an editor for non-techies to create complex HTML5/JavaScript-based animations - of course, those should be done by a developer. Basically, non-techies should be capable of doing the stuff they are doing in PowerPoint now. Thanks in advance for your suggestions, Peter

    Read the article

  • Windows 8 store freezes when searching for updates

    - by Roger
    I'm running Windows 8 Pro (x64) on my home desktop. Everything is working well (very very differently, but well) except for the Store application. Several days ago, I got an update notification that listed 8 application updates. I hit the link to install the updates, but only six of them installed. Now I have Updates(2) showing in the store's notification area, and whenever I try to look for the updates the Store goes into "searching" mode (the circles/dots animation) and never finishes the search. Is there any way that I can reset the Store app? I have tried re-downloading it with no success.

    Read the article

  • HDMI and DVI connected monitors not in dual view on Windows 7 (Intel onboard Gigabyte)

    - by Nux
    I have an HDMI to DVI cable which is connected to one monitor (HDMI on the PC side), and a DVI-DVI cable connected to the other monitor. This seems to work up until the system is fully loaded (the startup logo animation is shown on both screens). When and after I log in, only one monitor is active (the HDMI-DVI one), and I'm unable to detect the other monitor (DVI-DVI). Any ideas what might be wrong with the setup? Also, when I uninstall the Intel graphics driver both screens are on, but only duplication works. GA-Z68A-D3H-B3, Intel Z68 chipset, with a fresh VGA driver (15.26.1.64.2618).

    Read the article

  • ubuntu 12.04 - keep getting "Server not found" for some websites

    - by android developer
    Ever since last week I've noticed that many websites cannot be accessed, and it doesn't matter whether I use Firefox or Chromium as the web browser. An example of such a website is: http://tutorials-android.blogspot.co.il/2011/05/layout-animation-in-android.html - all I get is a "Server not found" error page. Sometimes after a few refreshes it works just fine. I've checked it on a Windows machine that is connected to the exact same LAN, and the website is shown just fine. I've also checked the /etc/hosts file and it doesn't contain anything suspicious. What is going on? How can I fix it?

    Read the article

  • Mac OS X Time Machine Restore - Failure?

    - by Rabe
    I've got a late 2009 MacBook Pro 13" and yesterday I replaced the original hard drive (new drive: Hitachi; same form factor). I chose "Restore from Time Machine" in the options menu of the Snow Leopard install discs and waited several hours for it to complete. After a reboot the Mac shows a white screen with the Apple logo and the tiny loading animation, and nothing happens. After another restart, the same. Now I can't boot from CD using the C key and I have no idea how to fix this problem.

    Read the article

  • PowerPoint 2007 animated slides are only partially converted to PDF

    - by Tim
    I have recently encountered a problem with PowerPoint 2007. When I use "Save as PDF/XPS" to create a PDF version of my presentation, some slides are only partially included in the resulting PDF file (the original post illustrates this with before/after screenshots of a slide losing most of its elements). So far, I have only encountered this with slides that contain animation elements, but which parts of the elements remain in the PDF version appears to have nothing to do with the order in which the animated elements appear, so that might just be a coincidence. When viewing the affected slides in Acrobat Reader, it complains that the file contains invalid elements and that I should complain to whoever generated the PDF file... Perhaps it has something to do with Office 2007 Service Pack 3, because these problems started only after it had been installed. Has anyone noticed something similar? Is there a workaround?

    Read the article

  • 2D animations for Android, iOS, web

    - by David Phillips
    Working on the Windows 7 platform, I'm interested in creating an app that could run on Android devices (phone or tablet), iOS devices (iPhone, iPad), and the web. The first target would be Android. Do you remember those old Arthur Murray footstep diagrams for teaching dancing? I want to do the same thing, but animate the steps, so the user can play them all, as well as step forward or backward through them at their own pace. Perhaps one could generate an animated GIF and then play it on each platform. So there would seem to be two parts to this:

    1) A good way to generate the animation images. I can, for example, see using SketchUp - something I know - for this. But is there something better that would be worth investing in?

    2) How to play the result, including options for playback speed control, and even forward/backward stepping? Ideally a single player, or one easily adaptable to the three different OS platforms.

    Read the article
