Search Results

Search found 14034 results on 562 pages for 'interface inheritance'.

  • Assign a secondary IP address to a Windows machine using DHCP

    - by IndigoFire
    Is it possible to configure dhcpd (on a Linux box) to assign two separate IP configurations to a Windows PC? Right now I've configured the two IP addresses manually, and it does exactly what's needed, but I can't figure out how to achieve the same thing with DHCP. For example, is it possible to set up a virtual interface that piggy-backs onto the first interface and gets its own configuration? Alternatively, is it possible to run a script upon receiving IP values from DHCP that would then configure the secondary IP?

  • How to get a Cisco VPN 3000 config as text?

    - by Steven
    We would like to get the configuration of a Cisco VPN 3000-series device as a text file so we can look at the actual configuration, but apparently the device has no CLI, only a graphical or menu-driven interface. Is there a way to get access to the complete config as text, and to copy and paste it into a text file?

  • Two DHCP Servers, Block Clients for one of them?

    - by Rilindo
    I am building out a kickstart network that resides on a different VLAN and uses its own DHCP server. For some reason, my kickstart clients keep getting assigned IPs from my primary DHCP server.

    The way I have it set up: the primary DHCP server is on this router: 192.168.15.1. Connected to that DHCP server is a switch with the IP of 192.168.15.2. My kickstart (Scientific Linux) server is connected to that switch on two ports:

    Port 2 - where the kickstart server communicates with the rest of the production network via eth0. The IP assigned to the server on that interface is 192.168.15.100 (on eth0). The details are:

        Interface: eth0
        IP: 192.168.15.100
        Netmask: 255.255.255.0
        Gateway: 192.168.15.1

    Port 7 - has its own VLAN ID (along with port 8). The kickstart server is connected to that port with the IP of 172.16.15.100 (on eth1). Again, the details are:

        Interface: eth1
        IP: 172.16.15.100
        Netmask: 255.255.255.0
        Gateway: none

    The kickstart server runs its own DHCP server and assigns addresses over eth1. Most of the kickstarts are built over the kickstart VLAN through port 8. To prevent the kickstart DHCP server from assigning addresses over the production network, I have the route set up like so:

        route add -host 255.255.255.255 dev eth1

    At this point, the clients keep getting assigned IPs from the 192.168.15.1 DHCP server, so I need to figure out a way to block client requests from reaching that DHCP server. It should be noted that I also build KVM hosts on the kickstart server, and I need those KVMs to have the ability to get DHCP leases from the 192.168.15.1 DHCP server via the bridge network once I've resolved this particular problem. (Currently, they communicate via NAT.)

    So what can be done to resolve this - iptables, or some sort of routing I need to put in? I tried to limit requests via iptables on that interface, allowing DHCP requests for the 172.16.15.x network:

        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 69 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 69 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 68 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 68 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 67 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 67 -j ACCEPT

    and rejecting assignments on eth1 from the 192.168.15.x network:

        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 69 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 69 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 68 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 68 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 67 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 67 -j REJECT

    Nope. :(

  • What is the correct network configuration for a devStack VM (virtualbox)?

    - by Olivier
    Usually when I set up a new Ubuntu VM, I keep eth0 in NAT mode to get internet access and add an eth1 interface in HostOnly mode so that I can ssh in. But following this devStack guide, Running a Cloud in a VM, it looks like it tries to use eth0 as the public interface (the install got stuck because eth0 lost its network connection). I know an OpenStack setup usually requires two NICs, so I'm wondering what the correct configuration for my VM is.
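
    For reference, a minimal sketch of the two-NIC setup described above, done from the host with VBoxManage (the VM and host-only adapter names are assumptions):

        # NIC 1: NAT, becomes eth0 in the guest (internet access)
        # NIC 2: host-only, becomes eth1 in the guest (ssh from the host)
        VBoxManage modifyvm "devstack-vm" --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0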

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green, but it's also important to have it red or green for the right reasons. While for me, it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days.

    In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in. To make this clear (at least I hope it becomes clear...), I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'Doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about Test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about this, and I myself also contributed to it (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious...

    So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

    Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

    The three parts of a software component

    Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts described below.

    First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
    At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments.

    The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice.

    The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article...

    An example

    To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here...

    The requirement

    As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "intf or not" doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

        namespace Calculator
        {
            /// <summary>
            /// Defines a very simple calculator component for demo purposes.
            /// </summary>
            public interface ICalculator
            {
                /// <summary>
                /// Gets the result of the last successful operation.
                /// </summary>
                /// <value>The last result.</value>
                /// <remarks>
                /// Will be <see langword="null" /> before the first successful operation.
                /// </remarks>
                double? LastResult { get; }

            } // interface ICalculator

        } // namespace Calculator

    So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

    - Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.
    - In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.
    - I normally use an IoC container or some sort of self-written provider-like model in my architecture.
    In either case, I need my components defined via service interfaces anyway. I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

    The 'Red' (pt. 1)

    First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template) and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

        namespace Calculator.Test
        {
            [TestFixture]
            public class CalculatorTest
            {
                private readonly ServiceContainer container = new ServiceContainer();

                [Test]
                public void CalculatorLastResultIsInitiallyNull()
                {
                    ICalculator calculator = container.GetService<ICalculator>();

                    Assert.IsNull(calculator.LastResult);
                }

            } // class CalculatorTest

        } // namespace Calculator.Test

    This is basically the executable formulation of (part of) what the interface definition states.

    Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be...) A compiler error tells me: your code is incorrect, but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

    Back to our Calculator. When I execute the above test with R#, the Gallio plugin tells me that the test is red for the wrong reason: there's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: first, create an ICalculator-derived type; next, implement the interface members; and finally, move the new class to its own file. So far my 'work' was six mouse clicks long; the only thing that's left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

        using System;
        using LinFu.IoC.Configuration;

        namespace Calculator
        {
            [Implements(typeof(ICalculator))]
            internal class Calculator : ICalculator
            {
                public double? LastResult
                {
                    get
                    {
                        throw new NotImplementedException();
                    }
                }
            }
        }

    Back to the test fixture, we have to put our IoC container to work:

        [TestFixture]
        public class CalculatorTest
        {
            #region Fields

            private readonly ServiceContainer container = new ServiceContainer();

            #endregion // Fields

            #region Setup/TearDown

            [FixtureSetUp]
            public void FixtureSetUp()
            {
                container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
            }

            ...

    Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more...

    The 'Red' (pt. 2)

    Now the execution of the above test gives a different result: this time, the test outcome tells me that the method under test is called.
    And this is the point where Derick and I seem to have somewhat different views on the subject: of course, the test still is worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the 'Green' part...

    The 'Green'

    Making the test green is quite trivial. Just make LastResult an automatic property:

        [Implements(typeof(ICalculator))]
        internal class Calculator : ICalculator
        {
            public double? LastResult { get; private set; }
        }

    One more round...

    Now on to something slightly more demanding (cough...). Let's state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

        } // interface ICalculator

    A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window, which renders them as formatted documentation. Apart from that, I'm heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder) and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking...)

    Back to our Calculator again: two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

        } // class Calculator

    As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

        [Test]
        [Row(-0.5, 2)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

    As you can see, I'm using a data-driven unit test method here, mainly for these two reasons:

    - Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one.
    - From the test report, you can see that the argument values are explicitly printed out.
    This can be a valuable documentation feature even when everything is green: one can quickly review exactly what values were tested - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

    Back to our Calculator development: the current test result tells us that we're red again, because there is not yet an implementation... Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here's the test and the method implementation at the end of the second cycle:

        // in CalculatorTest:

        [Test]
        [Row(-0.5, 2)]
        [Row(295, -123)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

        // in Calculator:

        public double Add(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }

            throw new NotImplementedException();
        }

    So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

        [Test]
        [Row(1, 1, 2)]
        public void TestAdd(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

    Again, I'm regularly using row-based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing:

    - In the course of refining a method, it's very likely to come up with additional test cases. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test.
    - Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)

    So your test method might look something like this in the end:

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 2)]
        [Row(0, 999999999, 999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, double.MaxValue)]
        public void TestAdd(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

    And this will produce a clearly laid-out HTML report with Gallio. Not bad for the amount of work we invested in it, huh? There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review...

    The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here, it's trivial enough and brings nothing new... (a minimal sketch of such a test follows below).
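
    For completeness, here is a sketch of what that test could look like - this is an editorial addition, not part of the original article, and it simply mirrors the TestSubtractGivesExpectedLastResult method shown further down:

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 2)]
        public void TestAddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, calculator.LastResult);
        }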

    And finally: Refactor (for the right reasons)

    To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here's the code (tests and production):

        // CalculatorTest.cs:

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 0)]
        [Row(0, 999999999, -999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, -double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
        public void TestSubtract(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Subtract(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 0)]
        [Row(0, 999999999, -999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, -double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
        public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            calculator.Subtract(operand1, operand2);

            Assert.AreEqual(expectedResult, calculator.LastResult);
        }

        ...

        // ICalculator.cs:

        /// <summary>
        /// Subtracts the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the subtraction.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Subtract(double operand1, double operand2);

        ...

        // Calculator.cs:

        public double Subtract(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }

            return (this.LastResult = operand1 - operand2).Value;
        }

    Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

        using System;
        using LinFu.IoC.Configuration;

        namespace Calculator
        {
            [Implements(typeof(ICalculator))]
            internal class Calculator : ICalculator
            {
                #region ICalculator

                public double? LastResult { get; private set; }

                public double Add(double operand1, double operand2)
                {
                    ThrowIfOneOperandIsInvalid(operand1, operand2);

                    return (this.LastResult = operand1 + operand2).Value;
                }

                public double Subtract(double operand1, double operand2)
                {
                    ThrowIfOneOperandIsInvalid(operand1, operand2);

                    return (this.LastResult = operand1 - operand2).Value;
                }

                #endregion // ICalculator

                #region Implementation (Helper)

                private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
                {
                    if (operand1 < 0.0)
                    {
                        throw new ArgumentException("Value must not be negative.", "operand1");
                    }

                    if (operand2 < 0.0)
                    {
                        throw new ArgumentException("Value must not be negative.", "operand2");
                    }
                }

                #endregion // Implementation (Helper)

            } // class Calculator

        } // namespace Calculator

    But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

        "STOP! Hold on a second... before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know... I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though... if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add?"

    Derick immediately answers his own question:

        "So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc, then you should refactor it."

    I couldn't state it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric), you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
    So code quality/readability really makes a HUGE difference for me - sometimes it can even be the difference between project success and failure...

    Conclusions

    The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

    - Test-driven development is the normal, natural way of writing software; code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD. (Yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that stood the test of time and causes low maintenance costs - that was produced code-first...)
    - It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction...) The test code serves 'only' to make the production code work. But it's the number of delivered features that solely counts at the end of the day - no matter how much test code you wrote or how good it is.

    With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way...). As a result, I consider a ratio of about 3-5:1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. The entire software development profession is, historically seen, very young - only at its very beginning - and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary...)

    Although the above might look like very much unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all of the above is a matter of minutes, sometimes seconds (while writing this post took hours and days...). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - to 'save' a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have quite often seen and heard exactly that...). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows...

    The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess it's about 30% more time compared to developing the 'traditional' (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development...

    But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g. if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences this might have...

    Some last words

    First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion... I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved to be very "good enough" in my practical experience. But of course, I'm open to suggestions here...

    My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

  • PerformanceCounter.NextValue hangs on some machines.

    - by Poma
    I don't know why, but many computers hang on the following operation:

        void Init()
        {
            net1 = new List<PerformanceCounter>();
            net2 = new List<PerformanceCounter>();
            foreach (string instance in new PerformanceCounterCategory("Network Interface").GetInstanceNames())
            {
                net1.Add(new PerformanceCounter("Network Interface", "Bytes Received/sec", instance));
                net2.Add(new PerformanceCounter("Network Interface", "Bytes Sent/sec", instance));
            }
        }

        // Called once per second
        void UpdateStats()
        {
            Status.Text = "";
            for (int i = 0; i < net1.Count; i++)
                Status.Text += string.Format("{0}/{1} Kb/sec; ", net1[i].NextValue() / 1024, net2[i].NextValue() / 1024);
        }

    On some computers the program hangs completely on the first call of UpdateStats(); others experience 100% CPU load, but the program works (slowly). Other counters, like new PerformanceCounter("Processor", "% Processor Time", "_Total"), seem to work fine. Any suggestions why that is?
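
    One API detail worth noting as context (an aside, not necessarily the cause of the hang): rate counters such as "Bytes Received/sec" are computed from two samples, so the very first NextValue() call has no baseline and returns 0. A minimal sketch of the usual sampling pattern:

        using System.Diagnostics;
        using System.Threading;

        class Sampler
        {
            static void Main()
            {
                string instance = new PerformanceCounterCategory("Network Interface").GetInstanceNames()[0];
                var counter = new PerformanceCounter("Network Interface", "Bytes Received/sec", instance);
                counter.NextValue();              // first sample only establishes a baseline (returns 0)
                Thread.Sleep(1000);               // wait at least one sampling interval
                float rate = counter.NextValue(); // now yields a meaningful bytes/sec value
            }
        }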

  • INetCfgComponent::RaisePropertyUi arguments

    - by Soo Wei Tan
    I'm trying to do some COM interop and attempting to invoke the INetCfgComponent::RaisePropertyUi method. I've gotten to the point where I can enumerate devices and get a valid INetCfgComponent for the adapter that I want to display the UI for. However, I'm a COM newbie (let alone COM interop), so I have no idea what the third argument to RaisePropertyUi() is meant to be. I've tried passing in the INetCfgComponent object that I have, but that just results in an InvalidCastException. MSDN has the following to say about the argument:

        "Pointer to the IUnknown interface. RaisePropertyUi retrieves from IUnknown the interface of the context in which to display a network component's property sheet. RaisePropertyUi uses this interface to restrict the display of the property sheet to the context of a connection."

  • Consumed WCF service returns void although return type (& value) specified

    - by Abs
    I have a WCF service that I am attempting to connect to via a console application for testing (although I will move to WPF for the final interface). I have generated the proxy and added the service reference to my project in Visual Studio, and I can see all the methods I have created in my WCF interface:

        SupportStaffServiceClient client = new SupportStaffServiceClient("WSHttpBinding_ISupportStaffService");
        client.myMethod(message);

    However, when I call a method which in the WCF interface is specified as returning a value, the method returns void in the console application:

        client.getMethod(message);

    The WCF service method is definitely returning a message; I'm just unsure why the client cannot "see" the return value.
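
    For comparison, a minimal sketch (with hypothetical names, not the asker's actual contract) of how an operation with a return value is declared on the service side; the generated proxy's signature mirrors the [OperationContract] declaration:

        using System.ServiceModel;

        [ServiceContract]
        public interface ISupportStaffService
        {
            [OperationContract]
            string GetReply(string message);   // a non-void return type here becomes a non-void proxy method
        }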

  • Verifying method with array passed by reference using Moq

    - by kaa
    Given the following interface:

        public interface ISomething
        {
            void DoMany(string[] strs);
            void DoManyRef(ref string[] strs);
        }

    I would like to verify that the DoManyRef method is called and passed any string array as the strs parameter. The following test fails:

        public void CanVerifyMethodsWithArrayRefParameter()
        {
            var a = new Mock<ISomething>().Object;
            var strs = new string[0];
            a.DoManyRef(ref strs);

            var other = It.IsAny<string[]>();
            Mock.Get(a).Verify(t => t.DoManyRef(ref other));
        }

    While the following, which does not require the array to be passed by reference, passes:

        public void CanVerifyMethodsWithArrayParameter()
        {
            var a = new Mock<ISomething>().Object;
            a.DoMany(new[] { "a", "b" });

            Mock.Get(a).Verify(t => t.DoMany(It.IsAny<string[]>()));
        }

    I am not able to change the interface to eliminate the by-reference requirement.
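
    A related sketch, based on the assumption that Moq matches ref arguments by instance rather than by matcher: verifying with the very same array that was passed does succeed, which can be an acceptable workaround in a test:

        var mock = new Mock<ISomething>();
        var strs = new string[0];
        mock.Object.DoManyRef(ref strs);

        mock.Verify(t => t.DoManyRef(ref strs));  // same instance, so the ref argument matches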

  • CUDA: How to reuse kernels in multiple files (for unit testing)

    - by zenna
    How can I go about reusing the same kernel without getting fatal linking errors due to defining the symbol multiple times? In Visual Studio I get "fatal error LNK1169: one or more multiply defined symbols found". My current structure is as follows:

    - Interface.h has an extern interface to a C function: myCfunction() (a la the C++ integration SDK example).
    - Kernel.cu contains the actual __global__ kernels and is NOT included in the build: __global__ my_kernel()
    - Wrapper.cu includes Kernel.cu and Interface.h and calls my_kernel<<<...>>>

    This all works fine. But if I add another C function in another file which also includes Kernel.cu and uses those kernels, I get the errors. So how can I reuse the kernels in Kernel.cu among many C functions in different files? The purpose of this, by the way, is unit testing, and integrating my kernels with CppUnit. If there is no way to reuse kernels (there must be!), then other suggestions for unit testing kernels within my existing CppUnit framework would be appreciated. Thanks, Zenna
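
    One common way around this, sketched with hypothetical file names (this is the general C/C++ linkage technique, not a fix confirmed by the question): let exactly one .cu file define the kernel and expose it to every other translation unit through an ordinary launcher function declared in a header:

        // kernel_launchers.h - safe to include from any file
        void launch_my_kernel(int blocks, int threads);

        // kernel.cu - the ONLY file that defines (or includes) the kernel
        __global__ void my_kernel() { }

        void launch_my_kernel(int blocks, int threads)
        {
            my_kernel<<<blocks, threads>>>();
        }

    Each unit test file then calls launch_my_kernel() and never includes Kernel.cu itself, so the kernel symbol is defined exactly once.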

  • Hibernate MappingException

    - by Marcus
    I'm getting this Hibernate error:

        org.hibernate.MappingException: Could not determine type for: a.b.c.Results$BusinessDate, for columns: [org.hibernate.mapping.Column(businessDate)]

    The class is below. Does anyone know why I'm getting this error?

        @XmlAccessorType(XmlAccessType.FIELD)
        @XmlType(name = "", propOrder = { "businessDate" })
        @XmlRootElement(name = "Results")
        @Entity(name = "Results")
        @Table(name = "RESULT")
        @Inheritance(strategy = InheritanceType.JOINED)
        @Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
        public class Results implements Equals, HashCode {

            @XmlElement(name = "BusinessDate", required = true)
            protected Results.BusinessDate businessDate;

            public Results.BusinessDate getBusinessDate() {
                return businessDate;
            }

            public void setBusinessDate(Results.BusinessDate value) {
                this.businessDate = value;
            }

            @XmlAccessorType(XmlAccessType.FIELD)
            @XmlType(name = "", propOrder = { "raw", "display" })
            @Entity(name = "Results$BusinessDate")
            @Table(name = "BUSINESSDATE")
            @Inheritance(strategy = InheritanceType.JOINED)
            public static class BusinessDate implements Equals, HashCode {
                ....
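
    A hedged guess at the direction of a fix (an assumption, not derived from the question): the exception means Hibernate found no mapping for the businessDate field, and a field whose type is itself an @Entity needs an explicit association annotation, along the lines of:

        @OneToOne(cascade = CascadeType.ALL)
        @JoinColumn(name = "BUSINESSDATE_ID")   // hypothetical FK column name
        protected Results.BusinessDate businessDate;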

  • One admin for multiple sites

    - by valya
    I have two sites with different SITE_IDs, but I want to have only one admin interface for both sites. I have a model which is just an extended FlatPage:

        # models.py
        class SFlatPage(FlatPage):
            currobjects = CurrentSiteManager('sites')
            galleries = models.ManyToManyField(Gallery)
            # etc

        # admin.py
        class SFlatPageAdmin(FlatPageAdmin):
            fieldsets = None

        admin.site.register(SFlatPage, SFlatPageAdmin)
        admin.site.unregister(FlatPage)

    I don't know why, but the admin interface only shows pages for the current site. On http://site1.com/admin/ I see flatpages for site1, on http://site2.com/admin/ I see flatpages for site2. But I want to see all pages in the http://site1.com/admin/ interface! What am I doing wrong?
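
    A possible culprit worth checking (an assumption): the Django admin queries the model's default manager, and a CurrentSiteManager silently filters everything to the current SITE_ID. Declaring a plain manager first keeps the unfiltered one as the default:

        class SFlatPage(FlatPage):
            objects = models.Manager()                 # default manager: unfiltered, used by the admin
            currobjects = CurrentSiteManager('sites')  # opt-in, per-site queries only
            galleries = models.ManyToManyField(Gallery)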

  • InvalidCastException when creating an instance using assembly.CreateInstance

    - by Yossi Dahan
    I'm looking for an explanation for the following. I have an assembly I'm loading using:

        Assembly assembly = Assembly.LoadFrom(filename);

    I then loop over all the types in the assembly and wish to find out if a type implements a particular interface, and if so, I want an instance of that type. I've tried several things which did not work, but when I fell back to the most basic (and probably inefficient) way, I realised there's something more fundamental I don't understand:

        foreach (Type t in assembly.GetTypes())
        {
            foreach (Type i in t.GetInterfaces())
            {
                if (i.FullName == pluginInterfaceType.FullName)
                {
                    object o = assembly.CreateInstance(t.ToString());
                    IInterface plugin = (IInterface)o;

    That last line causes an InvalidCastException, despite the fact that the type created definitely implements that interface. Furthermore, if I use Activator.CreateInstance instead of Assembly.CreateInstance (which I don't want to do), casting to the interface works just fine.
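
    As an aside, a minimal sketch of the interface check without string comparison - Type.IsAssignableFrom expresses "t implements the interface" directly, but note that it (like a cast) only succeeds when both sides refer to the interface type from the same loaded assembly, which is exactly what can differ between assembly load contexts:

        foreach (Type t in assembly.GetTypes())
        {
            if (pluginInterfaceType.IsAssignableFrom(t) && !t.IsAbstract)
            {
                object o = assembly.CreateInstance(t.FullName);
                // the cast is only safe if pluginInterfaceType came from the same load context
            }
        }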

  • Adding C++ Object to Objective-C Class

    - by Winder
    I'm trying to mix C++ and Objective-C. I've made it most of the way, but would like to have a single interface class between the Objective-C and C++ code. Therefore I would like to have a persistent C++ object in the ViewController interface. This fails by forbidding the declaration of 'myCppFile' with no type:

        #import <UIKit/UIKit.h>
        #import "GLView.h"
        #import "myCppFile.h"

        @interface GLViewController : UIViewController <GLViewDelegate>
        {
            myCppFile cppobject;
        }
        @end

    However, this works just fine in the .mm implementation file (it doesn't do what I want, because I want cppobject to persist between calls):

        #import "myCppFile.h"
        @implementation GLViewController

        - (void)drawView:(UIView *)theView
        {
            myCppFile cppobject;
            cppobject.draw();
        }
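
    One common pattern for this situation (a sketch, not the only solution): keep only a pointer in the header and hide the C++ type from plain Objective-C translation units, so the object can persist as an instance variable:

        // GLViewController.h
        #ifdef __cplusplus
        #import "myCppFile.h"
        #else
        typedef struct myCppFile myCppFile;   // opaque to non-C++ files that import this header
        #endif

        @interface GLViewController : UIViewController <GLViewDelegate> {
            myCppFile *cppobject;   // persists between calls; new it in init, delete it in dealloc
        }
        @end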

  • Problem with persisting interface collection at design time in winforms, .net

    - by Jules
    The easiest way to explain this problem is to show you some code:

        Public Interface IAmAnnoyed
        End Interface

        Public Class IAmAnnoyedCollection
            Inherits ObjectModel.Collection(Of IAmAnnoyed)
        End Class

        Public Class Anger
            Implements IAmAnnoyed
        End Class

        Public Class MyButton
            Inherits Button

            Private _Annoyance As IAmAnnoyedCollection
            <DesignerSerializationVisibility(DesignerSerializationVisibility.Content)> _
            Public ReadOnly Property Annoyance() As IAmAnnoyedCollection
                Get
                    Return _Annoyance
                End Get
            End Property

            Private _InternalAnger As Anger
            <DesignerSerializationVisibility(DesignerSerializationVisibility.Content)> _
            Public ReadOnly Property InternalAnger() As Anger
                Get
                    Return Me._InternalAnger
                End Get
            End Property

            Public Sub New()
                Me._Annoyance = New IAmAnnoyedCollection
                Me._InternalAnger = New Anger
                Me._Annoyance.Add(Me._InternalAnger)
            End Sub

        End Class

    And this is the code that the designer generates:

        Private Sub InitializeComponent()
            Dim Anger1 As Anger = New Anger
            Me.MyButton1 = New MyButton
            '
            'MyButton1
            '
            Me.MyButton1.Annoyance.Add(Anger1) ' Should be: Me.MyButton1.Annoyance.Add(Me.MyButton1.InternalAnger)
            '
            'Form1
            '
            Me.Controls.Add(Me.MyButton1)
        End Sub

    I've added a comment to the above to show how the code should have been generated. Now, if I dispense with the interface and just have a collection of Anger, then it persists correctly. Any ideas?

  • Using Unity and interfaces, how do I create a concrete class that implements IDisposable

    - by Ryan ONeill
    I have an interface (IDbAccess) for a database access class so that I can unit test it using Unity. It all works fine in Unity, and now I want to make the concrete database class implement IDisposable so that it closes the db connections. My problem is that Unity does not understand that my concrete class is disposable, because the interface (IDbAccess) cannot implement another interface. So how can I write code like this (pseudo-code), so that Unity is aware that it needs to dispose of the class as soon as I am done?

        using (var myDbAccessInstance = unity.Resolve<IDbAccess>())
        {
        }

    Thanks, Ryan
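
    For what it's worth, a minimal sketch of one way to express this - a C# interface can inherit from IDisposable, which makes every implementation disposable and enables the using block (the container variable and its registration are assumptions):

        public interface IDbAccess : IDisposable
        {
            // data-access members...
        }

        using (var myDbAccess = container.Resolve<IDbAccess>())
        {
            // work with myDbAccess; Dispose() runs when the block exits
        }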

  • Getting the battery current values for the Android Phone

    - by themangoman
    I am trying to collect power usage statistics for the Android G1 phone. I am interested in knowing the values of voltage and current, and then being able to collect statistics as reported in this PDF. I am able to get the value of the battery voltage by registering an intent receiver for the ACTION_BATTERY_CHANGED broadcast. But the problem is that Android does not expose the value of current through this SDK interface. One way I tried is via the sysfs interface, where I can view the battery current value from the adb shell using the following command:

        $ cat /sys/class/power_supply/battery/batt_current
        449

    But that too works only if the phone is connected via the USB interface. If I disconnect the phone, I see the value of batt_current as '0'. I am not sure why the value of current reported is zero - it should be more than zero, right? Any suggestions or pointers for getting the battery current value? Also, please correct me if I am wrong.

  • Setting HTML Text Element value

    - by Gpx
    Hi, in my C# WPF program I'm trying to set the value of an HTML text element which is defined like this:

        <input name="tbBName" type="text" id="tbBName" tabindex="1" />

    What I found about it and tried is:

        mshtml.HTMLDocument doc = (mshtml.HTMLDocument)webBrowser1.Document;
        mshtml.HTMLInputTextElement tbName = (mshtml.HTMLInputTextElement)doc.getElementsByName("tbBName");
        tbName.value = "Test";

    But I got this exception:

        Unable to cast COM object of type 'System.__ComObject' to interface type 'mshtml.HTMLInputTextElement'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{3050F520-98B5-11CF-BB82-00AA00BDCE0B}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).

    I know what it says, but I don't know which object I can use to access the textbox. Thanks for any answers.
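
    A hedged sketch of one likely reading of the error (an assumption): getElementsByName returns an element collection rather than a single element, so the cast has to target an item of the collection, e.g.:

        mshtml.HTMLDocument doc = (mshtml.HTMLDocument)webBrowser1.Document;
        mshtml.IHTMLElementCollection elements = doc.getElementsByName("tbBName");
        var tbName = (mshtml.IHTMLInputElement)elements.item("tbBName", 0);  // first matching element
        tbName.value = "Test";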

  • Is it possible to access a Silverlight control via the COM automation model?

    - by dlanod
    What I'm trying to do is access methods on a Silverlight control via the COM automation model. Theoretically it should be possible, as exposing the Silverlight control's methods as scriptable members exposes them through an IDispatch interface. I have been able to access the IDispatch interface through the automation model correctly, but when I attempt to call a method on the exposed interface via Invoke, it crashes. I was wondering if anyone knows whether this is expected behaviour, i.e. I'm violating some basic sandboxing requirement, or whether this should work and it is just something in my implementation that needs correcting? Cheers.

  • iPhone: Sharing protocol/delegate code

    - by pion
    I have the following protocol code snippets:

        @protocol FooDelegate;

        @interface Foo : UIViewController {
            id delegate;
        }
        ...

        @protocol FooDelegate
        ... // method 1
        ... // method 2
        @end

    Also, the following code which implements FooDelegate:

        @interface Bar1 : UIViewController {
            ...
        }

        @interface Bar2 : UITableViewController {
            ...
        }

    It turns out the implementation of FooDelegate is the same in both the Bar1 and Bar2 classes. I currently just copy the FooDelegate implementation code from Bar1 to Bar2. How do I structure/implement this in such a way that Bar1 and Bar2 share the same code in a single code base (not, as currently, with two copies)? Thanks in advance for your help.
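
    One possible structure (a sketch with hypothetical names, not the only approach): move the shared implementation into a plain NSObject that adopts FooDelegate, and let both controllers use an instance of it as Foo's delegate:

        @interface SharedFooDelegate : NSObject <FooDelegate>
        @end

        @implementation SharedFooDelegate
        // method 1 and method 2 implemented once here
        @end

        // In Bar1 and Bar2:
        //   fooHelper = [[SharedFooDelegate alloc] init];
        //   foo.delegate = fooHelper;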

  • Implicit typing of arrays that implement interfaces

    - by Sir Psycho
    Hi, I was under the impression that the C# compiler will implicitly type an array based on a type that all the elements can be implicitly converted to. The compiler generates "No best type found for implicitly-typed array":

        public interface ISomething {}
        public interface ISomething2 {}
        public interface ISomething3 {}

        public class Foo : ISomething { }
        public class Bar : ISomething, ISomething2 { }
        public class Car : ISomething, ISomething3 { }

        void Main()
        {
            var obj1 = new Foo();
            var obj2 = new Bar();
            var obj3 = new Car();

            var objects = new [] { obj1, obj2, obj3 };
        }

    I know that the way to correct this is to declare the type, like:

        new ISomething [] { obj1, ... }

    But I'm after some under-the-covers insight here :-) Thanks
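
    The under-the-covers rule, as specified for C# type inference: the best type must be the type of one of the elements themselves, and here the candidates are only Foo, Bar and Car - none of which the other two convert to. Typing a single element as the interface puts ISomething into the candidate set, so inference succeeds:

        var objects = new[] { (ISomething)obj1, obj2, obj3 };  // inferred as ISomething[]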

  • Inherit a parent class docstring as __doc__ attribute

    - by Reinout van Rees
    There is a question about inheriting docstrings in Python class inheritance, but the answers there deal with method docstrings. My question is how to inherit the docstring of a parent class as the __doc__ attribute. The use case is that Django rest framework generates nice documentation in the HTML version of your API based on your view classes' docstrings. But when inheriting a base class (with a docstring) in a class without a docstring, the API doesn't show the docstring. It might very well be that sphinx and other tools do the right thing and handle the docstring inheritance for me, but django rest framework looks at the (empty) .__doc__ attribute.

        class ParentWithDocstring(object):
            """Parent docstring"""
            pass

        class SubClassWithoutDocstring(ParentWithDocstring):
            pass

        parent = ParentWithDocstring()
        print parent.__doc__  # Prints "Parent docstring"

        subclass = SubClassWithoutDocstring()
        print subclass.__doc__  # Prints "None"

    I've tried something like super(SubClassWithoutDocstring, self).__doc__, but that also only got me None.
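
    One workaround sketch (an assumption, with a hypothetical metaclass name): since assigning to __doc__ on a new-style class raises an AttributeError in Python 2, the docstring has to be injected while the class is being created:

        class InheritDocstringMeta(type):
            def __new__(mcs, name, bases, attrs):
                if not attrs.get('__doc__'):
                    for base in bases:
                        if base.__doc__:
                            attrs['__doc__'] = base.__doc__  # copy before type() creates the class
                            break
                return super(InheritDocstringMeta, mcs).__new__(mcs, name, bases, attrs)

        class SubClassWithDocstring(ParentWithDocstring):
            __metaclass__ = InheritDocstringMeta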

  • iPhone - compiler conditional on header

    - by Mike
    I have a project that generates applications for two targets. One of the targets has to include one additional delegate protocol that should not be present in the other one. So I have created a macro in Xcode and declared the header like this:

        #ifdef TARGET_1
        @interface myViewController : UIViewController <UIScrollViewDelegate, UIPopoverControllerDelegate>
        #endif

        #ifdef TARGET_2
        @interface myViewController : UIViewController <UIScrollViewDelegate>
        #endif
        {
            .... bla bla ....
        }

    The problem is that Xcode does not like this "double" declaration of @interface and is giving me all sorts of problems. How can this be solved? Thanks for any help.
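
    One way out (a sketch of the usual preprocessor fix, assuming exactly one of the target macros is defined per build): use #else so the compiler only ever sees a single @interface line:

        #ifdef TARGET_1
        @interface myViewController : UIViewController <UIScrollViewDelegate, UIPopoverControllerDelegate>
        #else
        @interface myViewController : UIViewController <UIScrollViewDelegate>
        #endif
        {
            // ivars shared by both targets
        }
        @end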
