Search Results

Search found 1271 results on 51 pages for 'negative lookbehind'.


  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • How do I have 4GB of video memory with a 1GB video card?

    - by Tomas Spc Yaczik
    I ran the DirectX diagnostic tool to take a look at my graphics capabilities and it's telling me that the approximate video memory is clocked in at 4096MB. That doesn't make any sense; how is that possible? My first thought was that DirectX was inaccurate, so I looked at the graphics statistics of my computer in CPU-Z, which was included with the motherboard, and that's telling me I have "negative" 1988MB. Not only is that not 4096MB, but it's giving me a negative number, and I know for a fact that you can't have a negative amount of memory???!!! The only things I can think of that could be inflating my reported video memory are the PCIe 3.0 bus, or that the motherboard is somehow counting the onboard video chipset together with the video card; but even then that would only be 2GB of video memory, which would still have to be inflated by something. Any suggestions?

    Read the article

  • SEO impact on subdomain for full name and obscure ccTLD

    - by Dan Christian
    There have been a few questions on subdomains and their impact on SEO, mostly in comparison to subfolders. The closest question I've found is this question, but it still doesn't completely answer my query. I'm setting up a blog for 'Sam Smith'. It's imperative that the SEO is based around his full name, as he is a prominent blogger and his name is his value. All TLD variations of 'samsmith' (samsmith.com, samsmith.cc etc.) are taken. However, there has been the opportunity to register an obscure ccTLD for 'smith'. In regard to SEO value purely from the URL... 1) Will there be any negative SEO implications on searches for 'Sam Smith' when setting up the subdomain as 'sam.smith.' compared to a more regular 'samsmith.' domain? Will a search engine recognise the subdomain as the full name, as opposed to just 'smith'? 2) Are there any negative SEO implications with an obscure ccTLD? For instance, if Sam Smith was a prominent blogger in Canada with most of his audience based there, would there be any negative SEO if he had, for example, a .co ccTLD?

    Read the article

  • Algorithm to shift the car

    - by Simran kaur
    I have a track that can be divided into n tracks and a car as a GameObject. The track is transformed such that part of the track's width lies on the negative x axis and the rest on the positive. Requirement: one move should cross one track. On every move (left or right), I want the car to reach the exact centre of the next track on either side, i.e. left or right. Problem: because of the negative values, I am missing something somewhere that makes the car move to undesirable positions. The variable tracks is the number of tracks the whole track is divided into, and the variable dist is the total width of the complete track. My code, on left movement: if (Input.GetKeyDown (KeyCode.LeftArrow)) { if (this.transform.position.x < r.renderer.bounds.min.x + box.size.x) { this.transform.position = new Vector3 (r.renderer.bounds.min.x + Mathf.FloorToInt(box.size.x), this.transform.position.y, this.transform.position.z); } else { int tracknumber = Mathf.RoundToInt(dist - transform.position.x)/tracks; float averagedistance = (tracknumber*(dist/tracks) + (tracknumber-1)*(dist/tracks))/2; if(transform.position.x > averagedistoftracks) { amountofmovement = amountofmovement + (transform.position.x - averagedistance); } else { amountofmovement = amountofmovement - (averagedistance - transform.position.x); } this.transform.position = new Vector3 (this.transform.position.x - amountofmovement, this.transform.position.y, this.transform.position.z); } }
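
    A common way to sidestep the sign issues is to measure everything from the track's left edge instead of from world zero: compute the lane index from the offset to that edge, then place the car at the centre of the target lane. Below is a minimal sketch of that idea, assuming Unity; the field names (leftEdgeX, totalWidth, laneCount) are illustrative and not taken from the question.

        using UnityEngine;

        // Hypothetical helper component - a sketch, not the poster's code.
        public class LaneMover : MonoBehaviour
        {
            public float leftEdgeX;   // world x of the track's left border (may be negative)
            public float totalWidth;  // full width of the track ("dist" in the question)
            public int laneCount;     // number of sub-tracks ("tracks" in the question)

            // direction is -1 for a left move, +1 for a right move
            public void Move(int direction)
            {
                float laneWidth = totalWidth / laneCount;
                // current lane, measured from the left edge, so negative x needs no special case
                int lane = Mathf.FloorToInt((transform.position.x - leftEdgeX) / laneWidth);
                lane = Mathf.Clamp(lane + direction, 0, laneCount - 1);
                // the target position is the exact centre of that lane
                float targetX = leftEdgeX + (lane + 0.5f) * laneWidth;
                transform.position = new Vector3(targetX, transform.position.y, transform.position.z);
            }
        }

    Because the lane index is clamped, a move at either border simply keeps the car in the outermost lane.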

    Read the article

  • DataRelation Insert and ForeignKey

    - by Steve
    Guys, I have a winforms application with two DataGridViews displaying a master-detail relationship from my Person and Address tables. Person table has a PersonID field that is auto-incrementing primary key. Address has a PersonID field that is the FK. I fill my DataTables with DataAdapter and set Person.PersonID column's AutoIncrement=true and AutoIncrementStep=-1. I can insert records in the Person DataTable from the DataGridView. The PersonID column displays unique negative values for PersonID. I update the database by calling DataAdapter.Update(PersonTable) and the negative PersonIDs are converted to positive unique values automatically by SQL Server. Here's the rub. The Address DataGridView show the address table which has a DataRelation to Person by PersonID. Inserted Person records have the temporary negative PersonID. I can now insert records into Address via DataGridView and Address.PersonID is set to the negative value from the DataRelation mapping. I call Adapter.Update(AddressTable) and the negative PersonIDs go into the Address table breaking the relationship. How do you guys handle primary/foreign keys using DataTables and master-detail DataGridViews? Thanks! Steve EDIT: After more googling, I found that SqlDataAdapter.RowUpdated event gives me what I need. I create a new command to query the last id inserted by using @@IDENTITY. It works pretty well. The DataRelation updates the Address.PersonID field for me so it's required to Update the Person table first then update the Address table. All the new records insert properly with correct ids in place! Adapter = new SqlDataAdapter(cmd); Adapter.RowUpdated += (s, e) => { if (e.StatementType != StatementType.Insert) return; //set the id for the inserted record SqlCommand c = e.Command.Connection.CreateCommand(); c.CommandText = "select @@IDENTITY id"; e.Row[0] = Convert.ToInt32( c.ExecuteScalar() ); }; Adapter.Fill(this); SqlCommandBuilder sb = new SqlCommandBuilder(Adapter); sb.GetDeleteCommand(); sb.GetUpdateCommand(); sb.GetInsertCommand(); this.Columns[0].AutoIncrement = true; this.Columns[0].AutoIncrementSeed = -1; this.Columns[0].AutoIncrementStep = -1;
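
    Another common pattern, if the InsertCommand is written by hand instead of generated by SqlCommandBuilder, is to have the INSERT batch itself return the new key and let the adapter copy it back into the row. The snippet below is only a sketch under assumptions (simplified column list, SqlClient); it is not the poster's code:

        using System.Data;
        using System.Data.SqlClient;

        static class PersonAdapterSketch
        {
            public static SqlDataAdapter Build(SqlConnection connection)
            {
                var adapter = new SqlDataAdapter("SELECT PersonID, Name FROM Person", connection);

                // The SELECT runs in the same batch as the INSERT, so SCOPE_IDENTITY()
                // sees the new row; FirstReturnedRecord copies the value back into the DataRow.
                var insert = new SqlCommand(
                    "INSERT INTO Person (Name) VALUES (@Name); " +
                    "SELECT PersonID = CAST(SCOPE_IDENTITY() AS int);", connection);
                insert.Parameters.Add("@Name", SqlDbType.NVarChar, 100, "Name");
                insert.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;

                adapter.InsertCommand = insert;
                return adapter;
            }
        }

    As with the @@IDENTITY approach above, Person must be updated before Address so the DataRelation can cascade the real keys into the child rows.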

    Read the article

  • Flex bug?? Getting a messed-up stacked ColumnChart with type="100%"

    - by Nir
    I am trying to do a stacked Column chart with type="100%" and a mixture of positive and negative values. When all the values are positive, is functions well, but when negative numbers come to the game, it looks totally messed up. When I also look at Adobe documentation (look here), I see the following code for stacked column chart involving negative numbers: <?xml version="1.0"?> <!-- charts/StackedNegative.mxml --> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"> <mx:Script><![CDATA[ import mx.collections.ArrayCollection; [Bindable] public var expenses:ArrayCollection = new ArrayCollection([ {Month:"Jan", Profit:-2000, Expenses:-1500}, {Month:"Feb", Profit:1000, Expenses:-200}, {Month:"Mar", Profit:1500, Expenses:-500} ]); ]]></mx:Script> <mx:Panel title="Column Chart"> <mx:ColumnChart id="myChart" dataProvider="{expenses}" showDataTips="true"> <mx:horizontalAxis> <mx:CategoryAxis dataProvider="{expenses}" categoryField="Month" /> </mx:horizontalAxis> <mx:series> <mx:ColumnSet type="stacked" allowNegativeForStacked="true"> <mx:series> <mx:ColumnSeries xField="Month" yField="Profit" displayName="Profit" /> <mx:ColumnSeries xField="Month" yField="Expenses" displayName="Expenses" /> </mx:series> </mx:ColumnSet> </mx:series> </mx:ColumnChart> <mx:Legend dataProvider="{myChart}"/> </mx:Panel> </mx:Application> It works fine. But try to change: <mx:ColumnSet type="stacked" allowNegativeForStacked="true"> to: <mx:ColumnSet type="100%" allowNegativeForStacked="true"> and you'll see that it doesn't on January data, where both values are negative, the chart shows as if they are positive, and on the other two where one value is positive and the other is negative, it shows only the positive part as 100%... Isn't it a Flex Bug? I have my own case with such data and it behaves wrong the same way. I'd expect that if it has 800 stacked on -200, it will show 80% up and 20% down, totalling 100%. BTW: Using Flex 4, though these are all mx components. Thanks a lot and regards from Berlin, Germany, Nir.

    Read the article

  • Am I right about the differences between Floyd-Warshall, Dijkstra's and Bellman-Ford algorithms?

    - by Programming Noob
    I've been studying the three and I'm stating my inferences from them below. Could someone tell me if I have understood them accurately enough or not? Thank you. Dijkstra's algorithm is used only when you have a single source and you want to know the smallest path from one node to another, but fails in cases like this Floyd-Warshall's algorithm is used when any of all the nodes can be a source, so you want the shortest distance to reach any destination node from any source node. This only fails when there are negative cycles (this is the most important one. I mean, this is the one I'm least sure about:) 3.Bellman-Ford is used like Dijkstra's, when there is only one source. This can handle negative weights and its working is the same as Floyd-Warshall's except for one source, right? If you need to have a look, the corresponding algorithms are (courtesy Wikipedia): Bellman-Ford: procedure BellmanFord(list vertices, list edges, vertex source) // This implementation takes in a graph, represented as lists of vertices // and edges, and modifies the vertices so that their distance and // predecessor attributes store the shortest paths. // Step 1: initialize graph for each vertex v in vertices: if v is source then v.distance := 0 else v.distance := infinity v.predecessor := null // Step 2: relax edges repeatedly for i from 1 to size(vertices)-1: for each edge uv in edges: // uv is the edge from u to v u := uv.source v := uv.destination if u.distance + uv.weight < v.distance: v.distance := u.distance + uv.weight v.predecessor := u // Step 3: check for negative-weight cycles for each edge uv in edges: u := uv.source v := uv.destination if u.distance + uv.weight < v.distance: error "Graph contains a negative-weight cycle" Dijkstra: 1 function Dijkstra(Graph, source): 2 for each vertex v in Graph: // Initializations 3 dist[v] := infinity ; // Unknown distance function from 4 // source to v 5 previous[v] := undefined ; // Previous node in optimal path 6 // from source 7 8 dist[source] := 0 ; // Distance from source to source 9 Q := the set of all nodes in Graph ; // All nodes in the graph are 10 // unoptimized - thus are in Q 11 while Q is not empty: // The main loop 12 u := vertex in Q with smallest distance in dist[] ; // Start node in first case 13 if dist[u] = infinity: 14 break ; // all remaining vertices are 15 // inaccessible from source 16 17 remove u from Q ; 18 for each neighbor v of u: // where v has not yet been 19 removed from Q. 20 alt := dist[u] + dist_between(u, v) ; 21 if alt < dist[v]: // Relax (u,v,a) 22 dist[v] := alt ; 23 previous[v] := u ; 24 decrease-key v in Q; // Reorder v in the Queue 25 return dist; Floyd-Warshall: 1 /* Assume a function edgeCost(i,j) which returns the cost of the edge from i to j 2 (infinity if there is none). 3 Also assume that n is the number of vertices and edgeCost(i,i) = 0 4 */ 5 6 int path[][]; 7 /* A 2-dimensional matrix. At each step in the algorithm, path[i][j] is the shortest path 8 from i to j using intermediate vertices (1..k-1). Each path[i][j] is initialized to 9 edgeCost(i,j). 10 */ 11 12 procedure FloydWarshall () 13 for k := 1 to n 14 for i := 1 to n 15 for j := 1 to n 16 path[i][j] = min ( path[i][j], path[i][k]+path[k][j] );
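
    To make point 1 concrete: one negative edge in a three-node graph is already enough to fool the quoted Dijkstra pseudocode, because a vertex is finalized (removed from Q) before the cheaper route through the negative edge has been relaxed, while Bellman-Ford's repeated relaxation still finds it. A small self-contained illustration (C# used here purely for readability):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Graph: S->A costs 1, S->B costs 2, B->A costs -2.
        // The true shortest distance to A is 0 (via S->B->A), but textbook Dijkstra
        // finalizes A at distance 1 before the edge B->A is ever considered.
        class NegativeEdgeDemo
        {
            const int Inf = 1000000; // large sentinel instead of int.MaxValue to avoid overflow

            static readonly (char From, char To, int W)[] Edges =
                { ('S', 'A', 1), ('S', 'B', 2), ('B', 'A', -2) };

            static void Main()
            {
                Console.WriteLine("Dijkstra     dist(A) = " + Dijkstra('S')['A']);    // 1 (wrong)
                Console.WriteLine("Bellman-Ford dist(A) = " + BellmanFord('S')['A']); // 0 (correct)
            }

            static Dictionary<char, int> Init(char source)
            {
                var dist = new Dictionary<char, int> { ['S'] = Inf, ['A'] = Inf, ['B'] = Inf };
                dist[source] = 0;
                return dist;
            }

            static Dictionary<char, int> Dijkstra(char source)
            {
                var dist = Init(source);
                var queue = new HashSet<char>(dist.Keys);
                while (queue.Count > 0)
                {
                    char u = queue.OrderBy(v => dist[v]).First(); // extract-min
                    queue.Remove(u);                              // u is final from now on
                    foreach (var e in Edges)
                        if (e.From == u && queue.Contains(e.To) && dist[u] + e.W < dist[e.To])
                            dist[e.To] = dist[u] + e.W;
                }
                return dist;
            }

            static Dictionary<char, int> BellmanFord(char source)
            {
                var dist = Init(source);
                for (int i = 0; i < dist.Count - 1; i++)  // |V|-1 rounds of relaxation
                    foreach (var e in Edges)
                        if (dist[e.From] + e.W < dist[e.To])
                            dist[e.To] = dist[e.From] + e.W;
                return dist;
            }
        }

    Running it prints distance 1 for Dijkstra and 0 for Bellman-Ford, which is exactly the failure mode that negative edge weights introduce (and negative cycles make things worse still, for Bellman-Ford and Floyd-Warshall alike).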

    Read the article

  • Increasing touch surface (#wp7dev)

    - by Laurent Bugnion
    When you design for Windows Phone 7 (or for any touch device, for that matter, and most especially small screens), you need to be very careful to give enough surface to your users’ fingers. It is easy to miss a touch on such small screens, and that can be horrifyingly frustrating. This is especially true when people are on the move, and trying to hit the control while walking and holding their device in one hand, or when the device is mounted in a car and vibrating with the engine. In my experience, a touch surface should be ideally minimum 60x60 pixels to be easy to activate on the Windows Phone 7 screen (which is, as we know, 800 pixels x 480 pixels). Ideally, I try to make my touch surfaces 80x80 pixels minimum. This causes a few design challenges of course. Using transparent backgrounds However, one thing is helping us tremendously: some surfaces can be made transparent, and yet react to touch. The secret is the following: If you have a panel that has a Null background (i.e. the Background is not set at all), then the empty surface does not react to touch. If however the Background is set to the Transparent color (or any color where the Alpha channel is set to 0), then it will react to touch. Setting a transparent background is easy. For example: <Grid Background="#00000000"> </Grid> or <Grid Background="Transparent"> </Grid> In C#: var grid = new Grid { Background = new SolidColorBrush( Colors.Transparent) }; Using negative margins Having a transparent background reactive to touch is a good start, but in addition, you must make sure that the surface is big enough for my clumsy fingers. One way to achieve that is to increase the transparent, touch-reactive surface, and reposition the element using negative margins. For example, consider the following UI. I changed the transparent background of the HyperlinkButton to Red, in order to visualize the touch surface. In this figure, the Settings HyperlinkButton is 105 pixels x 31 pixels. This is wide enough, but really small in height and easy to miss. To improve this, we can use negative margins, for instance: <HyperlinkButton Content="Settings" HorizontalAlignment="Right" VerticalAlignment="Bottom" Height="60" Margin="0,0,0,-15" /> Notice the usage of negative bottom margin to bring the HyperlinkButton back at the bottom of the main Grid’s first row, where it belongs. And the result is: Notice how the touch surface is much bigger than before. This makes the HyperlinkButton easier to reach, and improves the user experience. With the background set back to normal, the UI looks exactly the same, as it should: In summary: Remember to maximize the touch surface for your controls. Plan your design in consequence by reserving enough room around each control to allow their hit surface to be expanded as shown in this article. Do not cram too many controls in one page. If REALLY needed, use an additional page (or even better: use a Pivot control with multiple pivot items) for the controls that don’t fit on the first one. This should ensure a smoother user experience and improved touch behavior. Happy coding! Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Nameservices Cache Daemon) to cache dns locally so I can stop using bind to do it. I've gotten it started and ntpd seems to attempt to use it. But everything else for hosts seems to ignore it. e.g if I do dig apache.org 3 times none of them will hit the cache. I'm viewing the cache stats using nscd -g to determine whether it's been used. I've also turned the debug log level up to see if I can see it hitting and the queries don't even hit nscd. nsswitch.conf # Begin /etc/nsswitch.conf passwd: files group: files shadow: files publickey: files hosts: cache files dns networks: files protocols: files services: files ethers: files rpc: files netgroup: files # End /etc/nsswitch.confenter code here nscd.conf # # /etc/nscd.conf # # An example Name Service Cache config file. This file is needed by nscd. # # Legal entries are: # # logfile <file> # debug-level <level> # threads <initial #threads to use> # max-threads <maximum #threads to use> # server-user <user to run server as instead of root> # server-user is ignored if nscd is started with -S parameters # stat-user <user who is allowed to request statistics> # reload-count unlimited|<number> # paranoia <yes|no> # restart-interval <time in seconds> # # enable-cache <service> <yes|no> # positive-time-to-live <service> <time in seconds> # negative-time-to-live <service> <time in seconds> # suggested-size <service> <prime number> # check-files <service> <yes|no> # persistent <service> <yes|no> # shared <service> <yes|no> # max-db-size <service> <number bytes> # auto-propagate <service> <yes|no> # # Currently supported cache names (services): passwd, group, hosts, services # logfile /var/log/nscd.log threads 4 max-threads 32 server-user nobody # stat-user somebody debug-level 9 # reload-count 5 paranoia no # restart-interval 3600 enable-cache passwd yes positive-time-to-live passwd 600 negative-time-to-live passwd 20 suggested-size passwd 211 check-files passwd yes persistent passwd yes shared passwd yes max-db-size passwd 33554432 auto-propagate passwd yes enable-cache group yes positive-time-to-live group 3600 negative-time-to-live group 60 suggested-size group 211 check-files group yes persistent group yes shared group yes max-db-size group 33554432 auto-propagate group yes enable-cache hosts yes positive-time-to-live hosts 3600 negative-time-to-live hosts 20 suggested-size hosts 211 check-files hosts yes persistent hosts yes shared hosts yes max-db-size hosts 33554432 enable-cache services yes positive-time-to-live services 28800 negative-time-to-live services 20 suggested-size services 211 check-files services yes persistent services yes shared services yes max-db-size services 33554432 resolv.conf # Generated by dhcpcd from eth0 nameserver 127.0.0.1 domain westell.com nameserver 192.168.1.1 nameserver 208.67.222.222 nameserver 208.67.220.220 as kind of a side note I'm using archlinux.

    Read the article

  • Can anyone simplify this Algorithm for me?

    - by Titan
    Basically I just want to check if one time period overlaps with another. Null end date means till infinity. Can anyone shorten this for me as its quite hard to read at times. Cheers public class TimePeriod { public DateTime StartDate { get; set; } public DateTime? EndDate { get; set; } public bool Overlaps(TimePeriod other) { // Means it overlaps if (other.StartDate == this.StartDate || other.EndDate == this.StartDate || other.StartDate == this.EndDate || other.EndDate == this.EndDate) return true; if(this.StartDate > other.StartDate) { // Negative if (this.EndDate.HasValue) { if (this.EndDate.Value < other.StartDate) return true; if (other.EndDate.HasValue && this.EndDate.Value < other.EndDate.Value) return true; } // Negative if (other.EndDate.HasValue) { if (other.EndDate.Value > this.StartDate) return true; if (this.EndDate.HasValue && other.EndDate.Value > this.EndDate.Value) return true; } else return true; } else if(this.StartDate < other.StartDate) { // Negative if (this.EndDate.HasValue) { if (this.EndDate.Value > other.StartDate) return true; if (other.EndDate.HasValue && this.EndDate.Value > other.EndDate.Value) return true; } else return true; // Negative if (other.EndDate.HasValue) { if (other.EndDate.Value < this.StartDate) return true; if (this.EndDate.HasValue && other.EndDate.Value < this.EndDate.Value) return true; } } return false; } }
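
    For comparison, the standard closed-form test is: two periods overlap exactly when each one starts no later than the other ends, and a null EndDate ("till infinity") simply makes that side of the test always true. A sketch using the same property names as above (it treats touching boundaries as overlapping, like the first check in the original):

        public bool Overlaps(TimePeriod other)
        {
            // each period must start before (or when) the other one ends;
            // a null EndDate means the period never ends, so that comparison is always true
            bool startsBeforeOtherEnds = !other.EndDate.HasValue || this.StartDate <= other.EndDate.Value;
            bool otherStartsBeforeEnd  = !this.EndDate.HasValue  || other.StartDate <= this.EndDate.Value;
            return startsBeforeOtherEnds && otherStartsBeforeEnd;
        }

    Whether touching endpoints should count as an overlap (<= versus <) is the only real decision left.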

    Read the article

  • ASP.NET RangeValidator can't do even the most basic math !?!?!?

    - by marc_s
    I'm having an issue with my ASP.NET RangeValidator controls. I want to allow users to enter a discount amount, and this amount must be negative (< $0.00). I want to verify that the amount entered in a textbox is a negative value, so I have this in my page markup: <asp:TextBox ID="tbxDiscount" runat="server" /> <asp:RangeValidator ID="rvDiscount" runat="server" ControlToValidate="tbxDiscount" MinimumValue="0.0" MaximumValue="0.0" EnableClientScript="true" ErrorMessage="Please enter a negative value for a discount" /> and I attempt to set the MinimumValue dynamically in my code before the page gets rendered - to the negative equivalent of my item price. So if the item is $69, I want to set the minimum value to - $69: rvDiscount.MinimumValue = (-1.0m * Price).ToString(); Trouble is: I keep getting this error message: The maximum value 0.0 cannot be less than the minimum value -69.00 for rvDiscount WTF?!?!??! Where I come from, -69 $ IS less than $0 ...... so what's the problem? And more importantly: what is the solution to the problem??
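
    One plausible explanation (an assumption, not something stated in the post): unless the validator's Type property is set, RangeValidator compares its values as strings, and a culture-aware string comparison of "-69.00" against "0.0" can yield exactly this "maximum less than minimum" complaint. A sketch of the usual fix in the code-behind, reusing rvDiscount and Price from the question:

        // Compare numerically instead of as strings, and keep the allowed range
        // strictly below zero (the -0.01 ceiling is an illustrative choice).
        rvDiscount.Type = ValidationDataType.Double;
        rvDiscount.MinimumValue = (-1.0m * Price).ToString();  // e.g. "-69.00" for a $69 item
        rvDiscount.MaximumValue = "-0.01";

    If zero itself should be accepted, MaximumValue can stay at "0".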

    Read the article

  • Adding the sum of numbers using a loop statement

    - by Deonna
    I need serious help dividing the positive numbers and the negative numbers. I am to accumulate the total of the negative values and separately accumulate the total of the positive values. After the loop, I am then to display the sum of the negative values and the sum of the positive values. The data is supposed to look like this: -2.3 -1.9 -1.5 -1.1 -0.7 -0.3 0.1 0.5 0.9 1.3 1.7 2.1 2.5 2.9 Sum of negative values: -7.8 Sum of positive values: 12 So far I have this: int main () { int num, num2, num3, num4, num5, sum, count, sum1; int tempVariable = 0; int numCount = 100; int newlineCount = 0, newlineCount1 = 0; float numCount1 = -2.3; while (numCount <= 150) { cout << numCount << " "; numCount += 2; newlineCount ++; if(newlineCount == 6) { cout<< " " << endl; newlineCount = 0; } } cout << "" << endl; while (numCount1 <=2.9 ) { cout << numCount1 << " "; numCount1 += 0.4; newlineCount1 ++; } while ( newlineCount1 <= 0 && newlineCount >= -2.3 ); cout << "The sum is " << newlineCount1 << endl; return 0; }
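
    For reference, the accumulation itself only needs one loop and two running totals; the sketch below (in C# rather than the assignment's C++, purely to illustrate the control flow) derives each value from the loop index so the printed sequence matches the expected output:

        using System;

        class SignedSums
        {
            static void Main()
            {
                double negativeSum = 0.0, positiveSum = 0.0;
                for (int i = 0; i < 14; i++)           // 14 values: -2.3, -1.9, ..., 2.9
                {
                    double value = -2.3 + 0.4 * i;     // compute from the index to avoid drift
                    Console.Write("{0:0.0} ", value);
                    if (value < 0)
                        negativeSum += value;
                    else
                        positiveSum += value;
                }
                Console.WriteLine();
                Console.WriteLine("Sum of negative values: {0:0.0}", negativeSum); // -7.8
                Console.WriteLine("Sum of positive values: {0:0.#}", positiveSum); // 12
            }
        }

    The same structure carries over to C++ one-to-one: one loop, a double accumulator per sign, and the two totals printed after the loop.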

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #035

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Row Overflow Data Explanation  In SQL Server 2005 one table row can contain more than one varchar(8000) fields. One more thing, the exclusions has exclusions also the limit of each individual column max width of 8000 bytes does not apply to varchar(max), nvarchar(max), varbinary(max), text, image or xml data type columns. Comparison Index Fragmentation, Index De-Fragmentation, Index Rebuild – SQL SERVER 2000 and SQL SERVER 2005 An old but like a gold article. Talks about lots of concepts related to Index and the difference from earlier version to the newer version. I strongly suggest that everyone should read this article just to understand how SQL Server has moved forward with the technology. Improvements in TempDB SQL Server 2005 had come up with quite a lots of improvements and this blog post describes them and explains the same. If you ask me what is my the most favorite article from early career. I must point out to this article as when I wrote this one I personally have learned a lot of new things. Recompile All The Stored Procedure on Specific TableI prefer to recompile all the stored procedure on the table, which has faced mass insert or update. sp_recompiles marks stored procedures to recompile when they execute next time. This blog post explains the same with the help of a script.  2008 SQLAuthority Download – SQL Server Cheatsheet You can download and print this cheat sheet and use it for your personal reference. If you have any suggestions, please let me know and I will see if I can update this SQL Server cheat sheet. Difference Between DBMS and RDBMS What is the difference between DBMS and RDBMS? DBMS – Data Base Management System RDBMS – Relational Data Base Management System or Relational DBMS High Availability – Hot Add Memory Hot Add CPU and Hot Add Memory are extremely interesting features of the SQL Server, however, personally I have not witness them heavily used. These features also have few restriction as well. I blogged about them in detail. 2009 Delete Duplicate Rows I have demonstrated in this blog post how one can identify and delete duplicate rows. Interesting Observation of Logon Trigger On All Servers – Solution The question I put forth in my previous article was – In single login why the trigger fires multiple times; it should be fired only once. I received numerous answers in thread as well as in my MVP private news group. Now, let us discuss the answer for the same. The answer is – It happens because multiple SQL Server services are running as well as intellisense is turned on. Blog post demonstrates how we can do the same with the help of SQL scripts. Management Studio New Features I have selected my favorite 5 features and blogged about it. IntelliSense for Query Editing Multi Server Query Query Editor Regions Object Explorer Enhancements Activity Monitors Maximum Number of Index per Table One of the questions I asked in my user group was – What is the maximum number of Index per table? I received lots of answers to this question but only two answers are correct. Let us now take a look at them in this blog post. 
2010 Default Statistics on Column – Automatic Statistics on Column The truth is, statistics can exist on a table even though there is no index on it. If you have the auto-create and/or auto-update statistics feature turned on for a SQL Server database, statistics will be automatically created on the column based on a few conditions. Please read my previously posted article, SQL SERVER – When are Statistics Updated – What triggers Statistics to Update, for the specific conditions under which statistics are updated. 2011 T-SQL Scripts to Find Maximum between Two Numbers In this blog post there are two different scripts listed which demonstrate ways to find the maximum of two numbers. I need your help: which one of the scripts do you think is the most accurate way to find the maximum? Find Details for Statistics of Whole Database – DMV – T-SQL Script I was recently asked whether there is a single script which can provide all the necessary details about statistics for any database. This question made me write the following script. I was initially planning to use the sp_helpstats command but I remembered that it is marked to be deprecated in the future. 2012 Introduction to Function SIGN SIGN is a very fundamental function. It will return the value 1, -1 or 0. If your value is negative it will return -1, and if it is positive it will return +1. Let us start with a simple small example. Template Browser – A Very Important and Useful Feature of SSMS Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are also available for Analysis Services. The template scripts contain parameters to help you customize the code. You can use the Replace Template Parameters dialog box to insert values into the script. An invalid floating point operation occurred If you run any of the above functions they will give you an error related to an invalid floating point operation. Honestly there is no workaround except passing the function appropriate values. SQRT of a negative number would give a result that is not a real number, which is not supported at this point in time, and LOG of a negative number is not possible (because the logarithm is the inverse of the exponential function, and the exponential function is never negative). Validating Spatial Object with IsValidDetailed Function SQL Server 2012 has introduced the new function IsValidDetailed(). This function has made my life very easy. In simple words, this function will check whether the spatial object passed is valid or not. If it is valid it will report that it is valid. If the spatial object is not valid it will return the answer that it is not valid and the reason for the same. This makes it very easy to debug the issue and make the necessary correction. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
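    The SIGN behaviour summarised in the 2012 notes above is easy to mirror outside T-SQL; a tiny hypothetical C++ equivalent (not from the original article) might look like:

```cpp
// Returns +1 for positive input, -1 for negative input and 0 for zero,
// mirroring the T-SQL SIGN() behaviour described above.
int sign(double value)
{
    if (value > 0.0) return 1;
    if (value < 0.0) return -1;
    return 0;
}
```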

    Read the article

  • Shell Script if else

    - by user34104
    #!/bin/bash echo "Int. a number" read num1 echo "Int. another numer" read num2 if ["$num1"="$num2"]; then echo "Equals" else echo "Dif" fi if["$num1"<0]; then echo "The number $num1 is negative" else if ["$num2"<0]; then echo "The number $num2 is negative" fi # this code is not working, i've something wrong when i see if the number is < 0. thanks

    Read the article

  • Regular expressions in findstr

    - by Johannes Rössel
    I'm doing a little string validation with findstr and its /r flag to allow for regular expressions. In particular I'd like to validate integers. The regex ^[0-9][0-9]*$ worked fine for non-negative numbers but since I now support negative numbers as well I tried ^([1-9][0-9]*|0|-[1-9][0-9]*)$ for either positive or negative integers or zero. The regex works fine theoretically. I tested it in PowerShell and it matches what I want. However, with findstr /r /c:"^([1-9][0-9]*|0|-[1-9][0-9]*)$" it doesn't. While I know that findstr doesn't have the most advanced regex support (even below Notepad++ which is probably quite an achievement), I would have expected such simple expressions to work. Any ideas what I'm doing wrong here?
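    As a sanity check of the expression itself rather than of findstr, here is a small sketch using C++'s std::regex, a full regex engine; the sample strings are made up for illustration:

```cpp
#include <iostream>
#include <regex>
#include <string>

int main()
{
    // Same pattern as in the question: positive or negative integers, or zero,
    // with no leading zeros allowed.
    const std::regex integerPattern(R"(^([1-9][0-9]*|0|-[1-9][0-9]*)$)");

    for (const std::string candidate : {"42", "-17", "0", "007", "abc"})
    {
        std::cout << candidate << " -> "
                  << (std::regex_match(candidate, integerPattern) ? "match" : "no match")
                  << '\n';
    }
    return 0;
}
```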

    Read the article

  • problem in leigeber's sorting

    - by developer
    I am using leigeber's sorting JavaScript to sort the data on my page. I took the JS from here: leigeber's sorting javascript. It shows the highlighted sorted column and everything, and it all works perfectly. Now I have some negative and some positive values in my data, and I want to show the negative data in red and the positive data in green. This whole sorting setup uses JS and CSS to do the highlighting and sorting. I am a bit confused and can't work out how I can assign red to the negative values and green to the positive ones, as the JS returns the data as an object, i.e. a complete row, and I can't make one column's data red or green. Should it be done by the JS itself, or with the help of the stylesheet, to find the negatives and positives? I am totally lost and don't know how to do it. Please help! This is an example of the data I am using this whole sorting thing on: -Name- -Price- -loss- -Pts- abc 361.15 -5.68 -21.75 abc2 1072.35 -5.24 -59.25 abc3 512.35 5.24 28.35 abc4 335.2 -5.02 -17.7 abc5 318.6 5.01 -16.8 abc6 76.15 -4.15 3.3

    Read the article

  • Complicated MySQL query?

    - by Scott
    I have two tables: RatingsTable, which contains a rating name and a bit indicating whether it is a positive or negative rating: Good 1 Bad 0 Fun 1 Boring 0 FeedbackTable, which contains feedback on things... the person rating, the rating and the thing rated. Whether a piece of feedback is a positive or negative rating can be determined from RatingsTable. Jim Chicken Good Jim Steak Bad Ted Waterskiing Fun Ted Hiking Fun Nancy Hiking Boring I am trying to write an efficient MySQL query for the following: On a page, I want to display the top 'things' that have the highest proportional positive ratings. I want to be sure that the items from the feedback table are unique... meaning that if Jim has rated Chicken Good 20 times, it should only be counted once. At some point I will want to require a minimum number of ratings (at least 10) to be counted for this page as well. I'll want to do the same for the highest proportional negative ratings, but I am sure I can tweak the one for positive accordingly.

    Read the article

  • How does SET work with a property in C#?

    - by Richard77
    Hello, I'd like to know how set works in a property when it does more than just setting the value of a private member variable. Let's say I have a private member in my class (private int myInt). For instance, I can make sure that the value returned is not negative: get { if(myInt < 0) myInt = 0; return myInt; } With set, all I can do is assign to the private variable, like so: set { myInt = value; } I didn't see in any book how I can do more than that. What if I want to do some operation before assigning the value to myInt? Let's say: if the value is negative, assign 0 to myInt. set { //Check if the value is non-negative, otherwise assign 0 to myInt } Thanks for helping
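    The idea being asked about is validation running inside the setter before the backing field is touched. The following is only a C++ analogue with made-up names, not the actual C# accessor syntax:

```cpp
// C++ analogue only: the point is that the setter can run any logic it likes
// before storing the value, e.g. clamping negative input to 0.
class Counter
{
public:
    int myInt() const { return myInt_; }

    void setMyInt(int value)
    {
        // If the value is negative, store 0 instead -- the check the question asks about.
        myInt_ = (value < 0) ? 0 : value;
    }

private:
    int myInt_ = 0;   // backing field, analogous to the private member in the question
};
```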

    Read the article

  • SQL SERVER – Fix : Error 3623 – An invalid floating point operation occurred

    - by pinaldave
    Going back in time, I always had a problem with mathematics. It was a great subject and I loved it a lot, but I only mastered it after practicing a lot. I learned that mathematics problems should be addressed systematically and that being verbose is not a trick; that way I learned to solve any problem. Recently one of my readers sent me an email with the title “Mathematics problem – please help!” and I was a bit scared. I was good at mathematics but not the best. When I opened the email I was relieved, as it was a mathematics problem with SQL Server. My friend received the following error while working with SQL Server. Msg 3623, Level 16, State 1, Line 1 An invalid floating point operation occurred. The reason for the error is simply that an invalid usage of a mathematical function was attempted. Let me give you a few examples of the same. SELECT SQRT(-5); SELECT ACOS(-3); SELECT LOG(-9); If you run any of the above functions they will give you an error related to an invalid floating point operation. Honestly there is no workaround except passing the function appropriate values. SQRT of a negative number would give a result that is not a real number, which is not supported at this point in time, and LOG of a negative number is not possible (because the logarithm is the inverse of the exponential function, and the exponential function is never negative). When I sent the above reply to my friend he understood that he was passing an incorrect value to the function. As mentioned earlier, the only way to fix this issue is to find the incorrect value and avoid passing it to the function. Every mathematics function is different and there is not a single solution to identify the erroneous value passed. If you are facing this error and are not able to figure out the solution, post a comment and I will do my best to figure out the solution. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
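    The same "check the argument's domain before calling the function" advice can be sketched outside T-SQL; the following small C++ illustration is an assumption-laden sketch (the names and structure are not from the post):

```cpp
#include <cmath>
#include <iostream>
#include <optional>

// Refuse out-of-domain input instead of letting the math function fail.
std::optional<double> safeSqrt(double x)
{
    if (x < 0.0) return std::nullopt;   // sqrt of a negative number is not a real number
    return std::sqrt(x);
}

std::optional<double> safeLog(double x)
{
    if (x <= 0.0) return std::nullopt;  // log is only defined for positive values
    return std::log(x);
}

int main()
{
    for (double x : {-5.0, 9.0})
    {
        if (auto r = safeSqrt(x))
            std::cout << "sqrt(" << x << ") = " << *r << '\n';
        else
            std::cout << "sqrt(" << x << ") is undefined, value rejected\n";
    }
    return 0;
}
```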

    Read the article

  • Did you think I wasn’t going to show up?

    - by Ratman21
    Well, Monday was not good for me, job-wise or dare-wise. Why? The Census job ended (after only two weeks!). It seems our group was too good at working our blocks and ran out of blocks in our area to work. Out of work again! Well, at least they gave us a full day's pay for Monday. As to the dare side, “Love Is Kind”. As I said, not saying anything negative was and is easy for me (Love Is Patient).  Kindness is Love in action; no, I don’t have a problem with doing this (which is Gentleness, Helpfulness, Willingness or Initiative; well, maybe a little with the initiative part). The hard part was the dare itself: “In Addition To Saying Nothing Negative To Your Spouse Again Today, Do At Least One Unexpected Gesture As An Act Of Kindness”. It was the finding of, or waiting for, something I could do. Well, I will keep on trying on that, but will move on to the next day/dare, “Love is not selfish”. Stay tuned.

    Read the article

  • My Latest Hare-Brained Scheme

    - by Liam McLennan
    I have not had a significant side project for a while, but I have been working on a product idea. It's an analytics application that analyses Twitter data and reports on market sentiment. The target market is companies who want to track trends in consumer sentiment. My idea is to teach the application to divide relevant tweets into ‘positive’ and ‘negative’ categories. If the input were the set of tweets featuring the word ‘telstra’, the application would find a tweet like the one shown in the original post and put it in the ‘negative’ category. Collecting data in this fashion facilitates the creation of graphs (also shown in the original post) which can then be correlated against events, such as a share offer or new product release. I may go ahead and build this, just because I am a programmer and it amuses me to do so. My concerns are: there is no market for this tool; or there is a market, but I don’t understand it and have no way to reach it.

    Read the article

  • Car-like Physics - Basic Maths to Simulate Steering

    - by Reanimation
    As my program stands I have a cube which I can control using keyboard input. I can make it move left, right, up, down, back and forth along the axis only. I can also rotate the cube either left or right; all the translations and rotations are implemented using glm. if (keys[VK_LEFT]) //move cube along xAxis negative { globalPos.x -= moveCube; keys[VK_RIGHT] = false; } if (keys[VK_RIGHT]) //move cube along xAxis positive { globalPos.x += moveCube; keys[VK_LEFT] = false; } if (keys[VK_UP]) //move cube along yAxis positive { globalPos.y += moveCube; keys[VK_DOWN] = false; } if (keys[VK_DOWN]) //move cube along yAxis negative { globalPos.y -= moveCube; keys[VK_UP] = false; } if (FORWARD) //W - move cube along zAxis positive { globalPos.z += moveCube; BACKWARD = false; } if (BACKWARD) //S- move cube along zAxis negative { globalPos.z -= moveCube; FORWARD = false; } if (ROT_LEFT) //rotate cube left { rotX +=0.01f; ROT_LEFT = false; } if (ROT_RIGHT) //rotate cube right { rotX -=0.01f; ROT_RIGHT = false; } I render the cube using this function which handles the shader and position on screen: void renderMovingCube(){ glUseProgram(myShader.handle()); GLuint matrixLoc4MovingCube = glGetUniformLocation(myShader.handle(), "ProjectionMatrix"); glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]); glm::mat4 viewMatrixMovingCube; viewMatrixMovingCube = glm::lookAt(camOrigin,camLookingAt,camNormalXYZ); ModelViewMatrix = glm::translate(viewMatrixMovingCube,globalPos); ModelViewMatrix = glm::rotate(ModelViewMatrix,rotX, glm::vec3(0,1,0)); //manually rotate glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); movingCube.render(); glUseProgram(0); } The glm::lookAt function always points to the screen's centre (0,0,0). The globalPos is a glm::vec3 globalPos(0,0,0); so when the program executes, it renders the cube at the centre of the screen's viewing matrix; the keyboard inputs above adjust the globalPos of the moving cube. The glm::rotate is the function used to rotate manually. My question is, how can I make the cube go forwards depending on what direction the cube is facing.... i.e., once I've rotated the cube a few degrees using glm, the forwards direction, relative to the cube, is no longer on the z-Axis... how can I store the forwards direction and then use that to navigate forwards no matter what way it is facing? (either using vectors that can be applied to my code or some handy maths). Thanks.
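    One common way to get the "forwards" being asked about is to derive a direction vector from the current yaw angle and add it to the position, instead of always changing z. A sketch, assuming the same glm setup as the question, that rotX is the yaw in radians about the +Y axis, and that the cube's unrotated forward is (0, 0, 1):

```cpp
#include <cmath>
#include <glm/glm.hpp>   // assumes the same glm dependency the question already uses

// Forward direction for a yaw of rotX radians about the +Y axis,
// obtained by rotating the initial forward vector (0, 0, 1).
glm::vec3 forwardFromYaw(float rotX)
{
    return glm::vec3(std::sin(rotX), 0.0f, std::cos(rotX));
}

// Sketch of how the movement keys could use it (same variable names as the question):
//
//   if (FORWARD)  globalPos += forwardFromYaw(rotX) * moveCube;
//   if (BACKWARD) globalPos -= forwardFromYaw(rotX) * moveCube;
//
// Strafing would use a vector perpendicular to it, such as (cos(rotX), 0, -sin(rotX)).
```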

    Read the article

  • No Apologies

    I think the hardest part of writing a book is having to accept the negative reviews on Amazon. I expect my latest book, Building Websites with DotNetNuke 5, to get about 2 stars due to the negative reviews. Now, this book has a lot of things to offer. Ian Lackey wrote most of the book and it covers the latest version of DotNetNuke 5, inside and out. I wrote the module development chapters that cover: Creating modules using Silverlight, Creating modules using Linq to SQL, ...

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >